
International Journal of Engineering Research and General Science Volume 2, Issue 6, October-November, 2014

ISSN 2091-2730

Table of Contents

Topics                                    Page no
Chief Editor Board                        3-4
Message From Associate Editor
Research Papers Collection                6-775


CHIEF EDITOR BOARD


1. Dr Gokarna Shrestha, Professor, Tribhuwan University, Nepal
2. Dr Chandrasekhar Putcha, Outstanding Professor, University Of California, USA
3. Dr Shashi Kumar Gupta, Professor, IIT Roorkee, India
4. Dr K R K Prasad, K.L.University, Professor Dean, India
5. Dr Kenneth Derucher, Professor and Former Dean, California State University, Chico, USA
6. Dr Azim Houshyar, Professor, Western Michigan University, Kalamazoo, Michigan, USA

7. Dr Sunil Saigal, Distinguished Professor, New Jersey Institute of Technology, Newark, USA
8. Dr Hota GangaRao, Distinguished Professor and Director, Center for Integration of Composites into
Infrastructure, West Virginia University, Morgantown, WV, USA
9. Dr Bilal M. Ayyub, professor and Director, Center for Technology and Systems Management,
University of Maryland College Park, Maryland, USA
10. Dr Sarh BENZIANE, University Of Oran, Associate Professor, Algeria
11. Dr Mohamed Syed Fofanah, Head, Department of Industrial Technology & Director of Studies, Njala
University, Sierra Leone
12. Dr Radhakrishna Gopala Pillai, Honorary Professor, Institute of Medical Sciences, Kyrgyzstan
13. Dr P.V.Chalapati, Professor, K.L.University, India
14. Dr Ajaya Bhattarai, Tribhuwan University, Professor, Nepal

ASSOCIATE EDITOR IN CHIEF


1. Er. Pragyan Bhattarai, Research Engineer and program co-ordinator, Nepal
ADVISORY EDITORS
1. Mr Leela Mani Poudyal, Chief Secretary, Nepal government, Nepal
2. Mr Sukdev Bhattarai Khatry, Secretary, Central Government, Nepal
3. Mr Janak shah, Secretary, Central Government, Nepal

4. Mr Mohodatta Timilsina, Executive Secretary, Central Government, Nepal


5. Dr. Manjusha Kulkarni, Asso. Professor, Pune University, India
6. Er. Ranipet Hafeez Basha (Phd Scholar), Vice President, Basha Research Corporation, Kumamoto, Japan
Technical Members
1. Miss Rekha Ghimire, Research Microbiologist, Nepal section representative, Nepal
2. Er. A.V. A Bharat Kumar, Research Engineer, India section representative and program co-ordinator, India
3. Er. Amir Juma, Research Engineer ,Uganda section representative, program co-ordinator, Uganda
4. Er. Maharshi Bhaswant, Research Scholar (University of Southern Queensland), Research Biologist, Australia


Message from Associate Editor In Chief


Let me first of all take this opportunity to wish all our readers a very happy, peaceful and
prosperous year ahead.
This is the Sixth Issue of the Second Volume of the International Journal of Engineering Research
and General Science. A total of 152 research articles are published in this issue, and I sincerely hope that each
of them provides some significant stimulation to a reasonable segment of our community of readers.
In this issue, we have focused mainly on the Recent Technology and its challenging approach in Research. We also
welcome more research oriented ideas in our upcoming Issues.
The authors' response for this issue was really inspiring for us. We received papers from more than 17 countries for this
issue, but our technical team and editor members accepted only a small number of research papers for publication. We have
provided the editors' feedback for every rejected as well as accepted paper so that authors can work on the weaknesses and we
may accept their papers in the near future. We apologize for the inconvenience caused to the rejected authors, but I hope our
editors' feedback helps you discover more horizons for your research work.
I would like to take this opportunity to thank each and every writer for their contribution, and to thank the entire
International Journal of Engineering Research and General Science (IJERGS) technical team and editor members for their
hard work for the development of research in the world through IJERGS.
Last, but not the least, my special thanks and gratitude go to all our fellow friends and supporters. Your help is
greatly appreciated. I hope our readers will find our papers educational and entertaining as well. Our team has done a good
job; however, this issue may still have some drawbacks, and therefore constructive suggestions for further
improvement are warmly welcomed.

Er. Pragyan Bhattarai,


Assistant Editor-in-Chief, P&R,
International Journal of Engineering Research and General Science
E-mail: Pragyan@ijergs.org
Contact no- +9779841549341


Heart Disease Prediction Using Classification with Different Decision Tree Techniques
K. Thenmozhi 1, P. Deepika 2

1 Asst. Professor, Department of Computer Science, Dr. N.G.P. Arts and Science College, CBE
2 Asst. Professor, Department of Computer Science and Applications, Sasurie College of Arts and Science, CBE

ABSTRACT: Data mining is one of the essential areas of research and is increasingly popular in health organizations. Data mining plays an
effective role in uncovering new trends in healthcare organizations, which is helpful for all the parties associated with this field. Heart
disease has been the leading cause of death in the world over the past 10 years. Heart disease is a term that refers to a large number of
medical conditions related to the heart. These medical conditions describe irregular health conditions that directly affect the heart and
all its parts. The healthcare industry gathers an enormous amount of heart disease data which is not mined to discover hidden
information for effective decision making. Data mining techniques are useful for analyzing the data from many different dimensions
and for identifying relationships. This paper explores the utility of various decision tree algorithms in classifying and predicting the disease.

KEYWORDS: Data mining, KDD, Classification, Decision tree, ID3, C4.5, C5.0, J48
INTRODUCTION
Data mining is one of the most vital and motivating areas of research, with the objective of finding meaningful information
from huge data sets. Nowadays, data mining is becoming popular in the healthcare field because there is a need for an efficient analytical
methodology for detecting unknown and valuable information in health data. Data mining tools perform data analysis and may also
uncover important data patterns, contributing greatly to knowledge bases, business strategies, and scientific and medical research. Data
mining is a convenient tool to assist physicians in detecting diseases by obtaining knowledge and information regarding the
disease from patients' data.
Data mining and KDD (Knowledge Discovery in Databases) are related terms and are used interchangeably. According to Fayyad et
al., the knowledge discovery process is structured in various stages: the first stage is data selection, where data is collected
from various sources; the second stage is preprocessing of the selected data; the third stage is transformation of the data into
an appropriate format for further processing; the fourth stage is data mining, where a suitable data mining technique is applied on the data
for extracting valuable information; and evaluation is the last stage.
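
To make these stages concrete, the sketch below walks through selection, preprocessing, transformation, mining and evaluation on a small synthetic stand-in for a patient table; the column names and values are fabricated for illustration, and scikit-learn is used only as one possible toolset, not one prescribed by this paper.

```python
# Minimal sketch of the five KDD stages on a fabricated patient-like table.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# 1. Selection: collect data from a source (here, a generated table).
rng = np.random.default_rng(0)
data = pd.DataFrame({
    "age": rng.integers(30, 80, 300),
    "chol": rng.normal(200, 40, 300),
    "max_hr": rng.normal(150, 20, 300),
})
data["target"] = (data["chol"] + rng.normal(0, 30, 300) > 220).astype(int)
data.loc[rng.integers(0, 300, 10), "chol"] = np.nan   # a few missing values

# 2. Preprocessing: drop incomplete records.
data = data.dropna()

# 3. Transformation: split features/label and scale features to a common range.
X, y = data.drop(columns="target"), data["target"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# 4. Data mining: apply a suitable technique (a decision tree here).
model = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X_train, y_train)

# 5. Evaluation: judge the extracted model on held-out data.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```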


CLASSIFICATION
Classification is a process that is used to find a model that describes and differentiates data classes or concepts, for the purpose of
using the model to predict the class of objects whose class label is unknown.

TOOLS FOR CLASSIFICATION


Some of the major tools used for constructing a classification model include Decision tree, Artificial Neural Network and Bayesian
Classifier.

DECISION TREE
Berry and Linoff defined a decision tree as a structure that can be used to divide up a large collection of records into
successively smaller sets of records by applying a sequence of simple decision rules. With each successive division, the members of the
resulting sets become more and more similar to one another.
A decision tree is similar to a flowchart in which every non-leaf node denotes a test on a particular attribute, every
branch denotes an outcome of that test, and every leaf node holds a class label. The topmost node in the tree is called the root
node. Using a decision tree, decision makers can choose the best alternative, and traversal from root to leaf indicates a unique class
separation based on maximum information gain [4].
Decision trees are produced by algorithms that identify various ways of splitting a data set into segments. These segments
form an inverted decision tree that originates with a root node at the top of the tree.

ID3
ID3 stands for Iterative Dichotomiser 3. It is one of the decision tree models that builds a decision tree from a
fixed set of training instances. The resulting tree is used to classify future samples.

C4.5
C4.5 is a later version of the ID3 induction algorithm; it is an extension of ID3 and builds a decision tree in the same manner.
It builds a decision tree from the training dataset using the concept of information entropy, and for that reason C4.5 is often
called a statistical classifier. C4.5 is a widely used free data mining tool.

C5.0
This model is an extension of the C4.5 decision tree algorithm. Both C4.5 and C5.0 can produce classifiers expressed as either
decision trees or rulesets. In many applications, rulesets are preferred because they are simpler and easier to understand. The major
differences are tree size and computation time: C5.0 produces smaller trees and is much faster than C4.5.

www.ijergs.org

International Journal of Engineering Research and General Science Volume 2, Issue 6, October-November, 2014
ISSN 2091-2730

J48
The J48 decision tree is the open-source implementation developed by the WEKA project team; it is a simple C4.5 decision
tree for classification. With this technique, a tree is constructed to model the classification process. Once the tree is built, it is applied
to each tuple in the database, and the result is the classification of that tuple.
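
WEKA's J48 is a Java implementation; as a rough analogue only, the sketch below builds a C4.5-style tree with scikit-learn's CART learner, using the entropy criterion in place of gain-ratio splitting and cost-complexity pruning in place of J48's pruning, with a built-in toy dataset standing in for a heart disease table.

```python
# Hedged analogue of a J48/C4.5-style tree (not the WEKA implementation itself).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# criterion="entropy" mirrors information-based splitting; ccp_alpha applies
# cost-complexity pruning, playing the role of J48's pruning step.
tree = DecisionTreeClassifier(criterion="entropy", ccp_alpha=0.01, random_state=42)
tree.fit(X_train, y_train)

print("test accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=list(X.columns))[:500])  # first few extracted rules
```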

DECISION TREE TYPES


There are many types of decision trees. The difference between them is the mathematical model that is used to select the
splitting attribute when extracting the decision tree rules. The three most commonly used splitting criteria are 1) Information Gain,
2) Gini Index and 3) Gain Ratio decision trees.

INFORMATION GAIN
Information gain is defined in terms of entropy. This approach selects the splitting attribute that minimizes the
value of entropy, thus maximizing the information gain. To identify the splitting attribute of a decision tree, one must calculate the
information gain for each and every attribute and then select the attribute that maximizes the information gain. It is the difference
between the original information requirement and the amount of information needed after splitting on the attribute.

GINI INDEX
The Gini Index is used to measure the impurity of the data. The Gini index is calculated for every attribute that is available in the dataset.

GAIN RATIO
To reduce the effect of the bias resulting from the use of Information Gain, a variant known as Gain Ratio is used. The Information
Gain measure is biased toward tests with many outcomes; that is, it prefers to select attributes having a large number of
values. Gain Ratio adjusts the Information Gain for each attribute to allow for the breadth and uniformity of the attribute values.
Gain Ratio = Information Gain / Split Information
where the split information is a value based on the column sums of the frequency table.
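
The three measures can be computed in a few lines. The helper functions and the tiny two-attribute example below are ours, intended only to show how entropy, information gain, the Gini index and the gain ratio are evaluated for one candidate splitting attribute.

```python
# Toy computation of the three splitting measures discussed above.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def partitions(rows, labels, attr):
    groups = {}
    for row, lab in zip(rows, labels):
        groups.setdefault(row[attr], []).append(lab)
    return list(groups.values())

def information_gain(rows, labels, attr):
    n = len(labels)
    remainder = sum(len(g) / n * entropy(g) for g in partitions(rows, labels, attr))
    return entropy(labels) - remainder

def gini_after_split(rows, labels, attr):
    n = len(labels)
    def gini(g):
        return 1 - sum((c / len(g)) ** 2 for c in Counter(g).values())
    return sum(len(g) / n * gini(g) for g in partitions(rows, labels, attr))

def gain_ratio(rows, labels, attr):
    n = len(labels)
    split_info = -sum(len(g) / n * math.log2(len(g) / n)
                      for g in partitions(rows, labels, attr))
    return information_gain(rows, labels, attr) / split_info if split_info else 0.0

# Hypothetical records: (chest_pain, high_bp) -> class label
rows = [("typical", "yes"), ("typical", "no"), ("atypical", "yes"), ("atypical", "no")]
labels = ["disease", "disease", "healthy", "healthy"]
print(information_gain(rows, labels, 0), gini_after_split(rows, labels, 0), gain_ratio(rows, labels, 0))
```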

PRUNING
After the decision tree rules are extracted, reduced error pruning is applied to the extracted decision rules. Reduced error pruning
is one of the most efficient and fastest pruning methods, and it is used to produce both accurate and small decision rules. Applying reduced
error pruning provides more compact decision rules and reduces the number of extracted rules.

PERFORMANCE EVALUATION
To evaluate the performance of each combination, the sensitivity, specificity and accuracy were calculated. To measure the
stability of performance, the data is divided into training and testing data with 10-fold cross validation.
Sensitivity = True Positives / Total Positives
Specificity = True Negatives / Total Negatives
Accuracy = (True Positives + True Negatives) / (Total Positives + Total Negatives)
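
A minimal sketch of this evaluation protocol is given below, assuming a decision tree classifier and a built-in dataset as a stand-in for the heart disease data; it averages the three measures over the 10 folds.

```python
# Sensitivity, specificity and accuracy from a confusion matrix, with 10-fold CV.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import StratifiedKFold
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

X, y = load_breast_cancer(return_X_y=True)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
sens, spec, acc = [], [], []

for train_idx, test_idx in cv.split(X, y):
    clf = DecisionTreeClassifier(random_state=0).fit(X[train_idx], y[train_idx])
    tn, fp, fn, tp = confusion_matrix(y[test_idx], clf.predict(X[test_idx])).ravel()
    sens.append(tp / (tp + fn))   # sensitivity = TP / total positives
    spec.append(tn / (tn + fp))   # specificity = TN / total negatives
    acc.append((tp + tn) / (tp + tn + fp + fn))

print("sensitivity %.3f  specificity %.3f  accuracy %.3f"
      % (np.mean(sens), np.mean(spec), np.mean(acc)))
```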

FOCUS ON THE SURVEY:


Atul Kumar Pandey et al. proposed a novel frequent feature selection method for heart disease prediction [7]. The novel
feature selection method uses the Attribute Selected Classifier approach, including the CFS subset evaluator and the Best First
search method, followed by a J48 Decision Tree, and then integrates the Repetitive Maximal Frequent Pattern technique to give better
accuracy.

Atul Kumar Pandey et al. proposed a prediction model with 14 attributes [8]. They developed that model using the J48 Decision
Tree for classifying heart disease based on clinical features, against unpruned, pruned, and pruned with reduced error pruning
configurations. They showed that the accuracy of the pruned J48 Decision Tree with the reduced error pruning approach is
better than that of the simple pruned and unpruned approaches. They applied the prediction model to clinical heart disease data with
200 training instances and 103 testing instances using split test mode.
Nidhi Bhatla et al. observed that Neural Networks with 15 attributes outperformed all other
data mining techniques [2]. Another conclusion from the analysis is that the Decision Tree has shown good accuracy with the help of a
genetic algorithm and feature subset selection. This research developed a prototype Intelligent Heart Disease Prediction System
using the data mining techniques Decision Tree, Naïve Bayes and Neural Network. A total of 909 records were obtained from the
Cleveland Heart Disease database. These records were equally divided into two datasets: a training dataset with 455 records
and a testing dataset with 454 records. Various techniques and data mining classifiers that have emerged in
recent years for efficient and effective heart disease diagnosis are described in this work. Here, the Decision Tree performed well with 99.62% accuracy
using 15 attributes. Moreover, in combination with a genetic algorithm and 6 attributes, the Decision Tree showed 99.2% efficiency.
Classification Techniques    Accuracy with 13 attributes    Accuracy with 15 attributes
Naive Bayes                  94.44                          90.74
Decision Tree                96.66                          99.62
Neural Network               99.25                          100

Chaitrali S. Dangare et al. analyzed a prediction system for heart disease using a larger number of attributes [3]. This paper added
two more attributes, obesity and smoking. They described a number of factors that increase the risk of heart disease: high
blood cholesterol, smoking, family history, poor diet, hypertension, high blood pressure, obesity and physical inactivity. The
data mining classification techniques Decision Tree, Naïve Bayes and Neural Network are analyzed on the heart disease database,
and the performance of these techniques is compared based on their accuracy. They used the J48 algorithm for this system; J48
uses a pruning method to build a tree, and this technique gives maximum accuracy on training data. They also used a Naïve Bayes
classifier and a Neural Network for predicting heart disease, and compared the accuracy for both 13 and 15 input attribute values.
V. Manikandan et al. proposed that association rule mining is used to extract the itemset relations [6]. The data classification is based
on MAFIA algorithms, which result in accuracy; the data is evaluated using entropy based cross validation and partition techniques and
the results are compared. MAFIA stands for Maximal Frequent Itemset Algorithm. They used the C4.5 algorithm to show the rank of
heart attack with a Decision Tree. Finally, the heart disease database is clustered using the K-means clustering algorithm, which
extracts the data relevant to heart attack from the database. They used a dataset with 19 attributes, and the goal was to have high
accuracy, high precision and recall metrics.


Techniques                                 Precision    Recall    Accuracy (%)
K-Mean based on MAFIA                      0.78         0.67      74
K-Mean based on MAFIA with ID3             0.80         0.85      85
K-Mean based on MAFIA with ID3 and C4.5    0.82         0.92      92

CONCLUSION
Heart disease is a fatal disease by its nature. This disease causes life threatening complications such as heart attack and death. The
importance of data mining in the medical domain is realized, and steps are taken to apply relevant techniques in disease
prediction. The various research works with some effective techniques done by different people were studied. The observations from
the previous work have led to the deployment of the proposed system architecture for this work. Though various classification
techniques are widely used for disease prediction, the Decision Tree classifier is selected for its simplicity and accuracy. Different
attribute selection measures like Information Gain, Gain Ratio, Gini Index and distance measure can be used.

REFERENCES:
[1] Bramer, M., "Principles of Data Mining", Springer, 2007.
[2] Nidhi Bhatla, Kiran Jyoti, "An Analysis of Heart Disease Prediction using Different Data Mining Techniques", International
Journal of Engineering and Technology, Vol. 1, Issue 8, 2012.
[3] Chaitrali S. Dangare and Sulabha S. Apte, "Improved Study of Heart Disease Prediction Using Data Mining Classification
Techniques", International Journal of Computer Applications, Vol. 47, No. 10, pp. 0975-888, 2012.
[4] Apte & S.M. Weiss, "Data Mining with Decision Tree and Decision Rules", T.J. Watson Research Center,
http://www.research.ibm.com/dar/papers/pdf/fgcsap tewe issue_with_cover.pdf, 1997.
[5] Divya Tomar and Sonali Agarwal, "A Survey on Data Mining Approaches for Healthcare", International Journal of Bio-Science
and Bio-Technology, Vol. 5, No. 5, pp. 241-266, 2013.
[6] V. Manikandan and S. Latha, "Predicting the Analysis of Heart Disease Symptoms Using Medical Data Mining Methods",
International Journal of Advanced Computer Theory and Engineering, Vol. 2, Issue 2, 2013.
[7] Atul Kumar Pandey, Prabhat Pandey, K.L. Jaiswal and Ashok Kumar Sen, "A Novel Frequent Feature Prediction Model for
Heart Disease Diagnosis", International Journal of Software & Hardware Research in Engineering, Vol. 1, Issue 1, September 2013.
[8] Atul Kumar Pandey, Prabhat Pandey, K.L. Jaiswal and Ashok Kumar Sen, "A Heart Disease Prediction Model using Decision
Tree", IOSR Journal of Computer Engineering, Vol. 12, Issue 6, pp. 83-86, Jul.-Aug. 2013.


[9] Dr. Neeraj Bhargava, Dr. Ritu Bhargava, Manish Mathuria, "Decision Tree Analysis on J48 Algorithm for Data Mining",
International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 3, Issue 6, June 2013.
[10] Tina R. Patil, Mrs. S.S. Sherekar, "Performance Analysis of Naïve Bayes and J48 Classification Algorithm for Data
Classification", International Journal of Computer Science and Applications, Vol. 6, No. 2, April 2013.


Survey on Scheduling Algorithms in Cloud Computing


#1 Backialakshmi M., #2 Sathya Sofia A.
#1 PG Scholar, PSNA College of Engg and Tech, Dindigul
#2 Asst. Professor, Department of CSE, PSNA College of Engg and Tech, Dindigul
#1 backialakshmidgl@gmail.com

Abstract: Cloud computing is a general term used to describe a new class of network based computing that takes place over the
internet. The primary benefit of moving to clouds is application scalability. Cloud computing is very beneficial for applications
which share their resources on different nodes. Scheduling tasks is quite challenging in the cloud environment. Usually tasks are
scheduled according to user requirements. New scheduling strategies need to be proposed to overcome the problems posed by the network
properties between users and resources. New scheduling strategies may use some of the conventional scheduling concepts and merge
them with network aware strategies to provide solutions for better and more efficient job scheduling. Scheduling
strategy is a key technology in cloud computing. This paper provides a survey of scheduling algorithms and their working with
respect to resource sharing. We systemize the scheduling problem in cloud computing and present a cloud scheduling hierarchy.

Keywords: Scheduling, Cloud computing, Resource allocation, Efficiency, Utility Computing, Performance.
1. INTRODUCTION
The latest innovations in cloud computing are making our business applications even more mobile and
collaborative, similar to popular consumer apps like Facebook and Twitter. As consumers, we now expect that
the information we care about will be pushed to us in real time, and business applications in the cloud are
heading in that direction as well.
Cloud computing models are shifting. In the cloud/client architecture, the client is a rich application running on
an Internet-connected device, and the server is a set of application services hosted in an increasingly elastically
scalable cloud computing platform. The cloud is the control point and system of record, and applications can
span multiple client devices. The client environment may be a native application or browser-based; the
increasing power of the browser is available to many client devices, mobile and desktop alike.
Robust capabilities in many mobile devices, the increased demand on networks, the cost of networks and the
need to manage bandwidth use create incentives, in some cases, to minimize the cloud application computing
and storage footprint, and to exploit the intelligence and storage of the client device. However, the increasingly
complex demands of mobile users will drive apps to demand increasing amounts of server-side computing and
storage capacity.

1.1 Cloud Architecture


The cloud computing architecture comprises many cloud components, each of which is loosely coupled. We
can broadly divide the cloud architecture into two parts. The front end refers to the client part of the cloud computing
system; it consists of the interfaces and applications that are required to access the cloud computing platforms, e.g.,
a web browser. The back end refers to the cloud itself; it consists of all the resources required to provide
cloud computing services. It comprises huge data storage, virtual machines, security mechanisms, services,
deployment models, servers, etc.

Fig 1. Cloud Architecture

1.2 Resource Allocation


Resource allocation is all about integrating cloud provider activities for utilizing and allocating scarce
resources within the limits of the cloud environment so as to meet the needs of the cloud application. It requires
knowing the type and amount of resources needed by each application in order to complete a user job. The order and time of
allocation of resources are also inputs for an optimal resource allocation.
An important point when allocating resources for incoming requests is how the resources are modeled. There
are many levels of abstraction of the services that a cloud can provide for developers, and many parameters that can be
optimized during allocation. The modeling and description of the resources should consider at least these requirements
in order for the resource allocation to work properly. Cloud resources can be seen as any resource (physical or virtual)
that developers may request from the cloud. For example, developers can have network requirements, such as bandwidth
and delay, and computational requirements, such as CPU, memory and storage.

Fig 2. Schematic Representation

When developing a resource allocation system, one should think about how to describe the resources present in
the cloud. The development of a suitable resource model and description is the first challenge that a resource
allocation system must address. A resource allocation system also faces the challenge of representing the application
requirements, called resource offering and treatment. Also, an automatic and dynamic resource allocation must
be aware of the current status of the cloud resources in real time. Thus, mechanisms for resource discovery and
monitoring are an essential part of this system. These two mechanisms are also the inputs for optimization
algorithms, since it is necessary to know the resources and their status in order to select those that fulfill all the
requirements.
Fig 3. Allocation of Resources (cloud resource modeling and developer requirements as inputs to resource allocation)
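
As an illustration of the matching step described above, the sketch below models nodes and developer requirements as simple records and allocates requests with a greedy first-fit policy; the data classes and the policy are our own simplification, not taken from any of the surveyed papers.

```python
# Minimal sketch: model cloud resources and developer requirements, then
# allocate with a greedy first-fit policy (illustrative only).
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu: int        # free vCPUs
    mem_gb: int     # free memory
    bw_mbps: int    # free bandwidth

@dataclass
class Request:
    app: str
    cpu: int
    mem_gb: int
    bw_mbps: int

def first_fit(nodes, req):
    """Return the first node satisfying every requirement, updating its free capacity."""
    for node in nodes:
        if node.cpu >= req.cpu and node.mem_gb >= req.mem_gb and node.bw_mbps >= req.bw_mbps:
            node.cpu -= req.cpu
            node.mem_gb -= req.mem_gb
            node.bw_mbps -= req.bw_mbps
            return node.name
    return None  # no node can host the request

nodes = [Node("n1", 8, 32, 1000), Node("n2", 16, 64, 1000)]
for r in [Request("web", 4, 8, 200), Request("db", 12, 48, 300), Request("batch", 8, 16, 100)]:
    print(r.app, "->", first_fit(nodes, r))
```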

3. Scheduling Algorithms
3.1. A green energy-efficient scheduling algorithm using the DVFS technique for cloud datacenters:
Chia-Ming Wu, Ruay-Shiung Chang, Hsin-Yu Chan, 2014

The dynamic voltage and frequency scaling (DVFS) technique can dynamically lower the supply voltage
and working frequency to reduce the energy consumption while the performance still satisfies the requirement of a job.
There are two processes in it: the first is to provide the feasible combination or scheduling for a job; the second is to
provide the appropriate voltage and frequency supply for the servers via the DVFS technique.
This technique can reduce the energy consumption of a server when it is in idle mode or under light workload. It
satisfies the minimum resource requirement of a job and prevents the excess use of resources. The simulation
results show that this method can reduce the energy consumption by 5% to 25%.
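
The intuition behind such DVFS savings is that dynamic power scales roughly as C·V²·f, so lowering voltage and frequency trades a longer execution time for a lower energy total. The sketch below illustrates this with made-up operating points; the figures are not taken from the cited paper.

```python
# Rough DVFS illustration: dynamic power ~ C * V^2 * f, energy = power * time,
# and execution time stretches as frequency drops (all values are illustrative).
def run_job(cycles, capacitance, voltage, freq_hz):
    exec_time = cycles / freq_hz                    # seconds to finish the job
    power = capacitance * voltage ** 2 * freq_hz    # dynamic power in watts
    return power * exec_time, exec_time             # (energy in joules, time)

cycles = 2e9                                        # work expressed in CPU cycles
high = run_job(cycles, 1e-9, 1.2, 2.0e9)            # full voltage/frequency
low = run_job(cycles, 1e-9, 1.1, 1.6e9)             # scaled down, still meets the deadline
print("high V/f: %.2f J in %.2f s" % high)
print("low  V/f: %.2f J in %.2f s" % low)
print("energy saving: %.0f%%" % (100 * (1 - low[0] / high[0])))
```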
3.2. A new multi-objective bi-level programming model for energy and locality aware multi-job scheduling in cloud computing:
Xiaoli Wang, Yuping Wang, Yue Cui, 2014

This programming model is based on MapReduce to improve the energy efficiency of servers. First, the variation of
energy consumption with the performance of servers is taken into account. Second, data locality can be adjusted
dynamically according to the current network state. Last but not least, it considers that task-scheduling strategies
depend directly on data placement policies.
This algorithm is proved to be much more effective than the Hadoop default scheduler and the Fair Scheduler in
improving servers' energy efficiency.

3.3. Cost-efficient task scheduling for executing large programs in the cloud:
Sen Su, Jian Li, Qingjia Huang, Xiao Huang, Kai Shuang, Jie Wang, 2013

A cost-efficient task-scheduling algorithm using two heuristic strategies is proposed. The first strategy dynamically maps
tasks to the most cost-efficient VMs based on the concept of Pareto dominance. The second strategy, a
complement to the first, reduces the monetary costs of non-critical tasks. This algorithm is evaluated
with extensive simulations on both randomly generated large DAGs and real-world applications. Further
improvements can be made using new optimization techniques and incorporating penalties for violating
consumer-provider contracts.
3.4. Priority Based Job Scheduling Techniques in Cloud Computing: A Systematic Review:
Swachil Patel, Upendra Bhoi, 2013

Job scheduling in cloud computing mainly focuses on improving the efficient utilization of resources, that is,
bandwidth and memory, and on reducing completion time. There are several multi-criteria decision-making
(MCDM) and multi-attribute decision-making (MADM) approaches which are based on mathematical modeling. This PJSC
is based on the Analytical Hierarchy Process (AHP).
A modified prioritized deadline based scheduling algorithm (MPDSA) is proposed using a project management
algorithm for efficient job execution with the deadline constraint of users' jobs. MPDSA executes jobs with the closest
deadline time delay in a cyclic manner using dynamic time quantum.
There are several issues related to priority based job scheduling algorithms, such as complexity, consistency and
finish time.
3.5. CLPS-GA: A case library and Pareto solution-based hybrid genetic algorithm for energy aware cloud service scheduling:
Ying Feng, Lin Zhang, T.W. Liao, 2014

On the basis of the classic multi-objective genetic algorithm, a case library and Pareto solution based hybrid
genetic algorithm (CLPS-GA) is proposed to solve the model. The major components of CLPS-GA include a
multi-parent crossover operator (MPCO), a two-stage algorithm structure, and a case library. Experimental
results have verified the effectiveness of CLPS-GA in terms of convergence, stability, and solution diversity.
3.6. Scheduling Scientific Workflows Elastically for Cloud Computing:
Cui Lin, Shiyong Lu, 2011

It proposes the SHEFT algorithm (Scalable-Heterogeneous-Earliest-Finish-Time algorithm) to schedule
workflows for a cloud computing environment. SHEFT is an extension of the HEFT algorithm, which is applied
for mapping a workflow application to a bounded number of processors.
These workflows are scheduled by the HEFT and SHEFT algorithms, and the workflow makespan of the two
algorithms is compared as the size of the workflows increases.
3.7. Job scheduling algorithm based on Berger model in cloud environment:
Baomin Xu, Chunyan Zhao, Enzhao Hu, Bin Hu et al., 2011

The Berger model of distributive justice is based on expectation states. It is a series of distribution theories of
social wealth. Based on the idea of the Berger model, two fairness constraints of job scheduling are established in
cloud computing. The job scheduling is implemented on the CloudSim platform.
The proposed algorithm in this paper is an effective implementation of user tasks, with better fairness. A future
enhancement is to build a fuzzy neural network of the QoS feature vector of tasks and the parameter vector of
resources based on the non-linear mapping relationship between QoS and resources.

3.8. Efficient dynamic task scheduling in virtualized data centers with fuzzy prediction:
Xiangzhen Kong, Chuang Lin, Yixin Jiang, Wei Yan, Xiaowen Chu et al., 2011

The general model of task scheduling in VDCs is built by MSQMS-LQ, and the problem is formulated as an
optimization problem with two objectives: average response time and availability satisfaction percentage. Based
on the fuzzy prediction systems, an online dynamic task scheduling algorithm named SALAF is proposed. The
experimental results show that the proposed algorithm could efficiently improve the total availability of VDCs
while maintaining good responsiveness performance.
Considering the cost of consolidation, there exists an optimal consolidation ratio in a VDC that may be related
to the hardware resources and the workload, which remains an open issue.
3.9. Policy based resource allocation in IaaS cloud:
Amit Nathani, Sanjay Chaudhary, Gaurav Somani et al., 2012

Haizea uses resource leases as the resource allocation abstraction and implements these leases by allocating
virtual machines (VMs). An approximation algorithm is proposed which minimizes the number of allocated
resources that need to be reserved for a batch of tasks. When swapping and preemption both fail to schedule a
lease, the proposed algorithm applies the concept of backfilling.
The results show that it maximizes resource utilization and acceptance of leases compared to the existing
algorithm of Haizea. Backfilling has the disadvantage of requiring more preemption, which increases the overall
overhead of the system.
3.10. Honey bee behavior inspired load balancing of tasks in cloud computing environments:
Dhinesh Babu L.D., P. Venkata Krishna et al., 2013

HBB-LB aims to achieve a well balanced load across virtual machines for maximizing the throughput. It proposes
a load balancing technique for cloud computing environments based on the behavior of the honey bee foraging strategy.
Honey bee behavior inspired load balancing improves the overall throughput of processing, and priority based
balancing focuses on reducing the waiting time of tasks in a VM queue.
A task removed from an overloaded VM has to find a suitable underloaded VM. There are two possibilities: either it finds
such a VM set, which is a positive signal, or it does not find a suitable VM, which is a negative signal.
HBB-LB is more efficient, with fewer task migrations, when compared with the DLB and HDLB
techniques. This algorithm can be extended further by considering QoS factors.
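
A much-simplified sketch of the overloaded/underloaded rebalancing idea is shown below; the threshold and the target-selection rule are our own simplification of the honey-bee foraging analogy, not the HBB-LB algorithm itself.

```python
# Simplified sketch of the core idea: move tasks off overloaded VMs onto
# underloaded ones (threshold and selection rule are our simplification).
def rebalance(vm_loads, tasks_on_vm, threshold=1.2):
    """vm_loads: {vm: load}; tasks_on_vm: {vm: [(task, load), ...]}."""
    avg = sum(vm_loads.values()) / len(vm_loads)
    overloaded = [vm for vm, load in vm_loads.items() if load > threshold * avg]
    for vm in overloaded:
        while vm_loads[vm] > avg and tasks_on_vm[vm]:
            task, load = tasks_on_vm[vm].pop()           # task removed from the overloaded VM
            target = min(vm_loads, key=vm_loads.get)     # "positive signal": a suitable underloaded VM
            if target == vm:
                break                                    # "negative signal": nowhere better to go
            tasks_on_vm[target].append((task, load))
            vm_loads[vm] -= load
            vm_loads[target] += load
    return vm_loads

loads = {"vm1": 9.0, "vm2": 2.0, "vm3": 4.0}
tasks = {"vm1": [("t1", 3.0), ("t2", 2.0), ("t3", 4.0)], "vm2": [("t4", 2.0)], "vm3": [("t5", 4.0)]}
print(rebalance(loads, tasks))
```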


3.11. Morpho: A decoupled MapReduce framework for elastic cloud computing:
Lu Lu, Xuanhua Shi, Hai Jin, Qiuyue Wang, Daxing Yuan, Song Wu, 2014

To address the problems of frequently loading and running HDFS in virtual clusters and downloading and
uploading data between virtual clusters and physical machines, Morpho uniquely proposes a decoupled MapReduce
mechanism that decouples HDFS from computation in a virtual cluster and loads it onto physical machines
permanently. Morpho also achieves high performance by two complementary strategies for data placement and
VM placement, which can provide better map and reduce input locality.
Evaluation is done using two metrics, job execution time and cross-rack data transfer amount. Nearly 62% speedup
of job execution time and a significant reduction in network traffic are achieved by this method.
3.12. CCBKE - Session key negotiation for fast and secure scheduling of scientific applications in cloud computing:
Chang Liu et al., Xuyun Zhang, Chi Yang, Jinjun Chen, 2013

Cloud Computing Background Key Exchange (CCBKE) is a novel authenticated key exchange scheme that aims at
efficient security-aware scheduling of scientific applications. This scheme is designed based on the commonly-used
Internet Key Exchange (IKE) scheme and a randomness-reuse strategy. The data set encryption techniques used are
the block cipher AES in Galois Counter Mode (GCM) with 64k tables and the stream cipher Salsa20/12.
This scheme improves efficiency by dramatically reducing time consumption and computation load without
sacrificing the level of security. This scheme can be extended in future to improve the efficiency of symmetric-key
encryption towards more efficient security-aware scheduling.
3.13. A Ranking Chaos Algorithm for dual scheduling of cloud service and computing resource in private cloud:
Yuanjun Laili, Fei Tao, Lin Zhang, Ying Cheng, Yongliang Luo, Bhaba R. Sarker, 2013

The combination of Service Composition Optimal Selection (SCOS) and Optimal Allocation of Computing
Resources (OACR) is known as dual scheduling. For addressing the large-scale dual scheduling of Cloud Services
and Computing Resources (DS-CSCR) problem, a new Ranking Chaos Optimization (RCO) is proposed.
In the RCO algorithm, an individual chaos operator was designed, and then a new adaptive ranking selection was
introduced to control the state of the population in each iteration. Moreover, dynamic heuristics were also defined and
introduced to guide the chaos optimization.
Performance in terms of searching ability, time complexity and stability in solving the DS-CSCR problem is
optimal with the use of the RCO algorithm, but the design of the heuristic function for specific problems in the
dynamic heuristic operator is complex and hard.

3.14. Analysis and Performance Assessment of CPU Scheduling Algorithms in Cloud using CloudSim:
Monica Gahlawat, Priyanka Sharma, 2013

This paper analyzes and evaluates the performance of various CPU scheduling algorithms in the cloud environment using
CloudSim. Shortest job first and priority scheduling algorithms are beneficial for real time applications, because with
these algorithms such clients can get precedence over other clients in the cloud environment.
It deals only with three algorithms, FCFS, SJF and priority scheduling. This survey can be extended to other adaptive
and dynamic algorithms suited to the virtual environment of the cloud.
3.15. An Algorithm to Optimize the Traditional Backfill Algorithm Using Priority of Jobs for Task Scheduling Problems in Cloud Computing:
Lal Shri Vratt Singh, Jawed Ahmed, Asif Khan, 2014

This paper proposes an efficient algorithm, P-Backfill, which is based on the traditional Backfill algorithm and uses
prioritization of jobs for achieving optimal scheduling in cloud systems. The dynamic meta-scheduler deploys the
arriving jobs using the P-Backfill algorithm to utilize the cloud resources efficiently with less waiting time.
P-Backfill starts the execution of the jobs according to their priority status. It also uses a pipelining mechanism in
order to execute multiple jobs at a time. The P-Backfill algorithm is more efficient than other traditional algorithms
such as traditional Backfill, FCFS, SJF, LJF and Round Robin, since it selects the jobs according to their priority levels.
3.16. Efficient Optimal Algorithm of Task Scheduling in Cloud Computing Environment:
Dr. Amit Agarwal, Saloni Jain, 2014

An optimized algorithm for task scheduling based on a genetic simulated annealing algorithm is proposed. Here,
QoS and response time are achieved by executing the high priority (deadline based) jobs first, by estimating job
completion time; the priority jobs are spawned from the remaining jobs with the help of the task scheduler.
Three scheduling algorithms are compared: first come first serve, round robin scheduling and the generalized
priority algorithm. In FCFS, the resource with the smallest waiting queue time is selected for the incoming task.
The Round Robin (RR) algorithm focuses on fairness. In the generalized priority algorithm, tasks are initially
prioritized according to their size, such that the one with the largest size has the highest rank. The experimental
results show that the generalized priority algorithm is more efficient than the FCFS and Round Robin algorithms.
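
To make the comparison concrete, the sketch below computes the average waiting time of one hypothetical batch of jobs under FCFS, SJF and priority ordering; all jobs are assumed to arrive at time zero, and the burst times and priorities are made up.

```python
# Average waiting time of a hypothetical job batch under FCFS, SJF and priority order.
jobs = [  # (name, burst_time, priority) -- a lower priority value means more urgent
    ("j1", 8, 3), ("j2", 2, 1), ("j3", 5, 4), ("j4", 1, 2),
]

def avg_waiting(ordered):
    waited, clock = [], 0
    for _, burst, _ in ordered:
        waited.append(clock)     # time this job spends waiting before it starts
        clock += burst
    return sum(waited) / len(waited)

print("FCFS    :", avg_waiting(jobs))
print("SJF     :", avg_waiting(sorted(jobs, key=lambda j: j[1])))
print("Priority:", avg_waiting(sorted(jobs, key=lambda j: j[2])))
```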
3.17. Comparative Based Analysis of Scheduling Algorithms for Resource Management in Cloud Computing Environment:
C T Lin et al., 2013

The resource scheduling in this paper is based on parameters like cost, performance, resource utilization,
time, priority, physical distances, throughput, bandwidth and resource availability.
The scheduling algorithms based on the cost factor include the deadline distribution algorithm, backtracking, the
improved activity based cost algorithm and compromised time-cost. The algorithms based on throughput
include Extended Min-Min and modified ant colony optimization. Earliest deadline, FCFS and Round Robin are time
based.

The advantage of this comparative study is that, as per the requirements of the consumers and service providers,
they can select the appropriate class of scheduling algorithms for the different types of services required. This study
may further be used for optimization of different algorithms for better resource management in the cloud computing
environment.

3.18. Fairness As Justice Evaluator In Scheduling Cloud Resources - A Survey:
Anuradha, S. Rajasulochana, 2013

Fairness in scheduling improves efficiency and provides optimal resource allocation. The fairness constraint
proposed by the Berger model plays an important role in determining the fair allocation of resources by means of a
justice evaluation function. An efficient scheduler should provide fair allocation of resources in a way that ensures no
task is starving for resources.
Heuristic algorithms exist for both static mapping and dynamic mapping. QoS based heuristic algorithms for static
mapping are the min-min algorithm, the max-min algorithm, opportunistic load balancing, and suffrage heuristics.
Dynamic scheduling includes immediate mode heuristic algorithms and batch mode heuristic algorithms.
Backfilling algorithms are used to overcome the problems of starvation and waiting time. A backfilling strategy
may or may not schedule the jobs based on priority, which is both its advantage and its disadvantage.
3.19. A Survey of Various QoS-Based Task Scheduling Algorithms in Cloud Computing Environment:
Ronak Patel, Hiren Mer, 2013

QoS is the collective effect of service performance, which determines the degree of satisfaction of a user with the
services. It is expressed in terms of completion time, latency, execution price, packet loss rate, throughput and
reliability.
The task scheduling algorithm based on QoS-driven scheduling in cloud computing (TS-QoS) computes the priority
of a task according to the special attributes of the tasks, and then sorts tasks based on priority. It solves the starvation
problem and follows the FCFS principle.
3.20. Resource management for allocating Infrastructure as a Service (IaaS) in cloud computing: A survey:
Sunilkumar S. Manvi, Gopal Krishna Shyam et al.

This paper focuses on some of the important resource management techniques such as resource provisioning,
resource allocation, resource mapping and resource adaptation. The common issues associated with IaaS in
cloud systems are virtualization and multi-tenancy, resource management, network management, data
management, APIs and interoperability.
Performance metrics are used to compare different works under resource management techniques. The
metrics considered are reliability, deployment ease, quality of service, delay and control overhead.
4. Experimental Results

From these various scheduling techniques we choose the most effective task scheduling algorithm. The algorithm is
implemented with the help of a simulation tool (CloudSim), and the result obtained reduces the total turnaround
time and also increases the performance. This algorithm deals with parameters like throughput, makespan
and cost.

Fig 4. Makespan Vs Jobs

Fig 5. Throughput Vs Jobs

Fig 6. Cost Vs Jobs

Thus the experimental results show that the scheduling algorithms improve the makespan as well as the
throughput of the resources in the cloud environment.
The cloud service providers (CSPs) are those who provide cloud services to the end users. Each CSP promotes various
scheduling techniques based on their compatibility and availability. A comparison of various CSPs and the
scheduling algorithms used by their organizations is given below.

Cloud Service Providers    Open Source    Scheduling Algorithms
Eucalyptus                 Yes            Greedy first fit and Round robin
Open Nebula                Yes            Rank matchmaker scheduling, preemption scheduling
Amazon EC2                 No             Round robin, weighted round robin, least connections, weighted least connections
Rackspace                  Yes            Virtual machine schedulers PBS and SGE
Nimbus                     Yes            Xen, swarm, genetic
RedHat                     Yes            BFS, DFS
lunacloud                  Yes            Round robin

Fig 7. Comparison of CSPs

5. Conclusion
In this paper, we have studied the problems in scheduling and also various kinds of scheduling algorithms.
The scheduling algorithm for a datacenter should be chosen based on the requirements of the datacenter and the
kind of data it stores. We have analyzed the relation between the data that hits the datacenter and the
scheduling algorithm that is required to promote resource allocation in cloud datacenters. This survey has
provided us a crystal clear idea about the wide dimensions of scheduling resources and their functions.
REFERENCES:
[1] Chia-Ming Wu, Ruay-Shiung Chang, Hsin-Yu Chan, "A green energy-efficient scheduling algorithm using the DVFS technique
for cloud datacenters", Future Generation Computer Systems 37 (2014) 141-147.
[2] Xiaoli Wang, Yuping Wang, Yue Cui, "A new multi-objective bi-level programming model for energy and locality aware
multi-job scheduling in cloud computing", Future Generation Computer Systems 36 (2014) 91-101.
[3] Sen Su, Jian Li, Qingjia Huang, Xiao Huang, Kai Shuang, Jie Wang, "Cost-efficient task scheduling for executing large
programs in the cloud", Parallel Computing 39 (2013) 177-188.
[4] Swachil Patel, Upendra Bhoi, "Priority Based Job Scheduling Techniques In Cloud Computing", International Journal of
Scientific and Technology Research, Volume 2, Issue 11, November 2013, ISSN 2277-8616.
[5] Ying Feng, Lin Zhang, T.W. Liao, "CLPS-GA: A case library and Pareto solution-based hybrid genetic algorithm for energy
aware cloud service scheduling", Applied Soft Computing 19 (2014) 264-279.
[6] Cui Lin, Shiyong Lu, "Scheduling Scientific Workflows Elastically for Cloud Computing", IEEE 4th International Conference on
Cloud Computing, 2011.
[7] Baomin Xu, Chunyan Zhao, Enzhao Hu, Bin Hu et al., "Job scheduling algorithm based on Berger model in cloud
environment", Advances in Engineering Software 42 (2011) 419-425.
[8] Xiangzhen Kong, Chuang Lin, Yixin Jiang, Wei Yan, Xiaowen Chu et al., "Efficient dynamic task scheduling in virtualized
data centers with fuzzy prediction", Journal of Network and Computer Applications 34 (2011) 1068-1077.
[9] Amit Nathani, Sanjay Chaudhary, Gaurav Somani et al., "Policy based resource allocation in IaaS cloud", Future
Generation Computer Systems 28 (2012) 94-103.
[10] Dhinesh Babu L.D., P. Venkata Krishna et al., "Honey bee behavior inspired load balancing of tasks in cloud computing
environments", Applied Soft Computing 13 (2013) 2292-2303.
[11] Lu Lu, Xuanhua Shi, Hai Jin, Qiuyue Wang, Daxing Yuan, Song Wu, "Morpho: A decoupled MapReduce framework for elastic
cloud computing", Future Generation Computer Systems 36 (2014) 80-90.
[12] Chang Liu et al., Xuyun Zhang, Chi Yang, Jinjun Chen, "CCBKE - Session key negotiation for fast and secure
scheduling of scientific applications in cloud computing", Future Generation Computer Systems 29 (2013) 1300-1308.
[13] Yuanjun Laili, Fei Tao, Lin Zhang, Ying Cheng, Yongliang Luo, Bhaba R. Sarker, "A Ranking Chaos Algorithm for
dual scheduling of cloud service and computing resource in private cloud", Computers in Industry 64 (2013) 448-463.
[14] Monica Gahlawat, Priyanka Sharma, "Analysis and Performance Assessment of CPU Scheduling Algorithms in Cloud using
CloudSim", International Journal of Applied Information Systems (IJAIS), ISSN 2249-0868, Volume 5, No. 9, July 2013.
[15] Lal Shri Vratt Singh, Jawed Ahmed, Asif Khan, "An Algorithm to Optimize the Traditional Backfill Algorithm Using Priority of
Jobs for Task Scheduling Problems in Cloud Computing", International Journal of Computer Science and Information Technologies,
Vol. 5 (2), 2014, pp. 1671-1674.
[16] Dr. Amit Agarwal, Saloni Jain, "Efficient Optimal Algorithm of Task Scheduling in Cloud Computing Environment",
International Journal of Computer Trends and Technology (IJCTT), Volume 9, Number 7, March 2014.
[17] C T Lin et al., "Comparative Based Analysis of Scheduling Algorithms for Resource Management in Cloud Computing
Environment", International Journal of Computer Science and Engineering, Vol. 1(1), July 2013, pp. 17-23.
[18] Anuradha, S. Rajasulochana, "Fairness As Justice Evaluator In Scheduling Cloud Resources - A Survey", International Journal
of Computer Engineering & Science, ISSN 2231-6590, Nov. 2013.
[19] Ronak Patel, Hiren Mer, "A Survey Of Various QoS-Based Task Scheduling Algorithms In Cloud Computing
Environment", International Journal of Scientific and Technology Research, Volume 2, Issue 11, November 2013, ISSN 2277-8616.
[20] Sunilkumar S. Manvi, Gopal Krishna Shyam et al., "Resource management for allocation infrastructure as a Service (IaaS) in cloud
computing: A survey", Journal of Network and Computer Applications 41 (2014) 424-440.


Parameter Optimization and Evaluation of Hydrodynamic Functions in the Wet Range of Water Availability in Different Soils
R. K. Malik 1, Deepak Kumar 2

1 Professor of Hydrology and Water Resources Engineering, Amity School of Engineering and Technology, Amity University
Gurgaon, Haryana, India
2 Assistant Professor of Mechanical Engineering, Amity School of Engineering and Technology, Amity University Gurgaon, Haryana,
India
E-mail: rkmalik@ggn.amity.edu

Abstract - The knowledge of the soil hydrodynamic functions is essential for modeling the soil water dynamics and the different
components of the water balance. The major contribution of these components occurs during the wet range of water availability in the soil
profile. The functional forms of the most commonly used theoretical hydrodynamic functions of Brooks-Corey and van Genuchten,
coupled with the Burdine and Mualem hydraulic conductivity models, were developed for coarse, medium, and moderately fine-textured
soils. For developing these functional forms, parameterization and assessment of the fitting performance of the corresponding soil water
retention functions were carried out using the RETC computer code employing non-linear least-squares optimization. It was observed that for the
wet range of water availability in the loam and silty clay loam soils, the best performance was given by the Brooks-Corey soil water
retention function, followed by the van Genuchten functions with m = 1 - 1/n and m = 1 - 2/n. However, for this range of water availability,
the van Genuchten function with m = 1 - 1/n gave a slightly better performance in sand in comparison to the other functions, which gave the
same performance. It was observed that as the sand content of these soils decreases, the hydraulic conductivity and soil water
diffusivity at a particular soil water content also decrease. The hydraulic conductivity predicted by the Mualem-van Genuchten
function was observed to be less than that predicted by the Mualem-Brooks-Corey function, and the same trend was observed for the soil
water diffusivity of these soils.
Key Words: Soil water retention functions, Brooks-Corey, van Genuchten, RETC code, parameterization, fitting performance,
Burdine and Mualem models, hydraulic conductivity, soil water diffusivity.
INTRODUCTION : The knowledge of hydrodynamic functions of soil water retention, hydraulic conductivity and soil water
diffusivity is essential for modeling the different components of the water balance i.e. internal drainage and evaporation from the soil
profile, capillary contribution to it and water storage changes within it as well as solute and contaminant transport to and from the
groundwater. These processes are affected mainly by the texture and degree of wetness of the soil profile. For in-situ estimation of
hydraulic conductivity of the unsaturated soil, the direct methods of the plane of zero flux [1], constant flux vertical time domain reflectometry
[2] and the instantaneous profile method [1] were used, but Durner and Lipsius [3] reported that these methods are considerably more
difficult and less accurate, and they further suggested the use of an indirect method of estimation using a soil water retention function
developed from easily measured soil water retention data. Various soil water retention functions, relating soil water
content to soil water suction head, have been proposed [4-14]. Some of these functions provided better
predictions but are difficult to incorporate into the statistical pore-size distribution models for developing the analytical hydrodynamic
functions. Abrisqueta et al. [15] reported that there is a wide body of literature in which the hydrodynamic behavior of soils has been
described based on their water retention functions for the entire range of saturation. Leij et al. [16] and Assouline and Tartakovsky [17]
reported that among a variety of soil water retention functions which were evaluated for the entire range of soil water from saturation
to oven-dryness, the functions proposed by Brooks-Corey and van Genuchten are the most popular for use in numerical modeling of
water flow and solute transport within unsaturated porous media. These two empirical retention functions, with a specific number
of parameters fitting the observed soil water retention data to different extents, can be embedded into the statistical pore-size
distribution-based hydraulic conductivity models of either Burdine [18] or Mualem [19] for developing the corresponding predictive
theoretical unsaturated hydraulic conductivity functions, having the same parameters as the corresponding soil water retention
functions, and further for developing the soil water diffusivity functions. Rossi and Nimmo [13] reported that these soil water retention
functions performed differently in the wet, middle and dry ranges of water content from saturation to oven-dryness in the soil profile.
functions performed differently in the wet, middle and dry ranges of water content from saturation to oven-dryness in the soil profile.
Major contribution of these processes as stated above occur in the moist (wet) range of soil water and such moist conditions prevail for
most of the time during the periods immediately following each rainfall event and under drip irrigation. So in this study, the
hydrodynamic functions in the wet range of water availability in different soils were evaluated for developing the functional
unsaturated hydraulic conductivity and soil water diffusivity functions for further use in modeling the soil water dynamics.
Materials and Methods
Soil water retention data
The soil water retention data of Kalane et al. [20] at the soil water suction heads of 0, 20, 40, 60, 80, 100, 120, 150 and 180 cm (taken as
positive) for different samples from the soil textural classes of sand (coarse texture), loam (medium texture) and silty clay loam
(moderately fine texture), collected from different locations in Haryana, India, and the corresponding soil water contents were
utilized for optimizing the parameters of the soil water retention functions and for evaluating the hydrodynamic functions. According
to the USDA textural classification of soils, the textural class of sand has proportions of sand, silt and clay ranging from 86 to 100, 0 to 14
and 0 to 10 percent, respectively, while these constituents range from 23 to 52, 28 to 50 and 7 to 27 percent in the loam soil, respectively, and the
silty clay loam soil has these ranging from 0 to 20, 40 to 73 and 27 to 40 percent, respectively.
Soil water retention functions
The empirical soil water retention functions proposed by van Genuchten [7], with fixed shape parameters (m = 1 - 1/n and m = 1 - 2/n),
and by Brooks-Corey [4] were used in this analysis. van Genuchten proposed the empirical sigmoidal-shaped, continuous (smooth),
four-parametric power-law function as:

Se = [1 + (α_VG h)^n]^(-m)     (1)

where Se [= (θ(h) - θ_r)/(θ_s - θ_r)] is the dimensionless effective saturation, and θ(h), θ_s and θ_r are the water content at the soil water
suction head h, the saturated water content and the residual water content, respectively. The parameter α_VG is an empirical constant [L^-1]. In this function,
the four unknown parameters are θ_r, θ_s, α_VG and n. The dimensionless parameters n and m (fixed with each other) are the parameters
related to the pore-size distribution affecting the shape of the function. For developing the closed-form (analytical) function of the
unsaturated hydraulic conductivity by coupling the van Genuchten soil water retention function with the hydraulic conductivity
models of either Burdine or Mualem, the conditions of the fixed shape parameters m = 1 - 2/n and m = 1 - 1/n need to be satisfied,
respectively. However, Durner [21] reported that these constraints of fixing the shape parameters eliminate some of the flexibility.
Brooks and Corey proposed the empirical four-parametric power-law soil water retention function as:

Se = (α_BC h)^(-λ_BC)     (2)
where α_BC is an empirical parameter [L⁻¹] representing the desaturation rate of soil water; it is related to the pore-size distribution and its inverse is regarded as the height of the capillary fringe. The parameter λ_BC is the pore-size distribution index affecting the slope of this function and characterizes the width of the pore-size distribution. In this function, the four unknown parameters are θ_r, θ_s, α_BC and λ_BC.
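A minimal Python sketch of Eqs. (1) and (2) is given below for readers who wish to evaluate the two retention functions numerically. It assumes the usual convention that the Brooks-Corey relation applies for α_BC·h > 1 (with S_e = 1 in the saturated range) and that θ = θ_r + (θ_s − θ_r)·S_e; the parameter values in the example are those later optimized for sand (Table 1), while the choice of suction heads is arbitrary.

```python
import numpy as np

def se_van_genuchten(h, alpha, n, m=None):
    """Effective saturation from Eq. (1): Se = [1 + (alpha*h)^n]^(-m).
    h is the suction head (positive, cm); if m is not supplied, the
    Mualem-type constraint m = 1 - 1/n is applied."""
    if m is None:
        m = 1.0 - 1.0 / n
    return (1.0 + (alpha * h) ** n) ** (-m)

def se_brooks_corey(h, alpha, lam):
    """Effective saturation from Eq. (2): Se = (alpha*h)^(-lambda) when
    alpha*h > 1, and Se = 1 otherwise (saturated range)."""
    h = np.asarray(h, dtype=float)
    return np.where(alpha * h > 1.0, (alpha * h) ** (-lam), 1.0)

def water_content(se, theta_r, theta_s):
    """Water content from effective saturation: theta = theta_r + (theta_s - theta_r)*Se."""
    return theta_r + (theta_s - theta_r) * se

# Example with the sand parameters of Table 1 and arbitrary suction heads (cm)
h = np.array([20.0, 60.0, 120.0, 180.0])
print(water_content(se_van_genuchten(h, alpha=0.0712, n=1.8595), 0.04, 0.40))
print(water_content(se_brooks_corey(h, alpha=0.1062, lam=0.5969), 0.02, 0.40))
```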
Estimation of hydraulic conductivity functions
Based on the statistical pore-size distribution in the soil medium, the relative hydraulic conductivity function is defined by a
mathematical expression [22] as:

Kr Se =

se

h () d

s h () d
r

(3)

The parameter l is the tortuosity factor, which characterizes the combined effects of pore connectivity and flow path, and β and γ are constants. Eq. (3) reduces to the Burdine model when β = 2 and γ = 1 and to the Mualem model when β = 1 and γ = 2. K_r(S_e) (= K(S_e)/K_s) is the dimensionless relative unsaturated hydraulic conductivity and K_s [LT⁻¹] is the saturated hydraulic conductivity.
Coupling of the Brooks-Corey soil water retention function with the Burdine and Mualem models yielded the corresponding S_e-based hydraulic conductivity functions, respectively, as:

K(S_e) = K_s S_e^(l + 1 + 2/λ_BC)    (4)

K(S_e) = K_s S_e^(l + 2 + 2/λ_BC)    (5)

Van Genuchten coupled his soil water retention function S_e(h) with the Mualem model, and its integration led to the unsaturated hydraulic conductivity in the form of an Incomplete Beta Function for the general case of independent parameters m and n:

K(S_e) = K_s S_e^l [I_x(m + 1/n, 1 − 1/n)]²    (6)

where I_x(m + 1/n, 1 − 1/n) is the Incomplete Beta Function and x = S_e^(1/m). Under the condition m = 1 − 1/n, Eq. (6), when integrated, reduces to the closed-form unsaturated hydraulic conductivity:

K(S_e) = K_s S_e^l [1 − (1 − S_e^(1/m))^m]²    (7)
The Burdine-based hydraulic conductivity function with independent m and n parameters is expressed as:

K(S_e) = K_s S_e^l I_x(m + 2/n, 1 − 2/n)    (8)

where I_x(m + 2/n, 1 − 2/n) is the Incomplete Beta Function and x = S_e^(1/m). The integration of Eq. (8) under the constraint m = 1 − 2/n led to the analytical form of the unsaturated hydraulic conductivity as:
K(S_e) = K_s S_e^l [1 − (1 − S_e^(1/m))^m]    (9)
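The closed-form conductivity functions can be coded directly; the sketch below implements Eqs. (4), (5), (7) and (9) under the tortuosity values used in this study (l = 2.0 for Burdine, l = 0.5 for Mualem). The saturated conductivity K_s in the example is only a placeholder, since the measured values of Kalane et al. [20] are not repeated here.

```python
import numpy as np

def k_mualem_vg(se, ks, n, l=0.5):
    """Mualem-van Genuchten conductivity, Eq. (7), with m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return ks * se ** l * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

def k_burdine_vg(se, ks, n, l=2.0):
    """Burdine-van Genuchten conductivity, Eq. (9), with m = 1 - 2/n."""
    m = 1.0 - 2.0 / n
    return ks * se ** l * (1.0 - (1.0 - se ** (1.0 / m)) ** m)

def k_mualem_bc(se, ks, lam, l=0.5):
    """Mualem-Brooks-Corey conductivity, Eq. (5): K = Ks * Se^(l + 2 + 2/lambda)."""
    return ks * se ** (l + 2.0 + 2.0 / lam)

def k_burdine_bc(se, ks, lam, l=2.0):
    """Burdine-Brooks-Corey conductivity, Eq. (4): K = Ks * Se^(l + 1 + 2/lambda)."""
    return ks * se ** (l + 1.0 + 2.0 / lam)

se = np.linspace(0.1, 1.0, 4)
KS = 500.0  # placeholder saturated conductivity (cm/day); not a measured value
print(k_mualem_vg(se, KS, n=1.8595))
print(k_mualem_bc(se, KS, lam=0.5969))
```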

Estimation of soil water diffusivity functions


The soil water diffusivity D(S_e) [L² T⁻¹] was derived by multiplying K(S_e) by the inverse of the soil water capacity C(S_e) [L⁻¹]. The C(S_e) is the first derivative of the soil water retention function, i.e. dθ/dh. For the Brooks-Corey soil water retention function, C(S_e) was derived as:

C(S_e) = λ_BC α_BC (θ_s − θ_r) S_e^((λ_BC + 1)/λ_BC)    (10)

Multiplying Eqs. (4) and (5) by the inverse of Eq. (10) resulted in the Burdine and the Mualem-based Brooks-Corey soil water
diffusivity functions, respectively as:
D(S_e) = [K_s / (λ_BC α_BC (θ_s − θ_r))] S_e^(l + 1/λ_BC)    (11)

D(S_e) = [K_s / (λ_BC α_BC (θ_s − θ_r))] S_e^(l + 1 + 1/λ_BC)    (12)

For the van Genuchten soil water retention function, the soil water capacity C(S_e) was derived as:

C(S_e) = m n α_VG (θ_s − θ_r) S_e^(1/m) (1 − S_e^(1/m))^m    (13)

Multiplying Eqs. (7) and (9) by the inverse of Eq. (13) yielded the Mualem and Burdine- based van Genuchten soil water diffusivity
functions, respectively as:
D(S_e) = [K_s (1 − m) / (α_VG m (θ_s − θ_r))] S_e^(l − 1/m) [(1 − S_e^(1/m))^(−m) + (1 − S_e^(1/m))^m − 2]    (14)

D(S_e) = [K_s (1 − m) / (2 α_VG m (θ_s − θ_r))] S_e^(l − 1/m) [(1 − S_e^(1/m))^(−m) − 1]    (15)
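For completeness, a sketch of the Mualem-based diffusivity functions is given below; it follows Eqs. (12) and (14) as reconstructed above, again with a placeholder K_s and the Table 1 sand parameters, and is valid for S_e < 1 (Eq. (14) is singular at full saturation).

```python
import numpy as np

def d_mualem_vg(se, ks, alpha, n, theta_s, theta_r, l=0.5):
    """Mualem-van Genuchten diffusivity, Eq. (14), with m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    pref = ks * (1.0 - m) / (alpha * m * (theta_s - theta_r))
    s = se ** (1.0 / m)
    return pref * se ** (l - 1.0 / m) * ((1.0 - s) ** (-m) + (1.0 - s) ** m - 2.0)

def d_mualem_bc(se, ks, alpha, lam, theta_s, theta_r, l=0.5):
    """Mualem-Brooks-Corey diffusivity, Eq. (12)."""
    pref = ks / (lam * alpha * (theta_s - theta_r))
    return pref * se ** (l + 1.0 + 1.0 / lam)

se = np.array([0.2, 0.5, 0.8])
KS = 500.0  # placeholder saturated conductivity, not a measured value
print(d_mualem_vg(se, KS, alpha=0.0712, n=1.8595, theta_s=0.40, theta_r=0.04))
print(d_mualem_bc(se, KS, alpha=0.1062, lam=0.5969, theta_s=0.40, theta_r=0.02))
```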

Regarding the value of the tortuosity factor (l), Wosten and van Genuchten [23] reported that this value varies from soil to soil and that fixed values may not be reasonable, especially for medium and fine-textured soils. In this analysis, however, the average values of the tortuosity factor (l) equal to 2.0 and 0.5, as proposed by Burdine and Mualem, were used for the Burdine- and Mualem-based predictive unsaturated hydraulic conductivity and soil water diffusivity functions, respectively. The values of m for the Burdine- and Mualem-based conductivity and diffusivity functions were calculated by fixing m = 1 − 2/n and m = 1 − 1/n, respectively. The saturated hydraulic conductivity values determined experimentally by Kalane et al. [20] for these soils were used.
Parameterization and evaluation of fitting performance
For estimation of the unknown parameters of the soil water retention functions, the RETC (RETention Curve) computer code of van Genuchten et al. [24] was used, utilizing the observed soil water retention data only; the parameters were represented by a vector b equal to (θ_r, θ_s, α_VG, n) for the van Genuchten function and to (θ_r, θ_s, α_BC, λ_BC) for the Brooks-Corey function. In this code, these parameters are optimized iteratively by minimizing the residual sum of squares (RSS) between the observed and fitted soil water retention data θ(h) by

taking RSS as the objective function O(b) and using a weighted non-linear least-squares optimization approach based on the Marquardt-Levenberg maximum neighborhood method [25]:

O(b) = Σ_{i=1}^{N} w_i [θ_i − θ̂_i]²    (16)

where θ_i and θ̂_i are the observed and the corresponding fitted soil water contents, respectively, and N is the number of soil water retention data points, equal to 9 in this analysis. The weighting factors w_i, which reflect the reliability of the individual measurements, were set equal to unity as the reliability of all the measured soil water retention data was considered equal. A set of appropriate initial estimates of the unknown parameters was used so that the minimization process converges after a certain number of iterations to the optimized parameter values. For evaluating the fitting performance, the goodness of fit of the observed and fitted data was estimated by the coefficient of determination (r²), characterizing the relative magnitude of the total sum of squares associated with the fitted function:
r² = Σ_{i=1}^{N} (θ̂_i − θ̄)² / Σ_{i=1}^{N} (θ_i − θ̄)²    (17)

where θ̄ is the mean of the observed soil water retention data.
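The RETC code itself is not reproduced here; the sketch below only illustrates the weighted least-squares idea of Eq. (16) with scipy. The water-content observations in it are hypothetical stand-ins for the data of Kalane et al. [20], and r² is computed in the usual coefficient-of-determination form, which may differ slightly from Eq. (17).

```python
import numpy as np
from scipy.optimize import least_squares

# Suction heads used in the study (cm) and HYPOTHETICAL water contents
# (illustrative only; not the measured data of Kalane et al. [20]).
h_obs = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0, 120.0, 150.0, 180.0])
theta_obs = np.array([0.40, 0.38, 0.33, 0.28, 0.24, 0.21, 0.19, 0.17, 0.15])

def theta_vg(h, theta_r, theta_s, alpha, n):
    """van Genuchten retention function (Eq. 1) with m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) * (1.0 + (alpha * h) ** n) ** (-m)

def residuals(b):
    """Residual vector whose sum of squares is the objective O(b) of Eq. (16);
    unit weights are used, as in the paper."""
    return theta_obs - theta_vg(h_obs, *b)

b0 = [0.05, 0.40, 0.05, 1.5]  # initial estimates of (theta_r, theta_s, alpha, n)
fit = least_squares(residuals, b0,
                    bounds=([0.0, 0.2, 1e-4, 1.01], [0.3, 0.6, 1.0, 10.0]))

theta_fit = theta_vg(h_obs, *fit.x)
rss = float(np.sum((theta_obs - theta_fit) ** 2))
r2 = 1.0 - rss / float(np.sum((theta_obs - theta_obs.mean()) ** 2))
print(fit.x, rss, r2)
```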


Results and Discussion
The optimized values of the saturated water contents (Table 1) were 0.40, 0.46 and 0.52 cm³/cm³ for the sand, loam and silty clay loam soils, respectively, for both the Brooks-Corey and van Genuchten soil water retention functions, and on comparison with the experimentally determined saturated water contents of Kalane et al. [20] a perfect match was found. It was also observed that as the fineness of the soil texture increases, the predicted residual soil water contents increased from 0.02 to 0.25 cm³/cm³ for the Brooks-Corey function, from 0.04 to 0.35 cm³/cm³ for the van Genuchten function with constraint m = 1 − 1/n, and from 0.03 to 0.33 cm³/cm³ for the van Genuchten function with fixed shape parameter m = 1 − 2/n. The residual water contents predicted by the Brooks-Corey function were lower than those predicted by the van Genuchten function. Among the van Genuchten functions, the one with fixed m = 1 − 2/n predicted lower residual soil water contents than the one with fixed m = 1 − 1/n. The residual water contents predicted by these functions ranged from 0.02 to 0.04, 0.16 to 0.25 and 0.25 to 0.35 cm³/cm³ for the sand, loam and silty clay loam soils, respectively.
It is seen from Table 1 that as the clay content of these soils increases, the values of α_BC and α_VG decreased; the Brooks-Corey function predicted α_BC values of 0.1062, 0.0452 and 0.0321 for the sand, loam and silty clay loam soils, respectively, indicating a greater height of the capillary fringe (the inverse of α_BC) in the silty clay loam, followed by the loam and sand soils. These α_BC values were higher than the α_VG values for these soils. Among the van Genuchten functions, the α_VG values of 0.0712, 0.0254 and 0.0184 predicted with fixed m = 1 − 1/n were lower than those of the function with m = 1 − 2/n.
The values of λ_BC were observed (Table 1) to be 0.5969, 0.4228 and 0.4225 for the sand, loam and silty clay loam soils, indicating that as the sand content of these soils decreases these values also decrease, which means the slope of the Brooks-Corey water retention function is steeper in sand than in the loam and silty clay loam soils. This shows that the porous medium of sand has a comparatively more uniform pore-size distribution. Kosugi et al. [26] also reported that theoretically the λ value approaches infinity for a porous medium with a uniform pore-size distribution, whereas it approaches a lower limit of zero for soils with a wide range of pore sizes.

Table 1. Optimized values of the parameters of the soil water retention functions for different soils

Soil / retention function                     θr (cm³/cm³)   θs (cm³/cm³)   α (1/cm)   n or λ (−)
Sand (coarse texture)
  Brooks-Corey                                0.02           0.40           0.1062     0.5969
  Van Genuchten, fixed m = 1 − 1/n            0.04           0.40           0.0712     1.8595
  Van Genuchten, fixed m = 1 − 2/n            0.03           0.40           0.0936     2.6773
Loam (medium texture)
  Brooks-Corey                                0.15           0.46           0.0452     0.4228
  Van Genuchten, fixed m = 1 − 1/n            0.25           0.46           0.0254     2.3899
  Van Genuchten, fixed m = 1 − 2/n            0.23           0.46           0.0450     2.2528
Silty clay loam (moderately fine texture)
  Brooks-Corey                                0.25           0.52           0.0321     0.4225
  Van Genuchten, fixed m = 1 − 1/n            0.35           0.52           0.0184     2.4185
  Van Genuchten, fixed m = 1 − 2/n            0.33           0.52           0.0309     2.1910

They reported λ values in the range 0.3 to 10.0, while Szymkiewicz [27] reported that these values generally range from 0.2 to 5.0. Zhu and Mohanty [28] also reported that the soil water retention function of Brooks and Corey was successfully used to describe the retention data of relatively homogeneous soils, which have a narrow pore-size distribution, with a value of λ_BC equal to 2. Nimmo [29] reported that a medium with many large pores will have a retention function (curve) that drops rapidly to low soil water content even at low suction head; conversely, a fine-pored medium will retain water even at high suction and so will have a flatter retention curve. In these functions the hydrodynamic behaviour of the soil media is described by the combined effects of two parameters (α_BC, λ_BC) in the Brooks-Corey function and of three parameters (α_VG, n, m) in the van Genuchten function. It was also observed (Table 1) that the values of the parameter n decreased as the sand content of these soils increases under the constraint m = 1 − 1/n, while this trend was reversed for m = 1 − 2/n.


It was observed (Table 2) that the van Genuchten soil water retention function with m = 1 − 1/n gave a slightly better fit than the Brooks-Corey function and the van Genuchten function with m = 1 − 2/n for the wet range of water availability in the sand (coarse texture) soil, as its r² value is slightly higher, while the RSS values are the same for all these functions. For the loam (medium texture) and silty clay loam (moderately fine texture) soils, the best performance was given by the Brooks-Corey function, as indicated by the highest r² values of 0.9973 and 0.9957 in the loam and silty clay loam soils, respectively, with the least RSS value of 9 × 10⁻⁵ for both soils. Among the van Genuchten functions, the better fit was given by the function with m = 1 − 1/n, indicated by the corresponding higher r² values and lower RSS values for these soils. Mualem [30] reported that there is no single function that fits every soil. Nimmo [31] and Ross et al. [32] also reported that the Brooks-Corey and van Genuchten functions are successful at high and medium water contents but often give poor results at low water contents. Mavimbela and van Rensburg [33] also parameterized the Brooks-Corey and van Genuchten soil water retention functions using the RETC code and reported that these functions fitted the measured soil water retention data with r² of no less than 0.98.
Table 2. Residual sum of squares (RSS) and coefficient of determination (r²) of the fitting performance of the soil water retention functions

                                         Sand (coarse texture)     Loam (medium texture)     Silty clay loam (moderately fine texture)
Soil water retention function            RSS (×10⁻⁵)   r²          RSS (×10⁻⁵)   r²          RSS (×10⁻⁵)   r²
Brooks-Corey                                           0.9997      9             0.9973      9             0.9957
Van Genuchten, fixed m = 1 − 1/n                       0.9998      18            0.9951      10            0.9956
Van Genuchten, fixed m = 1 − 2/n                       0.9997      37            0.9896      18            0.9916

Mace et al. [34] reported that the function developed by van Genuchten based on the theoretical expression of Mualem predicted hydraulic conductivity more accurately than the van Genuchten function based on the theory of Burdine. So in this study, although both the Burdine- and Mualem-based hydrodynamic functions were evaluated, only the Mualem-based hydrodynamic functions are shown in the form of graphs, considering their preference for accuracy and use. The values of the optimized parameters of these soil water retention functions were used in the corresponding unsaturated hydraulic conductivity and diffusivity functions of Brooks-Corey and van Genuchten for describing the hydrodynamic behaviour of these soils. Figs. 1 and 2 depict the behaviour of the hydraulic conductivity and soil water diffusivity functions in relation to soil water content as derived by coupling the Brooks-Corey and van Genuchten functions with the Mualem model. It is evident from Figs. 1 and 2 that as the sand content of these soils decreases, the hydraulic conductivity and soil water diffusivity at a particular soil water content also decrease. So at a specific water content, the hydraulic conductivity and soil water diffusivity were highest in sand, followed by the loam and silty clay loam soils. The hydraulic conductivity and soil water diffusivity based on coupling the van Genuchten function with the Mualem model were predicted to be lower than those predicted by the Brooks-Corey function coupled with the Mualem model.


In general, for analytical treatment of soil water dynamics the use of the Brooks-Corey function tends to be easier, whereas for numerical simulation of unsaturated flow the van Genuchten function is mostly adopted.

[Fig. 1, panels (a)-(f): plots of hydraulic properties, K vs. Theta (x-axis: Water Content [-]); see caption below.]

Fig. 1. Hydraulic conductivity as a function of water content based on (a) Mualem-Brooks-Corey function for sand, (b) Mualem-van
Genuchten function for sand, (c) Mualem-Brooks-Corey function for loam, (d) Mualem-van Genuchten function for loam, (e)
Mualem-Brooks-Corey function for silty clay loam, (f) Mualem-van Genuchten function for silty clay loam.

[Fig. 2, panels (a)-(f): plots of hydraulic properties, log D vs. Theta (x-axis: Water Content [-]); see caption below.]

Fig. 2. The soil water diffusivity as a function of water content based on (a) Mualem-Brooks-Corey function for sand, (b) Mualem-van
Genuchten function for sand, (c) Mualem-Brooks-Corey function for loam, (d) Mualem-van Genuchten function for loam, (e)
Mualem-Brooks-Corey function for silty clay loam, (f) Mualem-van Genuchten function for silty clay loam.

Conclusion
For the wet range of water availability in the loam and silty clay loam soils, the best performance was given by the Brooks-Corey soil water retention function, followed by the van Genuchten functions with m = 1 − 1/n and m = 1 − 2/n, while the van Genuchten function with m = 1 − 1/n gave slightly better performance in sand than the other functions. At a particular soil water content, the Mualem-based hydraulic conductivity and soil water diffusivity predicted by the Brooks-Corey and van Genuchten functions decreased with the decrease in sand content of these soils. The Mualem-van Genuchten function predicted lower hydraulic conductivity and soil water diffusivity than the Mualem-Brooks-Corey function.

References
[1] Rose, C.W., Stern, W.R. and Drummond, J.E. Determination of hydraulic conductivity as a function of depth and water content in situ. Water Resour. Res. 3: 1-9. 1965.
[2] Parkin, G.W., Elrick, D.E., Kachanoski, R.G. and Gibson, R.G. Unsaturated hydraulic conductivity measured by TDR under a
rainfall simulator Water Resour. Res. 31: 447-454. 1995.
[3] Durner, W. and Lipsius, K. Encyclopedia of Hydrological Science. (Ed. M G Anderson)John Wiley & Sons Ltd. 2005.
[4] Brooks, R.H. and Corey, A.T. Hydraulic properties of porous media. Hydrology Paper No. 3, Colorado State University, Fort Collins, Colorado. 1964.

[5] Campbell, G.S. A simple method for determining unsaturated conductivity from moisture retention data. Soil Sci. 117(6): 311-314. 1974.
[6] Haverkamp, R., Vauclin, M., Touma, J., Wierenga, P. and Vachaud, G. A comparison of numerical simulation models for one-dimensional infiltration. Soil Sci. Soc. Am. J. 41(2): 285-294. 1977.
[7] Van Genuchten, M.Th. A closed-form equation for predicting the hydraulic conductivity of unsaturated soils. Soil Sci. Soc. Am. J. 44: 892-898. 1980.

[8] Hutson, J.L. and Cass, A. A retentivity function for use in soil-water simulation models J. Soil Sci. 38: 105-113.1987.
[9] Russo, D. 1988. Determining soil hydraulic properties by parameter estimation: On the selection of a model for the hydraulic
properties Water Resour. Res., 24(3):453-459.1988.
[10] Ross, P.J. and Smettem, K.R.J. Describing Soil hydraulic properties with sums of simple function Soil Sci. Soc. Amer. J.
57.26-29.1993.
[11] Zhang, R. and van Genuchten, M.Th. New models for unsaturated soil hydraulic properties Soil Sci. 158: 77-85. 1994.
[12] Fredlund, D.G., Xing, A. and Huang, S. Predicting the permeability function for unsaturated soils using the soil-water
characteristic curve Canadian Geotechnical J. 31(3): 521-532.1994.
[13] Rossi, C. and Nimmo, J.R. Modeling of soil water retention from saturation to oven dryness. Water Resour. Res. 30(3): 701-708. 1994.
[14] Kosugi,K.Lognormal distribution model for unsaturated soil hydraulic properties Water Resour. Res.32 (9): 2697-2703.1996.
[15] Abrisqueta, J.M., Plana, V., Ruiz-Canales, A. and Ruiz-Sanchez, M.C. Unsaturated hydraulic conductivity of disturbed and undisturbed loam soil. Spanish J. Agri. Res. 4(1): 91-96. 2006.

[16] Leij,F.J., Russell,W.B., and Lesch, S.M. Closed-form expressions for water retention and conductivity data Ground Water.
35(5): 848-858.1997.
[17] Assouline, S. and Tartakovsky, D.M. Unsaturated hydraulic conductivity function based on a soil fragmentation process. Water Resour. Res. 37(5): 1309-1312. 2001.

[18] Burdine, N.T. Relative permeability calculations from pore- size distribution data Trans. Amer. Inst. Mining Metallurgical, and
Petroleum Engrs. 198: 71-78. 1953.
[19] Mualem, Y. A new model for predicting the hydraulic conductivity of unsaturated porous media Water Resour. Res. 12 (3):
513-522.1976.
[20] Kalane, R.L., Oswal, M.C. and Jagannath. Comparison of theoretically estimated flux and observed values under shallow water
table J. Ind. Soc. Soil Sci. 42. (2): 169-172.1994.
[21] Durner, W. Hydraulic conductivity estimation for soils with heterogeneous pore structure. Water Resour. Res. 30(2): 211-223. 1994.
[22] Zhang, Z.F., Ward, A.L. and Gee,G.W. Describing the unsaturated hydraulic properties of anisotropic soils using a tensorial
connectivity-tortuosity concept Vadose Zone J.2(3):313-321.2003.
[23] Wosten,J.H.M. and van Genuchten, M.Th. Using texture and other soil properties to predict the unsaturated soil hydraulic
functions Soil Sci. Soc. Amer. J. 52: 1762-1770.1988.
[24] Van Genuchten, M.Th.,Leij, F.J. and Yates, S.R.The RETC code for quantifying the hydraulic functions of unsaturated soils
Res. Rep. 600 2-91 065, USEPA, Ada.O.K. 1991.
[25] Marquardt, D.W.An algorithm for least-squares estimation of non-linear parameters J. Soc. Ind. Appl. Math. 11: 431-441.1963.
[26] Kosugi, K., Hopmans, J.W. and Dane, J.H. Water Retention and Storage-Parametric Models In Methods of Soil Analysis. Part
4.Physical Methods. (Eds. E.J.H. Dane and G.C. Topp) pp. 739-758. Book Series No.5. Soil Sci. Soc. Amer., Madison, USA. 2002.
[27] Szymkiewicz, A. Chapter 2: Mathematical Models of Flow in Porous Media. In Modeling Water Flow in Unsaturated Porous Media: Accounting for Nonlinear Permeability and Material Heterogeneity. Springer. 2013.
[28] Zhu, J. and Mohanty, B.P. Effective hydraulic parameter for steady state vertical flow in heterogeneous soils Water Resour.
Res. 39 (8) : 1-12.2003.
[29] Nimmo, J.R. Unsaturated zone flow processes In (Eds. Anderson, M.G. and Bear, J. Encyclopedia of Hydrological Science.
Part 13-Groundwater: vol4 : 2299-2322. Chichester, UK, Wiley.2005.

[30] Mualem, Y. Hydraulic conductivity of unsaturated soils: prediction and formulas. In Methods of Soil Analysis, Part 1: Physical and Mineralogical Methods (2nd ed.) (Ed. A. Klute). Amer. Soc. Agronomy and Soil Sci. Soc. Amer., Madison, WI, USA, 799-823. 1986.
[31] Nimmo, J.R.Comment on the treatment of residual water content In A consistent set of parametric models for the two-phase
flow of immiscible fluids in the subsurface by Luckner, L. et al. Water Resour. Res. 27: 661-662.1991.
[32] Ross, P.J., William, J. and Bristow, K.L. Equations for extending water-retention curves to dryness Soil Sci. Soc. Amer. J., 55:
923-927.1991.
[33] Mavimbela, S.S.W. and van Rensburg, L.D. 2013. Estimating hydraulic conductivity of internal drainage for layered soils in
situ Hydrol. Earth Syst. Sci. 17: 4349-4366. 2013.
[34] Mace, A., Rudolph, D.L. and Kachanoski, R, G., Suitability of parametric models to describe the hydraulic properties of an
unsaturated coarse sand and gravel Ground Water. 36(3): 465-475.1998


Seismic Behavior of Soft Storey Building: A Critical Review


Devendra Dohare1, Dr.Savita Maru2
1P.G. student Ujjain Engineering College, Ujjain, M.P, India.
2 Professor & HOD of Civil Department Ujjain Engineering College, Ujjain, M.P, India

1Devhare@gmail.com
2Savitamaru@yahoo.com
Abstract:- Soft first storey is a typical feature in modern multi-storey construction in urban India. Though multi-storeyed buildings with a soft storey floor are inherently vulnerable to collapse due to earthquake, their construction is still widespread in developing countries like India. The functional and social need to provide car-parking space at ground level, and open storeys at different levels of the structure for offices, far outweighs the warning against such buildings from the engineering community. With the availability of fast computers, software usage in civil engineering has greatly reduced the complexity of different aspects of the analysis and design of projects. In this paper an investigation has been made to study the seismic behaviour of soft storey buildings with different arrangements when subjected to static and dynamic earthquake loading. It is observed that providing infill improves the resistant behaviour of the structure when compared to providing a soft storey.

Keywords: Soft storey, Static and dynamic analysis, Seismic loads.


I. INTRODUCTION
Due to the increasing population over the past few years, car parking space for residential apartments in populated cities has become a major problem, and construction of multi-storeyed buildings with an open first storey is a common practice all over the world. Hence the trend has been to utilize the ground storey of the building itself for parking or reception lobbies. These types of buildings, having no infill masonry walls in the ground storey but with all upper storeys infilled with masonry walls, are called soft first storey or open ground storey buildings. Experience of different nations with the poor and devastating performance of such buildings during earthquakes has always seriously discouraged construction of buildings with a soft ground floor. This storey is known as a weak storey because its stiffness is lower compared to the storeys above, so it collapses easily in an earthquake.
Due to wrong construction practices and ignorance of earthquake-resistant design of buildings in our country, most of the existing buildings are vulnerable to future earthquakes, so prime importance has to be given to earthquake-resistant design. The Indian seismic code IS 1893 (Part 1): 2002 classifies a soft storey as one in which the lateral stiffness is less than 70 percent of that in the storey above or less than 80 percent of the average lateral stiffness of the three storeys above.
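The quoted IS 1893 criterion can be expressed as a simple check; the following sketch uses hypothetical storey stiffness values and is only an illustration of the 70%/80% rule.

```python
def is_soft_storey(storey_stiffness, stiffness_above):
    """Soft-storey check per the IS 1893 (Part 1): 2002 definition quoted above:
    lateral stiffness less than 70% of the storey above, or less than 80% of the
    average of the three storeys above. 'stiffness_above' lists the storeys
    immediately above, nearest first (values are hypothetical, e.g. kN/mm)."""
    k_above = stiffness_above[0]
    k_avg3 = sum(stiffness_above[:3]) / len(stiffness_above[:3])
    return storey_stiffness < 0.7 * k_above or storey_stiffness < 0.8 * k_avg3

# Open ground storey of 25 kN/mm under three upper storeys of 60 kN/mm each
print(is_soft_storey(25.0, [60.0, 60.0, 60.0]))  # True -> classified as a soft storey
```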
II. GENERAL BEHAVIOUR OF SOFT STOREY
The stability of the earth is always disturbed by internal forces, and as a result of such disturbance, vibrations or jerks in the earth's crust take place, which is known as an earthquake.

Earthquakes produce waves which vibrate the base of the structure in various manners and directions, so that lateral forces are developed on the structure. In such buildings, the stiffness of the lateral load resisting system at the soft storey is considerably less than in the stories above or below.

Image source: EQTip21 NICEE



Such a building acts as an inverted pendulum which swings back and forth, producing high stresses in the columns; if the columns are incapable of taking these stresses or do not possess enough ductility, they can get severely damaged, which can also lead to collapse of the building. Soft storeys are subjected to larger lateral loads during earthquakes, and under lateral loading this force cannot be well distributed along the height of the structure. This situation causes the lateral forces to concentrate on the storey having large displacement. The lateral force distribution along the height of a building is directly related to the mass and stiffness of each storey, and the collapse mechanism of a structure with a soft storey develops under both earthquake and gravity loads. A dynamic analysis procedure therefore gives an accurate distribution of the earthquake and lateral forces along the building height, determining modal effects and local ductility demands efficiently.
III. REVIEW OF LITERATURE
A significant amount of research work on the seismic behaviour of soft storey buildings has been done by many investigators, such as:
[1] Suchita Hirde and Ganga Tepugade (2014) discussed the performance of a building with a soft storey at different levels along with at ground level. Nonlinear static pushover analysis was carried out. They concluded that plastic hinges develop in the columns of a ground-level soft storey, which is not an acceptable criterion for safe design, and that displacement reduces when the soft storey is provided at a higher level.
[2] Hiten L. Kheni and Anuj K. Chandiwala (2014) investigated many buildings that collapsed during past earthquakes and exhibited exactly the opposite of strong-beam weak-column behaviour, meaning columns failed before the beams yielded, mainly due to the soft storey effect. For proper assessment of the storey stiffness of buildings with a soft storey, different models were analysed using software. They concluded that the displacement estimates of the codal lateral load patterns are smaller for the lower stories and larger for the upper stories, and are independent of the total number of stories of the models.
[3] Dhadde Santosh (2014) conducted nonlinear pushover analysis of building models using ETABS, carried out evaluation of non-retrofitted normal buildings, and suggested retrofitting methods such as infill walls, increase of ground storey column stiffness and a shear wall at the central core. He concluded that storey drift values for soft storey models are maximum compared to other storeys and that storey drift values decrease gradually up to the top.
[4] Rakshith Gowda K.R and Bhavani Shankar (2014) investigated soft storeys provided at different levels for different load combinations, using ETABS for modelling and analysis of RC buildings. They concluded that the inter-storey drift was maximum in the vertically irregular structure when compared with the regular structure.
[5] Mr. D. Dhandapany (2014) investigated the seismic behaviour of RCC buildings with and without shear walls under different soil conditions, analysed using ETABS software for hard, medium and soft soils. The values of base shear, axial force and lateral displacement were compared between two frames. He concluded that the design in STAAD gives almost equal results to ETABS for all structural members.
[6] Susanta Banerjee, Sanjaya K Patro and Praveena Rao (2014) analysed response parameters such as floor displacement, storey drift and base shear; modelling and analysis of the building were performed with the nonlinear analysis program IDARC 2D. They concluded that lateral roof displacement and maximum storey drift are reduced by considering the infill wall effect compared with a bare frame.
[7] D. B. Karwar and Dr. R. S. Londhe (2014) investigated the behaviour of reinforced concrete framed structures using the nonlinear static procedure (NSP), or pushover analysis, in the finite element software SAP2000, and made a comparative study of different models in terms of base shear, displacement and performance point. They concluded that base shear is minimum for the bare frame and maximum for the frame with infill for a G+8 building.
[8] Miss Desai Pallavi T (2013) investigated the behaviour of reinforced concrete framed structures using Staad Pro, modelling four structures and comparing their stiffness, and concluded that a stiffer column should be provided in the first storey.
[9] Amit and S. Gawande (2013) investigated the seismic performance and design of a masonry-infilled reinforced concrete structure with a soft first storey under strong ground motion.
[10] Nikhil Agrawal (2013) analysed the performance of masonry-infilled reinforced concrete (RC) frames including an open first storey, with and without openings. The increase in opening percentage leads to a decrease in the lateral stiffness of the infilled frame. He concluded that infill panels increase the stiffness of the structure.
[11] A.S. Kasnale and Dr. S.S. Jamkar (2013) investigated the behaviour of five reinforced RC frames with various arrangements of infill when subjected to dynamic earthquake loading, and concluded that providing infill walls in RC buildings controls the displacement, storey drift and lateral stiffness.

[12] Dande P. S. and Kodag P. B. (2013) investigated the behaviour of RC frames in which strength and stiffness are provided to the building frame by a modified soft storey provision in two ways: (i) by providing stiff columns and (ii) by providing adjacent infill wall panels at each corner of the building frame. They concluded that the walls in the upper storeys make them much stiffer than the open ground storey, and that it is difficult to provide such capacities in the columns of the first storey.
[13] Narendra Pokar and Prof. B. J. Panchal (2013) investigated the behaviour of RC frames, noting that testing of scaled models is essential to arrive at an optimal analytical model and special design provisions for such structures. The structure was modelled and analysed on the SAP platform including seismic effects. They concluded that both the steel and RCC models give results closest to the full-scale model.
[14] N. Sivakumar and S. Karthik (2013) investigated the behaviour of the columns at ground level of multi-storeyed buildings with a soft ground floor subjected to dynamic earthquake loading. ETABS was used for modelling the six- and nine-storey structures; line elements were used for columns and beams and concrete elements for slabs. They concluded that providing stiffer columns in the first storey reduces the drift as well as the strength demands on the first storey columns.
[15] Dr. Saraswati Setia and Vineet Sharma (2012) analysed the seismic response of an R.C.C building with a soft storey. Equivalent static analysis was performed for five different models using computer software such as STAAD Pro. They concluded that the minimum displacement for a corner column is observed in the building in which a shear wall is introduced in the X-direction as well as in the Z-direction.
[16] P.B. Lamb and Dr. R.S. Londhe (2012) analysed a multi-storied building with a soft first storey located in seismic zone IV, describing performance characteristics such as stiffness, shear force, bending moment and drift. They concluded that a shear wall and cross bracings are very effective in reducing the stiffness irregularity and the bending moment in the columns.
[17] V. Indumathy and Dr. B.P. Annapurna (2012) investigated a four-storied one-bay infilled frame with a soft storey at the ground floor and window openings in the higher floors. They concluded that a square opening shows lower lateral deformation compared to a rectangular opening, and that a rectangular opening oriented horizontally exhibits lower lateral deformation than a vertical orientation.
[18] M.Z. Kabir and P. Shadan (2011) investigated the effect of a soft story on the seismic performance of 3D-panel buildings. Results were verified numerically with a finite element model in the ABAQUS program, and the 3D-panel system showed considerable resistance. They concluded that after applying several ground motions, final cracks appeared at the ends of columns and at beam-column connections; however, the upper stories had no cracks during the shaking table test.
[19] G.V. Mulgund and D.M. Patil (2010) investigated the behaviour of RC frames with various arrangements of infill when subjected to dynamic earthquake loading and compared the results of bare and infilled frames. They concluded that masonry infill panels in the frame substantially reduce the overall damage.
[20] A. Wibowo and J.L. Wilson (2009) developed an analytical model to predict the force-displacement relationship of the tested frame, and experimentally investigated the load-deflection behaviour and collapse modelling of a soft storey building under lateral loading. They concluded that the large drift capacity of the precast soft storey structure was attributed to the weak connections which allowed the columns to rock at each end.
[21] Sharany Haque and Khan Mahmud Amanat (2009) investigated the effect of masonry infill in the upper floors of a building with an open ground floor subjected to seismic loading. The number of panels with infill was varied from the bare frame condition (zero percent infilled panels) to 10, 30, 50 and 70 percent of panels with infill on the upper floors, and base shear was compared. They concluded that the design shear and moment calculated by the equivalent static method may at least be doubled for safer design of the columns of a soft ground floor.
[22] Seval Pinarbasi and Dimitrios Konstantinidis (2007) investigated a hypothetical base-isolated building with a soft ground story and compared how soft-story flexibility affects the corresponding fixed-base building. They concluded that base isolation of a soft-story building is effective, particularly in reducing the seismic demand (i.e., interstory drift) on the soft-story level, which is the primary cause of catastrophic collapse in these types of buildings.
[23] Dr. Mizan Dogan and Dr. Nevzat Kirac (2002) observed from quake results that partitioning walls and beam fillings enable buildings to gain great rigidity, and investigated solutions for making the soft storeys in existing constructions, and in those to be built, resistant to quakes.
[24] Jaswant N. Arlekar, Sudhir K. Jain and C.V.R. Murty (1997) investigated the behaviour of reinforced concrete framed structures using ETABS, comparing the stiffness of nine building models. They concluded that such buildings will exhibit poor performance during strong shaking and that the solution to this problem lies in increasing the stiffness of the first storey.


IV. CONCLUSION
RC frame buildings with a soft storey are known to perform poorly during strong earthquake shaking, because the stiffness of the lower floor is less than 70% of that of the storey above it, causing the soft storey condition. For a building that is not provided with any lateral load-resisting component such as a shear wall or bracing, the strength is considered very weak and the building can easily fail during an earthquake. In such a situation, an investigation has been made to study the seismic behaviour of such buildings subjected to earthquake load so that guidelines could be developed to minimize the risk involved in such types of buildings. It has been found that estimating earthquake forces by treating these buildings as ordinary frames results in an underestimation of base shear. Investigators analyse them numerically using various computer programs such as Staad Pro, ETABS, SAP2000, etc. Calculations show that when RC framed buildings having brick masonry infill on the upper floors with a soft ground floor are subjected to earthquake loading, the base shear can be more than twice that predicted by the equivalent earthquake force method with or without infill, or even by the response spectrum method when no infill is included in the analysis model.
REFERENCES:
[1] Suchita Hirde and Ganga Tepugade(2014), Seismic Performance of Multistorey Building with Soft Storey at Different Level with RC Shear
Wall, International Journal of Current Engineering and Technology E-ISSN 2277 4106, P-ISSN 2347 5161
[2] Hiten L. Kheni,and Anuj K. Chandiwala(2014), Seismic Response of RC Building with Soft Stories, International Journal of Engineering Trends
and Technology (IJETT) Volume 10 Number 12 - Apr 2014.
[3] Dhadde Santosh(2014), Evaluation and Strengthening of Soft Storey Building, International Journal of Ethics in Engineering & Management
Education.
[4] Rakshith Gowda K.R and Bhavani Shankar(2014), Seismic Analysis Comparison of Regular and Vertically Irregular RC Building with Soft
Storey at Different Level, International Journal of Emerging Technologies and Engineering (IJETE).
[5] Mr.D.Dhandapany(2014), Comparative Study of and Analysis of Earthquake G+5 Storey Building with RC Shear Wall, Int.J.Engineering
Research and Advanced Technology,Vol 2 (3),167-171.
[6] Susanta Banerjee, Sanjaya K Patro and Praveena Rao(2014),Inelastic Seismic Analysis of Reinforced Concrete Frame Building with Soft Storey,
International Journal of Civil Engineering Research. ISSN 2278-3652 Volume 5, Number 4 (2014),
[7] D. B. Karwar and Dr. R. S. Londhe(2014), Performance of RC Framed Structure by Using Pushover Analysis, International Journal of Emerging
Technology and Advanced Engineering
[8] Miss Desai Pallavi T (2013), Seismic Performance of Soft Storey Composite Column, International Journal of Scientific & Engineering
Research, Volume 4, Issue 1, January-2013 ISSN 2229-5518.
[9] Amit and S. Gawande(2013), Seismic Analysis of Frame with Soft Ground Storey, IJPRET, 2013; Volume 1(8): 213-223
[10] Nikhil Agrawal (2013), Analysis of Masonry Infilled RC Frame with & without Opening Including Soft Storey by using Equivalent Diagonal
Strut Method, International Journal of Scientific and Research Publications, Volume 3, Issue 9, September 2013 1 ISSN 2250-3153.
[11] A.S.Kasnale and Dr. S.S.Jamkar(2013), Study Of Seismic Performance For Soft Basement Of RC framed, International Jounral of Engineering
Sciences& Research Technology.
[12] Dande P. S. and, Kodag P. B.(2013), Influence of Provision of Soft Storey in RC Frame Building for Earthquake Resistance Design,
International Journal of Engineering Research and Applications
[13] Narendra Pokar and Prof. B. J. Panchal(2013),Small Scale Modlling on Effectof Soft Storey, International Journal of Advanced Engineering
Technology
[14] N. Sivakumar and S. Karthik(2013), Seismic Vulnerability of Open Ground Floor Columns in Multi Storey Buildings, International Journal of
Scientific Engineering and Research (IJSER)
[15] Dr. Saraswati Setia and Vineet Sharma, Seismic Response of R.C.C Building with Soft Storey, International Journal of Applied Engineering
Research, ISSN 0973-4562 Vol.7 No.11 (2012).
[16] P.B.Lamb and Dr R.S. Londhe(2012), Seismic Behaviour of Soft First Storey, IOSR Journal of Mechanical and Civil Engineering (IOSRJMCE) ISSN: 2278-1684 Volume 4, Issue 5 (Nov. - Dec. 2012), PP 28-33
[17] V. Indumathy and Dr.B.P. Annapurna (2012), NonLinear Analysis of Multistoried Infilled Frame with Soft Storey and with Window Openings
of Different Mortar Ratios, Proceedings of International Conference on Advances in Architecture and Civil Engineering (AARCV 2012), 21st 23rd
June 2012
[18] M.Z. Kabir and P. Shadan(2011),Seismic Performance of 3D-Panel Wall on Piloti RC Frame Using Shaking Table Equipment, Proceedings of
the 8th International Conference on Structural Dynamics, EURODYN 2011 Leuven, Belgium, 4-6 July 2011
[19] G.V. Mulgund and D.M. Patil (2010), Seismic Assessment of Masonry Infill RC Framed Building with Soft Ground Floor, International Conference on Sustainable Built Environment (ICSBE-2010), Kandy, 13-14 December 2010.
[20] A. Wibowo and J.L. Wilson, (2009), Collapse Modelling Analysis of a Precast Soft-Storey Building in Melbourne, Australian Earthquake
Engineenring Society 2009 Conference Newcastle, New South Wales, 11-13 December 2009


[21] Sharany Haque and Khan Mahmud Amanat(2009), Strength and Drift Demand of Columns of RC Framed Buildings with Soft Ground Story,
Journal of Civil Engineering (IEB), 37 (2) (2009) 99-110
[22] Seval Pinarbasi and Dimitrios Konstantinidis(2007),Seimic Isolation for Soft Storey Buildings, 10th World Conference on Seismic Isolation,
Energy Dissipation and Active Vibrations Control of Structures, Istanbul, Turkey, May 28-31, 2007
[23] Dr. Mizan Dogan and Dr. Nevzat Kirac (2002), Soft Storey Behaviour in Earthquake and Samples of Izmit - Duzce, ECAS 2002 Uluslararası Yapı ve Deprem Mühendisliği Sempozyumu, 14 Ekim 2002, Orta Doğu Teknik Üniversitesi, Ankara, Türkiye.
[24] Jaswant N. Arlekar, Sudhir K. Jain and C.V.R. Murty(1997), Seismic Response of RC Frame Buildings with Soft First Storeys, Proceedings of
the CBRI Golden Jubilee Conference on Natural Hazards in Urban Habitat, 1997, New Delhi


Effect of water temperature during evaporative cooling on Refrigeration system
Mohit Yadav, Prof. S.S Pawar
M.Tech (Thermal Engineering), Bhabha Engineering & Research Institute, Bhopal, India

Professor (ME Deptt.), Bhabha Engineering & Research Institute, Bhopal, India
mohit.cseian@gmail.com

ABSTRACT: A refrigerator is mainly a composition of four devices - compressor, condenser, expansion device and evaporator - which have some limitations [1]. The working temperature range is also a limitation for a refrigerator and affects its performance. Here we make an effort to reduce the limitations related to the working temperature range and try to modify the size of the refrigerator with the effect of evaporative cooling. The condenser rejects the latent heat of the refrigerant to the atmosphere due to the higher temperature of the refrigerant at the condenser, and this rejection of heat provides the cooling in the evaporator [2]. The coefficient of performance of a refrigeration system mainly depends on the temperature difference between the condenser and the medium where heat is to be rejected. With a larger temperature difference there is more heat rejection, and so more cooling for the same work input to the refrigeration system. But if the temperature difference is smaller, less heat will be rejected and there will be less cooling for the same amount of work, which decreases the coefficient of performance of the system [3].
KEY-WORD- Co-efficient of performance (C.O.P), Evaporative cooling, Percentage increment in C.O.P
INTRODUCTION-

A refrigeration system is used to provide cooling by the use of mainly four components: compressor, condenser, expansion device and evaporator. These four components are operated with a refrigerant which works as the heat carrier in the system. It extracts heat from the evaporator in the form of latent heat and rejects that heat to the atmosphere through the condenser [4]. Therefore the heat rejection capacity depends on the difference between the refrigerant temperature at the condenser and the atmospheric temperature [5]. The quantity of heat rejection also affects the quantity of heat absorption through the evaporator: more heat rejection results in more heat absorption. Atmospheric temperature varies according to environmental conditions; during summer the atmospheric temperature becomes higher, which decreases the temperature difference between refrigerant and atmosphere, resulting in less heat rejection and a decrease in the overall performance of the refrigeration system [6].
In this paper, by experiments on an ice plant test rig, we show that a lower condenser temperature means a higher performance of the refrigeration system. The performance of the refrigeration system can be improved by providing an evaporative cooling effect on the condenser through spraying water on it. In our last publication of December 2013 we showed that with evaporative cooling on the condenser the C.O.P is increased by 39.04%, which is a significant improvement. Similarly, if an evaporative cooling effect is used on an air conditioning unit, it will reduce the size and operating cost of the air conditioning unit.


Fig.1. Refrigeration Test Rig

Methodology
If T1 and T2 are the temperature of the surroundings and the temperature of the evaporator in the case of a refrigerator, then the coefficient of performance is given by

(C.O.P)_Ref = T2 / (T1 − T2)    (A)

Equation (A) shows that the C.O.P of the refrigeration system depends on T1 and T2: the C.O.P will be higher if T1 is lower or T2 is higher. Since a lower evaporator temperature is desirable, we should not increase the evaporator temperature T2; on the other hand, we cannot reduce the temperature of the surroundings T1. So evaporative cooling of the condenser is the best option, as it can reduce the temperature of the cooling water down to the wet-bulb temperature of the air.
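Relation (A) can be evaluated directly. The short sketch below uses the condenser-side temperatures of Table A (43 °C before spraying and 36 °C after) as illustrative heat-sink temperatures, which is a simplifying assumption rather than the exact experimental C.O.P calculation.

```python
def cop_refrigeration(t_evaporator_c, t_surroundings_c):
    """Coefficient of performance from relation (A): COP = T2 / (T1 - T2),
    with both temperatures converted to kelvin."""
    t2 = t_evaporator_c + 273.15    # evaporator temperature T2, K
    t1 = t_surroundings_c + 273.15  # surroundings / heat-sink temperature T1, K
    return t2 / (t1 - t2)

# Illustrative: evaporator at 2 deg C, heat sink cooled from 43 to 36 deg C by spraying
print(cop_refrigeration(2.0, 43.0))  # warmer sink -> lower C.O.P
print(cop_refrigeration(2.0, 36.0))  # cooler sink -> higher C.O.P
```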
For analysing the effect of evaporative cooling we used the following steps:

Step 1) Fill the tank of the ice plant with 10 kg of water and note down the initial temperature of the water and the wattmeter reading. Then start the compressor for 50 minutes.
Step 2) Note down the readings - the temperatures after the compressor, after the condenser, after expansion and after the evaporator, the water temperature, and the suction and exhaust pressures - when the test rig has used 0.1 kWh of power.

Step 3) Then spray water at a temperature of 18 °C on the condenser and again read the temperatures at the previous locations when the test rig has used 0.1 kWh of power.
Step 4) Repeat this procedure for spray water temperatures of 23 °C and 30 °C.
Results - The following abbreviations are used:
P1 = Discharge pressure
P2 = Suction pressure
T1 = Temperature after compressor, °C
T2 = Temperature after condenser, °C
T3 = Temperature after expansion device, °C
T4 = Temperature after evaporator, °C
T5 = Water temperature, °C
A) Refrigeration effect with spray water temperature of 18 °C

Parameter      Before     After 10 minutes
P1             133 psi    133 psi
P2             2.2 psi    2.2 psi
T1 (°C)        65         54
T2 (°C)        43         36
T3 (°C)        6          3
T4 (°C)        24         2
T5 (°C)        19         6

B) Refrigeration effect with spray water temperature of 23 °C

Parameter      Before       After 10 minutes
P1             133.3 psi    133.3 psi
P2             2.2 psi      2.2 psi
T1 (°C)        65           60
T2 (°C)        40           34
T3 (°C)        15           4
T4 (°C)        16           3
T5 (°C)        18           7

C) Refrigeration effect with spray water temperature of 30 °C

Parameter      Before       After 10 minutes
P1             133.3 psi    133.3 psi
P2             2.2 psi      2.2 psi
T1 (°C)        65           60
T2 (°C)        40           35
T3 (°C)        16           9
T4 (°C)        30           16
T5 (°C)        23           15
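The temperature drops plotted in the chart below (Fig. 2) can be recomputed directly from Tables A-C; the short sketch that follows simply tabulates the before-minus-after differences and is purely illustrative.

```python
# Before/after readings (T1..T5, deg C) taken from Tables A-C above.
readings = {
    18: {"before": (65, 43, 6, 24, 19), "after": (54, 36, 3, 2, 6)},
    23: {"before": (65, 40, 15, 16, 18), "after": (60, 34, 4, 3, 7)},
    30: {"before": (65, 40, 16, 30, 23), "after": (60, 35, 9, 16, 15)},
}

for spray_temp, r in readings.items():
    # Differences T_i(Before) - T_i(After), i.e. the quantities plotted in Fig. 2
    drops = [b - a for b, a in zip(r["before"], r["after"])]
    print(f"Spray water at {spray_temp} deg C: drops in T1..T5 = {drops}")
```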

[Chart: temperature variation for different spray water temperatures (18, 23 and 30 °C); plotted series: T1(Before)-T1(After), T2(Before)-T2(After), T3(Before)-T3(After), T4(Before)-T4(After), T5(Before)-T5(After); y-axis 0-25 °C.]

Fig. 2. Variation of temperature for different spray water temperatures

REFERENCES:

1. A text book of Refrigeration and Air Conditioning by R.S. Khurmi and J.K. Gupta, Edition 2006, Page 43
2. Engineering Thermodynamics, Fourth Edition by P.K. Nag, Page 583
3. Air Conditioning and Refrigeration by Rex Miller and Mark R. Miller, Page 56
4. Refrigeration and Air Conditioning by G.F. Hundy, Page 16
5. Engineering Thermodynamics, Fourth Edition by P.K. Nag, Page 583
6. A text book of Refrigeration and Air Conditioning by R.S. Khurmi and J.K. Gupta, Edition 2006, Page 44
7. Engenharia Térmica (Thermal Engineering), Vol. 5, No. 02, December 2006, p. 09-15
8. Effect of evaporative cooling on refrigeration system by Mohit Yadav, IJESRT, Dec. 2013


Review on Sensor parameter analysis for forward collision detection system
Miss. Vaishali B.Gadekar1, Mrs. Savita R, Pawar2
1 M.E. Student, Department of Electronics and Telecommunication, MIT Academy of Engineering, Alandi, Pune.
2 Asst. Professor, Department of Electronics and Telecommunication, MIT Academy of Engineering, Alandi, Pune.
Asst. Professor, Department of Electronics and Telecommunication, MIT Academy of Engineering, Alandi, Pune.
Email:vaishaligadekar17@gmail.com, srpawar@etx.mae.ac.in

Abstract - Automobile crash safety is becoming one of the important criteria for customer vehicle selection. Today, passive safety systems like seat belts and airbag restraint systems have become very popular for occupant protection during collisions. Even active safety systems like ABS, ESP, parking assist cameras etc. are becoming regular fitments on many vehicle models. Many technologies are also evolving for collision detection, warning and avoidance. Different sensors, comprising RADAR, LIDAR/LASER or camera, are used in forward collision warning (FCW) to avoid accidents. In this project scope, a study is carried out on sensing parameters for different types of sensors in the Indian road environment in the context of collision avoidance systems, to benefit overall road safety in India. The analysis of the parameters will support selection of the best sensing configuration to achieve optimal system performance. Such a study would also provide insights into the functional limitations of different types of sensing systems.

Keywords - Collision avoidance system, forward collision warning system, Radar sensor, LiDAR sensor, Camera sensor, Azimuth Elevation Field of View.

INTRODUCTION: When early automobiles were involved in accidents, there was very little or no protection available for the vehicle's occupants. However, over a period of time automotive engineers designed safer vehicles to protect drivers and passengers. Advances such as improved structural design, seat belts and air bag systems helped decrease the number of injuries and deaths in road accidents. Recently, collision avoidance systems (CAS) have been evolving to avoid vehicle collisions or mitigate the severity of vehicle accidents. These systems assist drivers in avoiding potential collisions [1]. In order for a CAS to provide a positive and beneficial influence towards the reduction of potential crashes, it is critical that the CAS has the ability to correctly identify vehicle, pedestrian and object targets in the host vehicle's path [1]. The solution to this problem relies primarily on the ability of the CAS sensing system to estimate the detection range, relative speed, radius of curvature, etc. between the host vehicle and all other appropriate targets (i.e. roadside objects, pedestrians, vehicles, etc.). In-path target identification and discrimination from out-of-path objects (dealing with nuisance objects) is a technically very complex and challenging task in a collision avoidance system [1].
The range, range rate and angular information of other vehicles and/or objects around the host vehicle can be measured in real time by radar, lidar and/or camera sensors. The CAS processes all the information in real time to keep track of the most current vehicle-to-vehicle kinematic conditions. When a potential collision threat is identified by the system, appropriate warnings are issued to the driver to facilitate collision avoidance. If the driver fails to react in time to the warnings to avoid the imminent collision, an overriding system can take over control to avoid or mitigate the collision in an emergency situation. Therefore collision avoidance systems can assist drivers in two ways, warning and/or overriding, according to the dynamic situation. In such situations some of the critical sensing parameters are: [2]

Azimuth field of view: The required range for the field of view of sensor.

Elevation Field of View (FOV): a suitable value for the elevation FOV parameter helps the sensor keep track of objects which are within the range and azimuth FOV, and accounts for road tilt (5% grade), road variation, sensor misalignment and vehicle pitch (see the sketch after this list).
Operating Range: the sensor is required to detect/track stopped objects at a range that provides time for driver reaction.
Range Rate: needs to be large to avoid aliasing or dropping target tracks.
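As a rough illustration of how the elevation FOV requirement follows from road grade and detection range, a small geometric sketch is given below; the formula, the 0.5 m sensor mounting height and the 1° pitch margin are illustrative assumptions, not values taken from the cited literature.

```python
import math

def required_elevation_fov_deg(detection_range_m, road_grade=0.05,
                               sensor_height_m=0.5, pitch_margin_deg=1.0):
    """Rough full-angle estimate (illustrative only): the elevation FOV must
    cover the height offset produced by the road grade at the maximum
    detection range, plus an allowance for vehicle pitch / misalignment.
    All default parameter values are hypothetical."""
    grade_offset = road_grade * detection_range_m - sensor_height_m
    half_angle = math.degrees(math.atan2(max(grade_offset, 0.0), detection_range_m))
    return 2.0 * (half_angle + pitch_margin_deg)

# e.g. roughly a 7-8 degree full elevation FOV to cover a 5% grade at 150 m
print(round(required_elevation_fov_deg(150.0), 1))
```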

FORWARD COLLISION DETECTION SENSORS:

1. RADAR SENSOR: RADAR (Radio Detection and Ranging) is a system that uses electromagnetic waves for
detecting, locating, tracking and identifying moving and fixed objects at considerable distances. The distance
to an object is calculated from the echoes that are reflected back from it. The radar transmitter transmits electromagnetic
waves through a directional antenna in a given direction in a focused manner. A part of the transmitted energy is absorbed by
the atmosphere. Some of the energy travels further through the atmosphere, and a fraction of it is scattered backward by the targets
and received by the radar receiver. The amount of received power depends upon radar parameters such as transmitted power, radar
wavelength, horizontal and vertical beam widths, scattering cross-section of the target, atmospheric characteristics, etc. In a
forward collision warning system, a Doppler-effect-based radar transmits and detects electromagnetic waves, and the time between
transmission and detection determines the distance to the lead vehicle or obstacle. Although the amount of signal
returned is tiny, radio signals can easily be detected and amplified [5]. Radar radio waves can be generated at any desired
strength, detected at even tiny powers, and then amplified many times. Radar is therefore suited to detecting objects at very large
ranges, where other reflections, such as sound or visible light, would be too weak to detect. The position of an
object is determined through time-of-flight and angle measurements. In a time-of-flight measurement, electromagnetic
energy is sent toward the object and the returning echo is observed; the measured time difference and the speed of the signal
allow the distance to the object to be calculated. Speed is measured through the Doppler effect, i.e. the change in wavelength
caused by the changing distance between the sensor and the target. In the automobile industry there are two kinds of
radar: short range and long range. Short-range radar (24 GHz) covers approximately 0.2-20 m, while
long-range radar (76-77 GHz) covers roughly 1-200 m. The characteristics of the radar differ considerably between the
short-range and long-range variants [6]. A simple range and speed calculation based on these principles is sketched below.
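To illustrate the time-of-flight and Doppler relations just described, the following Python sketch computes target range from the echo delay and relative speed from the Doppler shift. It is a minimal example, not taken from the paper; the 77 GHz carrier and the sample echo values are assumptions.

C = 3.0e8  # speed of light in m/s

def range_from_time_of_flight(round_trip_time_s):
    # the wave travels to the target and back, hence the factor of two
    return C * round_trip_time_s / 2.0

def speed_from_doppler(doppler_shift_hz, carrier_freq_hz=77e9):
    # for a monostatic radar the Doppler shift is f_d = 2 * v * f_c / c
    return doppler_shift_hz * C / (2.0 * carrier_freq_hz)

# an echo received 0.67 microseconds after transmission -> about 100 m range
print(round(range_from_time_of_flight(0.67e-6), 1), "m")
# a 10.3 kHz Doppler shift at 77 GHz -> about 20 m/s closing speed
print(round(speed_from_doppler(10.3e3), 1), "m/s")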

2. LIDAR SENSOR: LIDAR (Light Detection and Ranging, or Laser Imaging Detection and Ranging) is a technology that
determines the distance to an object or surface using laser pulses. As in radar technology, which uses radio waves instead
of light, the range to an object is determined by measuring the time delay between transmission of a pulse and
detection of the reflected signal. The main difference between lidar and radar is that much shorter wavelengths of the
electromagnetic spectrum are used, usually in the ultraviolet, visible, or near-infrared. Lidars provide range, range rate, azimuth
and elevation measurements. Laser-based ranging is a time-of-flight measurement of a light beam from a light source to a target
and back. In these systems, the laser scanner device encompasses a transmitter and a receiver. When the beam hits an object, part of
the incident beam energy is reflected back in a roughly hemispherical radiation pattern (indicated by the red arrows in figure 1). The receiver,
located near the laser, is an optical system that captures the energy radiated back from the target object. The received signal is
further processed to compute the distance from the lidar to the object. On the path from the transmitter to the target object, the
beam spreads with a small divergence angle; this spreading causes a decrease in intensity as the distance increases and is referred to as
geometric loss. The medium through which the light travels may also absorb or scatter the light, which introduces a path loss that
increases with the distance to the target [6]. The sketch below illustrates how these losses scale with range.
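The following Python sketch illustrates, under a deliberately simplified model assumed here (not from the paper), how the received lidar power falls with range through the combination of geometric loss and atmospheric path loss; the reflectivity, aperture and attenuation coefficient are illustrative placeholders.

import math

def received_power_w(p_tx_w, range_m, reflectivity=0.1, aperture_m2=1e-3, atm_coeff_per_m=1e-4):
    # geometric loss: the echo from a diffuse target spreads over a hemisphere (~1/R^2)
    geometric = reflectivity * aperture_m2 / (2.0 * math.pi * range_m ** 2)
    # path loss: two-way atmospheric absorption/scattering, exp(-2*alpha*R)
    path = math.exp(-2.0 * atm_coeff_per_m * range_m)
    return p_tx_w * geometric * path

for r in (10.0, 50.0, 150.0):
    print(f"{r:5.0f} m -> {received_power_w(1.0, r):.3e} W")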

Figure 1 : Operating principle of Lidar


3. CAMERA SENSOR: Vision systems use one or several cameras together with a microprocessor to perform image
processing. Since they operate in the visible light region, their capabilities are similar to those of our own eyes. In this
type of FCWS, a camera-based sensor is used to detect the car/obstacle in front of the host vehicle. Camera-based FCWS
are typically used for medium-range, medium-field-of-view detection. The camera can be charge-coupled
device (CCD) based or complementary metal oxide semiconductor (CMOS) based [5].
The two main types of system are:

- Single camera systems, using either a monochrome or a colour camera. One automotive use of single camera systems is to monitor the lane markings in lane-keeping aid systems.
- Stereo camera systems, which provide a 3D image by combining the images from two (or more) cameras. In such a system, range can be measured through triangulation, as illustrated in the sketch below. Because of the 3D information, obstacle detection is easier, and shape or pattern recognition is generally not needed to the same extent as in a single camera system.

The performance of a vision system depends on, among other things, the optics, the size of the sensor, the number of pixels, and the dynamic
range. The update frequency of many vision sensors is 25 Hz.
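As an illustration of the triangulation mentioned for stereo camera systems, the sketch below uses the standard rectified pinhole-camera relation Z = f*B/d; the focal length, baseline and disparity values are assumed for the example and are not from the paper.

def stereo_range_m(focal_px, baseline_m, disparity_px):
    # depth from disparity for rectified cameras: Z = f * B / d
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for an object at finite range")
    return focal_px * baseline_m / disparity_px

# e.g. 1000 px focal length, 0.3 m baseline and 6 px disparity -> 50 m range
print(stereo_range_m(1000.0, 0.3, 6.0), "m")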
The most important measurements provided by the automotive sensor are range, range rate, azimuth angle, and elevation angle, as
shown in figure 2.

Figure 2: Measurements provided by the radar sensor are range, elevation angle and azimuth angle.

Commonly used Sensor Performance Comparison Matrix:

Table 1 compares the three sensor types (radar, lidar and vision based) against environmental influences (rain, snow, fog, hail, dust, day-night operation), object type (metallic and non-metallic objects) and object discrimination capability, rating each combination on a three-level scale: good performance, performance degradation with the condition, and poor performance. The typical detection ranges compared in the matrix are:

Performance factor    Radar        Lidar       Vision based
Range                 120-200 m    50-150 m    50-70 m

Table 1: Sensor Performance Comparison Matrix

METHODOLOGY
The following methodology has been used to analyse the performance of the sensing parameters. The flow diagram in figure 3 describes the steps of the analysis:

Start -> Objective -> Sensor study (advantages and limitations of different sensors) -> Test scenario -> Test object -> Sensor parameter study (DOE) -> Critical sensor parameter identification -> Robustness study of critical parameters -> Sensor configuration definition based on robustness study -> End

Figure 3: Flow chart - Project flow diagram

Study of different sensor types:

The first task of this study is to gain knowledge about the different types of sensor, i.e. radar, LiDAR and camera/vision sensors. A study has therefore been carried out considering the key sensing performance parameters with respect to different lighting conditions, environmental (weather) conditions and road types such as concrete, asphalt, gravel and sand, and considering different infrastructure types (curved road, straight road, U-turn, gradient type).

Selection criteria with possible combinations:
The selection criteria are based on the range rate, the object discrimination capability of the sensor under different weather conditions (such as dense fog, heavy rain, snow and sensitivity to lighting conditions) and the operating range.

CAE assessment using a software tool: The object/vehicle detection data received from CAE or from testing is evaluated for the different sensor types. For the CAE study, software is used to create different load cases (traffic scenarios); by considering typical traffic/accident scenarios and object detection in different environmental conditions, the risks where sensing systems can fail to meet the performance requirements are identified, and the specification for sensing configurations for forward collision avoidance systems is derived.

Selection of the sensing system configuration and design parameters based on the CAE assessment.

Robustness study of the design parameters of the chosen sensing system.

CONCLUSION
[1] In this study, different automotive sensors were studied along with their advantages, disadvantages and limitations.
[2] Performance assessment criteria for the sensor configuration have been established.
[3] CAE simulation has been carried out to assess sensor performance under different environmental and working conditions.
[4] The CAE simulation will be finalized for the defined application.
[5] For the defined sensor configuration, a robustness study will be carried out for the various sensor parameters.

REFERENCES:
[1] Yizhen Zhang, Erik K. Antonsson and Karl Grote, "A New Threat Assessment Measure for Collision Avoidance Systems", California Institute of Technology, Intelligent Transportation Systems Conference (ITSC), IEEE Conference Publication, 17-20 Sept. 2006, pp. 1.
[2] P. L. Zador, S. A. Krawchuk, R. B. Voas, "Automotive Collision Avoidance System (ACAS) Program", U.S. Department of Transportation, National Highway Traffic Safety Administration (NHTSA), pp. 22-35, August 2005.
[3] Kristofer D. Kusano and Hampton C. Gabler, "Safety Benefits of Forward Collision Warning, Brake Assist, and Autonomous Braking Systems in Rear-End Collisions", IEEE Transactions on Intelligent Transportation Systems, Vol. 13, No. 4, pp. 1547-1550, December 2012.
[4] Jonas Jansson, "Collision Avoidance Theory with Application to Automotive Collision Mitigation", Department of Electrical Engineering, Linköping University, SE-581 83, Sweden, pp. 27-32, 2005.
[5] Steven H. Bayless, Adrian Guan, Patrick Son, Sean Murphy, Anthony J. Shaw, "Developments in Radar, LIDAR, and other Sensing Technologies, and Impact on Vehicle Crash Avoidance/Automation and Active Traffic Management", Intelligent Transportation Society of America (ITS America) Technology Scan Series 2012-2014.
[6] R. H. Rasshofer and K. Gresser, "Automotive Radar and Lidar Systems for Next Generation Driver Assistance Functions", BMW Group Research and Technology, Hanauer Str. 46, 80992 Munich, Germany, Advances in Radio Science, 3, 205-209, 2005.


Analysis of Access Control Techniques: R3 and RBAC


Priyanka Jairath*, Rajneesh Talwar#
M.Tech Student* (Dept. of Computer Science), Principal, CGC-COE#, Punjab Technical University
prpjairath1@gmail.com*, rtmtechguidance@gmail.com# and +91 9780608772*

Abstract: Cloud computing is a new approach to computing that leverages the efficient pooling of on-demand, self-managed, virtual infrastructure. Multi-cloud designs involve deploying and evolving an application across several distinct clouds. This paper provides
a survey of how a multi-cloud design can reduce security risks, and of how using multiple distinct clouds at the
same time helps with disaster recovery, geo-presence, and redundancy. Though considerable progress has been made, much
research remains to be done to address the multi-faceted security issues that exist in cloud computing.

Keywords: Cloud Computing, Access Control, R3, RBAC, Algorithm, Comparison.


INTRODUCTION

Cloud computing is the evolution of existing IT infrastructure that realises the long-dreamed vision of computing as a
utility. The emergence of cloud technologies over the last several years has had significant impacts on many aspects of the IT business. According
to surveys about cloud computing, most small and medium companies use cloud computing services for
various reasons, including the reduction of infrastructure cost and quick access to their applications. Cloud computing has been
described in terms of its delivery and deployment models. Although cloud computing emerges from existing technologies, its
computing (delivery and deployment) models and characteristics raise new security challenges because of incompatibility
issues with existing security solutions.

Figure 1: A cloud Environment


A multi-cloud strategy can also improve overall enterprise performance by avoiding "vendor lock-in" and using
different infrastructures to meet the needs of different partners and customers. A multi-cloud approach offers not
only the hardware, software and infrastructure redundancy necessary to optimize fault tolerance, but it can also steer
traffic from different client bases or partners through the fastest possible parts of the network. Some clouds are better suited than others
for a particular task. For instance, one cloud might handle large numbers of requests per unit time requiring small
data transfers on average, while a different cloud might perform better for smaller numbers of requests per unit
time involving large data transfers on average.
NIST defines three main service models for cloud computing:
1. Software as a Service (SaaS): the cloud provider gives the cloud consumer the capability to use applications running on a cloud infrastructure [1].
2. Platform as a Service (PaaS): the cloud provider gives the cloud consumer the capability to develop and deploy applications on a cloud infrastructure using tools, runtimes, and services supported by the CSP [1].
3. Infrastructure as a Service (IaaS): the cloud provider gives the cloud consumer essentially a virtual machine. The cloud consumer has the ability to provision processing, storage, networks, etc., and to deploy and run arbitrary software supported by the operating system run by the virtual machine [1].
NIST also defines four deployment models for cloud computing: public, private, hybrid, and community clouds. Refer to
the NIST definition of cloud computing for their descriptions [1].

One of the most appealing factors of cloud computing is its pay-as-you-go model of computing as a resource. This
revolutionary model of computing has allowed businesses and organizations in need of computing power to purchase as many
resources as they need without having to put forth a large capital investment in IT infrastructure. Other advantages
of cloud computing are massive scalability and increased flexibility for a relatively constant price. For example, a cloud
user can provision a thousand hours of computational power on one cloud instance for the same price as one hour of computational power on a
thousand cloud instances [2].
Despite the many advantages of cloud computing, many large enterprises are hesitant to adopt cloud computing to
replace their existing IT systems. In the Cloud Computing Services Survey done by the IDC IT group in 2009, over 87%
of those surveyed cited security as the biggest issue preventing adoption of the cloud [3]. For adoption of cloud computing to
become more widespread, it is vital that the security issues with cloud computing be analyzed and addressed, and that proposed solutions be
implemented in existing cloud offerings.
The organization of the remainder of this paper is as follows. The second section discusses the framework with which the
security problems in cloud computing can be addressed, and the third section elaborates on each of the parts of this
framework. Finally, the fourth section of this paper discusses conclusions and future work in the area of cloud
computing security.

Role Based Access Control


In the role-based access control (RBAC) model, roles are mapped to access permissions and users are mapped to appropriate
roles. For example, users are assigned membership of roles based on their responsibilities and qualifications within the
organisation. Permissions are assigned to qualified roles instead of to individual users. Moreover, in RBAC, a role can inherit
permissions from other roles; therefore there is a hierarchical structure of roles. In a role-based encryption (RBE) scheme, the owner of the data encrypts the data in
such a way that only the users with appropriate roles, as specified by an RBAC policy, can decrypt and view the data. The role grants
permissions to users who qualify for the role and can also revoke the permissions from existing users of the
role. The cloud provider (who stores the data) will not be able to see the content of the data if the provider
is not given the appropriate role. An RBE scheme is able to deal with role hierarchies, whereby roles inherit permissions from other
roles. A user is able to join a role after the owner has encrypted the data for that role. The user will be able to access that
data from then on, and the owner need not re-encrypt the data. A user can be revoked at any time, in which case
the revoked user will not have access to any future encrypted data for this role. With the new RBE scheme, revocation of a user
from a role does not affect other users and roles in the system.
Security Analysis: The scheme has been shown to be semantically secure under the General Decisional Diffie-Hellman Exponent
(GDDHE) assumption introduced in [15], by defining a specific GDDHE problem.

I. BASIC R3

In the basic R3 scheme, we consider ideal conditions, where the data owner and all of the
cloud servers in the cloud share a synchronised clock, and there are no transmission or queuing delays when
executing read and write commands.

A. Intuition
The data owner first generates a shared secret key for the CSP. Then, after the data owner encrypts each file
with the appropriate attribute structure and time slice, the data owner uploads the file to the cloud. The CSP replicates the
file to numerous cloud servers. Every cloud server has a copy of the shared secret key.
Let us assume that a cloud server stores an encrypted file F with A and TSi. When a user queries that cloud
server, the cloud server first uses its own clock to determine the current time slice. Assuming that this time slice is TSi+k, the cloud
server automatically re-encrypts F with TSi+k, without receiving any command from the data owner. Throughout the
process, the cloud server cannot learn the contents of the ciphertext or the new encryption keys. Only users with keys
satisfying A and TSi+k will be able to decrypt F.
B. Protocol Description
We divide the description of the basic R3 scheme into three parts: data owner initialization, data user read data, and data owner write data. The following functions are used.
1) Setup() -> (PK, MK, s): At TS0, the data owner publishes the system public key PK, keeps the system master key MK secret, and sends the shared secret key s to the cloud [3].
2) GenKey(PK, MK, s, PK_Alice, A, T) -> SK_Alice: When the data owner wants to grant data user Alice attributes A with valid time period T, the data owner generates SK_Alice using the system public key, the system master key, the shared secret key, Alice's public key, Alice's attributes and the eligible time.
3) Encrypt(PK, A, s, TS_t, F) -> C_tA: At TS_t, the data owner encrypts file F with access structure A and produces the ciphertext C_tA using the system public key, the access structure, the shared secret key, the time slice and the plaintext file.
4) Decrypt(PK, C_tA, SK_Alice, attribute secret keys) -> F: At TS_t, a user U who possesses the version-t attribute secret keys on all the relevant attributes recovers F using the system public key, the user identity secret key and the user attribute secret keys.
5) REncrypt(C_tA, s, TS_t+k) -> C_t+k,A: When the cloud server needs to return a file to a data user at TS_t+k, it updates the ciphertext from C_tA to C_t+k,A using the shared secret key.

Algorithm 1: Basic R3 (synchronized clock with no delays)
    while receive a write command W(F, seqnum) at TSi do
        commit the write command in order at the end of TSi
    while receive a read command R(F) at TSi do
        re-encrypt the file with TSi

1) Data owner initialization: The data owner runs the Setup function to initialize the system. When the data owner wants to
upload file F to the cloud server, it first defines an access structure A for F and then determines the current time
slice TSi. Finally, it runs the Encrypt function with A and TSi to output the ciphertext. When the data owner wants to grant a set of attributes for a period of time to data user Alice, it runs the
GenKey function with the attributes and effective times to generate keys for Alice.
2) Data user read data: When data user Alice wants to access file F at TSi, she sends a read command R(F) to the
cloud server, where F is the file name. On receiving the read command R(F), the cloud server runs the REncrypt
function to re-encrypt the file with TSi. On receiving the ciphertext, Alice runs the Decrypt function using keys
satisfying A and TSi to recover F.
3) Data owner write data: When the data owner wants to write file F at TSi, it sends a write command to the
cloud server in the form W(F, seqnum), where seqnum is the order of the write command. This seqnum is
critical for ordering when the data owner issues multiple write commands that need to happen within one time slice. On
receiving the write command, the cloud server commits it at the end of TSi. Algorithm 1 shows the actions of the cloud
server.

Algorithm 2: Extended R3 (asynchronized clock with delays)
    while receive a write command W(F, t_i+1, seqnum) do
        if the current time is before t_i+1 plus the write-delay bound then
            build Window i for file F
            commit the write command in Window i at t_i+1 plus the write-delay bound
        else
            reject the write command
            inform the data owner to send write commands earlier
    while receive a read request R(F, TSi) do
        if the current time is later than t_i+1 plus the read-delay bound then
            re-encrypt the file in Window i with TSi
        else
            hold the read command until t_i+1 plus the read-delay bound [3]
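To make the cloud server's behaviour in Algorithm 1 concrete, the following Python sketch mimics the read/write handling per time slice. It is only a schematic illustration, not the authors' implementation: the re-encryption call is a placeholder, and the clock, store and queue structures are assumptions.

from dataclasses import dataclass, field

def re_encrypt(ciphertext, time_slice):
    # placeholder for REncrypt(C, s, TS): in the real scheme the ciphertext is
    # updated to the new time slice with the shared secret key, without decryption
    return ciphertext

@dataclass
class BasicR3Server:
    clock: object                                          # callable returning the current time slice
    store: dict = field(default_factory=dict)              # file name -> (ciphertext, time slice)
    pending_writes: dict = field(default_factory=dict)     # time slice -> list of (seqnum, file, data)

    def on_read(self, name):
        ts_now = self.clock()
        ciphertext, ts_old = self.store[name]
        if ts_old != ts_now:                               # re-encrypt with the current slice
            ciphertext = re_encrypt(ciphertext, ts_now)
            self.store[name] = (ciphertext, ts_now)
        return ciphertext

    def on_write(self, name, data, seqnum):
        self.pending_writes.setdefault(self.clock(), []).append((seqnum, name, data))

    def end_of_slice(self, ts):
        # commit the queued write commands in seqnum order at the end of the slice
        for seqnum, name, data in sorted(self.pending_writes.pop(ts, [])):
            self.store[name] = (data, ts)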

II. ROLE BASED ACCESS CONTROL

In the role-based access control (RBAC) model, roles are mapped to access permissions and users are mapped to appropriate
roles. For instance, users are assigned membership of roles based on their responsibilities and qualifications in the organization.
Permissions are assigned to qualified roles instead of to individual users. Moreover, in RBAC, a role can inherit permissions from other
roles; hence there is a hierarchical structure of roles. Since being first formalized in the 1990s, RBAC has been widely used in many
systems to provide users with flexible access control management, as it allows access control to be managed at a level that
corresponds closely to the organization's policy and structure. A small sketch of this role hierarchy model is given below.
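A small illustration of this model (Python, with made-up role names and permissions, not from the paper): permissions are attached to roles, users are assigned roles, and a role's effective permissions include those inherited from its ancestor roles.

ROLE_PARENTS = {                       # role -> roles it inherits permissions from
    "employee": [],
    "engineer": ["employee"],
    "manager": ["engineer"],
}
ROLE_PERMISSIONS = {
    "employee": {"read:public"},
    "engineer": {"read:design", "write:design"},
    "manager": {"approve:design"},
}
USER_ROLES = {"alice": {"manager"}, "bob": {"engineer"}}

def permissions_of_role(role):
    perms = set(ROLE_PERMISSIONS.get(role, set()))
    for parent in ROLE_PARENTS.get(role, []):
        perms |= permissions_of_role(parent)      # walk up the role hierarchy
    return perms

def user_can(user, permission):
    return any(permission in permissions_of_role(r) for r in USER_ROLES.get(user, set()))

print(user_can("alice", "read:public"))   # True: inherited via manager -> engineer -> employee
print(user_can("bob", "approve:design"))  # False: only managers hold this permission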
In traditional access control systems, enforcement is carried out by trusted parties which are usually the service providers. In
a public cloud, as data can be stored in distributed data centres, there may not be a single central authority which controls all the data
centres. Furthermore the administrators of the cloud provider themselves would be able to access the data if it is stored in plain format.
To protect the privacy of the data, data owners employ cryptographic techniques to encrypt the data in such a way that only users who
are allowed to access the data as specified by the access policies will be able to do so. We refer to this approach as a policy based
encrypted data access. The authorized users who satisfy the access policies will be able to decrypt the data using their private key, and
no one else will be able to reveal the data content. Therefore, the problem of managing access to data stored in the cloud is
transformed into the problem of management of keys which in turn is determined by the access policies.
The main review contributions of this paper are
(i) A new role-based encryption (RBE) scheme with efficient user revocation that combines RBAC policies with encryption
to secure large scale data storage in a public cloud,
(ii) A secure RBAC based hybrid cloud storage architecture which allows an organization to store data securely in a public
cloud, while maintaining the sensitive information related to the organization's structure in a private cloud,
(iii) A practical implementation of the proposed RBE scheme and description of its architecture and
(iv) Analysis of results demonstrating efficient performance characteristics such as efficient encryption and decryption
operations on the client side as well as superior characteristics of the proposed RBE scheme such as constant size cipher text and
decryption key as well as efficient user revocation. Given these characteristics, the proposed RBE system has the potential to be a
suitable candidate for developing practical commercial cloud data storage systems.

Figure 2: RBE Architecture


The RBE scheme has the following four types of entities. SA is a system administrator that has the authority to generate the
keys for users and roles, and to define the role hierarchy. RM is a role manager who manages the user membership of a role. Owners
are the parties who want to store their data securely in the cloud. Users are the parties who want to access and decrypt the stored data
in the cloud. The cloud is the place where data is stored, and it provides interfaces so all the other entities can interact with it.
We can define the following algorithms for the RBE scheme:
Setup() takes as input the security parameter and outputs a master secret key mk and a system public key pk. mk is kept secret by
the SA while pk is made public to all users of the system. Extract(mk, ID) is executed by the SA to generate the key associated with
the identity ID. If ID is the identity of a user, the generated key is returned to the user as the decryption key. If ID is the identity of a
role, the generated key is returned to the RM as the secret key of the role, and an empty user list RUL, which will list all the users who
are members of that role, is also returned to the RM.
ManageRole(mk, IDR, PRR) is executed by the SA to manage a role with the identity IDR in the role hierarchy.
PRR is the set of roles which will be the ancestor roles of the role. This operation publishes a set of public parameters pubR to the cloud.


AddUser(pk, skR, RULR, IDU) is executed by the role manager RM of a role R to grant role membership to the user IDU, which
results in the role public parameters pubR and the role user list RULR being updated in the cloud.
RevokeUser(pk, skR, RULR, IDU) is executed by a role manager RM of a role R to revoke the role membership from a user IDU,
which also results in the role public parameters pubR and the role user list RULR being updated in the cloud.
Encrypt(pk, pubR) is executed by the owner of a message M. This algorithm takes as input the system public key pk and the role public
parameters pubR, and outputs a tuple (C, K), where C will be a part of the ciphertext and K is the key that will be used to encrypt
the message M (note that the ciphertext consists of C and the encrypted M).
We assume that the system uses a secure encryption scheme Enc, which takes K as the key space, to encrypt messages. The
ciphertext of the message M will be in the form (C, EncK(M)), which can only be decrypted by the users who are members of
the role R. When this operation finishes, the ciphertext is output and uploaded to the cloud by the owner.
Decrypt(pk, pubR, dk, C) is executed by a user who is a member of the role R. This algorithm takes as input the system public key pk,
the role public parameters pubR, the user decryption key dk, and the part C of the ciphertext downloaded from the cloud, and outputs the
message encryption key K. The key K can then be used to decrypt the ciphertext part EncK(M) and obtain the message M [4].
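To make the shape of the Encrypt/Decrypt interface above concrete, the toy sketch below substitutes an ElGamal-style key encapsulation for the actual pairing-based RBE construction, which is not reproduced here. The group parameters, the hash-based key derivation and the idea of giving the role secret directly to the decrypting member are all simplifying assumptions for illustration only.

import hashlib
import secrets

P = 2 ** 127 - 1   # small Mersenne prime used only as a toy modulus (not secure)
G = 3

def keygen_role():
    # role secret (stands in for the role key handled via SA/RM) and public parameter
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

def encrypt(pub_role):
    # owner side: produce the ciphertext part C and the message-encryption key K
    r = secrets.randbelow(P - 2) + 1
    C = pow(G, r, P)
    K = hashlib.sha256(str(pow(pub_role, r, P)).encode()).digest()
    return C, K            # the message itself would then be stored as Enc_K(M)

def decrypt(role_secret, C):
    # member side: recover the same K from C using the role's decryption key
    return hashlib.sha256(str(pow(C, role_secret, P)).encode()).digest()

dk, pub = keygen_role()
C, K = encrypt(pub)
assert decrypt(dk, C) == K   # a role member recovers the key that encrypts M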

ACKNOWLEDGMENT
The authors gratefully acknowledge the support of CGC Landran College, for partial work reported in the paper.

CONCLUSION
In this paper, we analyzed and studied the R3 scheme, a new method for managing access control based on the cloud servers'
internal clocks. The technique does not rely on the cloud to reliably propagate re-encryption commands to all servers to ensure access
control correctness. Secondly, we studied a new RBAC scheme that achieves efficient user revocation and provides more privacy and
security for information stored in the cloud. However, all data remains in the public cloud.

III. FUTURE SCOPE

We reviewed the two techniques, R3 and RBAC, for cloud based architectures. The future of this cloud storage
architecture lies in allowing an organization to store data securely in a public cloud while maintaining the sensitive information related
to the organization's structure in a private cloud. In future, an effective data access control scheme can be constructed for multi-authority cloud storage systems, where it can be proved that the existing scheme can be made more secure in the random oracle model. The
new scheme should be a promising technique, which can be applied in any remote storage system, online social networks, etc.
REFERENCES
[1] P. Samarati and S. D. C. di Vimercati, "Data protection in outsourcing scenarios: Issues and directions", in Proc. ASIACCS, Apr. 2010, pp. 1-14.
[2] Lan Zhou, Vijay Varadharajan, and Michael Hitchens, "Achieving Secure Role-Based Access Control on Encrypted Data in Cloud Storage", IEEE Transactions on Information Forensics and Security, vol. 8, no. 12, December 2013, pp. 1947-1960.
[3] Qin Liu, Chiu C. Tan, Jie Wu, and Guojun Wang, "Reliable Re-encryption in Unreliable Clouds", conference/journal, pp. 1-5.
[4] C. Delerablée, "Identity-based broadcast encryption with constant size ciphertexts and private keys", in ASIACRYPT (Lecture Notes in Computer Science), vol. 4833, New York, NY, USA: Springer-Verlag, 2007, pp. 200-215.


An Investigative and Synoptic Review on Helium Liquefaction Using a Commercial 4 K Gifford-McMahon Cryocooler

Mahesh N1, S A Mohan Krishna2
1 PG Student, Vidyavardhaka College of Engineering, Mysore, India
2 Associate Professor, Department of Mechanical Engineering, Vidyavardhaka College of Engineering, Mysore, Karnataka, India
1 maheshgowda360@gmail.com, 2 mohankrishnasa@vvce.ac.in

Abstract: This review paper describes the liquefaction of helium using a commercial cryocooler with 1.5 W cooling power
at 4.2 K, equipped with heat exchangers for precooling the incoming gas. Measurements of the pressure dependence of the
liquefaction rate are considered. The liquefaction rate and temperature can also be observed by placing resistors in series
inside the liquefaction container; the voltage readings then give the liquefaction rate. The assembly of the Gifford-McMahon
cryocooler with heat exchangers and the helium liquefaction container is described. A pressure gauge is connected to the
container in order to examine the pressure oscillation, which indicates the liquefaction rate, and resistors are placed inside
the container to measure the temperature and pressure of helium at different stages in the container. This paper furnishes
detailed information about the methodology and experimental technique for using a Gifford-McMahon cryocooler for helium liquefaction.

Key words: Cryocooler, Gifford-McMahon, Liquefaction, Heat Exchangers, Pressure Gauge and Resistors.

1. INTRODUCTION
Liquefaction is the process of converting a compressed gas into a liquid under reliable conditions. Liquid
helium is required as a working medium in almost all low-temperature laboratories. Usually a large-scale
liquefier serves as a central facility for helium liquefaction, for distribution of liquid helium to many cryostats in
large transport dewars. With the availability of small closed-cycle cryocoolers a different scheme has become
possible, where helium liquefaction may be performed nearby or even in the cryostat, thus allowing operation
independent from cryogenic liquids support. Two variants of cryocoolers are available on the market, the pulse
tube and Gifford-McMahon (GM) types. Without the aid of cryoliquids and Joule-Thomson stages an effective
liquefaction rate of 542 ml/h has been achieved using GM cryocooler [1].

The work was motivated by the development of a versatile source of ultra-cold neutrons, which will employ
superthermal production in a converter of superfluid helium. First tests will be performed with a converter a few
litres in volume, which might later be upgraded to tens of litres. This volume has to be filled with helium and
cooled down to 0.5 K. Liquefaction at the rate observed in the present study will enable experiments
independent of cryoliquids and Joule-Thomson stages. As noted above, liquid helium is required as a working medium in
low-temperature laboratory applications, and small closed-cycle cryocoolers allow liquefaction to be performed nearby or
even in the cryostat, allowing operation independent of cryogenic liquids support; one main application of such a
setup is a SQUID magnetometer.
2. LITERATURE REVIEW
After the introduction of the pulse tube cooler by Gifford and Longsworth in the mid 1960s, essential
improvements of this refrigerator type were achieved in the following decades by modifications such as
adding a buffer volume via an orifice valve to the warm end of the pulse tube, which led to a phase shift between
pressure and velocity with resulting improvements in cooling performance.
Thummes et al. [2] reported a liquefaction rate of 127 ml/h obtained with a pulse tube cooler with 170
mW net cooling power at 4.2 K. A temperature of 3.6 K and a net cooling power of 30 mW at 4.2 K was
first obtained with a three-stage pulse tube cooler by Matsubara, whose system used a regenerative tube at the warm end of the
third-stage pulse tube and reached a lowest temperature of 2.75 K. Thummes achieved a comparable lowest temperature of
2.75 K using a two-stage pulse tube cooler, and the process and performance of two configurations of 4 K pulse tube coolers
and GM cryocoolers were analysed by C. Wang in 1997.
C. Wang, G. Thummes et al. [3] investigated a two-stage double-inlet pulse tube cooler in 1996; a cooler for
cooling below 4 K was designed and constructed with the aid of numerical analysis. The hot end of the second-stage
pulse tube is connected to the phase-shifting assembly at room temperature without the use of a regenerative
tube.


3. EXPERIMENTAL TECHNIQUES
Fig 3.1 shows a sketch of the setup. The helium gas is supplied by a standard helium gas cylinder equipped
with a pressure-reducing valve. The mass flow is controlled by a needle valve and monitored by a mass flow
meter calibrated for helium with accuracy better than 1.5% in the range of 0-100 g/h. The helium flow can be
opened and closed with a valve. The cryocooler is integrated into a DN 320 top flange of a cylindrical vessel
from stainless steel. A cylindrical vessel with diameter 290 mm from silver-coated copper connected to the first
stage of the cooler serves as a heat screen to protect the colder parts of the liquefier from ambient temperature
radiation. The estimated total heat load to the first stage additional to the unknown flow along the cooler itself is
7.4 W.
The gas first passes through a cold trap, which is connected to the top flange of the heat screen and thus
kept at a temperature close to the first stage of the cold head. The cold trap consists of a copper cylinder filled
with copper mesh, which also serves to freeze out gas impurities. An additional heat exchanger assures
precooling to the temperature T1 of the first stage. It consists of a stainless steel capillary with outer diameter 2
mm and wall thickness 0.25 mm, soft soldered to a copper sheet on a length of 0.5 m and fixed to the first stage
with a hose clamp. The same capillary was chosen for the subsequent heat exchangers, ensuring turbulent flow
of the helium gas for good radial heat transfer across the capillary wall.


Fig. 3.1 Schematic of helium liquefaction rig (not to scale). Fifty litre helium cylinder (C), pressure reducer
(PR), flow meter (FM), needle valve (NV), cut-off valve (V), inlet pressure gauge for measuring Pin (P1), top
flange of heat screen (HS), cold trap (1), heat exchanger on first stage of cryocooler (2), heat exchanger
between first and second stage of cryocooler (3), condenser spiral on second stage (4), storage volume (5),
pressure gauge (P2) for measuring the pressure Pbath above the bath.
The helium is finally liquefied in a condenser. It consists of a stainless steel capillary spiral, which is
hard soldered on a length of 1165 mm to a copper disk, screwed to the bottom plate of the second stage. Liquid
helium is collected in a copper bottle with volume 400 cm3, which is thermally connected to the second stage. A
2 × 0.25 mm stainless steel capillary soldered into the top flange of the bottle serves to measure the pressure Pbath
above the liquid. Pbath and the input pressure Pin are monitored with calibrated piezoelectric silicon membrane
pressure transducers. Different heat exchangers between the first and the second stage were investigated. Most
experiments were performed using a spiral made from the 2 × 0.25 mm stainless steel capillary, onto which pieces of
a 3 × 0.5 mm copper tube, each 30 mm long, were hard soldered. Between each of these, a gap of 12 mm is
left. The longest spiral is equipped with 46 such pieces, the medium size spiral has 36 pieces and the shortest
one 20. Thus the thermal contact length is 138, 108, and 60 cm, respectively. On the inner side of the spirals,
the copper pieces were milled to produce flat surfaces with width 1.8 mm. For good thermal contact with the
stainless steel tube of the cold head, the spirals were tightened by 9 hose clamps. Care has to be taken not to
tighten the clamps too strongly, which may result in blocking the displacer in the cold head. Three spirals are
mounted simultaneously, only a single one being used at a time in the experiments described below.
Temperature measurements were performed with three calibrated Cernox resistance temperature sensors (Lake
Shore, model CX-1030-CU). They were attached to the heat screen (T1), the condenser (T2), and the bottom of
the bottle (Tbath).
Cooling down the copper heat screen to below 40 K took about 7.5 hours due to its large heat capacity.
After this time the temperature T2 of the condenser was 2.8 K. These values are close to the lowest temperatures
of T1 = 32 K and T2 = 2.4 K reached with this apparatus. Opening the cut-off valve commenced the filling of
the bottle with helium. At the beginning a large mass flow was set in order to attain quickly the desired pressure
Pin or Pbath. The mass flow was measured for different values of input pressure Pin in the range of 0.7-2.58 bar.
In most experiments the bottle was filled for a single value of pressure to a total mass m. A sudden rise in
pressure indicated that the bottle was full, at which moment the cut-off valve was closed. In several experiments
we let the cold head continue operation in order to measure the time for the subsequent cooling of the liquid to
4.2 K.

4. INVESTIGATIVE SUMMARY
The investigation of heat exchangers showed a monotonic increase of the liquefaction rate as a
function of the length of thermal contact of the gas with the tube of the cold head. Using the longest spiral heat
exchanger, which has 28% more contact length than the second longest one, still increased the liquefaction rate
by only about 8%. A longer spiral can increase the performance in terms of liquefaction rate. In addition, the
commercial GM cryocooler can easily be converted into a reliable and powerful liquefier unit for
application in low-temperature laboratories without cryogenic liquids. This GM design exhibits a higher
liquefaction efficiency than the pulse tube cryocooler, and it is competitive with more complex designs because of its
simple design and construction [4].
The technological differences between GM and pulse tube cryocoolers raised questions on the suitability of GM
cryocoolers for helium liquefaction. Due to the high thermal resistance between the incoming gas and the
regenerative material, a typical GM type cooler with 1.5 W cooling power may provide at best a liquefaction rate of 2 litres per
day.
REFERENCES:
[1] P. Schmidt-Wellenburg, O. Zimmer, "Helium liquefaction with a commercial 4 K GM cryocooler", 10 August 2006.

[2] C. Wang, "Numerical analysis of 4 K pulse tube coolers", 10 January 1997.

[3] C. Wang, G. Thummes and C. Heiden, "A two-stage pulse tube cooler operating below 4 K", 22 October 1996.

[4] Wang, C., Ju, Y. and Zhou, Y., "Experimental investigation of a two-stage pulse tube refrigerator", Cryogenics, 1996.


Structural Analysis on Unconventional Section of Air-Breathing Cruise Vehicle

BASHETTY SRIKANTH1, K. DURGA RAO2, Dr. S. SRINIVAS PRASAD3
1 PG Scholar, MLR Institute of Technology, Hyderabad-500 043, India (Corresponding author, Email: srikanth.aeforu@gmail.com, 9052361566)
2 Associate Professor, Department of Aeronautical Engineering, Malla Reddy College of Engineering & Technology, Hyderabad-500 017, India (Email: durgaraok9@gmail.com)
3 Professor, Department of Aeronautical Engineering, MLR Institute of Technology, Hyderabad-500 043, India (Email: sanakaprasad@yahoo.com)

Abstract - High temperatures encountered during hypersonic flight lead to high thermal stresses and a significant reduction in
material strength and stiffness of the airframe of an air-breathing hypersonic cruise vehicle. Thermo-structural analysis of a hypersonic vehicle
airframe is a challenging problem, since the material properties, yield strength and ultimate tensile strength vary with temperature.
Thermal analysis, structural analysis and coupled thermo-structural analysis have been carried out on an unconventional section of a
hypersonic air-breathing cruise vehicle subjected to high temperatures and flight loads. The unconventional section, of constant
cross section and designed for housing the fuel tank and intake-cowl opening mechanism of the cruise vehicle, has been modeled using the
commercial software CATIA V5. Analytical stresses have been calculated for flight loads at control surface deflections of 0° and
15° at Mach 6 and a temperature of 590 °C. A linear relation of E, G and K with temperature has been considered in the computational
and analytical calculations. Computational analysis has been carried out using the commercial software ANSYS Workbench in the present
study. Static analysis has been carried out on the section subjected to a maximum bending moment of 6900 N-m to verify the structural
integrity of the section. The results obtained from the computational analysis are in good agreement with the analytical results. This study
provides the material combination which ensures structural safety of the airframe of the cruise vehicle at all operating conditions.
Keywords: Hypersonic cruise vehicle, Airframe, Temperature, Flight loads, Thermal analysis, Structural analysis, Yield strength,
Ultimate tensile strength.

1. INTRODUCTION
The perceived advantages of hypersonic technology for space and missile applications have led many countries to initiate ambitious
programs in recent times. At present, many advanced countries are pursuing the development of hypersonic cruise vehicles. India is
the second country to have planned an autonomous flight of a hypersonic air-breathing vehicle, the first being the USA, which
demonstrated such flights through the X-43 and X-51 programs. The main objective in every airborne vehicle is to have a structure which
can withstand the various loads (i.e. both ground and air loads) at the lowest possible weight, so that it functions effectively and efficiently.
For this reason, the aerospace field has driven advances in almost every field of technology, including thermal, propulsion,
structural, metallurgical and aerodynamic disciplines. The need to have the best possible configuration, both aerodynamically and structurally,
with the best suitable material along with the desired speed, efficiency, etc., has led to many innovations in materials,
structures, aerodynamic configurations and propulsion efficiency.

2. OBJECTIVE OF THE PROJECT


To discretize the airframe of the hypersonic air-breathing cruise vehicle into six sections and consider, for the analysis, an unconventional section having
uniform cross section, designed for housing the fuel tank and intake-cowl opening mechanism of the hypersonic air-breathing cruise
vehicle. To calculate analytical stresses due to flight loads at control surface deflections of 0° and 15° at Mach 6 and a temperature of 590 °C.
To carry out computational analysis for finding the deformation and stresses over the airframe of the hypersonic cruise
vehicle with an Al alloy and Ti alloy material combination subjected to flight loads and high temperatures. To show that the material
combination of Al alloy for the bulkheads, top panel and side panels and Ti alloy for the bottom panel, with a suitable
thermal protection system, ensures structural safety of the airframe of the cruise vehicle at all operating conditions.

3. PROBLEM STATEMENT

The airframe of the hypersonic air-breathing cruise vehicle has been discretized into six sections, as shown in figure 1. An unconventional
section P4, having a uniform cross section and designed for housing the fuel tank and intake-cowl opening mechanism of the hypersonic air-breathing cruise vehicle, is considered for the analysis. The
length of the cruise vehicle has been conceptualized as 5.6 m. The maximum width and height of the cruise vehicle are 0.8 m and 0.4 m respectively.

Figure 1: Design of hypersonic air-breathing cruise vehicle airframe


The fore body consists of two ramps, 11.86° and 13.75°, and is designed for the cruise Mach number 6 at 32 km altitude. The configuration
cross-section is flat on the bottom with chamfers on the sides and an elliptical curvature on the top. There are chamfers on the top and
bottom sides. The after body, of length 1.035 m, consists of a single expansion nozzle. The nose tip of the vehicle is blunt and spherical in
shape.
Section        P1    P2    P3    P4    P5     P6
Length (mm)    330   477   873   885   2000   1035

Table 1: Length of sections of cruise vehicle


3.1 DESIGN OF P4 SECTION OF CRUISE VEHICLE
Section P4, which has a constant cross section throughout its length, is designed for housing the fuel tank, air bottles and intake-cowl opening
mechanism. This section is 885 mm long and lies between stations 1680 mm and 2565 mm, as shown in figure 1. It has two end
bulkheads, four stringers and four panels. The two end bulkheads are initially joined together by four stringers with the help of fasteners.
Then all four panels, of thickness 3 mm, are placed over them and held by rivets/screws. The solid model of the section P4 is shown in
figure 2. The section has been modelled in CATIA V5 and imported into ANSYS Workbench.

Figure 2: Section P4 of cruise vehicle


3.2 INPUT LOAD DATA

The current project is the design of an unconventional section for structural and thermal loads. The bottom portion of the section experiences
high temperature as it acts as the intake for the scramjet engine. The vehicle cruises in free flight using the propulsion produced
by the scramjet engine. At this free flight condition, flight loads and thermal loads act on the cruise vehicle, as discussed below.
FLIGHT LOADS
Flight load analysis of the cruise vehicle has been done for two instances of flight: i) control surface not deflected, where the maximum bending
moment is 6300 N-m, and ii) control surface deflected to 15°, where the maximum bending moment is 6900 N-m.
THERMAL LOADS
As the total operational environment of the cruise vehicle is in the high Mach regime, high temperatures are
inevitable. To withstand these high temperatures, proper material selection and design have to be done. Table 2 shows the
temperatures applied at the different panels, each of thickness 3 mm.
Panel               Top panel   Top slope panel   Vertical panel   Bottom slope panel   Bottom panel
Temperature (°C)    110         85                145              225                  590

Table 2: Temperatures at different panels

4. MATERIAL PROPERTIES
A number of engineering materials have been studied for the design. The following are the two candidate materials with their limitations on
mechanical properties at elevated temperatures. Because of its light weight and high specific strength at the operating temperature of the
airframe, Al alloy has been considered as the material for the bulkheads, top panel and side panels, and Ti alloy for the bottom panel.
S.No   Property                            Al alloy                           Ti alloy
1      Density                             2700 kg/m3                         4560 kg/m3
2      Temperature                         300 °C                             700 °C
3      Thermal conductivity                239 W/m-K                          17 W/m-K
4      Coeff. of thermal expansion         24e-6 K-1                          9.4e-6 K-1
5      Ultimate tensile strength (MPa)     333 at 300 °C; 407 at 250 °C       265 at 600 °C; 750 at 400 °C
6      Young's modulus (MPa)               53567 at 300 °C; 60786 at 250 °C   84000 at 600 °C; 122000 at 400 °C

Table 3: Mechanical properties of materials

5. DESIGN METHODOLOGY
Modelling of the section P4 has been carried out using the CATIA V5 tool. Section P4, which has a constant cross section throughout its
length, is designed for housing the fuel tank, air bottles and intake-cowl opening mechanism. This section is 885 mm long and lies
between stations 1680 mm and 2565 mm, as shown in figure 1. It has two end bulkheads, four stringers and four panels. The two end
bulkheads are initially joined together by four stringers with the help of fasteners. Then all four panels, of thickness 3 mm, are placed over
them and held by rivets/screws.
S.No   Part                 Width (mm)   Thickness (mm)   Length (mm)
1      Bulkhead             65           25               Around the cross section
2      Stringer             25           6.5              885
3      Top panel            586.42       3                885
4      Top slope panel      191          3                885
5      Vertical panel       100          3                885
6      Bottom slope panel   141.36       3                885
7      Bottom panel         600          3                885

Table 4: Dimension specifications of P4 section of cruise vehicle

The geometrical configurations of the bulkhead and stringer are given in figure 3, which shows the recess made in them to accommodate the panels
of thickness 3 mm.

Figure 3: Geometrical configurations of bulkhead and stringer


The orthographic representation of section P4 of the hypersonic air-breathing cruise vehicle is shown in figure 4 below.

Figure 4: Arrangement of bulkheads and stringers of section

6. ANALYTICAL METHOD
In this part of the work, the stresses induced in the section due to the bending moment load were found. For this method the section is
approximated as a free-free beam; hence, the theory of symmetrical bending of beams can be applied to calculate the stresses and
deformation. In this flight load analysis, the two cases discussed in the previous section are considered.
ASSUMPTIONS
a) The section has been assumed to be a monocoque shell.
b) Thermal barrier coating (TBC) technology is available. A temperature reduction of 250 °C across the coating is achieved.
6.1 SYMMETRICAL BENDING
Symmetrical bending arises in beams which have either singly or doubly symmetrical cross-sections; in our case the cross-section is
symmetrical about the y-axis, as shown in figure 5. The direct stress due to the bending moment in the beam is given by
σz = M/Z [10], with Z = I/y,
where σz is the direct stress, M is the bending moment, Z is the section modulus, I is the moment of inertia, and y is the distance from the neutral axis to the extreme fibre.
Moment of Inertia


For calculating the moment of inertia of the shelled section, it was divided into individual simple shells of standard geometry shapes.
The moment of inertia of all these individual entities about their centroids was computed and then transferred to the centroid of the
total section.

Figure 5: Geometrical configuration of cross section


Moment of inertia calculated for the panels is as shown below:

S.No   Panel                I (mm4)        Ix (mm4)
1      Top panel            763541.67      742926041
2      Top slope panel      10522462.22    74774862.22
3      Vertical panel       2083333.33     2723333.33
4      Bottom slope panel   1757281.14     37097581.14
5      Bottom panel         781250         338281250

Total moment of inertia of the complete section: Ixx = 971590908.8 mm4
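The parallel-axis bookkeeping behind such a table can be sketched as below (Python); the three parts and their dimensions are made-up placeholders, not the actual panels of section P4.

parts = [
    # (area in mm^2, centroid height y in mm, own centroidal I in mm^4)
    (1800.0, 398.5, 1350.0),   # e.g. a thin top plate
    (600.0, 200.0, 500000.0),  # e.g. a vertical web
    (1800.0, 1.5, 1350.0),     # e.g. a thin bottom plate
]

area_total = sum(a for a, _, _ in parts)
y_bar = sum(a * y for a, y, _ in parts) / area_total              # centroid of the built-up section

ixx = sum(i_own + a * (y - y_bar) ** 2 for a, y, i_own in parts)  # parallel axis theorem
print(f"section centroid y = {y_bar:.1f} mm, Ixx = {ixx:.3e} mm^4")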


Direct stress due to bending moment
For Control surface not deflected ( = 0), the calculated direct stress is z = 1.54 MPa.
For Control surface deflected ( = 15), the calculated direct stress is z = 1.68 MPa.
Factor of safety on Ultimate tensile strength in both the cases is greater than 5.
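The direct-stress calculation above can be reproduced with the short Python sketch below. Ixx and the two bending moments are taken from the text; the extreme-fibre distance y is not stated explicitly, and the value of about 237 mm used here is an assumption back-calculated from the reported stresses.

IXX_MM4 = 971590908.8      # total second moment of area of the section, mm^4
Y_MAX_MM = 237.0           # assumed distance from the neutral axis to the extreme fibre, mm

def direct_stress_mpa(bending_moment_nm):
    m_nmm = bending_moment_nm * 1000.0        # N-m to N-mm
    return m_nmm * Y_MAX_MM / IXX_MM4         # N/mm^2 = MPa

for label, moment in (("control surface not deflected", 6300.0), ("control surface deflected 15 deg", 6900.0)):
    print(f"{label}: sigma_z = {direct_stress_mpa(moment):.2f} MPa")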
6.2 THERMAL STRESS
Thermal stress is the stress induced in a material as a result of excessive changes in temperature. It occurs as a result of a
non-uniform distribution of temperature in different parts of the body together with some restriction on the possibility of thermal expansion or
contraction.
σTh = αEΔT [11]
ΔT = T - Ta
where σTh = stress due to restrained thermal expansion (Pa), E = Young's modulus (N/m²), α = coefficient of thermal expansion (K⁻¹),
ΔT = temperature difference (K), T = maximum temperature, Ta = ambient temperature.
The approximate thermal stress is calculated for the maximum temperature developed on the section. The maximum temperature
developed on the bottom panel of the section is 590 °C.

Case i: Without HiMAT (highly maneuverable aircraft technology) 1200 PLUS paint on the bottom panel:
T = 590 °C = 863 K; at 32 km altitude Ta = -44.5 °C = 228.5 K; ΔT = 634.5 K
σTh = 576.75 MPa
FOS = 265/576.75 = 0.45

Case ii: With HiMAT (highly maneuverable aircraft technology) 1200 PLUS paint on the bottom panel:
T = 340 °C = 613 K; at 32 km altitude Ta = -44.5 °C = 228.5 K; ΔT = 384.5 K
σTh = 442 MPa
FOS = 750/442 = 1.69

Case i: The factor of safety on ultimate tensile strength is 0.45, which is far too low; hence it is suggested to use a suitable thermal
protection system such as HiMAT (highly maneuverable aircraft technology) 1200 PLUS paint on the bottom panel, which can
reduce the temperature by 250 °C for a 2 mm coat. With the HiMAT paint, the maximum temperature developed on the bottom panel of
the section is 340 °C. With this reduction in temperature, the strength available with the same material is 750 MPa.
Case ii: For this case, the factor of safety on ultimate tensile strength is greater than 1.5, which is within the limits. Hence, it is suggested to use a
suitable thermal protection system for the bottom panel.
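The two thermal-stress cases can be reproduced with the Python sketch below. The expansion coefficient, ambient temperature and the 122000 MPa modulus come from the text and Table 3; the modulus of about 96700 MPa used for the bare 590 °C panel is an assumption, interpolated between the tabulated 400 °C and 600 °C values so that the result is close to the reported stress.

ALPHA_TI = 9.4e-6          # coefficient of thermal expansion of the Ti alloy, 1/K
T_AMBIENT_K = 228.5        # ambient temperature at 32 km altitude, K

def thermal_stress_mpa(t_max_c, youngs_modulus_mpa):
    delta_t = (t_max_c + 273.0) - T_AMBIENT_K     # temperature difference in K
    return ALPHA_TI * youngs_modulus_mpa * delta_t

# case i: bare bottom panel at 590 C, modulus assumed between the 400 C and 600 C values
sigma_1 = thermal_stress_mpa(590.0, 96700.0)
print(f"case i : sigma = {sigma_1:.0f} MPa, FOS = {265.0 / sigma_1:.2f}")
# case ii: with HiMAT paint, bottom panel at 340 C, modulus taken as the 400 C value
sigma_2 = thermal_stress_mpa(340.0, 122000.0)
print(f"case ii: sigma = {sigma_2:.0f} MPa, FOS = {750.0 / sigma_2:.2f}")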

7. COMPUTATIONAL ANALYSIS
Finite element analysis of the given section was carried out entirely in the FEM software ANSYS Workbench.
Importing the geometry and meshing
Start ANSYS Workbench and select the required analysis system. Add the materials in the engineering data. Select the geometry
and import the P4 section model which was created using CATIA V5; the CATIA model was saved as a STEP (.stp) file before importing into
ANSYS Workbench. Select the model; it will then be opened in the ANSYS Mechanical window where the problem is solved.
The imported model is shown in figure 6. Select the geometry, enter the element size as 0.03 and set smoothing to high to get a
finely meshed model. The mesh has been created using the automatic meshing method. The meshed model is shown in figure 7.

Figure 6: Section P4 model of cruise vehicle

Figure 7: Meshed model

7.1 STATIC ANALYSIS


Static analysis of the cruise vehicle has been done at 2 instances of flight: i) control surface not deflected, where the maximum bending moment is 6300 N-m, and ii) control surface deflected to 15°, where the maximum bending moment is 6900 N-m. Apply the Al alloy material to the bulkheads, top panel and side panels and the Ti alloy material to the bottom panel of the model. Apply the fixed support on both ends of the cross section to constrain the model in all DOF. Select the moment, M = 6300 N-m for case i and M = 6900 N-m for case ii, and apply it on the model. Solve the problem to find the total deformation and von Mises stress.


Figure 8: Deformation of the model at δ = 0°

Figure 9: Deformation of the model at δ = 15°

Figure 10: Von-Mises stress of the model at δ = 0°

Figure 11: Von-Mises stress of the model at δ = 15°

7.2 STEADY-STATE THERMAL ANALYSIS


Steady-state thermal analysis has been performed for two cases: i) without HiMAT (highly maneuverable aircraft technology) 1200 PLUS paint on the bottom panel and ii) with HiMAT 1200 PLUS paint on the bottom panel. Apply the Al alloy material to the bulkheads, top panel and side panels and the Ti alloy material to the bottom panel of the model. Apply the thermal loads on the panels as given in Table 2. For case ii, change the temperature on the bottom panel from 590 °C to 340 °C. Apply convection 1 on the model except the bottom panel, with a film coefficient of 25 W/m² °C and an ambient temperature of −44.5 °C. Now apply convection 2 on the bottom panel, with a film coefficient of 22 W/m² °C and an ambient temperature of −44.5 °C. Solve the model to find the temperature distribution over the unconventional section.

Figure 12: Temperature distributions for case i

Figure 13: Temperature distributions for case ii

Now static structural analysis system is selected to find the thermal stresses due to temperature distributions. Apply the All DOF
boundary condition to the cross section and solve the model to find the thermal stress and deformation of the unconventional section.


Figure 14: Deformation of the model for case i

Figure 15: Deformation of the model for case ii

Figure 16: Von-Mises stress distribution for case i

Figure 17: Von-Mises stress distribution for case ii

7.3 COUPLED THERMO-STRUCTURAL ANALYSIS


Coupled thermo-structural analysis has been carried out for two cases: i) control surface not deflected, where the maximum bending moment is 6300 N-m, and ii) control surface deflected to 15°, where the maximum bending moment is 6900 N-m. After finding the temperature distribution over the unconventional section with HiMAT paint on the bottom panel in the thermal analysis, switch to the static structural analysis system. Apply the all-DOF boundary condition to the cross section, select the moment, M = 6300 N-m for case i and M = 6900 N-m for case ii, and apply it on the model. Solve the problem to find the total deformation and von Mises stress.

Figure 18: Deformation of the model at δ = 0°

Figure 19: Deformation of the model at δ = 15°

Figure 20: Von-Mises stress of the model at δ = 0°

Figure 21: Von-Mises stress of the model at δ = 15°



8. RESULTS AND DISCUSSIONS


8.1 STATIC ANALYSIS RESULTS
Load Case                              | Mass (kg) | Maximum Deformation (m) | Maximum Stress (MPa) | Ultimate Tensile Strength (MPa) | Factor of Safety
Control surface not deflected (δ = 0°) | 31.56     | 8.35e-7                 | 1.6                  | 265 at 600 °C                   | 165
Control surface deflected (δ = 15°)    | 31.56     | 9.15e-7                 | 1.75                 | 265 at 600 °C                   | 151

Figure 22: Stress variation for both cases


Observation: Factor of safety on Ultimate tensile strength for the model is greater than 5 in both the cases.
8.2 THERMAL ANALYSIS RESULTS
Case                | Mass (kg) | Maximum Deformation (m) | Maximum Stress (MPa) | Ultimate Tensile Strength (MPa) | Factor of Safety
Without HiMAT paint | 31.56     | 0.0038                  | 607                  | 265 at 600 °C                   | 0.436
With HiMAT paint    | 31.56     | 0.003                   | 463                  | 750 at 400 °C                   | 1.61

Figure 23: Stress variation for both cases


Observation: Factor of safety on Ultimate tensile strength for case-i is less than 1.5 and for case-ii is greater than 1.5.
8.3 COUPLED THERMO-STRUCTURAL ANALYSIS RESULTS
Load Case                              | Mass (kg) | Maximum Deformation (m) | Maximum Stress (MPa) | Ultimate Tensile Strength (MPa) | Factor of Safety
Control surface not deflected (δ = 0°) | 31.56     | 0.00307                 | 485.2                | 750 at 400 °C                   | 1.545
Control surface deflected (δ = 15°)    | 31.56     | 0.0034                  | 485.3                | 750 at 400 °C                   | 1.54

Observation: Factor of safety on Ultimate tensile strength for the material in both the cases is greater than 1.5.
8.4 RESULT VALIDATION
FLIGHT LOAD ANALYSIS
Load case                              | Analytical Stress (MPa) | Computational Stress (MPa) | % Error
Control surface not deflected (δ = 0°) | 1.54                    | 1.6                        | 4.28
Control surface deflected (δ = 15°)    | 1.68                    | 1.76                       | 4.76

Observation: In both the load cases there is a good agreement between Analytical and Computational results.
THERMAL LOAD ANALYSIS
Ti alloy            | Analytical Stress (MPa) | Computational Stress (MPa) | % Error
Without HiMAT paint | 577                     | 607                        | 5.19
With HiMAT paint    | 442                     | 463                        | 4.75

Observation: In both the cases there is a good agreement between Analytical and Computational results.

CONCLUSION
Because of its light weight and high specific strength at the operating temperature of the airframe, an Al alloy has been considered as the material for the bulkheads, top panel and side panels, and a Ti alloy for the bottom panel with a suitable thermal protection system. The mass of the airframe of this section of the hypersonic air-breathing cruise vehicle is found to be 31.56 kg. The maximum deformation and von Mises stress of the unconventional section of the hypersonic cruise vehicle are 3.4 mm and 485.3 MPa, which are within the safe limit. Since the maximum deformation and maximum obtained stresses of the unconventional section are within the design limit, the model is safe at the given operating conditions.

FUTURE SCOPE OF WORK

1. The study can be extended by analyzing the other sections of the cruise vehicle.
2. A more accurate analysis of this section can be done by including the rivets at the intersections of joints.
3. Harmonic and modal analysis can be carried out to find the natural frequencies and mode shapes of the component.

REFERENCES
[1] K. P. J. Reddy, Hypersonic Flight and Ground Testing Activities in India 16th Australasian Fluid Mechanics Conference Crown
Plaza, Gold Coast, Australia 2-7 December 2007.
[2] Randall T. Voland, Randall T. Voland, Lawrence D. Huebner, Charles R. McClinton, NASA Langley Research Center, Hampton,
VA, USA, X-43A HYPERSONIC VEHICLE TECHNOLOGY DEVELOPMENT.
[3] Shayan Sharifzadeh, Patrick Hendrick, Dries Verstraete, Francois Thirifay, 9th National Congress on Theoretical and Applied
Mechanics, Brussels, 9-10-11 May 2012, Structural Design and Optimisation of a Hypersonic Aircraft Based on Aero-Elastic
Deformations.
[4] William L. Ko and Leslie Gong Thermo structural Analysis of Unconventional Wing Structures of a Hyper-X Hypersonic Flight
Research Vehicle for the Mach 7 Mission NASA Dryden Flight Research Center, Edwards, California.
[5] Maj Mirmirani, Chivey Wu Andrew Clark, Sangbum Choi Airbreathing Hypersonic Flight Vehicle Modeling and Control,
Review, Challenges, and a CFD-Based Example Multidisciplinary Flight Dynamics & Control Lab, California State University,
Los Angeles.
[6] Alexander Kopp, Parametric studies for the structural pre-design of hypersonic Aerospace vehicles. German Aerospace Center
DLR, Robert-Hooke-Strasse 7, 28359 Bremen, Germany.
[7] Dr. Paul A. Bartolotta from NASA Glenn Research centre at Lewis field PRATT & WHITNEY FAA William J.huges technical
centre: Metallic Properties Development and Standardization (MMPDS).
[8] School of Aerospace, civil and mechanical engineering University College, university of New South Wales. Australian defence
force academy hypersonic literature review on Wave rider, May 2007.
[9] Configuration design of Air-breath hypersonic vehicles by J.umakanth, S.Panner Selvam, Defence research and development
laboratories, Hyderabad and K. Sudhakar, P.M Mujumdar, Department of aerospace engineering, IIT Bombay, Mumbai.
[10] Peery, D., Aircraft Structures, McGraw-Hill, New York, 1950.
[11] Megson, T., Aircraft structural for engineering students, 4th ed., Elsevier, UK, 2007.
[12] Raymer, D., Aircraft Design: A conceptual approach, 4th ed., AIAA Education Series, AIAA, Reston, VA, 2006.
[13] http://www.nasa.gov


Minutiae Extraction and Variation of Fast Fourier Transform on Fingerprint Recognition

Amandeep Kaur1, Ameeta2, Babita3

1 Department of ECE, PCET, Lalru Mandi, PTU Jalandhar, Punjab
2 Asst. Professor, Department of ECE, PCET, Lalru Mandi, PTU Jalandhar, Punjab
3 Asst. Professor, Department of Computer Science and Engineering, PCET, Lalru Mandi, PTU Jalandhar, Punjab
Email: amankaurpcet@gmail.com

Abstract: A fingerprint is the feature pattern of one finger. It is believed, with strong evidence, that each fingerprint is unique; each person has his own fingerprints with permanent uniqueness, so fingerprints have been used for identification and forensic investigation for a long time. The fingerprint recognition problem can be grouped into two sub-domains: fingerprint verification and fingerprint identification. In addition, different from the manual approach to fingerprint recognition by experts, the fingerprint recognition considered here is referred to as AFRS (Automatic Fingerprint Recognition System), which is program-based. This paper presents the variation of the Fast Fourier Transform on fingerprint recognition by means of a fast fingerprint minutiae extraction and recognition algorithm which improves the clarity of the ridge and valley structures of the input fingerprint images based on the frequency and orientation of the local ridges, thereby extracting correct minutiae. This work combines many methods to build a minutia extractor and a minutia matcher. Simulation results are obtained with MATLAB, going through all the stages of fingerprint recognition. This is helpful for understanding the procedures of fingerprint recognition and demonstrates its key issues.

Keywords: Region of Interest (ROI); Fast Fourier Transform (FFT); FRR; FAR; SMCBA; OCR; FKP
I. INTRODUCTION
Biometric templates are unique to an individual. Unlike a password, PIN number, or smart card, they cannot be forgotten, misplaced, lost or stolen. The person trying to gain access is identified by his real identity (represented by his unique biometric signature). Fingerprint scanning has a high accuracy rate when users know how to use the system. Fingerprint authentication is a good choice for in-house systems where training can be provided to users and where the device is operated in a controlled environment. The small size of fingerprint scanners and their ease of integration mean they can be easily adapted for appliances (keyboards, cell phones, etc.), and their relatively low cost makes them an affordable, simple choice for workplace access security. Fingerprint identification is the oldest method among all the biometric techniques and has been used in various applications. Every person has unique fingerprints which can be used for identification. The steps for fingerprint identification are: scanning (capture, acquisition), extraction (processing), comparison, and the final match/non-match decision. A series of ridges and furrows on the surface of the finger makes up a fingerprint; the pattern of ridges and furrows, together with the minutiae points, gives a fingerprint its uniqueness. Fingerprint based identification has been one of the most successful biometric techniques used for personal identification. A fingerprint is the pattern of ridges and valleys on the finger tip and is thus defined by the uniqueness of the local ridge characteristics and their relationships. Minutiae points are these local ridge characteristics, which occur either at a ridge ending or a ridge bifurcation. A ridge ending is defined as the point where the ridge ends abruptly, and a ridge bifurcation is the point where the ridge splits into two or more branches. Automatic minutiae detection becomes a difficult task in low quality fingerprint images, where noise and contrast deficiency result in pixel configurations similar to those of minutiae. This is an important aspect that has been taken into consideration in this work for extraction of the minutiae with minimum error in a particular location.

Fig. 1: Ridge Endings and Bifurcations
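One common way to locate the ridge endings and bifurcations illustrated in Fig. 1 is the crossing-number test on a thinned (one-pixel-wide) binary ridge map; the Python sketch below is a generic illustration of that rule and is not necessarily the exact routine used in this work.

import numpy as np

def crossing_number(skel, i, j):
    """Crossing number of pixel (i, j) in a thinned binary ridge map.
    For a ridge pixel, CN = 1 indicates a ridge ending and CN = 3 a bifurcation."""
    # 8-neighbours listed in circular order around (i, j)
    p = [skel[i-1, j-1], skel[i-1, j], skel[i-1, j+1], skel[i, j+1],
         skel[i+1, j+1], skel[i+1, j], skel[i+1, j-1], skel[i, j-1]]
    return sum(abs(int(p[k]) - int(p[(k + 1) % 8])) for k in range(8)) // 2

def find_minutiae(skel):
    """Return lists of (row, col) positions of ridge endings and bifurcations."""
    endings, bifurcations = [], []
    for i in range(1, skel.shape[0] - 1):
        for j in range(1, skel.shape[1] - 1):
            if skel[i, j]:                      # consider ridge pixels only
                cn = crossing_number(skel, i, j)
                if cn == 1:
                    endings.append((i, j))
                elif cn == 3:
                    bifurcations.append((i, j))
    return endings, bifurcations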


II. LITERATURE REVIEW
I. Al Rassan et al., 2013, Securing Mobile Cloud Computing Using Biometric Authentication (SMCBA): This paper proposes and implements a new user authentication mechanism for mobile cloud computing using a fingerprint recognition system to enhance mobile cloud computing resources.
II. Baris Coskun et al., 2014, Recognition of Hand-Printed Characters on Mobile Devices: This paper explores the challenges of performing this character identification and presents a novel scale/rotation invariant algorithm applied to the recognition of these hand-drawn letters. The results of the algorithm are compared to those obtained using a popular Optical Character Recognition (OCR) application, Tesseract, which is often integrated with iPhone apps for this purpose.
III. R. Wildes et al., 2014, Biometric Authentication Using Kekre's Wavelet Transform: This paper proposes an enhanced method for personal authentication based on finger knuckle print using Kekre's wavelet transform (KWT). The finger-knuckle-print (FKP) is the inherent skin pattern of the outer surface around the phalangeal joint of one's finger. It is highly discriminable and unique, which makes it an emerging and promising biometric identifier. Kekre's wavelet transform is constructed from Kekre's transform. The proposed system is evaluated on a prepared FKP database that involves all categories of FKP, with a total of 500 samples. The paper focuses on different image enhancement techniques for the pre-processing of the captured images.
IV. J. G. Daugman et al., 2014, Finger-knuckle-print verification based on vector consistency of corresponding interest points: This paper proposes a novel finger-knuckle-print (FKP) verification method based on vector consistency among corresponding interest points (CIPs) detected from aligned finger images. Experimental results show that the proposed approach is effective in FKP verification.
III. OBJECTIVE: The objective of this paper is to investigate current techniques for fingerprint recognition. This task can be decomposed into image pre-processing, feature extraction and feature matching. For each sub-task, some classical and up-to-date methods from the literature are analyzed. Based on this analysis, an integrated solution for fingerprint recognition is developed for demonstration. For the program, some optimizations at the coding level and the algorithm level are proposed to improve the performance of the fingerprint recognition system with the variation of FFT on stored images. These performance enhancements with the variation of FFT are analyzed and shown by experimental results conducted on a variety of fingerprint images.
IV. AN OVERVIEW OF THE METHOD/METHODOLOGY:


1. System Design (System Level Design & Algorithm Level Design)
2. Fingerprint Image Preprocessing (Fingerprint Image Enhancement, Binarization & Image Segmentation)
3. Minutia Extraction (Fingerprint Ridge Thinning & Minutia Marking)
4. Minutia Post-processing (False Minutia Removal & Unified Minutia Representation Feature Vectors)
5. Minutia Match (Alignment Stage & Match Stage)
6. Experimentation Results (Evaluation Indexes & Experiment Analysis)
1. System Design
System Level Design: A fingerprint recognition system consists of a fingerprint acquiring device, a minutia extractor and a minutia matcher, as shown in figure 2.


Figure 2: Simplified Fingerprint Recognition System
Algorithm Level Design: To implement a minutia extractor, a three-stage approach is widely used by researchers; it consists of the preprocessing, minutia extraction and post-processing stages.


Figure 3:Minutia Extractor



Figure 4: Minutia Matcher


The minutia matcher chooses any two minutiae as a reference minutia pair and then matches their associated ridges first. If the ridges match well, the two fingerprint images are aligned and matching is conducted for all remaining minutiae.
Fingerprint Image Enhancement: Fingerprint image enhancement makes the image clearer for further operations. Since the fingerprint images acquired from sensors or other media are not assured of perfect quality, enhancement methods that increase the contrast between ridges and furrows and connect the false broken points of ridges due to an insufficient amount of ink are very useful for keeping a higher accuracy of fingerprint recognition. Two methods are adopted in this fingerprint recognition system: the first is histogram equalization and the second is the Fourier transform.
V. ANALYSING FINGERPRINT PARAMETERS AND RESULTS
1. Fingerprint image pre-processing
2. Fingerprint image binarization
3. Fingerprint image segmentation

Figure 5: Finger Print Loading


Fingerprint scanning is the first step of recognition and identification, recording different characteristics for identification purposes. This process identifies an individual through quantifiable physiological characteristics. There are two types of finger-scanning technology:
1. The first is an optical method, which starts with a visual image of a finger.
2. The second uses a semiconductor-generated electric field to image a finger.
Fingerprint image enhancement is done using histogram equalization.

Figure 6: Histogram Equalization
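Histogram equalization of a grey-level fingerprint image can be expressed compactly. The sketch below is a plain NumPy version of the standard grey-level mapping and merely stands in for whichever MATLAB routine was actually used for the results shown in Figure 6.

import numpy as np

def histogram_equalize(img):
    """Histogram equalization of an 8-bit grey-level image (2-D NumPy array)."""
    hist = np.bincount(img.ravel(), minlength=256)      # grey-level histogram
    cdf = hist.cumsum()                                  # cumulative distribution
    cdf_min = cdf[cdf > 0][0]
    # Map each grey level so the output histogram is approximately flat
    lut = np.clip(np.round((cdf - cdf_min) / float(cdf[-1] - cdf_min) * 255), 0, 255).astype(np.uint8)
    return lut[img]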


Fast Fourier Transform: A fast Fourier transform (FFT) is an algorithm to compute the discrete Fourier transform (DFT) and its inverse. Fourier analysis converts time (or space) to frequency and vice versa; an FFT rapidly computes such transformations by factorizing the DFT matrix into a product of sparse (mostly zero) factors. As a result, fast Fourier transforms are widely used for many applications in engineering, science, and mathematics. FFT enhancement at different levels from 0.1 to 0.9 can be applied, and the percentage for fingerprint matching is calculated for each finger. Different tables are maintained for each finger, with three images, applying FFT levels from 0.1 to 0.9.
Fingerprint Enhancement by Fourier Transform: We divide the image into small processing blocks (32 by 32 pixels) and perform the Fourier transform according to:

F(u,v) = sum_{x=0}^{31} sum_{y=0}^{31} f(x,y) * exp(-j*2*pi*(u*x/32 + v*y/32))    (1)

for u = 0, 1, 2, ..., 31 and v = 0, 1, 2, ..., 31.


In order to enhance a specific block by its dominant frequencies, we multiply the FFT of the block by its magnitude a set of times.
Where the magnitude of the original FFT = abs(F(u,v)) = |F(u,v)|.

Get the enhanced block according to

g(x,y) = F^{-1}{ F(u,v) * |F(u,v)|^k }    (2)

where the inverse transform F^{-1}(F(u,v)) is given by:

f(x,y) = (1/(32*32)) * sum_{u=0}^{31} sum_{v=0}^{31} F(u,v) * exp(j*2*pi*(u*x/32 + v*y/32))    (3)

for x = 0, 1, 2, ..., 31 and y = 0, 1, 2, ..., 31.


The k in formula (2) is an experimentally determined constant; we choose k = 0.45. While a higher k improves the appearance of the ridges, filling up small holes in the ridges, too high a k can result in false joining of ridges, so that a termination might become a bifurcation. Figure 8 presents the image after FFT enhancement.
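The block-wise enhancement of equations (1)-(3) can be sketched directly with NumPy FFTs. The routine below follows the |F(u,v)|^k weighting described above with k = 0.45; it is an illustrative re-implementation rather than the exact code used for the experiments.

import numpy as np

def fft_enhance(img, k=0.45, block=32):
    """Block-wise FFT enhancement: g = IFFT( F(u,v) * |F(u,v)|^k )."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            f = img[r:r+block, c:c+block].astype(float)
            F = np.fft.fft2(f)
            # Emphasise the dominant frequencies of the block
            g = np.fft.ifft2(F * np.abs(F) ** k)
            out[r:r+block, c:c+block] = np.real(g)
    # Rescale to the 0-255 range for display
    rng = out.max() - out.min()
    out = (out - out.min()) / (rng + 1e-9) * 255
    return out.astype(np.uint8)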

Figure 7:Binarization enhancement by histogram equalization of adaptive binarization after FFT.


Figure 8:Enhancement by FFT

Figure 9: Orientation Flow Estimate


The direction step calculates the local ridge-flow orientation in each local window of size (block size × block size). The routine takes the gray-scale fingerprint image, the block size and a flag that disables the graphical display, and returns the ROI bound p and the ROI area z.


Figure 10: Region of Interest


Removal of H breaks is done after FFT enhancement and adaptive binarization. As shown in the figure, removing the H breaks between ridge curves generates a clearer fingerprint image after this step.

Figure 11: Minutiae Extraction


The performance of the minutiae extraction algorithm relies heavily on the quality of the input fingerprint images. Fingerprint matching techniques: there are two categories of fingerprint matching techniques.

1. Minutiae based: In this category, minutiae points are found first and then their relative placement on the finger is mapped. It is difficult to extract the minutiae points accurately when the fingerprint is of low quality. Also, this method does not take into account the global pattern of ridges and furrows.

Figure 12: Remove spurious minutia


2. Correlation based: This technique is affected by fingerprint image translation and rotation and requires the precise location of a registration point. This method is able to overcome some of the difficulties of the minutiae based approach.
FINGERPRINT MATCHING
Fingerprint algorithms consist of two main phases: enrolment and identification or verification. The enrolment phase first determines the global pattern of the print, so it can be categorized in a large bucket to improve matching performance; the minutia points are then transformed by a, typically proprietary, algorithm into a template. The template is stored and used for future identification. An additional step in the enrolment process could be to search for existing matches. This leads to an interesting advantage fingerprint authentication has over password authentication: as well as being proof of being a particular person, fingerprint identification can also be used to prove that somebody is not a particular person or persons, such as someone on a terrorist watch list or someone who has previously applied for a benefit.
Fingerprint sample recognition process: the recognition process operates in a real-time application. In the figure, the left fingerprint is captured in real time and the right fingerprint is stored in the database. The figure shows the different vector positions at different points on the finger. If the pattern stored in the database matches the real-time pattern, the person is recognized; otherwise the person is not recognized. The identification phase first determines a pattern bucket and then submits the minutiae or template, depending on the design, which can be compared to the saved template. The comparison is done with a statistical analysis, since an exact match is not expected. Matches may be found by rotating or translating the image, to compensate for the finger not being placed in an identical location on each use. Evaluation indexes for fingerprint recognition: two indexes are well accepted to determine the performance of a fingerprint recognition system: one is FRR (false rejection rate) and the other is FAR (false acceptance rate).

Figure 13: Fingerprint matching


For an image database, each sample is matched against the remaining samples of the same finger to compute the False Rejection Rate. If the matching of g against h is performed, the symmetric one (i.e., h against g) is not executed, to avoid correlation. All the scores for such matches are composed into a series of Correct Scores. Also, the first sample of each finger in the database is matched against the first sample of the remaining fingers to compute the False Acceptance Rate; again, if the matching of g against h is performed, the symmetric one (i.e., h against g) is not executed, to avoid correlation. All the scores from such matches are composed into a series of Incorrect Scores. A fingerprint database is used to test the experimental performance. The diagram below shows the Correct Score and Incorrect Score distributions:

Figure 14: Fingerprint matching (red line: incorrect scores; green line: correct scores)


It can be seen from the above figure that there exist two partially overlapped distributions. The red curve, whose peak is located toward the left, indicates that the average incorrect match score is 25. The green curve, whose peak is located to the right of the red curve, indicates that the average correct match score is 35. This shows that the algorithm is capable of differentiating fingerprints at a good correct rate by setting an appropriate threshold value.

Figure 15: FAR and FRR curves (blue dotted line: FRR; red dotted line: FAR)


The above diagram shows the FRR and FAR curves. At the equal error rate of 25%, the separating score of 33 will falsely reject 25% of genuine minutia pairs and falsely accept 25% of impostor minutia pairs, giving a 75% verification rate. The high incorrect acceptance and false rejection are due to some fingerprint images of bad quality and the vulnerable minutia match algorithm.
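Given the two score series described above, FRR and FAR at a candidate threshold follow directly. The sketch below shows this generic computation; the score lists are placeholders, not the experimental data.

def far_frr(genuine_scores, impostor_scores, threshold):
    """FRR: fraction of genuine pairs rejected; FAR: fraction of impostor pairs accepted,
    assuming a match is declared when score >= threshold."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return far, frr

# Illustrative score lists (placeholders only)
genuine = [35, 40, 28, 33, 45, 31]
impostor = [25, 20, 34, 18, 27, 30]
for t in range(15, 50, 5):
    far, frr = far_frr(genuine, impostor, t)
    print(f"threshold {t}: FAR = {far:.2f}, FRR = {frr:.2f}")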
Fast Fourier Transform variation: Three samples of each finger are taken first. Each sample is passed through all the verification stages and minutiae extraction with the variation of FFT.
Step 1: The FFT level is set to 0.1 first, and the matching percentage results of all the first-finger samples are calculated. Then, again with FFT 0.1, the matching percentages of the second finger are calculated. This process is carried out for all the samples of the ten fingers and the matching percentage is calculated.
Step 2: The FFT level is set to 0.2 and the matching percentage is calculated. Similarly, the FFT level is varied from 0.2 to 0.9 and the results are calculated. It is observed that with the variation of FFT on the samples, more accurate results are obtained, i.e. more precise matching results are obtained.
A few sample percentages are shown in the tables, and a graph is also plotted for finger 1 with FFT from 0.1 to 0.9. A particular matching percentage is selected for authentication of the user. The same procedure is repeated for all fingers; the matching percentage is calculated and the corresponding graphs are plotted.
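The sweep over the FFT exponent can be organised as a simple loop. In the sketch below, match_percentage is only a stand-in for the full enhance, extract-minutiae and match pipeline described earlier, so the code illustrates how the tables are filled rather than the matcher itself.

def sweep_fft_levels(samples, match_percentage):
    """Return {k: {pair: match %}} for one finger's three samples.
    `samples` maps names ("F1", "F2", "F3") to images; `match_percentage`
    is a placeholder for the enhance -> extract minutiae -> match pipeline."""
    pairs = (("F1", "F2"), ("F1", "F3"), ("F2", "F3"), ("F1", "F1"))
    table = {}
    for k in [round(0.1 * i, 1) for i in range(1, 10)]:      # FFT levels 0.1 ... 0.9
        table[k] = {a + b: match_percentage(samples[a], samples[b], k) for a, b in pairs}
    return table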


Table1. FFT and Percentage Matching Table for First Finger

FFT | F1F2(%) | F1F3(%) | F2F3(%) | F1F1(%)
0.1 | 31.2    | 67.2    | 67.6    | 100
0.2 | 29.3    | 34.4    | 66.5    | 100
0.3 | 31.1    | 56.3    | 54.5    | 100
0.4 | 33      | 35.7    | 78.8    | 100
0.5 | 23.4    | 77.3    | 47.7    | 100
0.6 | 45.5    | 50.5    | 45.3    | 100
0.7 | 45      | 34.2    | 62.7    | 100
0.8 | 55.5    | 38.9    | 34.5    | 100
0.9 | 38.6    | 54.3    | 70.9    | 100

FFT: Fast Fourier Transform; F1, F2, F3: samples of the finger.

Figure 16: Matching Percentage of Finger First


Table2. FFT and Percentage Matching Table for Second Finger

FFT | F1F2(%) | F1F3(%) | F2F3(%) | F1F1(%)
0.1 | 31.2    | 67.2    | 67.6    | 100
0.2 | 29.3    | 34.4    | 66.5    | 100
0.3 | 31.1    | 56.3    | 54.5    | 100
0.4 | 33      | 35.7    | 78.8    | 100
0.5 | 23.4    | 77.3    | 47.7    | 100
0.6 | 45.5    | 50.5    | 45.3    | 100
0.7 | 45      | 34.2    | 62.7    | 100
0.8 | 55.5    | 38.9    | 34.5    | 100
0.9 | 38.6    | 54.3    | 70.9    | 100

FFT: Fast Fourier Transform; F1, F2, F3: samples of the finger.



Figure 17: Matching Percentage of Finger Second


VI. CONCLUSION AND FUTURE WORK
We have presented an overview of fingerprint technology, covering primarily the scanner, the classification of fingerprint images in the database and the matching algorithms. Fingerprint recognition is used in various industries for attendance and security purposes. Variation of the FFT value on the samples produces more accurate results. This application can be implemented in any security-sensitive area; if done correctly, it can be a very powerful method of identification.

Quantitative Determination of Swertiamarin in Swertia chirayita by HPTLC


LAIQA ANJUM, ZAINUL A. ANSARI, ARADHNA YADAV, MOHD. MUGHEES, JAVED AHMAD & ALTAF AHMAD*

Department of Botany, Faculty of Science, Jamia Hamdard, New Delhi 110062, India
*Address for correspondence Email: ahmadaltaf@rediffmail.com
Abstract: A simple, rapid and accurate high-performance thin-layer chromatographic (HPTLC) method has been developed for the determination of swertiamarin in aerial parts of Swertia chirayita procured from Nepal and collected from Gangtok and Kumaon, respectively. The methanolic extracts of Swertia chirayita samples were applied on TLC aluminium plates pre-coated with silica gel 60 GF254 and developed using a solvent system containing ethyl acetate : methanol : water (7.7 : 1.3 : 0.8, v/v/v) as a mobile phase. The densitometric detection of spots was carried out using a UV detector at 245 nm in absorbance mode. This system was found to give a compact spot for swertiamarin (Rf value = 0.53 ± 0.02). The calibration curve was found to be linear in the range of 200 to 1000 ng/spot. The limits of detection and quantification were found to be 50 and 200 ng/spot, respectively, for swertiamarin. The highest and lowest concentrations of swertiamarin in Swertia chirayita were found in the samples from Nepal and from Gangtok, Sikkim, respectively. The method was validated in accordance with ICH guidelines.

Key words: Swertia chirayita, secoiridoid glycoside, Swertiamarin, HPTLC, Gentianaceae.

Introduction
Swertia chirayita (Roxb.) H. Karst. is an important medicinal plant belonging to the family Gentianaceae [1]. Swertia chirayita, commonly known as Chirata/Chirayita, is an erect annual or perennial herb found in the Himalaya and Meghalaya at an altitude of 1200-1300 m. The plant has been used in a variety of health disorders, including asthma, colic, colon cancer, constipation, diarrhea, dyspepsia, liver ailments and nausea [3-5], as well as epilepsy, ulcer, melancholia and certain types of mental disorders [6]. This plant has also been reported to possess antidiabetic [7], anti-inflammatory [8], hepatoprotective [9], antibacterial [10], antioxidant [11], antimalarial, anthelmintic and antipyretic properties [12].

Swertiamarin is a secoiridoid glycoside [13-14] present in members of the Gentianaceae [15]. It has a CNS-depressant effect and anticholinergic activity [16-17]. It is a representative constituent of many crude drugs marketed in Japan and other countries, and these crude drugs are normally evaluated by their swertiamarin content [18]. HPTLC methods for the estimation of swertiamarin in Swertia chirata (Wall) Clark, E. littorale Blume, and marketed formulations containing Enicostemma littorale Blume have been reported [14, 19]. The present study deals with the quantitative determination of swertiamarin in Swertia chirayita (Roxb. ex Fleming) H. Karst collected from three different places.

MATERIALS & METHODS:

Plants of Swertia chirayita were collected from three different regions of the Himalayas: they were procured from Nepal, and collected from Gangtok, Sikkim and Kumaon, Uttarakhand, respectively. These plant samples were identified by Prof. Javed Ahmad, Department of Botany, Jamia Hamdard. The plant material was cleaned and dried in the shade for a week at room temperature; it was then powdered to 40 mesh and stored at 25 °C.
The standard compound swertiamarin (99%) was purchased from Chromadex (Life Technologies, India). All the solvents and reagents used in the experiments were of analytical grade.

Preparation of Mobile Phase & Standard Solutions


Mobile phase was prepared by mixing ethyl acetate : methanol : water in the ratio of 7.7 : 1.3 : 0.8. A stock solution of swertiamarin (1 mg/mL) was prepared by dissolving 2.0 mg of the standard swertiamarin, accurately weighed, in 2 mL methanol in a volumetric flask. A standard solution of 200 µg/mL was prepared from the stock solution by transferring 200 µL of stock solution and diluting with methanol (800 µL). Appropriate quantities of this standard solution were spotted to obtain swertiamarin in the range of 200-1000 ng/spot.

Preparation of Sample Solutions


Powdered samples of S. chirayita aerial parts (1 g, accurately weighed) were extracted with methanol (2 × 25 mL) for 24 h at room temperature. The combined extracts were filtered through Whatman filter paper No. 42. The extracts obtained were concentrated on a rotary evaporator (R-200/205/V, Buchi) under vacuum to 10 mL.

Instrumentation and chromatographic conditions


HPTLC was performed on 20 × 10 cm aluminium-backed plates coated with 0.2 mm layers of silica gel 60 F254 (E-Merck, Germany). Standard solutions of swertiamarin and the samples were applied to the plates as 5 mm wide bands, 10 mm apart and 10 mm from the bottom and sides, using a CAMAG Linomat V sample applicator fitted with a CAMAG microlitre syringe (CAMAG, Germany). Linear ascending development of the plates to a distance of 80 mm was performed with ethyl acetate : methanol : water (7.7 : 1.3 : 0.8, v/v/v) as mobile phase in a 20 × 20 cm twin-trough glass chamber previously saturated with mobile phase vapour for 15 min at 25 ± 2 °C. The plate was dried completely and scanned at 245 nm with a CAMAG TLC scanner in absorbance mode, using the deuterium lamp. The slit dimensions were 4 mm × 0.1 mm and the scanning speed was 20 mm s−1. Visualization of the spots was performed by spraying the plates with anisaldehyde reagent (anisaldehyde-glacial AcOH-MeOH-H2SO4, 0.5:10:85:5, v/v/v/v); after spraying with the reagent, this standard emits green light under 366 nm UV light. A calibration curve was plotted between the amount of analyte (swertiamarin) and the average response (peak area), and the regression equation Y = 8.349X + 507.3 was obtained over the concentration range of 200-1000 ng/spot with respect to the peak area, with a regression coefficient of 0.9961.

Method Validation
The developed method was validated as per the International Conference on Harmonization (ICH) guidelines Q2 (R1) [20] by determining linearity, precision, accuracy, limit of detection (LOD), limit of quantification (LOQ) and recovery. Linearity of the method was evaluated by constructing calibration curves at nine concentration levels. Calibration curves were plotted over a concentration range of 200-1000 ng/spot. Aliquots of standard working solution of swertiamarin were applied onto the plate.
The calibration curves were developed by plotting peak area versus concentration. The intra-day and inter-day precisions of the proposed method were determined by estimating the corresponding responses 3 times on the same day and on 3 different days over a period of one week for chosen concentrations of standard solution of swertiamarin. The results were reported in terms of relative standard deviation (% RSD). The accuracy of the method was determined by calculating the recovery of swertiamarin by the standard addition method: known amounts of standard solution of swertiamarin (50, 100 and 150 ng) were added to pre-quantified sample solution, and accuracy was expressed as percent recovery. The limit of detection (LOD) and limit of quantification (LOQ) were calculated using the following equations as per ICH guidelines [20]:
LOD = 3.3 σ/S
LOQ = 10 σ/S
where σ = standard deviation of the response and S = slope of the calibration curve.

The specificity of the method was ascertained by analyzing the standard compound and the sample. The spot for swertiamarin in the sample was confirmed by comparing the Rf and spectra of the spot with those of the standard. The peak purity of swertiamarin was assessed by comparing the spectra at three different levels, i.e. peak start, peak apex and peak end positions of the spot.
Standard solutions of 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5 and 5 µL were applied to the TLC plate in six replicates (n=6) to obtain the final concentration range of 200-1000 ng/spot. The data of peak area versus standard concentration were treated by linear least-squares regression; the samples chosen for the study were 200, 400 and 800 ng/spot.
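A minimal Python sketch of the calibration and sensitivity calculations is given below; the peak-area data in the sketch are placeholders generated around the reported fit, not the measured values, and only illustrate how the regression and the ICH LOD/LOQ formulas combine.

import numpy as np

# Placeholder calibration data: amounts applied (ng/spot) and peak areas
amount = np.array([200, 300, 400, 500, 600, 700, 800, 900, 1000], dtype=float)
area = 8.349 * amount + 507.3 + np.random.normal(0, 30, amount.size)   # illustrative areas only

# Least-squares calibration line, Y = slope * X + intercept
slope, intercept = np.polyfit(amount, area, 1)
residual_sd = np.std(area - (slope * amount + intercept), ddof=2)

# ICH Q2(R1) sensitivity estimates: LOD = 3.3 sigma/S, LOQ = 10 sigma/S
lod = 3.3 * residual_sd / slope
loq = 10.0 * residual_sd / slope
print(f"Y = {slope:.3f} X + {intercept:.1f}, LOD = {lod:.0f} ng, LOQ = {loq:.0f} ng")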

RESULTS AND DISCUSSION


Initial trial experiments were conducted to select the mobile phase for accurate analysis of the standards. Of the various mobile phases tried, the mobile phase consisting of ethyl acetate : methanol : water (7.7:1.3:0.8, v/v/v) gave a sharp and well-defined peak of swertiamarin at an Rf value of 0.53 in the standard (Figure 1) and in the samples (Figure 2). Well-defined spots were obtained when the chamber was saturated with the mobile phase for 15 min at room temperature. For determination of the linearity curves of area versus concentration, different amounts of the stock solution of swertiamarin were applied for HPTLC plate analyses.

Figure 1: TLC densitogram showing swertiamarin standard

Figure 2: TLC densitogram resolution of swertiamarin in a sample



This method was validated for linearity, precision, specificity and accuracy. Calibration was linear in the concentration range of 200-1000 ng. The linear regression equation was found to be Y = 8.349X + 507.3 for swertiamarin, while the correlation coefficient (r2) was 0.9961, with high reproducibility and accuracy (Table 1). The proposed method, when used for estimation of swertiamarin after spiking with 50, 100 and 150% of additional standard, afforded recoveries ranging from 98.12 to 100.2% for swertiamarin, as listed in Table 2. The RSD of the recovery of swertiamarin ranged from 0.13 to 0.39 (Table 2). The intra-day and inter-day precision, as coefficient of variation (CV%), and the accuracy of the assay determined at swertiamarin concentrations of 200, 400 and 800 ng/spot are summarized in Table 3. The intra-day precision (n=3) was 1.35 and the inter-day precision over three different days was 2.19. The intra-day and inter-day accuracy were in the range of 99.37-100.48 and 98.72-99.51, respectively.
The repeatability of the method was studied by assaying six samples of swertiamarin at the same concentration under the same experimental conditions; the values were within the acceptable range, and so it was concluded that the method is accurate, reliable and reproducible.
The robustness of the method was established by introducing small changes in the mobile phase composition and running the chromatograms. The plates were prewashed with methanol and activated at 50 ± 5 °C for 2, 5 and 7 min prior to chromatography.

Limit of detection and limit of quantification were calculated by method as described in validation section and was found to be 50 ng
and 200 ng/spot respectively, which indicates the ample sensitivity of the method.
The specificity of the proposed method was determined by comparing the sample and standard peak for its R f and UV spectra. The
peak purity of swertiamarin was assessed by comparing the spectra at three different levels i.e. Peak start, peak apex and peak end
position.

A well-resolved single spot of swertiamarin was observed at an Rf value of 0.53 ± 0.02 in the chromatograms of the samples. The swertiamarin content in the different samples of Swertia chirayita was observed and calculated (Table 4).
This HPTLC method has been developed for the determination of swertiamarin in S. chirayita collected from different locations of the Himalaya. The proposed method is simple, precise, specific, accurate, less time consuming and cost effective, and has been developed with high precision and economic considerations. Statistical analysis showed that the method is suitable for the analysis of swertiamarin. This method can be used for authentication of this plant and is also suitable for quality control and standardization of drugs derived from S. chirayita. It was found that the plant sample procured from Nepal contained the highest amount of swertiamarin, so this region could be considered a suitable region for the cultivation of S. chirayita.


Table 1: Linear Regression Data for Calibration Plot


Parameters                   | Observation for swertiamarin
Linearity range (ng/spot)    | 200-1000
Correlation coefficient (r2) | 0.9961
Regression equation          | Y = 8.349X + 507.3
Slope ± SD                   | 8.349 ± 0.156
Intercept ± SD               | 507.3 ± 0.968
LOD (ng/spot)                | 50
LOQ (ng/spot)                | 200

Table 2: Recovery Studies of Swertiamarin

Excess drug added to analyte | Theoretical content (ng) | % Recovery | % RSD
-                            | 100                      | 99.53      | 0.39
100                          | 150                      | 98.12      | 0.13
150                          | 200                      | 100.2      | 0.21
200                          | 250                      | 99.59      | 0.31

Table 3: Precision Study of Swertiamarin


Amount applied (ng/spot) | Intra-day precision (% RSD, n=3) | Inter-day precision (% RSD, n=3)
200                      | 0.52                             | 0.73
400                      | 0.32                             | 0.45
600                      | 0.55                             | 0.25

Table 4: Swertiamarin Content in Sample Extract (%w/w of sample) from different Locations

Compound     | Place of collection | % w/w of sample (mean ± SD)
Swertiamarin | Nepal               | 1.42 ± 0.006
Swertiamarin | Kumaon, Uttarakhand | 0.859 ± 0.027
Swertiamarin | Gangtok, Sikkim     | 0.704 ± 0.009

ACKNOWLEDGMENTS
Financial support by the UGC under the Special Assistance Programme (SAP), Government of India, New Delhi, is thankfully acknowledged. The Hon'ble Vice Chancellor of Jamia Hamdard is also thankfully acknowledged for his support.

REFERENCES:
1. N. Pant, D.C. Jain and R.S. Bhakuni, Indian J. Chem. 39B: 565-586, 2000.
2. Joshi P, Dhawan V, Swertia chirayita - an overview. Curr Sci 89: 635-640, 2005.
3. Shah SP, Aspects of medicinal plant conservation in Nepal. In XVI International Botanical Congress, Abstr. no. 5227, Poster no. 2453, 1999.
4. Nadkarni KM, Indian materia medica, 3rd edn. Popular Prakashan, Bombay, 1184-1185, 1976.
5. Duke JA, Bogenschutz-Godwin MJ, duCellier J, Duke PAK, Handbook of medicinal herbs, 2nd edn. CRC Press, Boca Raton, 190, 2002.
6. Chatterjee A, Parkrashi SC, The treatise on Indian medicinal plants, vol 4. CSIR, New Delhi, 92-94, 1995.
7. Chandrasekar B, Bajpai MB, Mukherjee SK, Hypoglycemic activity of Swertia chirayita (Roxb ex Flem) Karst. Indian J Exp Biol 28: 616-618, 1990.
8. Chodhary NI, Bandyopadhyay SK, Banerjee SN, Dutta MK, Das PC, Preliminary studies on the anti-inflammatory effects of Swertia chirata in albino rats. Indian J Pharmacol 27: 37-39, 1995.
9. Karan M, Vasisht K, Handa SS, Antihepatotoxic activity of Swertia chirata on paracetamol and galactosamine induced hepatotoxicity in rats. Phytother Res 13: 95-101, 1999.
10. Bhargava S, Garg R, Evaluation of antibacterial activity of aqueous extract of Swertia chirata Buch. Ham. root. Int J Green Pharm 1: 51-52, 2007.
11. Khanom F, Kayahara H, Tadasa K, Superoxide-scavenging and prolyl endopeptidase-inhibitory activities of Bangladeshi indigenous medicinal plants. Biosci Biotechnol Biochem 64: 837-840, 2000.
12. Bhargava S, Rao PS, Bhargava P, Shukla S, Antipyretic potential of Swertia chirata Buch Ham. root extract. Sci Pharm 77: 617-623, 2009.
13. J. Rai and K.A. Thakur, Curr. Sci. 35(6): 148-149, 1965.
14. P.D. Desai, A.K. Ganguly, T.R. Govindachari, B.S. Joshi, V.N. Kamat, A.H. Manmade, S.K. Nagle, R.H. Nayak, A.K. Saksena, S.S. Sathe and N. Vishwanathan, Indian J. Chem. 4: 457-459, 1966.
15. Vishwakarma SL, Bagul MS, Rajni M and Goyal RK, A sensitive HPTLC method for estimation of swertiamarin in Enicostemma littorale Blume, Swertia chirata (Wall) Clarke, and in formulations containing E. littorale. Journal of Planar Chromatography 17: 128-131, 2004.
16. S.K. Bhattacharya, P.K. Reddy and S. Ghosal, Chem. Ind. (London) 3: 457-459, 1975.
17. J. Yamahara, M. Kobayashi, H. Matsuda and S. Aoki, J. Ethnopharmacol. 33(1/2): 31-36, 1991.
18. H. Takei, K.Y. Nakauchi and F. Yoshizaki, Anal. Sci. 17: 885-888, 2001.
19. Sawant PL, Prabhakar BR and Pandita NS, A validated quantitative HPTLC method for analysis of biomarkers in Enicostemma littorale Blume. Journal of Planar Chromatography 24(6): 497-502, 2011.
20. ICH, Q2 (R1), Harmonised Tripartite Guideline, Validation of Analytical Procedures: Text and Methodology, International Conference on Harmonization (ICH), Geneva, 2005.


Simulation & Implementation of Three Phase Induction Motor on Single Phase by Using PWM Techniques

Ashwini Kadam1, A.N. Shaikh2

1 Student, Department of Electronics Engineering, BAM University, akadam572@gmail.com, 9960158714

Abstract: The main objective of this paper is to control the speed of an induction motor by changing the frequency using a three-level diode clamped multilevel inverter. To obtain a high quality sinusoidal output voltage with reduced harmonic distortion, a multicarrier PWM control scheme is proposed for the diode clamped multilevel inverter. The method is implemented by changing the supply voltage and frequency applied to the three phase induction motor at a constant ratio. The proposed system is an effective replacement for the conventional method, which produces high switching losses and results in poor drive performance. The simulation and implementation results reveal that the proposed circuit effectively controls the motor speed and enhances the drive performance through reduction in total harmonic distortion (THD). The effectiveness of the system is checked by simulation using the MATLAB 7.8 Simulink package.
Keywords Clamped Diode, MOSFET, Induction motor, Multicarrier PWM technique, THD.

INTRODUCTION
Majority of industrial drives use ac induction motor because these motors are rugged, reliable, and relatively inexpensive.
Induction motors are mainly used for constant speed applications because of unavailability of the variable-frequency supply voltage
[1]. But many applications need variable speed operations. Historically, mechanical gear systems were used to obtain variable speed.
Recently, power electronics and control systems have matured to allow these components to be used for motor control in place of
mechanical gears. These electronics not only control the motor's speed, but can also improve the motor's dynamic and steady-state characteristics. An adjustable speed ac machine system is equipped with an adjustable frequency drive, that is, a power electronic device
for speed control of an electric machine. It controls the speed of the electric machine by converting the fixed voltage and frequency to
adjustable values on the machine side. High power induction motor drives using classical three - phase converters have the
disadvantages of poor voltage and current qualities. To improve these values, the switching frequency has to be raised which causes
additional switching losses. Another possibility is to put a motor input filter
between the converter and motor, which causes additional weight. A further inconvenience is the limited voltage that can be applied to
the induction motor determined by inconvenience is the limited voltage that can be applied to the induction motor determined by the
blocking voltage of the semiconductor switches. The concept of multilevel inverter control has opened a new possibility that induction
motors can be controlled to achieve dynamic performance equally as that of dc motors [2].
Recently many schemes have been developed to achieve multilevel voltage profile, particularly suitable for induction motor
drive applications. The diode clamp method can be applied to higher level converters. As the number of level increases, the
synthesized output waveform adds more steps, producing a staircase waveform. A zero harmonic distortion of the output wave can be
obtained by an infinite number of levels [3]. Unfortunately, the number of the achievable levels is quite limited not only due to voltage
unbalance problems but also due to voltage clamping requirements, circuit layout and packaging constraints. In this paper, a three-phase diode clamped multilevel inverter fed induction motor is described. The diode clamped inverter provides multiple voltage levels
from a series bank of capacitors [4]. The voltage across the switches has only half of the dc bus voltage. These features effectively
double the power rating of voltage source inverter for a given semiconductor device [5]. The proposed inverter can reduce the
harmonic contents by using the multicarrier PWM technique. It generates motor currents of high quality. V/f control is an efficient method for
speed control in open loop. In this scheme, the speed of induction machine is controlled by the adjustable magnitude of stator voltages
and its frequency in such a way that the air gap flux is always maintained at the desired value at the steady-state. Here the speed of an
induction motor is precisely controlled by using three level diode clamped multilevel inverter.
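A compact illustration of the constant V/f rule described above is sketched below; the rated voltage, rated frequency and low-speed boost are placeholder values for a generic machine, not parameters of the drive presented in this paper.

def vf_command(f_cmd, v_rated=400.0, f_rated=50.0, v_boost=10.0):
    """Open-loop constant V/f command: keep V/f (and hence the air-gap flux) constant.
    Rated voltage/frequency and the low-speed boost are placeholder values."""
    if f_cmd <= 0:
        return 0.0
    v = v_rated * f_cmd / f_rated + v_boost   # small boost offsets the stator IR drop
    return min(v, v_rated)                    # do not exceed the rated voltage

for f in (10, 25, 40, 50):
    print(f"f = {f:2d} Hz -> V = {vf_command(f):.0f} V")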

CONVENTIONAL METHOD
The voltage source inverter produces an output voltage or a current with levels either zero or Vdc. They are known as two
level inverter. To obtain a quality output voltage or a current waveform with a minimum amount of ripple content, they require high
switching frequency along with various pulse-width modulation strategies. In high power and high-voltage applications, these two-level inverters have some limitations in operating at high frequency, mainly due to switching losses and constraints of device rating.
The dc link voltage of a two-level inverter is limited by voltage ratings of switching devices, the problematic series connection of
switching devices is required to raise the dc link voltage. By series connection, the maximum allowable switching frequency has to be
lowered further; hence the harmonic reduction becomes more difficult [6]. In addition, the two-level inverters generate high frequency
common-mode voltage within the motor windings which may result in motor and drive application problems [7]. From the aspect of
harmonic reduction and high dc-link voltage level, three-level approach seems to be the most promising alternative. The harmonic
contents of a three-level inverter are less than that of a two-level inverter at the same switching frequency and the blocking voltage of
the switching device is half of the dc-link voltage [8]. A three level inverter will not generate common-mode voltages when the
inverter output voltages are limited within
certain of the available switching states [9]. So the three-level inverter topology is generally used in realizing the high performance,
high voltage ac drive systems [10].

DRIVE SYSYEM DESCRIPTION


In the conventional technique, the normal PWM method is used, so the voltage and current are of poor quality and the switching frequency causes a larger amount of switching losses. Those drawbacks are rectified using a three phase diode clamped multilevel inverter: the voltage and current quality are better and the switching losses are reduced when compared to the conventional technique. The THD is also found to be better.

Fig. 2. Multilevel inverter based drive circuit

Structure of Three Level Diode Clamped Multilevel Inverter


The three-level neutral-point-clamped voltage source inverter is shown in Fig.2. It contains 12 unidirectional active switches and 6 neutral point clamping diodes. The middle point of the two capacitors, n, can be defined as the neutral point [7]. The major benefit of this configuration is that each switch must block only one-half of the dc link voltage (Vdc/2). In order to produce three levels, only two of the four switches in each phase leg should be turned on at any time. The dc-bus voltage is split into three levels by two series-connected bulk capacitors, Ca and Cb, which have the same rating. The diodes are all of the same type to provide equal voltage sharing and to clamp the same voltage level across the switch when the switch is in the off condition. Hence this structure imposes less voltage stress across the switch.

PRINCIPLE OF OPERATION
To produce a staircase output voltage, consider one leg of the three-level inverter, as shown in Fig.3. The steps to synthesize
the three-level voltages are as follows.
1. For an output voltage level Vao=Vdc, turn on both upper-half switches A1 and A2.
2. For an output voltage level Vao=Vdc/2, turn on one upper switch A2 and one lower switch A1'.

3. For an output voltage level Vao=0, turn on both lower-half switches A1' and A2'.

Fig.3. One leg of a bridge.


Table 1 shows the voltage levels and their corresponding switch states. State condition 1 means the switch is on; 0 means the switch is off. There are two complementary switch pairs in each phase. These pairs for one leg of the inverter are (A1, A1') and (A2, A2'). If one switch of a complementary pair is turned on, the other switch of the same pair must be off.

TABLE 1 Output voltage levels and their Switching states.


Fig.4 shows the phase voltage waveform of the three-level inverter. An m-level converter has an m-level output phase-leg voltage and a (2m-1)-level output line voltage.


Fig.4. Three level inverter output voltage.


The most attractive features of multilevel inverters are as follows.
They can generate output voltages with extremely low distortion and lower dv/dt.
They draw input current with very low distortion [7].
They generate smaller common mode (CM) voltage, thus reducing the stress in the motor bearings.
They can operate with a lower switching frequency [7].

PROPOSED SCHEME
The block schematic of multilevel inverter fed three phase induction motor is as shown in Fig.5. The complete system will
consist of two sections; a power circuit and a control circuit. The power section consists of a power rectifier, filter capacitor, and three
phase diode clamped multilevel inverter. The motor is connected to the multilevel inverter.

Fig.5. Basic block diagram.



Fig.6 shows the ac input voltage fed to a three-phase diode bridge rectifier with a capacitor filter. The capacitor filter removes the ripple content present in the dc output voltage.

Fig.6 Ac input voltage is fed to a three phase diode bridge rectifier with capacitor filter
The pure dc voltage is applied to the three-phase multilevel inverter through the capacitor filter. The multilevel inverter has 12 MOSFET switches that are controlled so as to generate an ac output voltage from the dc input voltage. The control circuit of the proposed system consists of three blocks, namely the PWM generator, the opto-coupler and the gate driver circuit. Fig.7 shows the circuit diagram of the PWM generator.

Fig.7 Circuit for PWM


The PWM generator uses four clocks: a reference clock, a voltage control clock, a frequency clock and an output delay clock. A VCO generates the frequency clock, and a Schmitt trigger is used to generate the reference clock, the output delay clock and the

voltage control clock. The PWM is used for generating the gating signals required to drive the power MOSFET switches present in the multilevel inverter. The voltage magnitude of the gate pulses generated by the PWM is normally 5 V. To drive the power switches satisfactorily, an opto-coupler and driver circuit are necessary between the controller and the multilevel inverter. The ac output voltage obtained from the multilevel inverter can be controlled in both magnitude and frequency (V/f open-loop control). The controlled ac output voltage is fed to the induction motor drive. When the power switches are on, current flows from the dc bus to the motor windings. The motor windings are highly inductive in nature; they hold electric energy in the form of current. This current needs to be dissipated while the switches are off. Diodes connected across the switches give a path for the current to dissipate when the switches are off; these diodes are also called freewheeling diodes. The V/f control method permits the user to control the speed of an induction motor at different rates. For continuously variable speed operation, the output frequency of the multilevel inverter must be varied. The voltage applied to the motor must also be varied in linear proportion to the supply frequency to maintain constant motor flux.

MODULATION STRATEGY
This paper mainly focuses on the multicarrier PWM method. This method is simpler and more flexible than SVM methods. The multicarrier PWM method uses several triangular carrier signals with only one sinusoidal modulating signal. If an n-level inverter is employed, n-1 carriers are needed. The carriers have the same frequency ωc and the same peak-to-peak amplitude Ac, and are disposed so that the bands they occupy are contiguous. The zero reference is placed in the middle of the carrier set. The modulating signal is a sinusoid of frequency ωm and amplitude Am. At every instant each carrier is compared with the modulating signal. Each comparison gives 1 (-1) if the modulating signal is greater than (lower than) the triangular carrier in the first (second) half of the fundamental period, and 0 otherwise. The results are added to give the voltage level required at the output terminal of the inverter. Multicarrier PWM methods can be categorized into two groups: 1) carrier disposition (CD) methods and 2) phase-shifted PWM methods.
Advantages of multicarrier PWM techniques:
Easily extensible to a high number of levels.
Easy to implement.
The switching signals can be distributed so as to minimize the switching losses.
Unbalanced dc sources can be compensated.
Depending on how the carrier waves are placed in relation to the reference signal, three cases can be distinguished:
Alternative Phase Opposition Disposition (APOD), where each carrier band is shifted by 180° from the adjacent bands.
Phase Opposition Disposition (POD), where the carriers above the zero reference are in phase, but shifted by 180° from those carriers below the zero reference.
In-Phase Disposition (PD), where all the carriers are in phase [8].
In this paper the gating pulses for MOSFET switches are generated by using In-phase disposition technique.
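To make the carrier-comparison logic described above concrete, the following is a minimal Python sketch (not the authors' MATLAB implementation) of level-shifted in-phase disposition PWM for one three-level phase leg; the modulation index, reference frequency and carrier frequency used here are illustrative assumptions.

import numpy as np

def pd_pwm_level(t, m=0.8, f_ref=50.0, f_carrier=2000.0):
    # In-phase disposition (PD) PWM for a three-level leg.
    # Both triangular carriers share frequency and phase; they differ only in
    # dc offset so that together they occupy two contiguous bands in [-1, 1].
    ref = m * np.sin(2 * np.pi * f_ref * t)        # sinusoidal modulating signal
    tri = 2 * np.abs((f_carrier * t) % 1.0 - 0.5)  # triangle wave in [0, 1]
    carrier_up = tri                                # upper band:  0 .. +1
    carrier_dn = tri - 1.0                          # lower band: -1 ..  0
    # +1 when ref is above the upper carrier, -1 when below the lower carrier,
    # 0 otherwise; the three levels map to +Vdc/2, 0 and -Vdc/2 at the pole.
    return (ref > carrier_up).astype(int) - (ref < carrier_dn).astype(int)

t = np.linspace(0.0, 0.02, 20000)   # one 50 Hz fundamental period
print(np.unique(pd_pwm_level(t)))   # -> [-1  0  1]

Mapping the levels +1, 0 and -1 to the switch states of Table 1 then yields the gating pulses for each leg.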


Fig.8. In-phase disposition technique



V/f CONTROL THEORY
Fig.9 shows the relation between voltage and torque versus frequency. The voltage and frequency are increased up to the base speed. At base speed, the voltage and frequency reach their rated values. The motor can be driven beyond base speed by increasing the frequency further, but the applied voltage cannot be increased beyond the rated voltage. Therefore, only the frequency can be increased, which results in field weakening and a reduction of the available torque. Above base speed, the factors governing torque become complex, since friction and windage losses increase significantly at higher speeds. Hence, the torque curve becomes nonlinear with respect to speed or frequency.
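A minimal sketch of the constant-V/f law described above is given below; the rated voltage, base frequency and low-speed boost are illustrative assumptions, not values taken from this paper.

def vf_command(f_cmd, f_base=50.0, v_rated=400.0, v_boost=0.0):
    # Open-loop V/f law: voltage grows in proportion to frequency up to the
    # base frequency, then is held at the rated value (field-weakening region).
    if f_cmd <= f_base:
        return v_boost + (v_rated - v_boost) * f_cmd / f_base
    return v_rated  # above base speed only the frequency keeps increasing

for f in (10, 25, 45, 50, 60):
    print(f, "Hz ->", round(vf_command(f), 1), "V")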

Fig.9. Speed-torque characteristics with V/f control.

SIMULATED CIRCUITS AND WAVEFORMS


Fig.10 shows the PWM circuit used to generate the gating signals for the multilevel inverter switches. To control a three-phase multilevel inverter with a three-level output voltage, two carriers are generated and compared at each instant with a set of three sinusoidal reference waveforms: one carrier wave above the zero reference and one carrier wave below the reference. These carriers are the same in frequency, amplitude and phase; they differ only in dc offset so as to occupy contiguous bands. The phase disposition technique gives less harmonic distortion in the line voltages.


Fig.10. PWM simulation circuit.

Fig.11. Gate pulses for leg A switches.


Fig.11 shows the waveform of the sine-triangle intersection. Two carriers together with the modulating signal have been used to obtain SPWM control. The simulated model of the entire circuit is shown in Fig.12.

Fig.12. Simulated circuit.


Output voltage and current waveforms for 50 Hz operation are shown in figures 13 and 14 below.

Fig.13. Output line-line voltage for 50Hz frequency.


Fig.14. Output current waveform.


The FFT plot of the output voltage is shown in Fig.15. The plot shows that the harmonic content present in the output voltage is very
low.

Fig.15. FFT for output voltage.


Speed-Torque curves for 50 Hz and 45 Hz frequencies are shown in figures 16 and 18.

Fig.16. N-T curves for 50Hz frequency.


The frequency of the reference signal determines the inverter output frequency, and its peak amplitude controls the modulation index. Varying the modulation index changes the rms output voltage of the multilevel inverter. By varying both the reference signal frequency and the modulation index, the speed of the induction motor is controlled.
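As a rough illustration of how the modulation index sets the rms output voltage, the sketch below uses the standard linear-range relation for sinusoidal PWM (an assumption for illustration, not a formula quoted in this paper); the 600 V dc-link value is likewise only an example.

import math

def line_rms_voltage(m, v_dc):
    # Fundamental line-to-line rms voltage of a sinusoidal-PWM inverter in the
    # linear modulation range: V_LL = m * sqrt(3) / (2 * sqrt(2)) * Vdc.
    return m * math.sqrt(3) / (2.0 * math.sqrt(2)) * v_dc

print(round(line_rms_voltage(0.9, 600.0), 1))  # about 330.7 V for m = 0.9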


Fig.17. Output line-line voltage for 45Hz frequency.


Fig.18. N-T curves for 45Hz frequency.


The speed-torque curves show that as the voltage and frequency applied to the motor are decreased, the speed of the induction motor decreases correspondingly.


TABLE 2 Speed range for different frequency values.

Frequency (Hz)   Speed (rpm)   Current THD, Ia (%)   Voltage THD, Vab (%)
50               1479          3.33                  11.43
45               1300          3.09                  10.97
40               1275          7.89                  18.20
35               1065          10.76                 27.30

ACKNOWLEDGMENT
The completion of this Dissertation-I would not have been possible without the support and guidance of the following people and organizations. With my deep sense of gratitude, I thank the respected Principal Dr. G. S. Sable, the respected H.O.D. Prof. A. K. Pathrikar and my respected teachers for supporting the topic of my Dissertation-I. I take this opportunity to thank my guide Prof. A. N. Shaikh and the other teachers whose help and guidance made this study possible. I would also like to express my gratitude with a word of thanks to all those who are directly or indirectly associated with this paper.

CONCLUSION
In this paper a diode clamped multilevel inverter has been presented for drive applications. The multicarrier PWM technique can be implemented to produce low harmonic content in the output, hence a high-quality output voltage is obtained. Open-loop speed control was achieved by maintaining the V/f ratio at a constant value. The simulation results show that the proposed system effectively controls the motor speed and enhances the drive performance through a reduction in total harmonic distortion (THD). This drive system can be used for variable speed applications such as conveyors, rolling mills and printing machines.
REFERENCES:
[1] J. Rodriguez, S. Bernet, B. Wu, J. Pontt and S. Kouro, "Multilevel Voltage-Source-Converter Topologies for Industrial Medium-Voltage Drives," IEEE Trans. on Industrial Electronics, vol. 54, no. 6, Dec 2007.
[2] Rama Reddy and G. Pandian, "Implementation of Multilevel Inverter fed Induction Motor Drive," Journal of Industrial Technology, vol. 24, no. 1, April 2008.
[3] Nikolaus P. Schibli, T. Nguyen and Alfred C. Rufer, "A Three-Phase Multilevel Converter for High-Power Induction Motors," IEEE Trans. on Power Electronics, vol. 13, no. 5, Sept 1998.
[4] N. Celanovic and D. Boroyevich, "A Comprehensive Study of Neutral-Point Voltage Balancing Problem in Three-Level Neutral-Point-Clamped Voltage Source PWM Inverters," IEEE Trans. on Power Electronics, vol. 15, no. 2, pp. 242-249, March 2000.
[5] K. Yamanaka and A. M. Hava, "A Novel Neutral Point Potential Stabilization Technique Using the Information of Output Current Polarities and Voltage Vector," IEEE Trans. on Ind. Appl., vol. 38, no. 6, pp. 1572-1580, Nov/Dec 2002.
[6] Fang Zheng Peng, "A Generalized Multilevel Inverter Topology with Self Voltage Balancing," IEEE Trans. on Ind. Appl., vol. 37, no. 2, pp. 611-618, Mar/April 2001.
[7] J. Rodriguez, J. S. Lai and Fang Z. Peng, "Multilevel Inverters: A Survey of Topologies, Controls and Applications," IEEE Trans. on Industrial Electronics, vol. 49, no. 4, pp. 724-738, Aug 2002.
[8] L. M. Tolbert and T. G. Habetler, "Novel Multilevel Inverter Carrier-Based PWM Methods," IEEE Trans. on Ind. Appl., vol. 35, pp. 1098-1107, Sept 1999.
[9] Leon M. Tolbert, Fang Zheng Peng and Thomas G. Habetler, "Multilevel PWM Methods at Low Modulation Indices," IEEE Trans. on Power Electronics, vol. 15, no. 4, July 2000.
[10] N. S. Choi, J. G. Cho and G. H. Cho, "A General Circuit Topology of Multilevel Inverter," in Proc. IEEE PESC'91, 1991, pp. 96-103.


Analysis of T-Source Inverter with Integrated Controller using PSO Algorithm

Sathyasree.K(1), Ragul.S(2), Divya.P(3), Manoj kumar.R(4)
(1) Assistant Professor, Nandha Engineering College, Erode
(2) PG Scholar, Nandha Engineering College, Erode
(3) PG Scholar, Nandha Engineering College, Erode
(4) PG Scholar, Nandha Engineering College, Erode

E-mail- ragul.tetotteler@gmail.com, Contact no.7639651820

Abstract - This paper deals with the design and simulation of an integrated controller for a T-source inverter (TSI) based photovoltaic (PV) power conversion system. The TSI has fewer reactive components, higher voltage gain and reduced voltage stress across the switches compared with the conventional inverter used in PV power conversion systems. The integrated controller provides maximum power point tracking and DC link voltage control to the PV power conditioning system. Here the Maximum Power Point Tracking (MPPT) is achieved by a PSO algorithm, and the DC link voltage is controlled by a capacitor voltage controller algorithm. The T-source inverter with integrated control for the PV system is simulated using MATLAB Simulink. The results are analyzed and compared with a DC link controller, and the same has been verified with an experimental setup.

Keywords: Photovoltaic (PV), DC link controller, Maximum Power Point Tracking (MPPT), Space Vector Pulse Width Modulation (SVPWM), T-source inverter (TSI), PSO algorithm

1.INTRODUCTION
The reduction of greenhouse gas emissions has become an important issue among developed countries; for example, the European Union (EU) has targeted renewable energy sources to supply 20% of total energy consumption by 2020 [1]. Among green sources, photovoltaics play an important role in generating electricity from solar irradiation. In remote locations where no grid electricity is available, photovoltaic cells are installed on roofs and in deserts [2]. The latter type of installation is known as an off-grid facility, and such facilities are sometimes the most economical alternative for providing electricity in isolated areas.
The three main factors which affect the efficiency of a PV plant are the inverter efficiency, the MPPT efficiency and the photovoltaic plant efficiency. Commercial PV panel efficiency lies between 8 and 15%, inverter efficiency is 95-98% and MPPT efficiency is over 98%. Perturb & Observe (P&O) and PSO algorithms are used to track the maximum power under different irradiation conditions. The MPP depends on the temperature of the panel and the irradiation conditions. The ZSI suffers from high input inductor ripple and high switching stress on the switches. To overcome these drawbacks, the trans-Z-source inverter, also called the T-source inverter (TSI), was proposed [8]. The T-Source Inverter (TSI) has a reduced number of passive components, a high rate of input utilization and high voltage gain, while the total volume and cost of the system are decreased. In this paper, owing to the above advantages, the TSI with Modified Space Vector Pulse Width Modulation (MSVPWM) is used. The photovoltaic source is intermittent: its output voltage depends strongly on environmental conditions such as temperature and irradiation. It is therefore necessary to regulate the output voltage of the PV source, so a controller is essential for the PV power converter. Many authors have discussed controllers for PV inverters.
In this context, the MPPT and DC link controllers play a very important role in input line balancing. The DC link controller senses the dc link voltage, compares it with the reference value of the capacitor voltage, and changes the reference current correspondingly. Here the DC link controller and MPPT controller are unified to produce a shoot-through ratio and improve the response time of the MPPT controller. With the integrated controller, the settling time and the oscillations are reduced. Control of the T-source capacitor voltage beyond the MPP voltage of a PV array is not facilitated in the traditional MPPT algorithm. In this paper an integrated control algorithm is proposed. The integrated control algorithm is a combination of the PSO MPPT algorithm and a DC link capacitor voltage control algorithm. The PSO algorithm is used to reduce the oscillations in steady state, improve the tracking accuracy and give a fast response. DC link control is used to reduce the harmonic distortion and to prevent high voltage stress on the switching devices. The proposed algorithm is investigated and compared with a dc link controller.

2. PHOTOVOLTAIC CELL
Photovoltaic (PV) power is used in applications such as PV with generators, PV with batteries, solar water pumps, etc. PV has advantages such as freedom from pollution, low maintenance, and no noise and wear due to the absence of moving parts. Because of these, PV is used in power generation across the world. Environmental factors such as illumination and temperature determine the

output power of a photovoltaic cell, so the V-I characteristics are non-linear. In order to match the solar cell power to environmental changes, an MPPT controller is required. To track the MPP of a solar cell, P&O, fuzzy control and many other algorithms have been developed. Since the environment changes frequently, improving the tracking speed of the PV power system can obviously improve the system performance and increase the PV cell efficiency. The PV cell output is limited by both the cell current and the cell voltage, and it can only produce power with combinations of current and voltage lying on the I-V curve of Fig.1. The figure also shows that the cell current is proportional to the irradiance. The maximum power is determined from the open-circuit voltage and the short-circuit current. A single PV cell produces an output voltage of less than 0.6 V for silicon cells. To get the desired output voltage, a number of photovoltaic cells are connected in series. Placing the series-connected cells in a frame forms a module [8]. In a series connection, the output current is the same through each cell and the output voltage is the sum of the individual cell voltages. Fig.2 shows the simulated I-V and P-V characteristics of a photovoltaic cell under different irradiation conditions. By using the open-circuit voltage and short-circuit current, the maximum power is obtained.
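To illustrate the I-V and P-V behaviour discussed above, the sketch below evaluates a very simplified single-diode style module model; the cell count, short-circuit current, open-circuit voltage and diode ideality factor are illustrative assumptions loosely inspired by a 96-cell, roughly 327 W panel, not datasheet-exact values.

import numpy as np

def pv_module_current(v, g=1000.0, i_sc=6.46, v_oc=64.9, n_cells=96, t_cell=298.15):
    # Simplified single-diode model of a series-connected module:
    # I = Iph - I0 * (exp(V / Vt) - 1), with Iph proportional to irradiance g.
    k, q, ideality = 1.381e-23, 1.602e-19, 1.3
    v_t = ideality * n_cells * k * t_cell / q        # thermal voltage of the string
    i_ph = i_sc * g / 1000.0                          # photocurrent ~ irradiance
    i_0 = i_sc / (np.exp(v_oc / v_t) - 1.0)           # diode saturation current
    return i_ph - i_0 * (np.exp(v / v_t) - 1.0)

v = np.linspace(0.0, 64.9, 200)
i = np.clip(pv_module_current(v), 0.0, None)
p = v * i
print("MPP about %.0f W near %.1f V" % (p.max(), v[p.argmax()]))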

Fig.1. I-V and P-V characteristics

Fig.2 PV cell: a) I-V characteristics, b) P-V characteristics


Table 1: PV Module specification


The Sun Power E20/327W photovoltaic module datasheet is chosen for the MATLAB simulation model. The module is made up of 96 monocrystalline silicon solar cells and provides 327 W of nominal maximum power [11].

3. T SOURCE INVERTER
The new type T-source inverter (TSI) overcomes the limitations of the traditional voltage source inverter and current source inverter. With the TSI, both the inversion and the boost function are accomplished in a single stage [8]. The TSI has fewer components, and for these reasons its efficiency appreciably increases. Unlike the traditional inverter, the TSI utilizes a unique impedance network that links the inverter main circuit to the DC source. The number of elements of the TSI is reduced when compared with the Z-source inverter; only a transformer (coupled inductor) and one capacitor are needed.
The T-source inverter reduces the number of switching devices, and the inductor decreases the inrush current and the harmonics in the inrush current. The TSI works in two modes of operation: shoot-through mode and non-shoot-through mode.

Fig.3 T-source inverter

3.1 SHOOT-THROUGH MODE


Fig.4 shows the equivalent circuit of the T-source inverter in shoot-through mode. This shoot-through zero state is prohibited in the traditional voltage source inverter. It can be obtained in different ways, such as shoot-through via any one phase leg or a combination of two phase legs. During this mode, the diode is reverse biased, separating the DC link from the AC line.

Fig.4 Shoot-through of TSI

The output of the shoot-through state can be controlled so as to maintain the desired voltage. Thus the T-source inverter greatly improves the reliability of the inverter, since a short circuit across any phase leg is allowed and cannot destroy the switches in the inverter.

3.2 NON SHOOT-THROUGH MODE


Fig.5 shows the equivalent circuit of the TSI in non-shoot-through mode. In this mode, the inverter bridge operates in one of the traditional active states, thus acting as a current source when viewed from the T-source circuit. During the active state, the voltage is impressed across the load. The diode conducts and carries the difference between the inductor current and the input DC current. Note that both windings carry an identical current because the inductors are coupled.


Fig.5 Non shoot-through mode of TSI

3.3 DESIGN OF T SOURCE INVERTER


The most challenging part of the TSI design is the estimation of the values of the reactive components of the impedance network. The component values should be evaluated for the minimum input voltage of the converter, where the boost factor and the current stresses of the components become maximal. Calculation of the average inductor current:

The maximum ripple current occurs during the maximum shoot-through state; a 60% peak-to-peak ripple current was selected to design the T-source inductor. The ripple current is IL, and the maximum

The boost factor of the input voltage is:

where DL is the shoot-through duty cycle:

Calculation of the required inductance of the Z-source inductors:

where T0 is the shoot-through period per switching cycle. Calculation of the required capacitance of the T-source capacitors:
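Since the design equations above appear only as images in the source, the sketch below uses the boost-factor expression commonly quoted for trans-Z-source/T-source networks and a generic volt-second estimate for the inductor; both relations, and all numerical values, are assumptions for illustration rather than the paper's own formulas.

def tsi_boost_factor(d_shoot, n_turns=2.0):
    # Commonly quoted T-source boost factor: B = 1 / (1 - (1 + n) * D0),
    # where n is the coupled-inductor turns ratio and D0 the shoot-through duty.
    denom = 1.0 - (1.0 + n_turns) * d_shoot
    if denom <= 0.0:
        raise ValueError("shoot-through duty too large for this turns ratio")
    return 1.0 / denom

def inductor_from_ripple(v_cap, d_shoot, t_switch, delta_i):
    # Rough inductor sizing from the volt-seconds applied during shoot-through:
    # L ~ Vc * D0 * Ts / delta_I.
    return v_cap * d_shoot * t_switch / delta_i

print(round(tsi_boost_factor(0.2), 2))                        # -> 2.5
print(round(inductor_from_ripple(200.0, 0.2, 1e-4, 3.0), 5))  # ~ 0.00133 H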

3.4 Modified Space Vector PWM


Space vector PWM (SVPWM) in three-phase voltage source inverters offers improved DC link voltage utilization and reduced harmonic distortion, and has therefore been recognized as the preferred PWM method, especially in the case of digital implementation. Output voltage control by SVPWM consists of switching between the two active vectors and one zero voltage vector in such a way that


Table 2. Parameters and Values of T-Source Inverter

the time average within each switching cycle corresponds to the voltage command. In order to apply this concept to the Z-source inverter, a modified SVPWM is needed that introduces the shoot-through states into the zero vectors without compromising the active states [15]; it is represented in Fig 6. The DC link voltage can be boosted by adding the new duration T to the switching times T1, T2 and T0 of the Z-source converter to produce the sinusoidal ac output voltage [16].

Tz = Ta + Tb, where Tz denotes the switching period, T is the total duration of the shoot-through and non-shoot-through periods, M represents the modulation index, K is the boost factor and V is the peak dc-link voltage. The shoot-through zero vector takes place when both switches in a leg are turned on; when shoot-through takes place the dc link voltage is boosted, which depends on the total shoot-through duration Td = 3T. Without changing the zero vectors V0, V7 and T and the non-zero vectors V1-V6, one shoot-through zero state T occurs per switching cycle T by turning the switches in each phase on and off. By adjusting the shoot-through time interval, the DC link voltage and the output voltage of the inverter are controlled. The modulation index M = a(4/3) is determined by the zero vector duration T0/2.

Fig 6 Modified SVPWM implementation for sector1

4. Control algorithms
4.1 PSO methodology


The general idea of the PSO-based control is to create a closed-loop controller with parameters that can be updated to change the response of the system. The output of the system is compared to a desired response from a reference model, and the control parameters are updated based on this error. The goal is for the parameters to converge to ideal values that cause the plant response to match the response of the reference model. Each particle maintains its position, composed of the candidate solution and its evaluated fitness, and its velocity. Additionally, it remembers the best fitness value it has achieved thus far during the operation of the algorithm, referred to as the individual best fitness, and the candidate solution that achieved this fitness, referred to as the individual best position or individual best candidate solution. Finally, the PSO algorithm maintains the best fitness value achieved among all particles in the swarm, called the global best fitness, and the candidate solution that achieved this fitness, called the global best position or global best candidate solution.
The PSO algorithm consists of just three steps, which are repeated until some stopping condition is met:
1. Evaluate the fitness of each particle
2. Update individual and global best fitnesses and positions
3. Update velocity and position of each particle
The first two steps are fairly trivial. Fitness evaluation is conducted by supplying the candidate solution to the objective function. Individual and global best fitnesses and positions are updated by comparing the newly evaluated fitnesses against the previous individual and global best fitnesses, and replacing the best fitnesses and positions as necessary. The velocity and position update step is responsible for the optimization ability of the PSO algorithm. The velocity of each particle in the swarm is updated using the following equation:
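The velocity-update equation referred to above is not reproduced in the extracted text; the sketch below assumes the canonical form v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x), with an inertia weight w and acceleration coefficients c1, c2 (illustrative values), applied here to a simple one-dimensional maximisation.

import random

def pso_step(particles, fitness, w=0.7, c1=1.5, c2=1.5):
    # One iteration of the canonical PSO update.  Each particle is a dict with
    # keys x (position), v (velocity), best_x and best_f (individual best).
    g_best = max(particles, key=lambda p: p["best_f"])["best_x"]  # global best position
    for p in particles:
        r1, r2 = random.random(), random.random()
        p["v"] = (w * p["v"] + c1 * r1 * (p["best_x"] - p["x"])
                  + c2 * r2 * (g_best - p["x"]))
        p["x"] += p["v"]
        f = fitness(p["x"])
        if f > p["best_f"]:
            p["best_x"], p["best_f"] = p["x"], f

# Illustrative use: maximise a 1-D fitness (e.g. PV power versus duty ratio).
fit = lambda x: -(x - 0.35) ** 2
swarm = [{"x": random.random(), "v": 0.0} for _ in range(5)]
for p in swarm:
    p["best_x"], p["best_f"] = p["x"], fit(p["x"])
for _ in range(50):
    pso_step(swarm, fit)
print(round(max(p["best_f"] for p in swarm), 4))  # approaches 0, i.e. x near 0.35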

Apart from the canonical PSO algorithm, many variations of the PSO algorithm exist. For instance, the inertia weight coefficient was originally not a part of the PSO algorithm, but was a later modification that became generally accepted. Additionally, some variations of the PSO do not include a single global best aspect of the algorithm, and instead use multiple global bests that are shared by separate subpopulations of the particles.

4.2 Maximum power point control:


Many MPPT techniques are available, e.g. open-circuit voltage, short-circuit current, the perturb and observe (P&O) method, the PSO algorithm, fuzzy controllers, etc. An MPPT algorithm that provides high-performance tracking in steady-state conditions can easily be found [12]. The PSO tracker is very popular; it has some important advantages such as simplicity, applicability to almost any PV system configuration, and automatic adjustment of the step size to track the MPP of the PV array without needing to be re-tuned as environmental conditions change.

5. PROPOSED CONTROLLER
The proposed system is represented in Fig.9. The controller processes the maximum power point tracking algorithm and the DC link control algorithm simultaneously, which results in a reduction of the response time. In the capacitor voltage control algorithm, the capacitor voltage of the T-network is measured and given to the capacitor voltage controller. Maximum power point tracking of the PV array is done using the PSO algorithm, because it has reduced oscillation in steady state. The outputs of the capacitor voltage controller and the MPPT controller are combined using an integrator and processed to generate the reference signal. The output of the integrated controller is fed to the pulse width modulator to control the power switches.


Fig.9 Proposed block diagram

6. EXPERIMENTAL RESULTS
The corresponding simulation results are represented in Fig.10. The integrated controller gives more power output than the DC link controller because the integrated controller incorporates both the MPPT algorithm and the DC link control algorithm, and it also gives a good dynamic response.

Fig. 10 Power of integrated controller


Fig.11. Total harmonic distortion (%)


Table 4. Comparison of DC link and integrated controllers

7. CONCLUSION
In this paper, an integrated controller for a TSI based photovoltaic power conversion system has been proposed. The PSO algorithm is used to track the maximum power under different irradiation and temperature conditions. The proposed method is simulated and investigated using MATLAB Simulink, and the same has been implemented and verified using a hardware setup. The comparison between the DC link and MPPT controllers is presented. The proposed algorithm improves the tracking accuracy and reduces the oscillations in steady state, and the response time is lower with the integrated controller than with the DC link controller.

REFERENCES
[1] The European Union climate and energy package, http://ec.europa.eu/clime/policies/au/package_en.htm.
[2] D. JC. MacKay, Sustainable Energy - Without the Hot Air, UIT Cambridge, 2009. [Online]. Available: http://www.inference.phy.cam.ac.uk/sustainable/book/tex/cft.pdf.
[3] J. Anderson and F. Z. Peng, "Four Quasi-Z-Source Inverters," Proc. of IEEE PESC 2008, 15-19 June 2008, pp. 2743-2749.
[4] J. Anderson and F. Z. Peng, "A Class of Quasi-Z-Source Inverters," in Proc. of 43rd IAS Annual Meeting, 2008, pp. 1-7.
[5] F. Z. Peng, "Z-Source Inverter," Proc. of the 37th IAS Annual Meeting, 2002, pp. 775-781.
[6] F. Z. Peng, "Z-Source Inverter," IEEE Trans. on Industry Applications, vol. 39, no. 2, 2003, pp. 504-510.
[7] J. Anderson and F. Z. Peng, "Four Quasi-Z-Source Inverters," Proc. of IEEE PESC 2008, 15-19 June 2008, pp. 2743-2749.


[8] Ryszard Strzelecki, Marek Adamowicz and Natalia Strzelecka, "New Type T-Source Inverter," in Proceedings of Compatibility and Power Electronics, 6th International Conference-Workshop, 2009.
[9] J. Surya Kumari and Ch. Sai Babu, "Mathematical Modeling and Simulation of Photovoltaic Cell using Matlab-Simulink Environment," International Journal of Electrical and Computer Engineering (IJECE), vol. 2, no. 1, 2012.
[10] Tarak Salmi, Mounir Bouzguenda, Adel Gastli and Ahmed Masmoudi, "MATLAB/Simulink Based Modelling of Solar Photovoltaic Cell," International Journal of Renewable Energy Research, vol. 2, no. 2, 2012.
[11] Seok-Il Go, Seon-Ju Ahn and Joon-Ho Choi, "Simulation and Analysis of Existing MPPT Control Methods in a PV Generation System," Journal of International Council on Electrical Engineering, vol. 1, no. 4, pp. 446-451, 2011.
[12] Mingwei Shan, Liying Liu and Josep M. Guerrero, "A Novel Improved Variable Step-Size Incremental Resistance MPPT Method for PV Systems," IEEE Transactions on Industrial Electronics, vol. 58, no. 6, 2011.
[13] Fangrui Liu, Shanxu Duan, Fei Liu, Bangyin Liu and Yong Kang, "A Variable Step Size INC MPPT Method for PV Systems," IEEE Transactions on Industrial Electronics, vol. 55, no. 7, 2008.
[14] S. Thangaprakash and A. Krishnan, "Modified Space Vector Pulse Width Modulation for Z-Source Inverters," International Journal of Recent Trends in Engineering, vol. 2, no. 6, 2009.


Experimental study of different operating temperatures and pressure in PEM fuel cell

Sudhir Kumar Singh
M Tech, Bhabha Engineering and Research Institute, Bhopal (M.P.)
Prof. S. S. Pawar
Head, Department of Mechanical Engineering, Bhabha Engineering and Research Institute, Bhopal (M.P.)
Email id: Narayan.sudhirsingh@gmail.com

Abstract :- This paper gives the performance of the fuel cell over a range of operating temperatures, including the current-voltage-power characteristics, the uniformity of the cell unit voltage, the gas pressure impact and the air flux impact, with the operating temperature of the fuel cell changed on the anode and cathode sides. The paper also gives the experimental approach used for the fuel cell.
The effects of different operating parameters on the performance of a proton exchange membrane (PEM) fuel cell have been studied experimentally using pure hydrogen on the anode side and air on the cathode side.

Key words: PEM fuel cell, anode material, cathode material, pressure range
Introduction :- Proton exchange membrane (PEM) fuel cells have been widely recognized as the most promising candidates for future power generating devices in automotive, distributed power generation and portable electronic applications. The proton exchange membrane fuel cell (PEMFC) is of great interest in the energy research field because of its potential for direct conversion of chemical energy into electrical energy with high efficiency, high power density, low pollution and low operating temperature.
Proton exchange membrane fuel cell reactions
A stream of hydrogen is delivered to the anode side of the membrane electrode assembly (MEA). At the anode side it is catalytically split into protons and electrons. This oxidation half-cell reaction, or hydrogen oxidation reaction (HOR), takes place at the anode (standard potential quoted versus SHE).

The newly formed protons permeate through the polymer electrolyte membrane to the cathode side. The
electrons travel along an external load circuit to the cathode side of the MEA, thus creating the current output of

the fuel cell. Meanwhile, a stream of oxygen is delivered to the cathode side of the MEA. At the cathode side, oxygen molecules react with the protons permeating through the polymer electrolyte membrane and the electrons arriving through the external circuit to form water molecules. This reduction half-cell reaction, or oxygen reduction reaction (ORR), takes place at the cathode, and together with the anode reaction it gives the overall cell reaction (standard potentials quoted versus SHE).

The reversible reaction is expressed in the equation and shows the reincorporation of the hydrogen protons and
electrons together with the oxygen molecule and the formation of one water molecule.
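The half-cell and overall reactions referred to above appear only as images in the source; their standard textbook forms, with potentials quoted versus the standard hydrogen electrode (SHE), are:

\begin{align*}
\text{Anode (HOR):}\quad   & \mathrm{H_2 \rightarrow 2H^+ + 2e^-}                      & E^0 &= 0\ \text{V vs. SHE}\\
\text{Cathode (ORR):}\quad & \mathrm{\tfrac{1}{2}O_2 + 2H^+ + 2e^- \rightarrow H_2O}   & E^0 &\approx 1.23\ \text{V vs. SHE}\\
\text{Overall:}\quad       & \mathrm{H_2 + \tfrac{1}{2}O_2 \rightarrow H_2O}           & E^0_{cell} &\approx 1.23\ \text{V}
\end{align*}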
Polymer electrolyte membrane
To function, the membrane must conduct hydrogen ions (protons) but not electrons as this would in effect
"short circuit" the fuel cell. The membrane must also not allow either gas to pass to the other side of the cell, a
problem known as gas crossover. Finally, the membrane must be resistant to the reducing environment at the
cathode as well as the harsh oxidative environment at the anode.
Splitting of the hydrogen molecule is relatively easy using a platinum catalyst. Unfortunately, however, splitting the oxygen molecule is more difficult, and this causes significant electric losses. An appropriate catalyst material for this process has not been discovered, and platinum is the best option. One promising catalyst that uses far less expensive materials (iron, nitrogen and carbon) has long been known to promote the necessary reactions, but at rates that are far too slow to be practical. Recently, a Canadian research institute has dramatically increased the performance of this type of iron-based catalyst. Their material produces 99 amperes per cubic centimeter at 0.8 volts, a key measurement of catalytic activity. That is 35 times better than the best non-precious-metal catalyst so far, and close to the Department of Energy's goal for fuel-cell catalysts: 130 A/cm3. It also matches the performance of typical platinum catalysts. The only problem at the moment is its durability, because after only 100 hours of testing the reaction rate dropped to half. Another significant source of

losses is the resistance of the membrane to proton flow, which is minimized by making it as thin as possible, on the order of 50 µm.
Fuel cell applications
1 Transportation
2 Distributed power generation
2.1 Grid-connect applications
2.2 Non-grid connect applications
3 Residential Power.
4 Portable Power
4.1 Direct methanol fuel cells for portable power

Fuel cell systems operate without pollution when run on pure hydrogen, the only by-products being pure water
and heat. When run on hydrogen-rich reformate gas mixtures, some harmful emissions result although they are
less than those emitted by an internal combustion engine using conventional fossil fuels. To be fair, internal
combustion engines that combust lean mixtures of hydrogen and air also result in extremely low pollution levels
that derive mainly from the incidental burning of lubricating oil.

A PEM fuel cell is an electrochemical cell that is fed hydrogen, which is oxidized at the anode, and oxygen that
is reduced at the cathode. The protons released during the oxidation of hydrogen are conducted through the
proton exchange membrane to the cathode. Since the membrane is not electrically conductive, the electrons
released from the hydrogen travel along the electrical detour provided and an electrical current is generated.
These reactions and pathways are shown schematically in Fig.1.


Fig.1.

Experimental setup - The experimental setup is a combination of two major plates, the anode plate and the cathode plate, which have been carefully designed and constructed using the chosen parameters. A single unit cell with an active surface area of 7.4 x 7.4 cm was used for the experiments in this study. The membrane electrode assembly (MEA) consists of a Nafion membrane in combination with platinum loadings of 0.4 mg/cm2 per electrode. The gas diffusion layers are made of carbon fiber cloth. The MEA, positioned between two graphite plates, is pressed between two gold-plated copper plates. The graphite plates are grooved with serpentine gas channels. In the test station, the reactant gases are humidified by passing them through external water tanks. Regulating the water temperature controls the humidification of the reactant gases.

Experimental procedure
The procedure for each experiment is as follows:
1. Power on the Fuel Cell Test Station and open the valves of the gas cylinders of hydrogen, oxygen.
2. Before starting experiments, purge the anode side with hydrogen to ensure no oxygen is present.
3. Set the experimental parameters of mass flow rate of the gas cylinders of hydrogen, oxygen.

4. Set the maximum voltage, minimum voltage and voltage increment step of the fuel cell polarization data using an ammeter and voltmeter.
5. Set the delay between every two voltage and current data points.

Fig.2. Control panel

Fig.3. Pressure gauge


Fig.4. Connection pipe

Fig.5. Experimental zone

Result and discussion - Effect of humidification in the plates

When the anode humidification temperature is 40 °C, the current density of the fuel cell is the lowest at a given voltage. The graph shows that the linear portions of the polarization curves for different anode humidification temperatures are almost parallel to each other, which indicates that the electrical resistance of the fuel cell causing the ohmic losses does not vary significantly.

At low current densities, the lower the anode humidification temperature, the lower the cell voltage. This phenomenon can be explained by the decrease of the active catalyst surface area caused by a lack of water in the catalyst layers. When the anode is dry, the water transfer through the membrane from the cathode side to the


anode side due to back-diffusion is dominant. This is even more pronounced at low current densities, when water transfer due to electro-osmosis is low.

The result of the combined effect is water deficiency in the cathode catalyst layer. At higher current densities, the cell voltages at the different anode humidification temperatures come closer together. This, again, can be explained by hydration of the catalyst layer. At high current densities, the water generation rate is high and water transfer due to electro-osmosis is high. Thus the cathode catalyst layer is better hydrated even though the anode humidification is low.
Table 1. Cell voltage versus current density at humidification temperatures of 40, 50 and 60 °C; hydrogen flow rate 1.0 m3/min and oxygen flow rate 2.0 m3/min.

Current density (A/cm2)   Voltage (V) at 40 °C   Voltage (V) at 50 °C   Voltage (V) at 60 °C
0.2                       0.91                   0.92                   0.96
0.4                       0.79                   0.81                   0.85
0.6                       0.72                   0.75                   0.77
0.8                       0.70                   0.72                   0.73
1.0                       0.68                   0.70                   0.71
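As a quick check on the tabulated polarization data, the short sketch below computes the corresponding power density P = V x J for the 60 °C column of Table 1 (the data points are taken from the table; the computation itself is only illustrative).

# Current density (A/cm2) and cell voltage (V) at 60 °C humidification, from Table 1
current_density = [0.2, 0.4, 0.6, 0.8, 1.0]
voltage_60c = [0.96, 0.85, 0.77, 0.73, 0.71]

for j, v in zip(current_density, voltage_60c):
    print("J = %.1f A/cm2  ->  P = %.3f W/cm2" % (j, j * v))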


Fig.6. Cell voltage versus current at humidification temperatures of 40, 50 and 60 °C (data of Table 1).
Table 2. Cell voltage versus current density at cathode-side humidification temperatures of 40, 50 and 60 °C; hydrogen flow rate 2.0 m3/min and oxygen flow rate 4.0 m3/min.

Current density (A/cm2)   Voltage (V) at 40 °C   Voltage (V) at 50 °C   Voltage (V) at 60 °C
0.2                       1.11                   1.13                   1.20
0.4                       0.96                   0.98                   0.99
0.6                       0.94                   0.95                   0.96
0.8                       0.92                   0.93                   0.94
1.0                       0.90                   0.91                   0.92

Fig.7. Cell voltage versus current at humidification temperatures of 40, 50 and 60 °C (data of Table 2).
Table 3. Cell voltage versus current density at humidification temperatures of 40, 50 and 60 °C; hydrogen flow rate 3.0 m3/min and oxygen flow rate 6.0 m3/min.

Current density (A/cm2)   Voltage (V) at 40 °C   Voltage (V) at 50 °C   Voltage (V) at 60 °C
0.2                       1.08                   1.12                   1.14
0.4                       0.99                   1.08                   1.09
0.6                       1.02                   1.04                   1.05
0.8                       0.96                   0.97                   0.98
1.0                       0.82                   0.90                   0.93

Fig.8. Cell voltage versus current at humidification temperatures of 40, 50 and 60 °C (data of Table 3).

Conclusion
1. The anode humidification temperature has a considerable effect on fuel cell performance. In the low current density region, the lower the degree of humidification, the lower the fuel cell performance. At high current densities, the effect of the anode humidification temperature is not significant.
2. The cathode humidification temperature has no significant effect on fuel cell performance, especially at high current densities.
REFERENCES:
1. Luis Alberto M. et al., "On-line fault diagnostic system for proton exchange membrane fuel cells," ScienceDirect.
2. Wei-Mon Yan et al., "Effects of operating conditions on cell performance of PEM fuel cells with conventional or interdigitated flow field," ScienceDirect.
3. Hui Li et al., "A review of water flooding issues in the proton exchange membrane fuel cell," ScienceDirect.
4. T. Henriques et al., "Increasing the efficiency of a portable PEM fuel cell by altering the cathode channel geometry: A numerical and experimental study," ScienceDirect.
5. Xiao-guang Li et al., "Development of a fast empirical design model for PEM stacks," ScienceDirect.
6. Pan Zhao et al., "Parametric analysis of a hybrid power system using organic Rankine cycle to recover waste heat from proton exchange membrane fuel cell," ScienceDirect.
7. Lin Wang et al., "A parametric study of PEM fuel cell performances," ScienceDirect.
8. Lin Wang et al., "Performance studies of PEM fuel cells with interdigitated flow fields," ScienceDirect.
9. Zhiliang Wu et al., "An improved model for predicting electrical contact resistance between bipolar plate and gas diffusion layer in proton exchange membrane fuel cells," ScienceDirect.


Engineering and Sustainable Environment

Er. Amit Bijon Dutta(1), Dr. Ipshita Sengupta(2)
(1) Senior Manager, Mecgale Pneumatics Private Limited, N65, Hingna MIDC, Nagpur 440 016, dutta.ab@gmail.com
(2) Lecturer, Dept. of Environmental Science, Institute of Science, Nagpur, ipshita_sengupta@hotmail.com

Abstract - India is one of the promising global business giants, with the second fastest economic growth rate (8.9%) and the fourth largest GDP in terms of Purchasing Power Parity (US$ 3.6 trillion). Every industry faces the threat of failure in business. Construction companies are particularly susceptible to financial risk owing to the nature of the industry: intense rivalry, relatively low entry barriers, high uncertainty and risk, and large swings in construction volume. We need a closer understanding of the relationship between the two inter-related matters of risk management and funding on construction projects. This is becoming progressively more essential to attain the goals of the patron, the proprietor, the constructor and its supply chain, particularly as interest in PFI and PPP arrangements continues to grow globally.
Engineers carry on their shoulders the responsibility of endorsing the principles of sustainable growth. Sustainable development deals with meeting existing human needs from naturally accessible reserves, while preserving and enhancing the surrounding environmental quality. This paper deals with the various issues involved in the process of engineering while taking into view the environmental considerations for future generations and the prospective responsibility industrial engineers can have in putting a stop to pollution caused by industrial processes, and accentuates how both disciplines can be pooled to craft well-organized and competent resolutions.

Keywords Engineering, Environment, Sustainability, Pollution, Growth, Development, Economy

INTRODUCTION
Engineering plays a vital role in human life, with respect to economic, social and cultural development. This is necessary in order to achieve the United Nations (UN) millennium development goals, principally ecologically sustainable growth. Sustainable development means meeting current human needs (naturally available reserves, engineered products, energy, foodstuff, transportation, shelter and efficient waste management) while conserving and enhancing the earth's environmental quality. This challenging issue is being met by engineering knowledge and modern expertise. Sustainable development is the need of the hour, not only for the current world but also a necessity for the next generation, and it can be achieved by certified engineers. Engineers ought to make every effort to augment the quality of the biophysical and socioeconomic urban environment and also to promote the principles of sustainable development. Engineers have an obligation towards the general public to seek the various available opportunities to work for the enrichment of wellbeing, security and the communal welfare of the local and global community equally, through the practice of sustainable development. In view of the fact that community wellbeing and safety are entirely dependent on engineering judgments, assessments of risk, decisions and practices incorporated into structures, machinery, products, processes and strategies, engineers have to be conscientious and answerable in their assigned duty. Thus, engineers should follow the rules and regulations imposed by the local and global community.

ENGINEERING AND SUSTAINABILITY:


Engineering is the application of scientific and mathematical principles for practical purposes such as the design, manufacture and operation of products and processes, while at the same time accounting for constraints invoked by economics and by environmental and sociological factors. Engineering has brought about many technical advances. Engineering has always been a significant contributor to economic development, the standard of living and the wellbeing of society, and it has an impact on cultural development and the environment. Engineering is constantly evolving as a profession, and engineering education is correspondingly changing continually.

Sustainability is the ability to make development sustainable by ensuring that the needs of the present are met without compromising the ability of future generations to meet their own needs. It is also recognized that many of the practices and lifestyles of present society, predominantly but not exclusively in the developed world, simply cannot be maintained for the foreseeable future. We are way beyond the capacity of the earth to provide many of the resources that we use and to accommodate our emissions, while many of the planet's inhabitants cannot meet even their basic needs. Recognizing the need to live within constraints and to ensure more fairness in access to limited resources are the key predicaments lying at the core of the concepts of sustainability and sustainable development. It is something new in human history that the planet is full and we have no new geographical sphere to shift to. Consistent with the societal, economic and ecological aspects of sustainability, sustainable building is one of the most rapidly growing practices in new construction and development around the world, as the movement of green development has been adopted by engineers, designers and builders.
Life-cycle analysis shows evidence that sustainable design and building make good economic sense with regard to environmental impact. It is anticipated that this trend will continue to accelerate, owing to the fact that most countries are planning to shift to green technology. Furthermore, individual owners are intent upon raising the standards of existing buildings and structures with green renovation to lift them up to sustainable specifications.
Sustainable development is the process of moving human activities to a pattern that can be sustained indefinitely. It is a positive move towards the environmental and growth issues that seeks to reconcile human needs with the capacity of the planet to cope with the consequences of human activities. Sustainable development consists of the three broad subject areas of social, ecological and financial responsibility, known as the Triple Bottom Line concept. It is useful to represent the constraints of sustainable development in the form of a simple Venn diagram, as shown in Figure 1.

Figure 1. Components of a sustainable environment


Society, environment and economy are the three basic essentials for human survival. In order to lead a vigorous life in the social order, a fine-quality environment is needed. Figure 1 indicates the interrelationship of a vigorous social order, financial system and environment. It is clearly comprehensible that a sustainable state is economically well-organized, environmentally fit and justified in the social context in all aspects.
Law and ethics are fundamental ways of dealing with the meaningfulness of life, so this term can be included as another component of the sustainable environment. Inspired by the Triple Bottom Line perception, an innovative thought can be created where Law and Ethics is considered as another valuable component of the sustainable environment. This reflection is envisaged through an additional Venn diagram, as shown in Figure 2.
There is no conflict with the state named Healthy and Efficient as stated in the Triple Bottom Line notion. A civilization can be considered an ethical society only when ethical values are established and they are widely acceptable

and trustworthy. A financial system encompassing ethical values is prudent, which means the state is intellectually and intuitively perceived through sound judgment. The position of a social order having an interface with only Environment and Ethical values can be defined as an unstable state, because if the economy falls down, society will collapse. The capability of doing what is correct and dynamic while avoiding what is wrong is the virtue which can be established with the combination of Economy and Ethics, given proper knowledge of and regard for the Environment. Society and Environment together with an efficient Economy make a state which can be defined as a Sound State. But in a comprehensive sense the Sustainable Environment is a state which is environmentally healthy, economically efficient, socially moral and ethically sensible.

Figure 2. Sustainable environment and its relation with four components


PROFESSIONAL ETHICS OF ENGINEERS
In the discipline of philosophy, moral principles cover the study of the actions that a responsible individual ought to choose, the values that an honourable individual ought to espouse and the character that a virtuous individual must possess. For example, every person must be candid, reasonable, kind, social, courteous and responsible. Above and beyond these familiar obligations, skilled professionals have added commitments. These obligations arise from the responsibilities of their professional work and their relationships with employers, patrons, other professionals and the general community. Owing to their specialization, knowledge and expertise, they are granted licenses by the public to use their knowledge and skills to affect the lives of their clients significantly.
Codes of ethics for engineers state that they should hold paramount the wellbeing, health and benefit of the community, above other commitments to patrons and employers. Engineering and engineering societies have provided little guidance on prioritizing the community good, with the exception of extreme cases such as whistle-blowing. When an engineer finds an immediate threat to the safety of the community, the engineer ought to give notice to the concerned authorities. If this threat is not handled and dealt with within the concerned organization, the engineers are expected either to raise the flag or blow the whistle.
SOCIAL RESPONSIBILITIES OF ENGINEERS
Engineers are a valuable part of society. This requires them to make determined efforts to discover all relevant facts pertaining to the design, development, operation and all achievable outcomes of the available choices that may positively or negatively affect society and the public. Citizens are entirely reliant on engineered products and goods, which should be robust, safe, reliable, economically feasible and environmentally sustainable. Some principles of Engineering for Sustainable Development are:

Adopting a holistic, cradle-to-grave approach.


Being cautious about cost reductions that masquerade as value engineering.
Being creative and innovative.
Being sure about the knowledge of needs and wants.
Committing risk assessment experts to safety assessments or ethical risk.
Contributing one's services to worthy, non-profit groups and projects.
Declining work on a particular project or for a particular company.
Doing the right things with the right decisions.
Effective and competent scheduling and administration.
Engineering schools' commitment to educating future engineers about their social and moral responsibilities.
Engineers' commitment to designing and developing sustainable technologies.
Explicit care and concern about technology's impact on nature and the environment.
Following the principles of sustainable development when thinking about any technical and engineering designs.
Guaranteeing the safety and wellbeing of the public.
Guaranteeing that society's funds and resources concerning technology are well used.
Honoring the precautionary principle when taking any steps in engineering designs.
Individual and organizational awareness of any engineering project and its impact on society.
Looking for balanced solutions.
Seeking engagement from all stakeholders.
Participating in democratic procedures for technology decision making and policy management.
Practicing what engineers preach.
Enthusiastically promoting the principled development and use of technologies.
Providing expert advice to non-experts.
Offering security measures for whistleblowers.
Remonstrating against illegality or wrong-doing.
Undertaking social activities in the public interest.
Speaking out publicly in opposition to a proposed project.
Thinking beyond the locality and the immediate future.
Voluntarily assuming the job of educating the public about the consequences of different technical and scientific developments.

THE ROLE OF ENGINEERS IN SUSTAINABILITY


A group of people that maintains, enhances, or improves its environmental, social, cultural, and economic resources in such a way that supports current and future community members in pursuing healthy, productive and happy lives can very well be termed a Sustainable Community.
Professional engineers play an important and significant role in meeting sustainability goals. They work to improve welfare, health and safety with minimal use of natural resources, paying attention to the environment and the sustainability of resources. Sustainability is shaped by both challenges and opportunities, and engineers are expected to provide options and solutions that maximize social value and minimize environmental impacts. There are grave challenges arising from the undesirable effects of resource exhaustion, rapid population growth, damage to ecosystems and environmental pollution. Environmentally friendly advancement alone is not sufficient; increasingly, engineers are required to take a wider perspective that includes social integrity, local and global relationships, and poverty mitigation. This breadth brings crucial opportunities for engineers to promote change through sharing experiences and through good practice. The responsibility and guidance of engineers in achieving sustainability should not be underestimated. Increasingly this will be as part of multidisciplinary teams that include non-engineers and through work that crosses national boundaries.
The main goal of sustainable development is to enable people throughout the world to meet and satisfy their basic needs and enjoy an improved quality of life without compromising the quality of life of generations to come. Sustainable development is largely framed by two notions: the requirements to be met, and the limits imposed by the state of technology and social organisation on the environment's ability to meet present and future demands. The following principles have been agreed upon to achieve sustainable development:

Living within environmental limits,
Ensuring a strong, healthy and just society,
Promoting good governance,
Achieving a sustainable and efficient economy, and using science responsibly.
Engineers have a responsibility to maximize the value of their activity in building a sustainable planet, to perceive attainable goals, and to recognize changes over time in the demands of society. To this end, engineers should:
Appreciate the important potential role of engineering,
Appreciate environmental limits and finite resources,
Reduce the demand for resources,
Reduce waste production by using resources effectively,
Make use of systems and products which reduce embedded carbon, energy and water use, waste and pollution, etc.,
Adopt full life-cycle assessment as normal practice, including the supply chain,
Adopt strategies such as salvaging, reprocessing, decommissioning and discarding of components and materials,
Minimize any adverse impacts on sustainability at the design stage itself,
Carry out a comprehensive risk assessment prior to the start of the project,
Ensure that the risk assessment includes potential environmental, economic and societal impacts well beyond the natural life of the engineering venture,
Put monitoring systems in place to measure any environmental, social and economic impacts of engineering projects so that problems can be identified at an early stage.

ENGINEERS RESPONSIBILITIES TOWARDS ENVIRONMENT:


Scientific research continues to provide information about the links between human health and environmental quality. The essential components of life are air, water and food, which provide potential pathways for contaminants to affect our health. Exposure to air, water and soil pollution has been linked to various diseases and disorders, including cancer, lupus, immune diseases, allergies, asthma, reproductive problems and birth defects, allergic reactions, nervous system disorders, hypersensitivity and decreased disease resistance.
a) AIR POLLUTION
Air pollution is a great threat to our sustainable environment. Engineers in every country of the world should try to,

Cut down the release of sulphur dioxide, nitrogen oxide, carbon dioxide and mercury through regulatory programs
according to established targets and time frames.
Involve themselves in national and international initiatives to address trans-boundary air issues.
Work to meet standards for the two primary components of smog (formed mainly above urban centres): ground-level tropospheric ozone (O3) and particulate matter.
Build up air-shed management plans and team-up with large industrial facilities to monitor transport and
deposition from major sources.

b) WATER POLLUTION
Safe drinking water is another challenge for many developing countries where engineers in the world can
contribute a good deal on this issue. So, engineers should do the followings,

Resolve quality and quantity issues of water for agriculture and fisheries sectors,
Develop a framework for safeguarding water resources and aquatic habitats that builds on the drinking water strategy,
Consult with the Municipal/Public Works Association of relevant region while developing guidelines, standards
and regulations for issues related to municipal water and wastewater ,

Employ a government-wide approach to water problems through the Interdepartmental Drinking Water
Management Committee,

Develop, modify and upgrade the ambient water monitoring system with proper maintenance,

Team-up with the Department of Health to tackle issues related to contaminants in drinking water,

Address wastewater issues by working with municipal and domestic partners,

Be a support system to municipalities for their water and wastewater infrastructure programs, and for land use
planning in water supply areas.
c) LAND POLLUTION
Hazardous substances in water, air, and soil cause noteworthy health perils. The government concerned is devoted to minimizing the environmental impacts of such materials and protecting the country's health. In this regard, engineers will

Promote pollution prevention in efficient way,


Validate risk-based administrative approaches to spotlight efforts where they are most needed,
Bring up to date existing directives controlling perilous substances and eliminating regulatory duplication,
Promote effective utilization, storage, handling, and discarding of harmful substances,
Apply the "polluter pays" principle to users of hazardous substances,
Encourage stewardship by manufacturers to promote proper lifecycle management of hazardous substances,
Make joint efforts with other authorities to perk up treatment of contaminated sites and promote sustainable
redevelopment,
Promote early detection and response to land quality issues through legislated requirements for mandatory
reporting of site contamination.

IMPORTANT AREAS OF SUSTAINABILITY


i) Construction area
To make this planet sustainable, certain areas should be given the highest priority. Development in construction is one of the most important, as it relates to everyone's daily life. Construction engineers need to design buildings, industries, roads and highways efficiently, considering green technology. The concept of sustainability in building and constructing any structure must focus not only on limited resources but also on energy, on procedures that reduce impacts on the environment, on methodological issues, structural modules, materials, erection technologies and energy-related design concepts, and also on ethics, values and humanity for the occupants of buildings.
ii) Education area
A novel instructional arrangement for practising engineers, up-and-coming engineers and general students must be developed, introducing sustainable development into the curriculum at both undergraduate and graduate levels and also at professional skill levels.
CONCLUSION
Sustainable development has become an accepted orthodoxy for global economic development and environmental protection since the end of the twentieth century, and engineers play an important role in this sustainable development and protection. Our environment is made up of intermingling systems of air, water, land, organic and inorganic substances and living organisms. Maintaining a healthy environment for current and future generations requires the combined effort of the general public, organizations and companies, and every echelon of government. Using a balanced, coordinated approach, we can protect the health, prosperity and environmental integrity of our society. Individuals can conserve energy, opt for environmentally accountable products, and modify their behaviour. Organizations can develop environmental supervision strategies, cut down emissions and waste from their operations,
and adopt environmentally responsible practices. The government can continue to lead by administering legislation,
establishing public policy, delivering programs and services, participating in local, countrywide, and worldwide
environmental proposals, and managing its own operations conscientiously.
Nevertheless, each of the above-mentioned strategies is tackled by engineers of different ranks. Engineers and scientists are required to protect our environment by inventing safe elemental substances, a sustainable environment, and recyclable and renewable energy resources, and so on, to ensure a better life for us and our future generations.


Analysis of Cancer Gene Expression Profiling in DNA Microarray Data using Clustering Technique

1C. Premalatha, 2D. Devikanniga
1,2Assistant Professor, Department of Information Technology
Sri Ramakrishna Engineering College, Coimbatore, Tamil Nadu
1premalathajck@gmail.com
Abstract - DNA microarray technology has been extensively used in the field of bioinformatics for exploring genomic organization. It enables the analysis of the expression of many genes in a single reaction. The techniques currently employed to analyze microarray expression data are clustering and classification. In this paper, cancer gene expression is analyzed using hierarchical clustering, which identifies groups of genes sharing similar expression profiles, and dendrograms are employed to provide an efficient means of prediction over the expression.
Keywords - Hierarchical clustering, Microarray data, Gene expression, Dendrograms.
INTRODUCTION
Molecular biology research evolves through the development of the technologies used for carrying it out. In the past, genetic analyses were conducted on only a few genes, and it was not possible to study a large number of genes using traditional methods. DNA microarray [4] is one such technology, which enables researchers to investigate how active thousands of genes are at any given time and to address issues once thought to be untraceable. One can analyze the expression of many genes in a single reaction quickly and efficiently. DNA microarray technology has empowered the scientific community to understand the fundamental aspects underlying the growth and development of life as well as to explore the genetic causes of anomalies occurring in the functioning of the human body.
Microarray technology will help researchers to learn more about many different diseases, including heart disease, mental
illness and infectious diseases, to name only a few. One intense area of microarray research at the National Institutes of Health (NIH)
[1] is the study of cancer. In the past, scientists have classified different types of cancers based on the organs in which the tumors
develop. With the help of microarray technology, however, they will be able to further classify these types of cancers based on the
patterns of gene activity in the tumor cells. Researchers will then be able to design treatment strategies targeted directly to each
specific type of cancer. Additionally, by examining the differences in gene activity between untreated and treated tumor cells - for
example those that are radiated or oxygen-starved - scientists will understand exactly how different therapies affect tumors and be able
to develop more effective treatments.
In addition, a data mining clustering technique with an appealing property is employed: the nested sequence of clusters can be graphically represented with a tree, called a dendrogram. It simplifies the identification of gene expression over the microarray and thus provides an efficient means of prediction over the expression.
GEO DATABASE
Microarray technology has been extensively used by the scientific community. Consequently, over the years, there has been a
lot of generation of data related to gene expression. This data is scattered and is not easily available for public use. For easing the
accessibility to this data, the National Center for Biotechnology Information (NCBI) has formulated the Gene Expression
Omnibus or GEO. It is a data repository facility which includes data on gene expression [6] from varied sources. GEO currently
stores approximately a billion individual gene expression measurements, derived from over 100 organisms, addressing a wide range of
biological issues.
MICROARRAY TECHNIQUE
An array is an orderly arrangement of samples where matching of known and unknown DNA samples is done. An array
experiment makes use of common assay systems such as microplates or standard blotting membranes. The sample spots are typically less than 200 microns in diameter, and an array usually contains thousands of spots.
WORKING OF DNA MICROARRAY TECHNOLOGY


DNA microarrays are created by robotic machines that arrange minuscule amounts of hundreds or thousands of gene
sequences on a single microscope slide. When a gene is activated, cellular machinery begins to copy certain segments of that gene.
The resulting product is known as messenger RNA (mRNA), which is the body's template for creating proteins. The mRNA produced
by the cell is complementary, and therefore will bind to the original portion of the DNA strand from which it was copied. To
determine which genes are turned on and which are turned off in a given cell, first the messenger RNA molecules present in that cell
are collected then they are labelled by using a reverse transcriptase enzyme (RT) that generates a complementary cDNA to the mRNA.
During that process fluorescent nucleotides are attached to the cDNA.
The tumor and the normal samples are labeled with different fluorescent
dyes[7]. Next, the researcher places the labeled cDNAs onto a DNA microarray
slide. The labeled cDNAs that represent mRNAs in the cell will then hybridize or
bind to their synthetic complementary DNAs attached on the microarray slide,
leaving its fluorescent tag. A special scanner is used to measure the fluorescent
intensity for each spot/area on the microarray slide. If a particular gene is very
active, it produces many molecules of messenger RNA, thus, more labeled cDNAs,
which hybridize to the DNA on the microarray slide and generate a very bright
fluorescent area.
Fig.1. Microarray Technology
Genes that are less active produce fewer mRNAs, thus, less labeled cDNAs, which results in dimmer fluorescent spots. If
there is no fluorescence, none of the messenger molecules have hybridized to the DNA, indicating that the gene is inactive.
Researchers frequently use this technique to examine the activity of various genes at different times. When co-hybridizing Tumor
samples (Red Dye) and Normal sample (Green dye) together, they will compete for the synthetic complementary DNAs on the
microarray slide. As a result, if the spot is red, this means that that specific gene is more expressed in tumor than in normal (upregulated in cancer). If a spot is Green, it means that the gene is more expressed in the Normal tissue (Down regulated in cancer). If a
spot is yellow that means that the specific gene is equally expressed in normal and tumor.
Thousands of spotted samples known as probes (with known identity) are immobilized on a solid support (a microscope glass
slides or silicon chips or nylon membrane). The spots can be DNA, cDNA, or oligonucleotides. These are used to determine
complementary binding of the unknown sequences thus allowing parallel analysis for gene expression and gene discovery. An
experiment with a single DNA chip can provide information on thousands of genes simultaneously. An orderly arrangement of the
probes on the support is important as the location of each spot on the array is used for the identification of a gene.
INTERPRETING MICROARRAY DATA
Microarray data for a simple dataset having five samples and four genes, represented in dots of different color indicating the
intensity of tumor have been interpreted. The different colors of the spots have to be converted to numbers before analysis in order to
obtain the intensity of tumor. There are many approaches but here a simplified version of common techniques is employed.
First, each spot is converted to a number that represents the intensity of the red dye and green dye. In this example, arbitrary
light units are used.
Next, we calculate the ratio of red to green (red/green).

Fig.2. Ratio between tumor and normal gene

TABLE I CELL INTENSITY RATIO

Cell intensity    Gene A   Gene B   Gene C   Gene D
Red (tumor)          400      200      100      400
Green (normal)       100      200      300      400
Ratio                  4        1     0.33        1

TABLE II INTENSITY RATIO FOR FIVE SAMPLES

Samples    Gene A   Gene B   Gene C   Gene D
Sample1       4        1      0.33       1
Sample2       2        0.8    1          1.3
Sample3       3.5      2      0.25       3
Sample4       1.5      0.5    0.25       1
Sample5       0.8      1      1.2        0.8

When Ratio > 1, the gene was induced by tumor formation (Gene A is induced four-fold).
When Ratio < 1, the gene was repressed by tumor formation (Gene C is repressed three-fold).
Genes B and D are not affected by tumor formation.
PROCESSING MICROARRAY DATA
To analyze a large amount of expression data, it is necessary to use statistical analysis. Unfortunately, raw ratios are not well suited to statistics. For this reason, the expression ratios are usually transformed by the log2 function, in which every increase or decrease of 1 corresponds to a 2-fold change.
In our example, log10 is also used, since it is easier to interpret: every increase or decrease of 1 corresponds to a 10-fold change. The table below shows the relationship between the ratio, log2 and log10 values.
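This conversion can be reproduced with a few lines of NumPy. The sketch below is purely illustrative (it is not code from the original study) and simply recomputes the log2 and log10 columns of the table that follows, starting from the ratios of Table II.

```python
import numpy as np

# Expression ratios (tumor/normal) for Genes A-D across the five samples (Table II).
ratios = np.array([
    [4.0, 1.0, 0.33, 1.0],   # Sample1
    [2.0, 0.8, 1.00, 1.3],   # Sample2
    [3.5, 2.0, 0.25, 3.0],   # Sample3
    [1.5, 0.5, 0.25, 1.0],   # Sample4
    [0.8, 1.0, 1.20, 0.8],   # Sample5
])

log2_values = np.log2(ratios)    # an increase/decrease of 1 is a 2-fold change
log10_values = np.log10(ratios)  # an increase/decrease of 1 is a 10-fold change

print(np.round(log10_values, 3))  # matches the Log 10 rows of Table III
```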
TABLE III PROCESSING MICROARRAY DATA

Samples             Gene A    Gene B    Gene C    Gene D
Sample1   Ratio        4         1        0.33       1
          Log 2        2         0       -1.599      0
          Log 10       0.602     0       -0.481      0
Sample2   Ratio        2         0.8      1          1.3
          Log 2        1        -0.321    0          0.378
          Log 10       0.301    -0.096    0          0.114
Sample3   Ratio        3.5       2        0.25       3
          Log 2        1.807     1       -2          1.584
          Log 10       0.544     0.301   -0.602      0.477
Sample4   Ratio        1.5       0.5      0.25       1
          Log 2        0.584    -1       -2          0
          Log 10       0.176    -0.301   -0.602      0
Sample5   Ratio        0.8       1        1.2        0.8
          Log 2       -0.321     0        0.263     -0.321
          Log 10      -0.096     0        0.079     -0.096

Numbers are often converted to colored scale i.e., red and green fluorescents, to make it easier to see the patterns. Results are often
reported in this way of representation.

Fig.3. Color conversion for repressed and induced cells (scale: 10x and 3x repressed, 1:1, 3x and 10x induced)
TABLE IV CELL PATTERNS (TUMOR vs. NORMAL)

Samples    Gene A    Gene B    Gene C    Gene D
Sample1     0.602     0        -0.481     0
Sample2     0.301    -0.096     0         0.114
Sample3     0.544     0.301    -0.602     0.477
Sample4     0.176    -0.301    -0.602     0
Sample5    -0.096     0         0.0792   -0.096

CALCULATING SIMILARITIES
To calculate the similarity score, the mean and standard deviation of each gene's expression values are calculated.
TABLE V SIMILARITY CALCULATION USING MEAN AND STANDARD DEVIATION

Samples    Gene A    Gene B     Gene C    Gene D
Sample1     0.602     0         -0.481     0
Sample2     0.301    -0.096      0         0.114
Sample3     0.544     0.301     -0.602     0.477
Sample4     0.176    -0.301     -0.602     0
Sample5    -0.096     0          0.0792   -0.096
Mean        0.305    -0.0194    -0.321     0.0988
Std dev     0.254     0.194      0.299     0.200

Next, normalize the values by subtracting the mean for that gene and dividing by its standard deviation.
For example, Gene A in Sample1 becomes (0.602 - 0.305) / 0.254 = 1.167
TABLE VI NORMALIZED DATA

Samples    Gene A       Gene B       Gene C       Gene D
Sample1     1.166744     0.099756    -0.53475     -0.49276
Sample2    -0.01659     -0.39902      1.074453     0.075694
Sample3     0.938726     1.649106    -0.93956      1.885779
Sample4    -0.50801     -1.4496      -0.93956     -0.49276
Sample5    -1.58087      0.099756     1.339419    -0.97595

Next, for each pair of genes, multiply the values from each sample, add up the products and divide by the number of samples (5). The result is the similarity score. For example, for Gene A and Gene B:
TABLE VII SIMILARITY SCORE FOR GENE (A,B) AND (C,D) USING DOT PRODUCT

Samples            Gene A       Gene B       Product (A and B)
Sample1             1.166744     0.099756     0.116389
Sample2            -0.01659     -0.39902      0.00662
Sample3             0.938726     1.649106     1.548059
Sample4            -0.50801     -1.4496       0.736406
Sample5            -1.58087      0.099756    -0.1577
SUM                                           2.249774
SIMILARITY SCORE                              0.449955

Samples            Gene C       Gene D       Product (C and D)
Sample1            -0.53475     -0.49276      0.26350341
Sample2             1.074453     0.075694     0.081329645
Sample3            -0.93956      1.885779    -1.771802517
Sample4            -0.93956     -0.49276      0.462977586
Sample5             1.339419    -0.97595     -1.307205973
SUM                                          -2.271197849
SIMILARITY SCORE                             -0.45423957

TABLE VIII SIMILARITY SCORE FOR GENE (A,C) AND (A,D) USING DOT PRODUCT

Samples            Gene A       Gene C       Product (A and C)
Sample1             1.166744    -0.53475     -0.62391635
Sample2            -0.01659      1.074453    -0.01782518
Sample3             0.938726    -0.93956     -0.8819894
Sample4            -0.50801     -0.93956      0.477305876
Sample5            -1.58087      1.339419    -2.11744731
SUM                                          -3.16387237
SIMILARITY SCORE                             -0.63277447

Samples            Gene A       Gene D       Product (A and D)
Sample1             1.166744    -0.49276     -0.574924773
Sample2            -0.01659      0.075694    -0.001255763
Sample3             0.938726     1.885779     1.770229778
Sample4            -0.50801     -0.49276      0.250327008
Sample5            -1.58087     -0.97595      1.542850077
SUM                                           2.987226325
SIMILARITY SCORE                              0.597445265

TABLE IX SIMILARITY SCORE FOR GENE (B,C) AND (B,D) USING DOT PRODUCT

Samples            Gene B       Gene C       Product (B and C)
Sample1             0.099756    -0.53475     -0.05334452
Sample2            -0.39902      1.074453    -0.42872824
Sample3             1.649106    -0.93956     -1.54943403
Sample4            -1.4496      -0.93956      1.361986176
Sample5             0.099756     1.339419     0.133615082
SUM                                          -0.53590553
SIMILARITY SCORE                             -0.10718111

Samples            Gene B       Gene D       Product (B and D)
Sample1             0.099756    -0.49276     -0.04915577
Sample2            -0.39902      0.075694    -0.03020342
Sample3             1.649106     1.885779     3.109849464
Sample4            -1.4496      -0.49276      0.714304896
Sample5             0.099756    -0.97595     -0.09735687
SUM                                           3.647438305
SIMILARITY SCORE                              0.729487661

TABLE X SIMILARITY SCORE FOR ALL GENES

          Gene A    Gene B    Gene C    Gene D
Gene A     1         0.450    -0.633     0.597
Gene B     0.450     1        -0.107     0.729
Gene C    -0.633    -0.107     1        -0.454
Gene D     0.597     0.729    -0.454     1

When the similarity score is positive, the two genes behave similarly, i.e., when one is induced, so is the other; the larger the number, the more similar they are.
A similarity score of 1 means the two genes behave identically (Gene A obviously behaves exactly like Gene A).
A similarity score of 0 means the two genes behave in an unrelated manner.
A negative similarity score means the two genes behave in opposite ways, i.e., when one is induced the other is suppressed.
By casual inspection, we can summarize that:
Gene C's behavior is opposite to that of Genes A, B and D.
Gene B and Gene D have the most similar behaviors.
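The whole calculation of Tables V-X can be reproduced compactly. The sketch below is illustrative only: it takes the log10 values of Table IV, normalizes each gene by its mean and population standard deviation, and averages the products over the samples, which is exactly the dot-product score described above (and, up to rounding, the Pearson correlation between two genes).

```python
import numpy as np

# Log10 expression values from Table IV (rows = samples, columns = Genes A-D).
genes = ["Gene A", "Gene B", "Gene C", "Gene D"]
log10_vals = np.array([
    [ 0.602,  0.000, -0.481,   0.000],   # Sample1
    [ 0.301, -0.096,  0.000,   0.114],   # Sample2
    [ 0.544,  0.301, -0.602,   0.477],   # Sample3
    [ 0.176, -0.301, -0.602,   0.000],   # Sample4
    [-0.096,  0.000,  0.0792, -0.096],   # Sample5
])

# Tables V-VI: subtract each gene's mean and divide by its (population) std dev.
z = (log10_vals - log10_vals.mean(axis=0)) / log10_vals.std(axis=0)

# Tables VII-X: average the products of normalized values over the samples.
# This is the dot-product similarity score described above.
similarity = (z.T @ z) / z.shape[0]

for i in range(len(genes)):
    for j in range(i + 1, len(genes)):
        print(f"{genes[i]} vs {genes[j]}: {similarity[i, j]:.3f}")
# Output includes Gene B vs Gene D: 0.729 and Gene A vs Gene C: -0.633, as in Table X.
```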

HIERARCHICAL CLUSTERING
To analyze thousands of genes, hierarchical clustering [3] is used, which works by taking the most similar genes and joining them in a cluster. The nested sequence of clusters produced by hierarchical methods makes them appealing when different levels of detail
are of interest, because small clusters are nested inside larger ones. In microarray applications, interest may focus on both small
groups of similar observations and a few large clusters. The former might occur when individuals provide multiple samples or a few
samples have special meaning, such as the four samples in the carcinoma example that are normal tissue. The latter would occur when
larger groups exist, such as samples from two different sources, or different stages of carcinoma, or from different experiments.
Gene B and D are the most similar at 0.729, so they are joined to become [BD]. Next, the log-transformed expression levels
are averaged for the clustered genes and the similarity scores are recalculated:


TABLE XI SIMILARITY SCORE FOR CLUSTERED GENES

             Gene A    Gene C    Gene [BD]
Gene A        1         -0.633     0.564
Gene C       -0.633      1        -0.305
Gene [BD]     0.564     -0.305     1

The next highest score is then chosen, i.e., between A and [BD], to form the cluster [ABD]. Since only four genes are processed here, the procedure terminates quickly, but in general it is an iterative process that continues until all genes are merged into a single cluster. The end product is a dendrogram, a graphic representation of the clusters:

TABLE XII DENDROGRAM REPRESENTATION OF GENES (genes reordered as B, D, A, C according to the clusters)

Samples    Gene B    Gene D    Gene A    Gene C
Sample1     0         0         0.602    -0.481
Sample2    -0.096     0.114     0.301     0
Sample3     0.301     0.477     0.544    -0.602
Sample4    -0.301     0         0.176    -0.602
Sample5     0        -0.096    -0.096     0.0792

Many hierarchical clustering methods have an appealing property that the nested sequence of clusters can be graphically represented with a tree, called a dendrogram [7]. Usually, each join in a dendrogram is plotted at a height equal to the dissimilarity between the two clusters which are joined. Selection of K clusters from a hierarchical clustering corresponds to cutting the dendrogram
with a horizontal line at an appropriate height. Each branch cut by the horizontal line corresponds to a cluster. The result of the
expression level analysis is usually presented as a dendrogram with an accompanying expression level table that has been reordered
according to the clusters.
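For larger datasets this joining procedure is not carried out by hand. The sketch below is one possible illustration using SciPy's hierarchical clustering on the example profiles; the choice of average linkage and a correlation-based distance is an assumption of this sketch rather than necessarily the exact metric used above, although it reproduces the first join of Genes B and D.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Log10 expression profiles from Table IV (rows = genes, columns = samples).
gene_names = ["Gene A", "Gene B", "Gene C", "Gene D"]
profiles = np.array([
    [ 0.602,  0.301,  0.544,  0.176, -0.096],   # Gene A
    [ 0.000, -0.096,  0.301, -0.301,  0.000],   # Gene B
    [-0.481,  0.000, -0.602, -0.602,  0.0792],  # Gene C
    [ 0.000,  0.114,  0.477,  0.000, -0.096],   # Gene D
])

# Distance = 1 - Pearson correlation, so genes with similar profiles are close;
# Genes B and D (similarity 0.729) have the smallest distance and are joined first.
distances = pdist(profiles, metric="correlation")

# Average-linkage agglomerative clustering; each row of Z records one merge and the
# height (dissimilarity) at which it happens, which is exactly what a dendrogram plots.
Z = linkage(distances, method="average")
print(Z)

# Cutting the tree into K clusters (here K = 2) corresponds to the horizontal cut
# described above; each gene receives a cluster label.
labels = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(gene_names, labels)))

# from scipy.cluster.hierarchy import dendrogram
# dendrogram(Z, labels=gene_names)   # draws the tree when matplotlib is available
```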
The picture emerging from our analysis is not a simple one such as "Gene A is active in the tumor." The true power of microarray technology is the possibility of assaying thousands of genes and hundreds of samples in the same experiment. Consider a slightly more complicated example: an experiment with 13 genes and 12 samples, sorted by hierarchical clustering. The tree's two major branches reveal two major groups of genes, and of samples as well:
Genes: C, J, H, F, B and G
Genes: L, K, E, M, D, I and A
Samples: 9, 8, 12, 4, and 10
Samples: 2, 7, 5, 6, 3, 11, and 1
Fig.4. Microarray data with 13 genes and 12 samples
CONCLUSION
Microarray technology has been extensively used by the scientific community. Advances in computer technology have made powerful analytical tools readily available. Even a modest PC can analyze a dataset of 3,000 genes with 100 samples in minutes. In real situations, additional complications need to be taken into account, such as making sure that comparing fluorescence from different microarrays does not introduce additional variability. Other microarrays may use different detection and analytical techniques that do not use fluorescence. The clustering techniques have been widely used to identify groups of genes sharing similar expression profiles
and the results obtained so far have been extremely valuable. However, the metrics adopted in these clustering techniques have
discovered only a subset of relationships among gene expression. Clustering can work well when there is already a wealth of
knowledge about the pathway in question, but it works less well when this knowledge is sparse. The inherent nature of clustering and
classification methodologies makes it less suited for mining previously unknown rules and pathways.


REFERENCES:
[1] Holter, N.S., Mitra, M., Maritan, A. 2000 Fundamental patterns underlying gene expression profiles: Simplicity from Complexity,
Proc.Natl.Acad. Sci., 97(15): 8409-8414
[2] Alizadeh, A.A., Eisen, M.B., Davis, R.E. 2000 Distinct Types of Diffuse Large B-Cell Lymphoma Identified by Gene Expression
Profiling, Nature, 403:503-511
[3] Eisen, M.B., Spellman, P.T., Brown, P.O., Botstein, D. 1998 Cluster analysis and display of genome-wide expression patterns, Proc. Natl.
Acad. Sci., 95(25): 14863-14868
[4] Schena, M., Shalon, D., Davis, R.W., Brown, P.O. 1995 Quantitative monitoring of gene expression patterns with a
complementary DNA microarray, Science, 270:467-470
[5] Lashkari, D.A., DeRisi, J.L., McCusker, J.H., Namath, A.F., Gentile, C., Hwang, S.Y., Brown, P.O., Davis, R.W. 1997 Yeast
microarrays for genome wide parallel genetic and gene expression analysis, Proc. Natl. Acad Sci USA. 94(24):13057-13062
[6] Brazma, A., Vilo, J. 2000 Gene expression data analysis, FEBS Letters, 480: 17-24

[7] Alon, U., Barakai, N., Notterman, D.A., Gish, K., Ybarra, S., Mack, D., Levine, A.J. 1999 Broad patterns of gene
expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays, Proc. Natl
Acad Sci USA. 96(12): 6745-6750


Dynamic Book Search Using RFID Technology


A.Ankit Kumar Jain, T. Rama Krishna
Email : ankitbhansali110@gmail.com, contact no: 9492405110

Abstract--- In view of the problems that exist in library management, we designed an RFID Intelligent Book Conveyor using Radio Frequency Identification (RFID) technology. This book conveyor is portable equipment with complete functions, a friendly interface and convenient operation. It can greatly improve the work efficiency of librarians and the service quality of the library. The project contains one server and multiple slave microcontrollers: a PC is used as the server, each slave is an LPC2148 microcontroller, and each slave microcontroller contains one RFID reader and communicates with the master.

Keywords--- Radio frequency identification technology, shelf management, RFID tags, active tags, passive tags, black box testing,
white box testing, GUI.

INTRODUCTION
A library management system is a planning system for a library that is used to track items, orders made, bills paid and patrons who have borrowed. Library management is essential because a library houses thousands of books, pamphlets, CDs and other items. A library needs good coordination of information on all of the items above in addition to library management. Shelf management is a system that classifies all of the books on the shelves in the library. The position of the books on the shelf needs to be appropriate or the books will be difficult to find.

Figure 1: RFID Library Management System


Radio-frequency identification (RFID) is an automatic identification method, which can store and remotely retrieve data
using devices called RFID tags. The technology requires cooperation of RFID reader and RFID tag. The RFID based LMS facilitates
the fast issuing, reissuing and returning of books with the help of RFID enabled modules. It directly provides the book information
and library member information to the library management system. This technology has slowly begun to replace the traditional
barcodes on library items and has advantages as well as disadvantages over existing barcodes [4]. The RFID tag can contain
identifying information, such as a book's title or code, without having to be pointed to a separate database. The information is read by an RFID reader, which replaces the standard barcode reader commonly found at a library's circulation desk. Utmost care has been taken to remove manual bookkeeping of records, to reduce time consumption, since line of sight and manual interaction are not needed for RFID-tag reading, and to improve utilization of resources like manpower, infrastructure, etc.

COMPONENTS OF RFID SYSTEM


RFID Technology:
Radio Frequency Identification (RFID) is a wireless automatic identification technology that utilizes the Radio Frequency as
the medium of communication. With the capability of carrying and retrieving data, RFID offers a wide application in the automatic
identification areas. Figure 1 above illustrates the basic RFID system. The system consists of tag, reader and host pc. Reader will
energize the tag to transmit the data it carries, and an application in the host PC will manipulate the data. Here the library contains multiple racks, and each rack contains a slave microcontroller. If a person places a book in a rack, the slave microcontroller detects the book with the help of RFID and records the rack address along with the book ID. If anyone then types the book ID into the server, the system displays the book's information by collecting data from the slave microcontrollers.

RFID Tag:
The ATA5570 is a contactless R/W identification IC for applications in the 125 kHz frequency range. A single coil, connected to the chip, serves as the IC's power supply and bi-directional communication interface. The antenna and chip together form a transponder or tag.

Figure 2: RFID tag

Active and Passive tags:


The first basic choice when considering a tag is passive, semi-passive or active [1]. Passive tags can be read at a distance of up to 4-5 m using the UHF frequency band, whilst the other types of tags (semi-passive and active) can achieve much greater distances: up to 100 m for semi-passive, and several km for active. This large difference in communication performance can be explained by the following:
(I) Passive tags use the reader field as a source of energy for the chip and for the communication from and to the reader. The available power from the reader field not only reduces very rapidly with distance but is also controlled by strict regulations, resulting in a limited communication distance of 4-5 m when using the UHF frequency band (860 MHz - 930 MHz) [3].
(II) Semi-passive (battery-assisted backscatter) tags have built-in batteries and therefore do not require energy from the reader field to power the chip. This allows them to function with much lower signal power levels, resulting in a greater distance of up to 100 meters. Distance is limited mainly by the fact that the tag does not have an integrated transmitter and is still obliged to use the reader field to communicate back to the reader.
(III) Active tags are battery-powered devices that have an active transmitter onboard. Unlike passive tags, active tags generate RF energy and apply it to the antenna. This autonomy from the reader means that they can communicate at distances of several kilometres.

GUI Development
The figure below displays the GUI (Flash Terminal). Com Port refers to the serial communication port on a computer. The RFID reader is connected to the host PC through the serial port; therefore, the correct com port number must be selected to establish the connection. There are two blocks in the GUI, namely transmit and receive, with the help of which we send and receive information respectively. The transmit block has a field through which we deliver commands to the slave microcontroller. Consequently, we receive the information back from the slave microcontroller in the receive block.


Figure 3: Flash Terminal
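To make the serial-port handling concrete, the sketch below shows how a host-side program could read rack/book messages from a slave controller over a COM port using the pyserial library. This is an illustrative sketch only, not the authors' Flash Terminal code: the port name, baud rate, the "STATUS" command and the "<rack_id>,<book_uid>" message framing are all assumptions.

```python
import serial  # pyserial

# Assumed settings: adjust the port name and baud rate to match the reader hardware.
PORT = "COM3"          # e.g. "/dev/ttyUSB0" on Linux
BAUD_RATE = 9600

with serial.Serial(PORT, BAUD_RATE, timeout=1) as link:
    # Hypothetical command asking a slave controller to report the books on its rack.
    link.write(b"STATUS\r\n")

    while True:
        line = link.readline().decode(errors="ignore").strip()
        if not line:
            break  # nothing more arrived within the timeout window
        # Assumed framing: one "<rack_id>,<book_uid>" pair per line.
        rack_id, _, book_uid = line.partition(",")
        print(f"Rack {rack_id}: book {book_uid}")
```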

SYSTEM IMPLEMENTATION
Implementation is the stage where the theoretical design is turned into a working system. The implementation phase constructs, installs and operates the new system. The intention of the research is to develop a shelf management system. The system will assist the librarian in carrying out the shelf management process, thus reducing human intervention. The RFID reader will scan each book on the shelf. The acquired data will be sent to the host PC for processing. There will be an LCD output mechanism to alert the librarian if there is a misplaced book. A database and the RFID chips will be used as storage.
A graphical user interface (GUI) is the backbone of the system. The librarian will interact with the system through the GUI. An RFID tag will be attached to each book in the library. The tag will carry the specific information of the book. A reader will interrogate each book, check which book is misplaced, and notify the user to remove the book from the shelf. The aim of the research is to develop a Graphical User Interface (GUI) as an Application Programmable Interface (API) for a shelf management system using RFID, to create a database that will store crucial information about the books on the RFID tag, and to create a shelf identification (ID) code [7]. Subsequently, in the shelf management process, the software will retrieve the shelf information from the tag to find any misplaced books so that the librarian can position the book back on the right shelf.
The present work involved integrating the RFID system and creating the Graphical User Interface (GUI) at the host PC. The scope of the research is to develop an RFID-based library management system to assist librarians in more efficient management of books in the library. The GUI for the system was developed using Flash Terminal. The detailed information of each book is stored in the database, and subsequently all the book information is loaded into the RFID tag. This covers the database related to books and students based on UID.
The following tasks have to be done:
1. Write the book/student information on to the tag
2. Read the book/student information from the tag
3. Add the new books to the library/department
4. Issuing and returning of books
5. Status of the book
6. Database management

When books are issued to a student, the books are deleted from the department book database and added to the student database, and the issue date and return date of the book are recorded in the student database along with the student and book information. In the same way, when books are returned, they are added to the department book database and deleted from the student database, along with any due-date fine. Searching for a book using its UID retrieves the Book UID, Book Title, Book Author and Book Publisher. Similarly, a student search retrieves the Book UID, student UID and student name. All of this is processed and analyzed using the RFID reader/writer by implementing the GUI for the library management system easily and efficiently.
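The issue/return bookkeeping described above can be modelled with two simple in-memory tables. The sketch below is a hypothetical illustration of that logic (the real system keeps its records in its own database behind the GUI); the book and student UIDs, loan period and fine rate are placeholder values.

```python
from datetime import date, timedelta

# Department database: book UID -> book details (placeholder entry).
department_books = {
    "B1001": {"title": "Embedded Systems", "author": "A. Author", "publisher": "XYZ"},
}
# Student database: student UID -> list of issued-book records.
student_records = {"S2001": []}

def issue_book(book_uid, student_uid, loan_days=14):
    """Move a book from the department database to the student's record."""
    book = department_books.pop(book_uid)          # the book leaves the shelf database
    student_records[student_uid].append({
        "book_uid": book_uid,
        **book,
        "issued": date.today(),
        "due": date.today() + timedelta(days=loan_days),
    })

def return_book(book_uid, student_uid, fine_per_day=1.0):
    """Move the book back to the department database and compute any due-date fine."""
    records = student_records[student_uid]
    record = next(r for r in records if r["book_uid"] == book_uid)
    records.remove(record)
    overdue_days = max(0, (date.today() - record["due"]).days)
    department_books[book_uid] = {k: record[k] for k in ("title", "author", "publisher")}
    return overdue_days * fine_per_day

issue_book("B1001", "S2001")
print(return_book("B1001", "S2001"))   # 0.0 when returned within the loan period
```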

TESTING AND ANALYSIS


The testing process focuses on the logical internals of the software, ensuring that all statements have been tested, and on the functional externals, conducting tests to uncover errors and ensure that defined input will produce actual results that agree with the required results. Program-level testing and module-level testing were integrated and carried out. There are two major types of testing [8]. They are:

White Box testing:


White box testing, sometimes called glass box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using white box testing methods, the following tests were made on the system:
a. all individual paths within a module were exercised at least once;
b. all logical decisions were checked for the truth and falsity of their values.

Black Box testing:


Black box testing focuses on the functional requirements of the software, i.e., black box testing enables the software engineer to derive a set of input conditions that will fully exercise all functional requirements for a program. It attempts to find:
a. Interface errors.
b. Errors in data structures.
c. Performance errors.
d. Initialization and termination errors.

CONCLUSION
Radio Frequency Identification (RFID) Systems have been in use in libraries for book identification, for self checkout, for
anti-theft control, for inventory control, and for the sorting and conveying of library books. These applications can lead to significant
savings in labor costs, enhance customer service, lower book theft and provide a constant record update of new collections of books. It
also speeds up book borrowing, returning and monitoring, and thus frees staff from doing manual work so that they could be used to
enhance user-services task. The efficiency of the system depends upon the information to be written in tag. To yield best performance,
RFID readers and RFID tags to be used must be of good quality.
ACKNOWLEDGMENT
I would like to express my sincere gratitude to Dr. C. V Narashimhulu, HOD of Department of Electronics and
Communication Engineering, Mr. D. Rama Krishna Associate professor of Department of Electronics and Communication
Engineering in Geethanjali College of Engineering for the immense support and guidance for the successful completion of the work.

REFERENCES:
[1] Radio Frequency Identification (RFID) Vs Barcodes, Electro-Com (Australia) Pty Ltd.

[2] Karen Coyle, Management of RFID in Libraries ,Preprint version of article published in the Journal of Academic Librarianship,
v. 31, n. 5, pp. 486-489
[3] Chawla, V. and H. Dong Sam, An overview of passive RFID. Communications Magazine, IEEE, 2007. 45(9): p. 11-17.
[4] RFID Technology: A Revolution in Library Management, By Dr. Indira Koneru.
[5] Shamsudin, T.M.W., M.J.E. Salami, and W. Martono. RFID-Based Intelligent Books Shelving System. In RFID Eurasia, 2007 1st
Annual. 2007.
[6] Jung-Wook, C., O. Dong-Ik, and S. Il-Yeol. R-LIM: an Affordable Library Search System Based on RFID. In Hybrid Information
Technology, 2006. ICHIT '06. International Conference on. 2006.
[7] Kuen-Liang, S. and L. Yi-Min. BLOCS: A Smart Book- Locating System Based on RFID in Libraries. in Service Systems and
Service Management, 2007 International Conference on. 2007.
[8]http://www.slideshare.net/TauszfJamal/library-management-systemlms


Air Entrapment Analysis of Casting (Turbine Housing) for Shell Moulding


Process using Simulation Technique
Mr. Prasad P Lagad1
1 M.E (Mechanical-Design) Student, Department of Mechanical Engineering, Deogiri Institute of Engineering and Management
Studies, Aurangabad, Maharashtra, India.
Mail: - Prasad.lagad8@gmail.com

ABSTRACT
Casting simulation plays a very important role in predicting defects before going to actual trials in the shell moulding process. Air entrapment analysis, fluid flow analysis and solidification analysis are generally performed in shell moulding. Fluid flow analysis is done to see the temperature distribution of the molten metal during pouring, air entrapment, and flow-related defects such as cold shut and misrun. Solidification/thermal analysis is done to simulate progressive solidification, predict solidification defects (porosity) and assess the degree of soundness of the casting.
This paper describes the benefits of casting simulation for air entrapment analysis, to understand where air might be entrapped during solidification and to provide a solution, a flow off, to avoid air-entrapment-related defects such as blow holes in foundries.

Key Words: Air entrapment, Blow Hole, Casting simulation, Flow off, Fluid flow, Shell moulding, Solidification.
1. INTRODUCTION
Shell moulding is a process in which resin-coated sand is allowed to come in contact with a heated pattern (cope and drag) so that a shell of mould is formed around the pattern; the shell is removed with the help of ejector pins, both shells are kept in a flask with the necessary back-up material, and then molten metal is poured. Among the advantages of the shell moulding process are that a close degree of tolerance can be achieved and intricately shaped castings can easily be manufactured. Typical applications of shell moulding are turbocharger parts such as the turbine housing, center housing and water-cooled bearing housing.
Casting simulation is the process of designing a model of a real system and performing a number of experiments (iterations) with this model, for the purpose of either understanding the behaviour of the system or evaluating various strategies for its operation. Casting simulation is necessary for quality improvement, by finding and minimising internal or external defects in the casting. In the shell moulding process, casting simulation plays a very important role in reducing casting defects, optimizing the gating system and finalizing the casting design; it covers solidification analysis, air entrapment analysis, temperature distribution analysis, fluid flow analysis, etc. Casting design simulation plays an important role in predicting the outcome of the design.

1.1. Benefits of casting simulation:

1. Increased productivity by reducing number of actual foundry trials.
2. Improved product quality by minimising casting defects related to fluid flow & solidification.
3. Less remelting and refinishing.
4. Shortened lead time & increased production.
5. First Time Right (casting free from defects).
6. Predicting Metallurgy.
7. Yield improvement by performing number of iterations in simulation.
8. Less development time.

1.2. General process flow of shell moulding process
Raw material -> Inspection -> Core making -> Shell making -> Melting -> Pouring -> Knocking -> Metallurgy inspection -> External shot blasting -> Internal shot blasting -> Visual inspection -> Oiling -> Packing -> Dispatch
Figure 1. General process flow of the shell moulding process


1.3. Steps involved in casting simulation for shell moulding:

Generation of casting model from 2D drawing

Core extraction from casting model

Selection of parting line of casting & core

Gating system

Casting bunch

Convert the 3D CAD model to STL format

Import the STL CAD data to simulation Software

Setup the meshing parameters for auto mesh generation

Set the analysis type (Fluid flow, Solidification, etc)

Specify the material properties for the mould and Casting

Analysis of the casting process

Post processing (involves various result extraction and viewing)


1.4. Gating system and its elements for the shell moulding process:
The gating system is to be designed so that there is no turbulence in the casting cavity while metal enters from the gating passage into the casting cavities. The main objective of the gating system is to feed the material while ensuring uniform, smooth and complete filling.

The important elements of the gating system are the down sprue, sprue well, runner bar, riser and ingate.

Figure 2. Major elements of gating system

i) Down Sprue - It is a circular cross-section minimizing turbulence and heat loss and its area is quantified from choke area and gating
ratio. Ideally it should be large at top and small at bottom.
ii) Sprue well: It is designed to restrict the free fall of molten metal by directing it in a right angle towards the runner. It aids in
reducing turbulence and air aspiration. Ideally it should be shaped cylindrically having diameter twice as that of sprue exit and depth
twice of runner.
iii) Runner - Mainly slows down the molten metal that speeds up during the free fall from the sprue to the ingate. The cross-section area of a runner should be greater than that of the sprue exit. It should also be able to fill completely before allowing the metal to enter the ingates. In
systems where more than one ingate is present, it is recommended that the runner cross section area must be lowered after each ingate
connection to ensure smooth flow.
iv) Ingate: It directs the molten metal from the gating system to the mold cavity. It is recommended that ingate should be designed to
reduce the metal velocity; they must be easy to fettle, must not lead to a hot spot and the flow of molten metal from the ingate should
be proportional to the volume of casting region.

Figure 3. Final stage of air entrapment analysis (the red circle marks the area subject to the air-entrapment-related defect, i.e., blow hole)



Figure 4. Blow holes in the cast component

Figure 5. Gating system for 2nd run simulation (Casting, Runner bar, Riser &Down sprue)


The possibility of the air entrapment defect (blow holes) can be avoided by providing a proper flow off, also called a vent pin.
Figure 6. Final stage of air entrapment analysis (simulation) after flow off addition

2. CONCLUSION:
The upper part of the product is more likely to have air bubble entrapment defects (blow holes) because it is filled last; this predictable defect can be avoided by providing proper vents, also called flow offs. Casting simulation tools permit foundry development engineers to fill the gap between design and manufacturing, improve quality, increase productivity by minimizing the number of foundry trials, and also analyse the prediction of defects while experimenting with different gating arrangements. Casting simulation is helpful for rapid design and development of castings through a significant reduction in development time, which is a need of today's foundry industry.

3. ACKNOWLEDGMENT:

The authors wish to thank BPTL (Foundry Division) for permission to use their foundry. The authors also wish to thank Mr.D.U
Gopekar, Assistant Professor, Department of Mechanical Engineering, Deogiri Institute of Engineering and Management Studies,
Aurangabad, Maharashtra for their valuable guidance.



Design and Implementation of Advanced ARM Based Surveillance System


using Wireless Communication
Ms. Jadhav Gauri J.
Department of E & TC (VLSI and Embedded),
G. H. Raisoni COE&M,
Ahmednagar, India
Gaurijadhav89@gmail.com

Abstract - This paper evaluates the development of a low-cost surveillance system using different sensors built around a microcontroller with a fingerprint sensor module. The low-power PIR detectors take advantage of pyroelectricity to detect the change in environment temperature caused by human body temperature in our experiment. We also use ultrasonic sensors and vibration sensors. The ultrasonic sensor (obstacle detection) detects an intruder by physical presence, while the vibration sensors detect the sound of breaking or sense a vibration signal. The fingerprint sensor module takes the fingerprint of the user, matches it with the database details corresponding to that fingerprint, and displays the result on the computer screen. On detecting the presence of any unauthorized person, the system triggers an alarm and sends an SMS to a predefined number through a GSM modem. This surveillance system provides a better level of security than other available security systems. Apart from this, it offers fast processing, is less expensive, and better controls the alteration and copying of information between the source and the database.
Keywords - GSM, ARM, PIR Sensor, Ultrasonic Sensor, Vibration Sensor, Fingerprint Module Sensor, RF Tx/Rx.

INTRODUCTION
In a situation where there is a high level of theft, there is a need for a better security system. It is much safer to have a system that monitors and communicates with the device owner without putting human life at risk in the name of a watchman. This work utilizes the availability of the GSM network, a mobile phone, and electronic circuits to achieve an automated system which is programmed to work as a thinking device to accomplish this purpose. To secure premises against theft, crime, etc., a powerful security system is required not only to detect but also to pre-empt hazards. Conventional security systems use cameras and process large amounts of data to extract features, at high cost, and hence require significant infrastructure. In this paper, alerting sensors with low power consumption are placed near those home windows and doors through which an intruder must pass. According to the sensor signals received by the microcontroller, a call is established to a mobile station through a GSM modem, warning the owner-occupier of the presence of an unauthorized user in the home. On the other hand, this security system remains idle and performs nothing if no one is in the home. This paper is organized into seven sections, including this section. Section II discusses some related works and section III presents the system's block diagram. Components of the hardware and their operation details are in section IV, and section V shows the software flowchart. Advantages and applications are discussed in section VI, and finally the conclusions are presented in section VII.
RELATED WORKS
Nowadays, indoor security systems are constructed with many different sensors, including microwave detectors, photoelectric detectors, infrared detectors, and many others. Each of these systems has its own limitations. As an example, photo-electric beam systems detect the presence of an intruder by transmitting visible or infrared light beams across an area where these beams may be obstructed; the drawback is that such a system is easily defeated if the intruder is aware of its presence. Despite having a strong dependence on the surrounding environment, pyroelectricity has become a widely used detection parameter because of its simplicity and ease of interfacing to digital systems. Ultrasonic sensors are also widely used because of their good and relatively fast response, and vibration sensors are used because they
sense any noise of breaking or vibration. Pyroelectric sensing is now extensively used for intruder detection, smart environment sensing, and power management applications, and several works have been conducted in these areas: an intelligent fireproof and theft-proof alarm system [1], a GSM (Global System for Mobile communications) network based home safeguard system [2], a human tracking system [3], and intruder detection systems [4] are some notable previous works based on pyroelectric and ultrasonic sensing techniques. Our work introduces a low-cost security system solution. The existing cellular network is used to alert and inform the system owner about a security breach, to cope with the ever increasing demand for a cheap but reliable security system. We also use a fingerprint sensor module in combination with the other sensors, which plays a major role. By using wireless communication, this system provides a much faster response than traditional surveillance systems.
SYSTEM ARCHITECTURE

Figure 1. System block diagram

Fig. 1 shows the block diagram of the system. It consists of a fingerprint sensor module, a PIR sensor, an ultrasonic sensor, a vibration sensor, an ARM processor, an RF Tx/Rx module, a GSM module, and a liquid crystal display.

COMPONENTS OF HARDWARE

MICROCONTROLLER
In this project the controller used is the ARM7 LPC2138. The LPC2138 CPU module, based on the LPC2138 SoC from NXP, is an ideal platform for applications such as industrial control and monitoring devices and any application that needs migration from 8-bit to 32-bit. The CPU module board supports peripherals such as ADC, SPI, I2C, RTC, etc.
Board Specification:

1. CPU: ARM7:
a) 65 MIPS at 60 MHz
b) Embedded ICE, Debug Communication Channel Support
2. Communication interface
a) SPI
b) I2C
c) UART
3. General peripherals:
a) 40 GPIO
b) 10-bit ADCs
c) Timer/Counter
d) RTC
e) Programmable Vector Interrupt Controller
4. Memory:
a) RAM: 32 KB internal SRAM
b) Flash: 512 KB internal Flash
Add-on peripherals for the LPC2138 CPU module board:
a) ZigBee module
b) RF communication module
c) Thermal printer module
d) GPS module
e) GSM/GPRS module
f) Motor control module
FINGERPRINT MODULE SENSOR
A fingerprint, as the name suggests, is the print or impression made by a finger because of the patterns formed on the skin of our palms and fingers. It is fully formed at about seven months of fetal development, and finger ridge configurations do not change throughout the life of an individual. Each of our ten fingerprints is different from the others and from those of every other person. With age these marks become more prominent, but the pattern and the structures present in those fine lines do not undergo any change. The database storage contains the fingerprint templates of persons along with all their detailed information (e.g. photo, fingerprint, name, age, sex, identification mark, permanent address, etc.). A person scans a finger on the fingerprint sensor module; if the fingerprint matches one of the fingerprints in the database made for authorized persons, the person can enter, otherwise entry is denied. The database is designed in such a manner that it can be updated manually or automatically over time, and new entries can be added or previous information about a person removed when needed.
In this system the fingerprint sensor module R305 is used. This is a module with a TTL UART interface for direct connection to a microcontroller UART, or to a PC through a MAX232 or USB-serial adapter.
Steps involved in Finger print identification:
1. Finger Print enrollment through system.
2. Enrolled user places his/her Finger on the Finger sensor for checking IN/OUT (Authentication).
3. The terminal compares live finger with the finger stored on database and checks for a match.
4. When a match is found, the authentication is successful and the user is given access.
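The four steps above can be sketched in code. The following is only an illustrative sketch: the "sensor" is simulated and the helper fp_search_database() is a hypothetical stand-in for the R305 module's UART command exchange and the host-side database lookup; it is not the author's firmware or an actual library API.

/* Illustrative sketch of the fingerprint authentication flow (steps 1-4).
 * The sensor is simulated here; a real system would drive the R305 module
 * over UART and keep the user details in a host-side database.            */
#include <stdio.h>
#include <stddef.h>

struct user { int template_id; const char *name; };

/* step 1: templates enrolled earlier, with details stored against them    */
static const struct user db[] = { {1, "Alice"}, {2, "Bob"} };

/* hypothetical stand-in for the 1:N search; returns the index of the
 * matched template, or -1 when the captured finger matches nothing        */
static int fp_search_database(int captured_template_id)
{
    for (size_t i = 0; i < sizeof db / sizeof db[0]; i++)
        if (db[i].template_id == captured_template_id)
            return (int)i;
    return -1;
}

int main(void)
{
    int captured = 2;                        /* step 2: user places finger  */
    int idx = fp_search_database(captured);  /* step 3: compare with DB     */

    if (idx >= 0)                            /* step 4: match found         */
        printf("Access granted to %s\n", db[idx].name);
    else
        printf("Access denied\n");
    return 0;
}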


Figure 2. Fingerprint sensor module

Advantages of using fingerprints:


1. Prevents unauthorized use or access
2. Adds a higher level of security to an identification process
3. Eliminates the burden and bulk of carrying ID cards or remembering PINs
4. Heightens overall confidence of business processes dependent on personal identification.

PIR SENSOR
In this security system we sense human movements by means of PIR sensors and alert security personnel and the owner simultaneously using the GSM wireless network. PIR sensors are low-cost, low-power, small components used to trigger an alarm in the presence of humans or moving objects by using the concept of pyroelectricity. Pyroelectricity is the ability of certain materials to generate a temporary voltage when there is a change in temperature. A PIR detector is basically made of pyroelectric sensors that develop an electric signal in response to a change in the incident thermal radiation.

Figure 3. PIR sensor


ULTRASONIC SENSOR
Ultrasonic sensors work on a principle similar to radar or sonar, which evaluate attributes of a target by interpreting the echoes from radio or sound waves respectively. Ultrasonic sensors generate high-frequency sound waves and evaluate the echo which is received back by the sensor. The sensor measures the time interval between sending the signal and receiving the echo to determine the distance to an object and displays it on the LCD. Fig. 4 shows a diagram of the ultrasonic sensor.
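To make the time-of-flight calculation concrete, the sketch below converts a measured round-trip echo time into a one-way distance. It is an illustration only: the timer value is hard-coded here, and the speed of sound (about 343 m/s in air at roughly 20 degrees C) is an assumed constant.

/* Sketch: ultrasonic distance from echo time.
 * distance = (speed_of_sound * round_trip_time) / 2                       */
#include <stdio.h>

#define SPEED_OF_SOUND 343.0   /* m/s in air at about 20 degC (assumed)    */

static double distance_cm_from_echo(unsigned int echo_time_us)
{
    double t_s = echo_time_us / 1e6;              /* microseconds -> s     */
    return (SPEED_OF_SOUND * t_s / 2.0) * 100.0;  /* one-way distance, cm  */
}

int main(void)
{
    unsigned int echo_us = 1750;   /* example round-trip time measurement  */
    printf("Distance: %.1f cm\n", distance_cm_from_echo(echo_us));
    /* 1750 us round trip corresponds to roughly 30 cm                     */
    return 0;
}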


Figure 4. Ultrasonic sensor

VIBRATION SENSOR
The ADXL335 is a small, thin, low-power, complete 3-axis accelerometer with signal-conditioned voltage outputs. The product measures acceleration with a minimum full-scale range of ±3 g. It can measure the static acceleration of gravity in tilt-sensing applications, as well as dynamic acceleration resulting from motion, shock, or vibration. The user selects the bandwidth of the accelerometer using the CX, CY, and CZ capacitors at the XOUT, YOUT, and ZOUT pins. Bandwidths can be selected to suit the application, with a range of 0.5 Hz to 1600 Hz for the X and Y axes, and a range of 0.5 Hz to 550 Hz for the Z axis. The ADXL335 is available in a small, low-profile, 4 mm × 4 mm × 1.45 mm, 16-lead, plastic lead frame chip scale package (LFCSP_LQ).
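As an illustration of the capacitor-based bandwidth selection just described, the short sketch below computes the per-axis filter capacitor from the first-order relation f(-3 dB) = 1/(2*pi*R*C), assuming the nominal 32 kOhm output resistance quoted in typical datasheets; it is a back-of-the-envelope aid, not part of the paper's design.

/* Sketch: choosing the ADXL335 bandwidth-limiting capacitor per axis,
 * assuming f(-3dB) = 1 / (2*pi*R*C) with a nominal R of 32 kOhm.          */
#include <stdio.h>

int main(void)
{
    const double pi = 3.141592653589793;
    const double r_out = 32000.0;    /* assumed nominal output resistance  */
    double bw_hz = 50.0;             /* desired -3 dB bandwidth            */

    double c_farad = 1.0 / (2.0 * pi * r_out * bw_hz);
    printf("C = %.3f uF for %.0f Hz bandwidth\n", c_farad * 1e6, bw_hz);
    /* prints about 0.1 uF, a value commonly used for a 50 Hz bandwidth    */
    return 0;
}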

Features
• 3-axis sensing
• Small, low-profile package: 4 mm × 4 mm × 1.45 mm LFCSP
• Low power: 350 μA (typical)
• Single-supply operation: 1.8 V to 3.6 V
• 10,000 g shock survival
• Excellent temperature stability
• BW adjustment with a single capacitor per axis
• RoHS/WEEE lead-free compliant

Applications
• Cost-sensitive, low-power, motion- and tilt-sensing applications
• Mobile devices
• Gaming systems
• Disk drive protection
• Image stabilization
• Sports and health devices

RF Tx/Rx MODULE
An RF module (radio frequency module) is a (usually) small electronic circuit used to transmit and/or receive radio signals on one of a number of carrier frequencies. RF modules are widely used in electronic design owing to the difficulty of designing radio circuitry: good radio design is notoriously complex because of the sensitivity of radio circuits and the accuracy of components and layouts required to achieve operation on a specific frequency. Design engineers will design a circuit for an application which requires radio communication and then "drop in" a radio module rather than attempt a discrete design, saving time and money on development. The RF Tx/Rx module receives the signal from the microcontroller, which then sends the message to the predefined number through the GSM module. Here we use the CC2550 module, a low-cost 2.4 GHz transmitter designed for very-low-power wireless applications. The RF transmitter is integrated with a highly configurable baseband modulator; the modulator supports various modulation formats and has a configurable data rate of up to 500 kBaud.

GSM MODULE
GSM (Global System for Mobile communications) is a digital mobile telephony system. With the GSM module interfaced, we can send short text messages to the required authorities as per the application. The GSM module is fitted with a SIM, uses the mobile service provider's network, and sends SMS to the respective authorities as programmed. This technology makes the system wireless, with no specified range limits. When an intruder is detected by the surveillance system, an SMS is sent to the predefined number through the GSM modem. In this system the SIMCOM SIM300 GSM module is used.
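The SMS alert is normally driven over the modem's serial port with standard GSM 07.05 AT commands (the modem's feature list below mentions AT-command control). The sketch shows the typical sequence (AT+CMGF=1 for text mode, AT+CMGS to address the message, Ctrl+Z to send); the UART here is simulated with printf and the phone number is a placeholder, so this is an illustration rather than the author's firmware.

/* Sketch: the AT-command sequence for sending the alert SMS (GSM 07.05).
 * For illustration the "UART" is simulated with printf; on the real board
 * these bytes would go to the modem's serial port.                        */
#include <stdio.h>

static void uart_send_str(const char *s) { printf("%s", s); }   /* stub   */
static void uart_send_char(char c)       { putchar(c); }        /* stub   */

void gsm_send_alert_sms(const char *phone_number, const char *text)
{
    uart_send_str("AT+CMGF=1\r");      /* select SMS text mode             */
    uart_send_str("AT+CMGS=\"");       /* address the message              */
    uart_send_str(phone_number);
    uart_send_str("\"\r");             /* modem replies with a '>' prompt  */
    uart_send_str(text);               /* message body                     */
    uart_send_char(0x1A);              /* Ctrl+Z terminates and sends      */
}

int main(void)
{
    /* placeholder number only; the real predefined number is configured
       by the system owner                                                 */
    gsm_send_alert_sms("+910000000000", "Intruder detected");
    return 0;
}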

Figure 5. GSM Modem



Features of the GSM SIMCOM SIM300:
i. Designed for the global market, the SIM300 is a tri-band GSM/GPRS engine.
ii. Works on the frequencies EGSM 900 MHz, DCS 1800 MHz and PCS 1900 MHz.
iii. The SIM300 features GPRS multi-slot class 10 / class 8 (optional) and supports the GPRS coding schemes CS-1, CS-2, CS-3 and CS-4.
iv. With a tiny configuration of 40 mm x 33 mm x 2.85 mm, the SIM300 can fit almost all the space requirements in applications such as smart phones, PDA phones and other mobile devices.
Applications of the GSM SIMCOM SIM300:
i. Wireless data transfer
ii. Energy industry monitoring
iii. Traffic system monitoring
iv. SMS based remote control & alerts
v. Security applications
vi. Intelligent house monitoring
vii. GPRS mode remote data logging
viii. Sensor monitoring
ix. Agricultural feeding monitoring
x. Parking monitoring
xi. Telecom monitors
xii. Meter reading
xiii. Dial back-up for broadband connections
xiv. Residential lighting controls
xv. Messages/alerts
xvi. Personnel management

Features of the GSM modem:
i. This GSM modem is a highly flexible plug-and-play quad-band GSM modem.
ii. Reset button; power can be started automatically or manually.
iii. Allows direct and easy integration to RS232.
iv. Supports features like voice, data/fax, SMS, GPRS and an integrated TCP/IP stack.
v. Control via AT commands (GSM 07.07, 07.05 and enhanced AT commands).
vi. Use an AC-DC power adaptor with the following rating: DC voltage 12 V / 1 A.
vii. Current consumption in normal operation 250 mA; can rise up to 1 A during transmission.
Interfaces:
i. RS-232 through D-TYPE 9-pin connector
ii. Serial port baud rate adjustable from 1200 to 115200 bps (9600 default)
iii. SIM card holder
iv. Power supply through DC socket
v. SMA antenna connector and wire antenna (optional)
vi. LED status of GSM/GPRS module
Package contents:
i. GSM modem with RS232
ii. Antenna: single-strand wire antenna with SMA connector (stud antenna 150/- extra)

LCD
The LCD is used in the project to visualize the output of the application. We have used a 16x2 LCD, which has 16 columns and 2 rows, so 16 characters can be written in each line and a total of 32 characters can be displayed. The LCD can also be used to check the output of the different modules interfaced with the microcontroller. Thus the LCD plays a vital role in viewing the output and in debugging the system module by module in case of a system failure, in order to rectify the problem.
SOFTWARE FLOWCHART

Figure 6. Software flowchart
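The flowchart itself appears only as a figure, so the sketch below outlines the polling loop that the system description implies: read each sensor, and on detecting an intrusion show a message, raise the alarm and send the GSM alert. Every helper name here is a hypothetical placeholder, not the author's actual firmware.

/* Illustrative sketch of the surveillance polling loop implied by the
 * flowchart. All helpers are hypothetical placeholders for real drivers.  */
#include <stdbool.h>

extern bool pir_motion_detected(void);
extern bool ultrasonic_obstacle_detected(void);
extern bool vibration_detected(void);
extern bool fingerprint_is_authorized(void);
extern void lcd_show(const char *msg);
extern void trigger_alarm(void);
extern void gsm_send_alert_sms(const char *number, const char *text);

void surveillance_loop(void)
{
    for (;;) {
        bool intrusion = pir_motion_detected() ||
                         ultrasonic_obstacle_detected() ||
                         vibration_detected();

        if (intrusion && !fingerprint_is_authorized()) {
            lcd_show("Intruder detected");      /* local indication         */
            trigger_alarm();                    /* siren / buzzer           */
            gsm_send_alert_sms("+910000000000", /* placeholder number only  */
                               "Intruder detected");
        }
    }
}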


ADVANTAGES AND APPLICATIONS

ADVANTAGES
i. Low cost.
ii. Low power requirement.
iii. Good response.

APPLICATIONS
It can be used anywhere, such as in homes, shops, malls, banks and at any other important places.

CONCLUSION
The proposed approach uses a technique which combines the sensors with a fingerprint module and the concept of wireless communication. The fingerprint provides a solution for protecting the privacy of the user, since the user's true biometric feature never changes over a lifetime; it is used for better security and accuracy. In the privacy and security domains, the proposed method fulfils all requirements needed to reject a forged person. Also, if an unauthorized person tries to enter the sensitive area another way, for example by breaking a window or door, the other sensors will activate the surveillance system and an alerting message can be sent to the predefined number through the GSM module. From the results obtained it is clear that the proposed approach provides very high accuracy, and thus the approach is very secure. This approach can be enhanced to a higher level in order to further improve the security. This common wireless security system can be extended in future by using several different types of databases that will be very hard for attackers to break, and by using other advanced sensors, so that it can provide even better security.

REFERENCES:
[1] Mukesh Kumar Thakur, Ravi Shankar Kumar, Mohit Kumar, Raju Kumar, "Wireless Fingerprint Based Security System Using ZigBee Technology," International Journal of Inventive Engineering and Sciences (IJIES), ISSN: 2319-9598, Volume-1, Issue-5, April 2013.
[2] Zamshed Iqbal Chowdhury, Masudul Haider Imtiaz, Muhammad Moinul Azam, Mst. Rumana Aktar Sumi, Nafisa Shahera Nur, "Design and Implementation of Pyroelectric Infrared Sensor Based Security System Using Microcontroller," Proceedings of the 2011 IEEE Students' Technology Symposium, 14-16 January 2011, IIT Kharagpur.
[3] Ying-Wen Bai, Zi-Li Xie and Zong-Han Li, "Design and Implementation of a Home Embedded Surveillance System with Ultra-Low Alert Power," 0098-3063/11, 2011 IEEE.
[4] Deepa Amarappa Hiregowda, B. V. Meghana, Roopa Amarappa Hiregowda, Jayanth, "Design and Implementation of Home Embedded Surveillance System Using PIR, Piezo Sensor and Image Capturer," Department, Dayananda Sagar College of Engineering.
[5] D. Naresh, B. Chakradhar, S. Krishnaveni, "Bluetooth Based Home Automation and Security System Using ARM9," International Journal of Engineering Trends and Technology (IJETT), Volume 4, Issue 9, Sep 2013.
[6] Shinu N Yoannan, Vince T Vaipicherry, Don K Thankacha, Prof. Ram Prasad Tripathy, "Security System Based on Ultrasonic Sensor Technology," IOSR Journal of Electronics and Communication Engineering (IOSR-JECE), e-ISSN: 2278-2834, p-ISSN: 2278-8735, Volume 7, Issue 6 (Sep.-Oct. 2013), PP 27-30, www.iosrjournals.org.

[7] S. Dharanya, V. Divya, S. Shaheen and A. Umamakeswari, "Embedded Based 3G Security System for Prison," Indian Journal of Science and Technology, Print ISSN: 0974-6846, Online ISSN: 0974-5645, Vol. 6 (5), May 2013, www.indjst.org.
[8] Q. Qu, Z. Guohao, W. Baohua, "Design of Home Safeguard System Based on GSM Technique," Electronic Engineer, vol. 32, no. 11, pp. 76-78, Nov. 2006.
[9] M. Shankar, Burchett, Q. Hao, B. Guenther, "Human-tracking systems using pyroelectric infrared detectors," Optical Engineering, vol. 45, no. 10, pp. 106401 (01-10), Oct. 2006.
[10] M. Moghavvemi and C. S. Lu, "Pyroelectric infrared sensor for intruder detection," in Proc. TENCON 2004 Conf., pp. 656-659.


Efficient Scheme for Data Transfer Using XOR Network Coding


Er. Anjali Gupta
Lecturer, Department of Electronics & Communication Engineering
Universal Group Of Institutions, Lalru Mandi,
Punjab Technical University, Jalandhar
guptaanjali127@gmail.com
Abstract: Compromised senders and pollution attacks are key issues in designing and operating any network, wired or wireless (especially in applications like sensor networks, lossy wireless networks, etc.). XOR network coding, an extension of network coding, is a new research area in information theory. It is a paradigm in which intermediate nodes are allowed to create new packets by combining (XORing) the incoming packets, which provides the possibility to maximize network throughput and reduce the number of transmissions. This paper explains the basic concepts of network coding and XOR network coding, their applications and the related challenges.
Keywords: Network coding, XOR network coding, wireless sensor network, pollution attacks.
INTRODUCTION

Today's communication networks, whether data packets travelling over the Internet or signals over the mobile network, have the same operating principle as vehicles sharing a highway: the resources used are the same, but the information carried is different for each individual.
In the classical approach, an information stream is sent by breaking it into data packets in a store-and-forward manner, in which intermediate nodes (routers or relays) simply duplicate the original message. With network coding (NC), intermediate nodes are allowed to combine incoming packets with the help of opportunistic coding. Network coding is a generalization of routing and is well suited to environments where the possibility of partial and uncertain information is high.
The remainder of the paper is organized as follows. In section 2, we briefly describe the concept of XOR network coding and the related work done. In section 3, we describe applications of XOR network coding such as throughput gain and packet latency. In section 4, major issues and related approaches are described. In section 5, various open challenges are described. In section 6, we present the conclusion of this paper.

I. OVERVIEW

A. XOR Network Coding


In XOR network coding, intermediate nodes are allowed to combine the incoming packets by applying the XOR operation rather than a general linear combination. The basic idea of XOR network coding is illustrated in Fig. 1, where nodes A, B and C share a common wireless medium, as described by Ahlswede et al. [1]. Assume the capacity of the network is one bit at a time. Due to this capacity constraint, in Fig. 1(a) node A transmits data packet p1 to B, which in turn transmits it to node C. Similarly, node C transmits data packet p2 to B, which in turn transmits p2 to node A. This whole process involves four transmissions.


Fig. 1. (a) No coding. (b) Network coding.

Now consider Fig. 1(b): nodes A and C transmit data packets p1 and p2 sequentially to node B, which in turn combines the two packets by XORing them and broadcasts the result on the shared medium. As both nodes A and C know their own packets, each can easily recover the other packet by XORing the known packet with the broadcast packet. This whole process takes three transmissions. The number of transmissions used to send the same amount of information is reduced, resulting in 25% less energy consumption [2].
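The three-transmission exchange of Fig. 1(b) can be written out in a few lines of C over byte buffers: the relay broadcasts p1 XOR p2, and each end node recovers the other packet by XORing the broadcast with its own packet. The packet size and contents below are purely illustrative.

/* Sketch of the XOR relay example of Fig. 1(b): A and C send p1 and p2 to
 * B, B broadcasts p1 ^ p2, and each end node XORs out its own packet.     */
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define PKT_LEN 8   /* illustrative packet size */

static void xor_packets(unsigned char *out,
                        const unsigned char *a,
                        const unsigned char *b)
{
    for (size_t i = 0; i < PKT_LEN; i++)
        out[i] = a[i] ^ b[i];
}

int main(void)
{
    unsigned char p1[PKT_LEN] = "node-A";   /* packet A -> C */
    unsigned char p2[PKT_LEN] = "node-C";   /* packet C -> A */
    unsigned char coded[PKT_LEN], got_at_a[PKT_LEN], got_at_c[PKT_LEN];

    xor_packets(coded, p1, p2);       /* transmission 3: B broadcasts p1^p2 */
    xor_packets(got_at_a, coded, p1); /* A knows p1, so it recovers p2      */
    xor_packets(got_at_c, coded, p2); /* C knows p2, so it recovers p1      */

    assert(memcmp(got_at_a, p2, PKT_LEN) == 0);
    assert(memcmp(got_at_c, p1, PKT_LEN) == 0);
    return 0;
}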
B. Related Work
Recently, network coding has gained much popularity as a potential way to increase throughput in the networking field. Zunnun and Sanjay [2] showed better performance in wired and wireless networks and in multicast and broadcast protocols.
Network coding was first considered in the pioneering work by Ahlswede et al. [1], which showed that a sender can communicate information to a set of receivers at the broadcast capacity by using network coding, resulting in a capacity gain in wireline systems. Ahlswede's example, generally known as the butterfly network and shown in Fig. 2, is a multicast network from a single source to two destinations.

Fig. 2. Multicast over a communication network: (a) network coding; (b) no coding.

Source S multicasts two data bits b1 and b2 to nodes Y and Z over the acyclic graph shown in Fig. 2. In Fig. 2(b), one channel has to be used twice, so the minimum total channel usage is at least 10. Fig. 2(a) depicts the network coding approach, in which all 9 channels are used exactly once. Later, Li et al. [3] showed that linear codes are sufficient for multicast traffic to achieve the optimal throughput.
At the same time, Koetter and Medard [4] developed an algebraic framework and showed that coding and decoding can be done in polynomial time. Ho et al. [5] used this algebraic framework to present the concept of random linear network coding, which makes network coding more practical, especially in wireless networks.
Recently, network coding has been applied to wireless networks and has received significant research attention as a means of improving network capacity and coping with unreliable wireless links [6]. Majid et al. [7] presented the reliability gain of network coding in lossy wireless networks. Work on improving the throughput of wireless networks by using XOR network coding [8] demonstrated a practical application and showed high benefits.
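The same recovery step applies to the butterfly network of Fig. 2(a): the bottleneck edge carries b1 XOR b2 once, and each sink combines it with the bit it received directly. A minimal illustration follows.

/* Sketch of decoding at the two sinks of the butterfly network (Fig. 2a). */
#include <assert.h>

int main(void)
{
    int b1 = 1, b2 = 0;       /* the two source bits                       */
    int coded = b1 ^ b2;      /* sent once over the shared bottleneck edge */

    int at_y = coded ^ b1;    /* sink Y received b1 directly, recovers b2  */
    int at_z = coded ^ b2;    /* sink Z received b2 directly, recovers b1  */

    assert(at_y == b2 && at_z == b1);
    return 0;
}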

C. Grid Networks
Consider a wireless ad hoc network with n nodes, each placed on a vertex of a rectangular grid. Suppose each node wants to broadcast its information to all other nodes. For this purpose, each node transmits the information to its four neighbors, as shown in Fig. 3.

Fig. 3. Square grid

II. APPLICATIONS

The main applications of the network coding technique are in the areas of content-distribution networks, peer-to-peer networks and wireless networks. Most of the work has been done to show the capacity and throughput gains, but recently the reliability gain has also become a significant consideration in this research area [7].
A. Throughput
The capacity gain in wired and wireless networks has sparked interest among researchers in multicast networks. Suppose we have X sources and Y receivers. Each source wants to send information at a given rate, and all Y receivers are interested in receiving the information while sharing common resources. Thus, XOR network coding can help to achieve better throughput and better sharing of resources.
Throughput gain [8] can be defined as

G = T_NC / T_noNC

where T_NC and T_noNC are the throughputs of the network with and without network coding, respectively.

Fig. 4. Throughput variations with and without network coding [8]



B. Mean Latency and Standard Deviation


One of the important concerns in performance evaluation of XOR networks is the mean packet latency that coding and decoding introduce. The amount of data each node must be aware of and process increases with buffer size, producing a high decoding delay; on the other hand, if the buffer size is not sufficiently large, the node will lack the information required to perform coding. Borislava et al. [9] presented the packet latency and standard deviation for XOR coding applied over 36 nodes. The results are shown in Fig. 5 for both butterfly and grid networks (described in section 2).

Fig. 5. Mean latency and standard deviation for XOR coding (butterfly and grid networks) and no coding [9]
The mean latency and standard deviation also depend on channel parameters such as data rate and delay; for a fixed channel, the delay added by XOR coding decreases as the transmission rate increases [9], as shown in Fig. 6.

Fig. 6. Mean latency and deviation for XOR coding over different data rates [9]
III. MAJOR ISSUES

In this paper we focus on the challenges of XOR network coding in the field of wireless networks.
A. Attacks


The XOR network coding technique poses various new challenges for the security system; for example, applications developed using this technique are vulnerable to pollution attacks, in which forged senders can pollute data packets and also generate forged packets. These attacks not only prevent the sinks from recovering the source messages but also drain the energy of the forwarders. An even more severe problem is the propagation of pollution through the network; therefore, polluted messages should be filtered out as soon as possible.
The threat of such attacks is particularly serious in resource-constrained networks such as wireless sensor networks, in which sensor nodes are equipped with limited computation capacity, restricted memory space, limited power resources, etc.
Another challenge appears when sensor nodes are deployed in hostile environments, as in military applications. In a hostile environment, sensor nodes are vulnerable to being captured and compromised by an adversary, who can then inject false data into the original stream, resulting in false data injection attacks.
B. Approaches
Traditional approaches like RSA or MD5, based on hash functions, are not suitable for network coding because the encoding process carried out by each forwarder can destroy the source's signatures.
Christos and Pablo introduced a cooperative security approach [10], in which users not only cooperate to distribute contents but also inform each other about malicious blocks. Zhen et al. presented a signature-based scheme [11] to detect and filter pollution attacks for applications using network coding. Ho et al. [12] proposed a simple polynomial hash function to detect pollution attacks. Jaggi et al. [13] presented a polynomial-time network coding algorithm for multicast networks that is resilient against pollution attacks.
These algorithms can be divided into two groups [14]: 1) filtering the polluted messages at forwarders and sinks, such as [10], [11]; 2) filtering the polluted messages at sinks only, such as [12], [13].
Several existing algorithms for filtering false data reports either cannot deal with dynamic topology or have limited filtering capacity. Zhen and Yong [15] used a Hill Climbing approach to filter false data injection attacks in wireless sensor networks.
C. Approach for XOR network coding
The schemes described in part B can protect only conventional network coding; none of them is able to secure XOR networks. Zhen et al. [14] presented an efficient scheme to secure XOR network coding by using probabilistic key pre-distribution and message authentication codes (MACs).
IV. OPEN CHALLENGES

Much work has been done on designing various coding algorithms (including for XOR network coding), but most of them have not been practically implemented, so the real gain in net throughput of these algorithms still needs to be investigated.
In flooding mechanisms, draining a particular packet out of the network after it has been flooded is a challenging issue. It is even more difficult in XOR networks, where the packet may (unintentionally) be part of various encoded packets.
Also, existing work assumes that sources will cooperate in the network without being compromised; [14], for instance, assumed that forwarders can be compromised but sources can never be compromised. In future, work should be done to detect compromised senders and protect the network against them.
V. CONCLUSION

In this paper, we have presented the basic idea of network coding while focusing on XOR network coding. Applications of XOR networks are described in terms of throughput, mean latency and standard deviation. In addition, the major issues and open challenges are described.

REFERENCES:
[1] R. Ahlswede, N. Cai, S. R. Li, and R. W. Yeung, "Network information flow," IEEE Trans. Inf. Theory, vol. 46, no. 4, pp. 1204-1216, Jul. 2000.

[2] Zunnun Narmawala and Sanjay Srivastava, "Survey of Applications of Network Coding in Wired and Wireless Networks."
[3] S. Li, R. Yeung, and N. Cai, "Linear Network Coding," IEEE Transactions on Information Theory, vol. 49, no. 2, pp. 371-381, 2003.
[4] R. Koetter and M. Medard, "An algebraic approach to network coding," IEEE/ACM Transactions on Networking, vol. 11, no. 5, pp. 782-795, 2003.
[5] T. Ho, M. Medard, R. Koetter, D. Karger, M. Effros, J. Shi, and B. Leong, "A random linear network coding approach to multicast," IEEE Transactions on Information Theory, vol. 52, no. 10, pp. 4413-4430, 2006.
[7] Majid Ghaderi, Don Towsley and Jim Kurose, "Reliability Gain of Network Coding in Lossy Wireless Networks," in IEEE INFOCOM, 2008.
[8] Ihsan A. Qazi and Pratik Gandhi, "Performance Evaluation of Wireless Network Coding under Practical Settings," University of Pittsburgh, PA 15260.
[9] Borislava Gajic, Janne Riihijarvi and Petri Mahonen, "Performance Evaluation of Network Coding: Effects of Topology and Network Traffic for Linear and XOR Coding," Journal of Communications, Vol. 4, No. 11, Dec. 2009.
[10] Christos Gkantsidis and Pablo Rodriguez, "Cooperative Security for Network Coding File Distribution," in IEEE INFOCOM, 2006.
[11] Zhen Yu, Yawen Wei, Bhuvaneswari Ramkumar and Yong Guan, "An Efficient Signature-Based Scheme for Securing Network Coding against Pollution Attacks," in IEEE INFOCOM, 2008.
[12] T. Ho, B. Leong, R. Koetter, M. Medard, M. Effros and D. Karger, "Byzantine Modification Detection in Multicast Networks Using Randomized Network Coding," in ISIT, 2004.
[13] S. Jaggi, M. Langberg, S. Katti, T. Ho, D. Katabi and M. Medard, "Resilient Network Coding in the Presence of Byzantine Adversaries," in IEEE INFOCOM, 2007.
[14] Zhen Yu, Yawen Wei, Bhuvaneswari Ramkumar and Yong Guan, "An Efficient Scheme for Securing XOR Network Coding against Pollution Attacks," in IEEE INFOCOM, 2009.
[15] Zhen Yu and Yong Guan, "A Dynamic En-Route Scheme for Filtering False Data Injection in Wireless Sensor Networks," in IEEE, 2006.


A novel POCl3 catalysed expeditious synthesis and antimicrobial activities of 5-substituted-2-arylbenzalamino-1,3,4-thiadiazoles
Shalini Jaiswal*, Shailja Sigh
Dr. Shalini Jaiswal
Department of Chemistry, AMITY UNIVERSITY
Greater Noida Campus, U.P. 201308
Email: shaliniajaiswal@gmail.com and sjaiswal@gn.amity.edu
Telephone: 09953217151
ABSTRACT: There is a growing need for more environmentally acceptable processes in the chemical industry. The fields of combinatorial and automated medicinal chemistry have emerged to meet the increasing requirement of new compounds for drug discovery. Microwave-assisted organic synthesis is an enabling technology for accelerating drug discovery and development processes.
In the family of heterocyclic compounds, nitrogen-containing heterocycles are an important class of compounds in medicinal chemistry; they have also contributed to society from biological and industrial points of view and help us to understand life processes.
Thiosemicarbazide belongs to the thiourea group, whose biological activity is due to the presence of an aldehyde or ketone moiety. Thiosemicarbazide derivatives exhibit a great variety of biological activities, such as antitumor, antifungal, antibacterial, and antiviral.
Here we developed a novel, solvent-free, microwave-assisted synthesis of the hitherto unknown 5-substituted-2-arylbenzalamino-1,3,4-thiadiazoles 4a-h in excellent yield.
KEY WORDS
5-substituted-2-arylbenzalamino-1,3,4-thiadiazole, green chemistry, microwave irradiation, antibacterial activity, gram-positive and gram-negative bacteria.

INTRODUCTION
Conventional methods of organic synthesis are too slow to satisfy the demand for the generation of new compounds for drug discovery. It is widely acknowledged that there is a growing need for more environmentally acceptable processes in the chemical industry. The fields of combinatorial and automated medicinal chemistry have emerged to meet the increasing requirement of new compounds for drug discovery.
The microwave region of the electromagnetic spectrum lies between infrared and radio frequencies [1, 2]. Microwave-assisted organic synthesis is an enabling technology for accelerating drug discovery and development processes. Microwave instruments are used principally in three areas of drug research: the screening of organic drugs, peptide synthesis, and DNA amplification. Microwave heating offers the following advantages over conventional heating:

• Uniform heating occurs throughout the material
• Process speed is increased
• High efficiency of heating
• Reduction in unwanted side reactions
• Purity of the final product
• Improved reproducibility
• Environmental heat loss can be avoided
• Reduced wastage in heating the reaction vessel
• Low operating cost



Resistance to antimicrobial agents has become an increasingly important and pressing global problem. Heterocyclic compounds are abundant in nature and are of great significance to life because their structural subunits exist in many natural products such as vitamins, hormones, and antibiotics [1, 2]. In the family of heterocyclic compounds, nitrogen-containing heterocycles are an important class of compounds in medicinal chemistry; they have also contributed to society from biological and industrial points of view and help us to understand life processes [3].
Thiosemicarbazide belongs to the thiourea group, whose biological activity is due to the presence of an aldehyde or ketone moiety. Thiosemicarbazide derivatives exhibit a great variety of biological activities [4, 5], such as antitumor, antifungal, antibacterial, and antiviral. Thiosemicarbazides are potent intermediates for the synthesis of pharmaceutical and bioactive materials and are thus used extensively in the field of medicinal chemistry.
Thiadiazole is a 5-membered ring system containing two nitrogen atoms and one sulphur atom. It occurs in four isomeric forms, viz. 1,2,3-thiadiazole, 1,2,5-thiadiazole, 1,2,4-thiadiazole and 1,3,4-thiadiazole. These different classes of the thiadiazole nucleus are known to possess various biological and pharmacological properties [6-9].

[Structures: (1) 1,2,3-thiadiazole; (2) 1,2,4-thiadiazole; (3) 1,2,5-thiadiazole; (4) 1,3,4-thiadiazole]

1,3,4-Thiadiazoles exhibit diverse biological activities, possibly due to the presence of the =N-C-S moiety [10]. 1,3,4-Thiadiazoles are very interesting compounds due to their important applications in many pharmaceutical, biological and analytical fields [11, 12]. The naturally occurring B6 vitamins pyridoxine, pyridoxal, pyridoxamine, and codecarboxylase also contain the thiadiazole nucleus.

A literature survey revealed that the 1,3,4-thiadiazole moiety has been widely used by medicinal chemists in the past to explore its biological activities. 1,3,4-Thiadiazoles are very interesting compounds due to their important applications in many pharmaceutical, biological and analytical fields [13, 14].
1,3,4-Thiadiazole derivatives have been of interest to medicinal chemists for many years because of their anticancer [15, 16], antitubercular [17], antibacterial [18], antifungal [19, 20], anticonvulsant, analgesic [21], antisecretory [22], antitumor [23] and antimicrobial [24] activities.
Solvent-free reactions, or dry-media techniques, under microwave irradiation are one of the main fields of our research. Encouraged by the above reports, and as part of our research programme for the development of eco-friendly synthetic protocols for biologically active compounds as well as in pursuit of our work on new solvent-free syntheses, we developed a novel, solvent-free, microwave-activated synthesis of the hitherto unknown 5-substituted-2-arylbenzalamino-1,3,4-thiadiazoles (Scheme 1). The reaction times, yields, and 1H NMR spectra are summarized in Table 1 and Table 2.

EXPERIMENTAL SECTION
All chemicals used in this study were purchased from Aldrich Chemicals and were used without further purification. Melting points were determined by the open glass capillary method and are uncorrected. A laboratory microwave oven (Model BP 310/50) operating at 2450 MHz with a power output of 600 W was used for all the experiments. The completion of reactions was monitored by TLC (Merck silica gel). IR spectra were recorded on a Shimadzu FTIR-420 spectrophotometer. 1H NMR and 13C NMR spectra were recorded

on a Bruker AVANCE DPX (400 MHz) FT spectrometer in CDCl3 using TMS as an internal reference (chemical shifts in δ, ppm). Mass spectra were recorded on a JEOL SX-303 (FAB) mass spectrometer at 70 eV. Elemental analyses were carried out using a Coleman automatic C, H, N analyser. The yields and melting points are given in Table 1.

Scheme 1. Synthesis of 5-substituted-2-arylbenzalamino-1,3,4-thiadiazoles 4a-h from (3-chloro-4-fluorophenoxy)acetic acid (1), thiosemicarbazide (2) and aromatic aldehydes Ar-CHO (3), using phosphorus oxychloride under microwave irradiation (3-5 min).

GENERAL METHODS AND MATERIALS


Microwave-assisted synthesis of 5-substituted-2-arylbenzalamino-1,3,4-thiadiazoles 4a-h:
(3-Chloro-4-fluorophenoxy)acetic acid 1 (0.010 mol), thiosemicarbazide 2 (0.012 mol), aromatic aldehyde 3 (0.02 mol) and a catalytic amount of POCl3 were mixed thoroughly in a beaker and the mixture was heated in a household microwave oven operating at medium power (600 W) for the specified period (3-5 min) given in Table 1.
The completion of the reaction was checked by TLC every 30 s; after completion, the reaction mixture was allowed to attain room temperature. The reaction mixture was then poured onto crushed ice and cooled to 10 °C. The solid that separated was filtered and treated with dilute NaOH to adjust the pH to 9-10. Finally, the resulting solid was washed with water and crystallized from DMF to obtain the crude product 4a-h.


Thermal synthesis of 5-substituted-2-arylbenzalamino-1,3,4-thiadiazoles 4a-h:
A mixture of (3-chloro-4-fluorophenoxy)acetic acid 1 (0.010 mol), thiosemicarbazide 2 (0.012 mol), and aromatic aldehyde 3 (0.02 mol) in 30 ml of ethanol (95%) was refluxed on a water bath at 90 °C for 4-5 hours. The completion of the reaction was checked by TLC every 1.5 hours; after completion, the reaction mixture was allowed to attain room temperature. The reaction mixture was then poured onto crushed ice and cooled to 10 °C. The solid that separated was filtered and treated with dilute NaOH to adjust the pH to 9-10. Finally, the resulting solid was washed with water and crystallized from DMF to obtain the crude product 4a-h.

RESULTS AND DISCUSSION


The compounds synthesized in this work were obtained in good yield. Their identification and characterization were carried out on the basis of melting point, TLC, 1H NMR and mass spectroscopy. The spectral and elemental analyses of the newly synthesized compounds are detailed in Table 2, which confirms their structures.
The 5-substituted-2-arylbenzalamino-1,3,4-thiadiazole derivatives were assayed for antimicrobial activity against selected species of gram-positive and gram-negative bacteria. The antibacterial activity data reveal that compounds 4c and 4d exhibited good antibacterial activity against the gram-positive organisms (S. aureus and B. cereus) compared to the standard drug, while all compounds exhibited lower activity against the gram-negative bacteria compared to the standard drug Streptomycin.
Table 1. Yields and melting points of compounds 4a-h

Compound | Ar               | Yield (%), MWI | Yield (%), thermal | M.P. (°C)
4a       | -C6H5            | 86             | 35                 | 185
4b       | 2-HO-C6H4        | 80             | 37                 | 225
4c       | 2-NO2-C6H4       | 85             | 35                 | 215
4d       | 2-Cl-C6H4        | 88             | 36                 | 180
4e       | 3-Cl,4-Cl-C6H3   | 90             | 42                 | 210
4f       | 3-MeO,4-HO-C6H3  | 85             | 35                 | 220
4g       | 3-NH2-C6H4       | 90             | 36                 | 270
4h       | 3-MeO,4-MeO-C6H3 | 85             | 45                 | 188


Table 2. Physical, 1H NMR and mass spectral data of compounds 4a-h
(1H NMR recorded in CDCl3, δ in ppm; elemental analysis given as found (calculated); MS given as EI, m/z (M+))

4a: C16H11ClFN3OS. 1H NMR: 7.3-7.5 (m, 5H, ArH), 8.1 (s, 1H, =CH), 5.20 (s, 2H, -CH2), 6.84-7.18 (m, 3H, ArH). Anal.: C 55.25 (54.95); H 3.19 (3.24); N 12.08 (12.12). MS: 347.
4b: C16H11ClFN3O2S. 1H NMR: 6.8-7.5 (m, 4H, ArH), 8.1 (s, 1H, =CH), 5.20 (s, 2H, -CH2), 6.86-7.18 (m, 3H, ArH), 5.0 (s, 1H, -OH). Anal.: C 52.82 (52.85); H 3.05 (3.10); N 11.55 (11.60). MS: 363.
4c: C16H10ClFN4O3S. 1H NMR: 7.6-8.2 (m, 4H, ArH), 8.1 (s, 1H, =CH), 5.20 (s, 2H, -CH2), 6.89-7.16 (m, 3H, ArH). Anal.: C 48.92 (48.95); H 2.57 (2.60); N 14.26 (14.20). MS: 392.
4d: C16H10Cl2FN3OS. 1H NMR: 7.2-7.6 (m, 4H, ArH), 8.1 (s, 1H, =CH), 5.20 (s, 2H, -CH2), 6.89-7.16 (m, 3H, ArH). Anal.: C 50.28 (50.25); H 2.64 (2.50); N 10.99 (10.85). MS: 380.
4e: C16H9Cl3FN3OS. 1H NMR: 7.2-7.6 (m, 4H, ArH), 8.1 (s, 1H, =CH), 5.20 (s, 2H, -CH2), 6.89-7.16 (m, 3H, ArH). Anal.: C 46.12 (45.95); H 2.18 (2.10); N 10.08 (10.15). MS: 414.
4f: C17H13ClFN3O3S. 1H NMR: 6.7-7.2 (m, 3H, ArH), 8.1 (s, 1H, =CH), 5.20 (s, 2H, -CH2), 6.88-7.16 (m, 3H, ArH), 3.73 (s, 3H, -OCH3), 5.0 (s, 1H, -OH). Anal.: C 51.85 (52.10); H 3.33 (3.45); N 10.67 (10.58). MS: 393.
4g: C16H12ClFN4OS. 1H NMR: 6.5-7.1 (m, 4H, ArH), 8.1 (s, 1H, =CH), 5.20 (s, 2H, -CH2), 6.84-7.18 (m, 3H, ArH), 5.0 (s, 1H, -NH2). Anal.: C 52.97 (52.86); H 3.33 (3.26); N 15.44 (15.30). MS: 362.
4h: C18H15ClFN3O3S. 1H NMR: 6.7-7.0 (m, 3H, ArH), 8.1 (s, 1H, =CH), 5.20 (s, 2H, -CH2), 6.84-7.18 (m, 3H, ArH), 3.73 (s, 6H, -OCH3). Anal.: C 53.01 (52.96); H 3.71 (3.80); N 10.30 (10.28). MS: 407.

ANTIMICROBIAL ACTIVITY
The compounds were screened for antibacterial activity against gram-positive bacteria (Bacillus cereus and Staphylococcus aureus) and gram-negative bacteria (Escherichia coli and Pseudomonas aeruginosa) by measuring the zone of inhibition in mm. Streptomycin (50 μg/ml) was used as the standard drug for antibacterial activity.

All the compounds exhibited significant to moderate activity against gram-positive and gram-negative bacteria compared to the standard drug Streptomycin. Compounds 4c and 4d exhibited higher activity against the gram-positive organisms (S. aureus and B. cereus); this higher antibacterial activity of 4c and 4d is attributed to the presence of electron-withdrawing groups (chloro and nitro) at the ortho position. Compounds 4a and 4e exhibited moderate activity against the gram-positive organisms (S. aureus and B. cereus), while all compounds exhibited the lowest activity against the gram-negative bacteria compared to the standard drug Streptomycin.
Table 3. Antibacterial data of 5-substituted-2-arylbenzalamino-1,3,4-thiadiazole derivatives, MIC (μg/ml)

Compound     | S. aureus (Gram +) | B. cereus (Gram +) | E. coli (Gram -) | P. aeruginosa (Gram -)
4a           | 8                  | 7                  | 5                | 4
4b           | 7                  | 6                  | 7                | 6
4c           | 9                  | 8                  | 7                | 7
4d           | 9                  | 8                  | 6                | 7
4e           | 8                  | 7                  | 7                | 6
4f           | 7                  | 6                  | 7                | 6
4g           | 7                  | 6                  | 6                | 6
4h           | 7                  | 6                  | 7                | 7
Streptomycin | 10                 | 9                  | 10               | 12

CONCLUSION
In recent years, microwave-assisted organic reactions have emerged as a new tool in organic synthesis, and this is considered an important approach towards green chemistry. The growth of green chemistry holds significant potential for a reduction of by-product and waste production and a lowering of energy costs. As part of our research programme for the development of eco-friendly synthetic protocols for biologically active compounds, as well as in pursuit of our work on new solvent-free cyclisation processes, we developed a novel, solvent-free, microwave-assisted synthesis of hitherto unknown 5-substituted-2-arylbenzalamino-1,3,4-thiadiazole derivatives, which possess antimicrobial activities such as antibacterial, antiviral and antifungal activity. All the compounds exhibited significant to moderate activity against gram-positive and gram-negative bacteria compared to the standard drug Streptomycin. Compounds 4c and 4d exhibited higher activity against the gram-positive organisms (S. aureus and B. cereus), compounds 4a and 4e exhibited moderate activity against the gram-positive organisms, and all compounds exhibited the lowest activity against the gram-negative bacteria compared to Streptomycin. Possible improvements in activity can be further achieved by slight modifications of the substituents on the basic thiadiazole nucleus. Although several methods are available for the synthesis of thiadiazoles, all of them have some disadvantages, such as long reaction periods, low yields and the use of toxic organic solvents which pollute our environment.

ACKNOWLEDGEMENT
The authors would like to express their gratitude and thanks to the Department of Chemistry, DIT School of Engineering (part of the AMITY Education Group), Greater Noida, for the facilities necessary to carry out this research work, and to RSIC, CDRI Lucknow for the spectral analyses.

REFERENCES:
1. Varma RS, Aqueous N-heterocyclization of primary amines and hydrazines with dihalides: microwave-assisted syntheses of N-azacycloalkanes, isoindole, pyrazole, pyrazolidine, and phthalazine derivatives, Journal of Organic Chemistry, vol. 71(1), 135-141, (2006).

2. Kumar YD and Varma RS, Revisiting nucleophilic substitution reactions: microwave-assisted synthesis of azides, thiocyanates, and sulfones in an aqueous medium, Journal of Organic Chemistry, vol. 71(17), 6697-6700, (2006).
3. Valverde MG and Torroba T, Special issue: sulphur-nitrogen heterocycles, Molecules, 10(2), 318-320, (2008).
4. Shipman C, Smith SH, Drach JC, and Klayman DL, Thiosemicarbazones of 2-acetylpyridine, 2-acetylquinoline, 1-acetylisoquinoline, and related compounds as inhibitors of herpes simplex virus in vitro and in a cutaneous herpes guinea pig model, Antiviral Research, 6(4), 197-222, (1986).
5. Quiroga AG, Perez JM, Lopez-Solera I, et al., Novel tetranuclear orthometalated complexes of Pd(II) and Pt(II) derived from p-isopropylbenzaldehyde thiosemicarbazone with cytotoxic activity in cis-DDP resistant tumour cell lines. Interaction of these complexes with DNA, Journal of Medicinal Chemistry, 41(9), 1399-1408, (1998).
6. Cleric F, Pocar D, Synthesis of 2-amino-5-sulfanyl-1,3,4-thiadiazole derivatives and evaluation of their antidepressant and anxiolytic activity, J. Med. Chem., 44, 931-936, (2001).
7. Rajesh S, Jitendra S, and Subash Chandra C, 2-Amino-5-sulfanyl-1,3,4-thiadiazole: a new series of selective cyclooxygenase-2 inhibitors, Acta Pharma, 58, 317-326, (2008).
8. Smith N, Garg SP, Pramilla S, Synthesis of some pyrazoles, pyrazolones, and oxadiazoles bearing 2-arylamino-5-mercapto-1,3,4-thiadiazole nuclei as possible antimicrobial agents, Indian Journal of Heterocyclic Chemistry, 12, 09-12, (2002).
9. Jitendra Kumar G, Rakesh Kumar Y, Rupesh D, Pramod Kumar S, Recent advancements in the synthesis and pharmacological evaluation of substituted 1,3,4-thiadiazole derivatives, International Journal of PharmTech Research, 2(2), 1493-1507, (2010).
10. Oruc E, Rollas S, Kandemirli F, Shvets N and Dimoglo A, 1,3,4-Thiadiazole derivatives: synthesis, structure elucidation and structure-antituberculosis activity relationship investigation, Journal of Medicinal Chemistry, 47, 6760-6767, (2004).
11. Hadizadeh F and Vosoogh R, Synthesis of α-[5-(5-amino-1,3,4-thiadiazol-2-yl)-2-imidazolylthio]-acetic acids, Journal of Heterocyclic Chemistry, 45, 1-3, (2008).
12. Lu SM and Chen RY, Facile and efficient synthesis of aminophosphate derivatives of 1,3,4-oxadiazole and 1,3,4-thiadiazole, Organic Preparations and Procedures International, 32(3), 302-306, (2000).
13. Katritzky A, Rees CW and Potts KT (eds.), Comprehensive Heterocyclic Chemistry, Oxford-VCH, vol. 6, (1982).
14. Ahmed M, Jahan J and Banco S, A simple spectrophotometric method for the determination of copper in industrial, environmental, biological and soil samples using 2,5-dimercapto-1,3,4-thiadiazole, J. Anal. Sci., 18, 805-810, (2002).
15. Terzioglu N and Gursoy A, Synthesis and anticancer evaluation of some new hydrazone derivatives of 2,6-dimethylimidazo[2,1-b][1,3,4]thiadiazole-5-carbohydrazide, Eur. J. Med. Chem., 38, 781-786, (2003).
16. Holla BS, Poorjary KN, Rao BS, Shivananda MK, New bis-aminomercaptotriazoles and bis-triazolothiadiazoles as possible anticancer agents, Eur. J. Med. Chem., 37, 511-517, (2002).
17. Gadad AK, Noolvi MN and Karpoormath RV, Synthesis and anti-tubercular activity of a series of 2-sulfonamide/trifluoromethyl-6-substituted imidazo[2,1-b][1,3,4]thiadiazole derivatives, Bioorg. Med. Chem., 12, 5651-5659, (2004).

18. Gadad AK, Mahajanshetti CS, Nimbalkar S and Raichurkar A, Synthesis and antibacterial activity of some 5-guanylhydrazone/thiocyanato-6-arylimidazo[2,1-b]-1,3,4-thiadiazole-2-sulfonamide derivatives, Eur. J. Med. Chem., 35, 853-857, (2000).
19. Andotra CS, Langer TC and Kotha A, Synthesis and antifungal activity of some substituted 1,3,4-thiadiazolo[3,2-a]-s-triazin-5-phenyl-7-thiones and imidazo[2,1-b]-1,3,4-thiadiazol-5-ones, J. Indian Chem. Soc., 74, 125-127, (1997).
20. Liu X, Shi Y, Ma Y, Zhang C, Dong W, Pan L, Wang B, Li Z, Synthesis, antifungal activities and 3D-QSAR study of N-(5-substituted-1,3,4-thiadiazol-2-yl)cyclopropanecarboxamides, Eur. J. Med. Chem., 44, 2782-2786, (2009).
21. Khazi IAM, Mahajanshetti CS, Gadad AK, Tarnalli AD and Sultanpur CM, Synthesis and anticonvulsant and analgesic activities of some 6-substituted-imidazo[2,1-b][1,3,4]thiadiazole-2-sulfonamides and their 5-bromo derivatives, Arzneim.-Forsch. Drug Res., 46, 949-952, (1996).
22. Andreani A, Leonia A, Locatelli A, Morigi R, Rambaldi M, Simon WA and Senn-Bilfinger J, Synthesis and antisecretory activity of 6-substituted 5-cyanomethyl imidazo[2,1-b]thiazoles and 2,6-dimethyl-5-hydroxymethylimidazo[2,1-b][1,3,4]thiadiazole, Arzneim.-Forsch. Drug Res., 50, 550-553, (2000).
23. Supuran CT, Briganti F, Tilli S, Chegwidden WR, Scozzafava A, Carbonic anhydrase inhibitors: sulphonamides as antitumor agents, Bioorg. Med. Chem., 9, 703-714, (2001).
24. Deminbas N, Karaoglu SA, Demirbas A, Sancak K, Synthesis and antimicrobial activities of some new 1-(5-phenylamino-[1,3,4]thiadiazol-2-yl)methyl-5-oxo-[1,2,4]triazole and 1-(4-phenyl-5-thioxo-[1,2,4]triazol-3-yl)methyl-5-oxo-[1,2,4]triazole derivatives, Eur. J. Med. Chem., 39, 793-804, (2004).


Node Failure Recovery in Wireless Sensor and Actor Networks (WSAN) using
ALeDiR Algorithm
G Siva Kumar, Dr. I. SanthiPrabha
JNTUK, gsk797@gmail.com, +91-9494664964

Abstract: Wireless sensor and actor networks (WSANs) refer to a group of sensors and actors linked by a wireless medium to perform distributed sensing and actuation tasks. In such a network, sensors gather information about the physical world, whereas actors take decisions and perform appropriate actions upon the environment, which allows remote and automated interaction with the environment. Since actors have to coordinate their motion in order to remain reachable from every node, a strongly connected network is needed at all times. However, the failure of an actor might cause the network to partition into disjoint blocks and would therefore violate this connectivity requirement. In this paper, a new algorithm is proposed: a localized and distributed algorithm that leverages existing route discovery activities within the network and imposes no extra pre-failure communication overhead.
Keywords: wireless sensor and actor networks (WSAN), multiple node failure, disjoint blocks, overhead management, pre-failure, network recovery, actor movement.
INTRODUCTION

In recent years, wireless sensor and actor networks have been gaining growing interest due to their suitability for applications in remote and harsh areas where human intervention is risky. Examples of these applications include disaster management, search and rescue, fire monitoring, field reconnaissance, space exploration, coast and border protection, etc. WSANs are comprised of numerous miniaturized stationary sensors and fewer mobile actors. The sensors act as data acquisition devices for the powerful actor nodes, which analyse the sensor readings and deliver an appropriate response to achieve the predefined application mission.
For example, sensors could detect a high temperature and trigger a response from an actor that activates an air conditioner. Robots and unmanned vehicles are examples of actors in practice. Actors work autonomously and collaboratively to achieve the application mission. For cooperative actor operation, a strongly connected inter-actor configuration is required at all times. Failure of one or more nodes could partition the inter-actor network into disjoint blocks. Consequently, inter-actor interaction will fail and the network would not be able to deliver a timely response to a significant event. Therefore, recovery from an actor failure is of the utmost importance in this scenario.
The remote setups in which WSANs usually serve make the deployment of additional resources to replace failed actors impractical, and repositioning of nodes becomes the best recovery option. Distributed recovery will be challenging since nodes in separate partitions will not be able to reach one another to coordinate the recovery process. Therefore, each node has to maintain partial knowledge of the network state. To avoid excessive state-update overhead and to expedite the connectivity restoration process, previous work relies on maintaining one-hop or two-hop neighbour lists and predetermines some criteria for a node's involvement in the recovery.
In contrast to previous work, this paper considers connectivity restoration subject to path-length constraints. In some applications timely coordination among the actors is needed, and lengthening the shortest path between two actors would not be acceptable.
Most of the existing approaches in the literature are strictly reactive, with the recovery process initiated once the failure of a node F is detected. The main idea is to replace the failed node F with one of its neighbours, or to move those neighbours inward to autonomously repair the severed topology in the neighbourhood of F.


Fig:1 An Example wireless sensor and actor network setup

SYSTEM MODEL AND PROBLEM STATEMENT


There are two types of nodes in WSANs: 1) sensors and 2) actors. Actors have more onboard energy than sensors and are richer in computation and communication resources, whereas sensors are highly constrained in energy and are inexpensive. The transmission range of actors is finite. In this paper, actor and node will be used interchangeably.
Based on the impact of an actor's failure on the network, the nodes are classified into two types: leaf nodes and critical nodes. A leaf node is one whose removal does not have much effect on the network; such nodes are also regarded as children nodes. A critical node is one whose failure splits the network into disjoint blocks; such a node is also called a cut vertex. A variety of schemes have recently been proposed for restoring network connectivity in partitioned WSANs. All of these recovery methodologies have targeted re-establishing the cut links without considering the impact on the length of the pre-failure data paths. Some schemes recover the network by repositioning the existing nodes, whereas others carefully place additional relay nodes. On the other hand, some work on node relocation focuses on metrics other than connectivity, e.g., coverage, network longevity and safety, or on self-spreading the nodes after non-uniform deployment.
Existing recovery schemes either impose high node relocation overhead or extend some of the inter-actor communication paths.

RELATED WORK
A number of schemes have recently been proposed for restoring network connectivity in WSANs [1]. All of these schemes have concentrated on re-establishing cut links without considering the impact on the length of the pre-failure information paths. Some schemes recover the network by repositioning the existing nodes, whereas others carefully place extra relay nodes. Like the proposed DCR algorithm, DARA [6] strives to restore connectivity lost as a result of the failure of a cut vertex. However, DARA needs additional network state in order to ensure convergence. Meanwhile, PADRA [8] determines a connected dominating set (CDS) of the whole network in order to discover cut vertices. Although a distributed algorithm is used, the solution still requires 2-hop neighbour information, which increases the messaging overhead.
Another work proposed in [6] uses 2-hop information to discover cut vertices. The DCR algorithm relies only on 1-hop information and reduces the communication overhead. Though RIM [13], C3R [7] and VCR [15] use 1-hop neighbour information to restore connectivity, they are strictly reactive and do not differentiate between critical and non-critical (children) nodes, whereas DCR is a hybrid algorithm that proactively identifies critical nodes and designates appropriate backups for them. The existing work on simultaneous node failure recovery proposed in [10] uses a mutual exclusion mechanism [14] in order to handle multiple simultaneous failures in a localized manner.
The approach in this paper differs from MPADRA in multiple aspects: it only needs 1-hop information, and every critical node has just one backup to handle its failure.

PROPOSED SYSTEM
In this project, a new approach for network recovery is proposed based on an extra actor (Aggrandized Least Disruptive topology Repair, ALeDiR). Here the extra actor node acts as a centralized node, which controls the node movements. The main task of this methodology is to overcome multi-node failures. The performance of ALeDiR is simulated using the NS2 tool.


IMPLEMENTATION
1. Failure Detection
Actors periodically send heartbeat messages to their neighbours to confirm that they are functional, and they also report changes to their one-hop neighbours. Missing heartbeat messages are used to detect the failure of an actor.
Once a failure is detected, it is checked whether the failed node is a critical node or not. If it is a children (leaf) node, there will be not much effect on the network. If it is a critical node, disjoint blocks will result within the network.
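The detection step can be illustrated with a short sketch. This is a minimal, hypothetical Python illustration rather than the authors' NS2 implementation; the heartbeat period, the miss limit and the neighbour table are assumptions made only for the example.

import time

HEARTBEAT_PERIOD = 1.0   # seconds between heartbeats (assumed)
MISS_LIMIT = 3           # missed heartbeats before a neighbour is declared failed (assumed)

class NeighbourTable:
    """Tracks the last heartbeat time of every one-hop neighbour."""
    def __init__(self, neighbour_ids, critical_ids=()):
        now = time.time()
        self.last_seen = {n: now for n in neighbour_ids}
        self.critical = set(critical_ids)   # neighbours known (pre-failure) to be cut vertices

    def on_heartbeat(self, node_id):
        # Called whenever a heartbeat message from node_id is received.
        self.last_seen[node_id] = time.time()

    def detect_failures(self):
        # A neighbour is considered failed after MISS_LIMIT missed heartbeats.
        now = time.time()
        failed = [n for n, t in self.last_seen.items()
                  if now - t > MISS_LIMIT * HEARTBEAT_PERIOD]
        # Only a failed critical node (cut vertex) triggers the recovery procedure.
        return [(n, n in self.critical) for n in failed]

In use, each actor would call on_heartbeat() when a message arrives and poll detect_failures() periodically; only failures flagged as critical start the block identification step below.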

2. Smallest block identification


In this step the smallest disjoint block has to be identified, because moving the smallest block reduces the recovery overhead in the network.
- The smallest block is the one with the smallest number of nodes.
- It is found by computing the set of nodes reachable from each direct neighbour of the failed node and then selecting the set with the fewest nodes (a minimal sketch of this computation is given below).
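The following sketch shows one way such a computation could be carried out, assuming the surviving topology is available as an adjacency list; the graph representation, the BFS helper and the example network are illustrative assumptions, not the authors' code.

from collections import deque

def reachable_set(adjacency, start, failed):
    """Nodes reachable from `start` when the failed node is excluded (BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adjacency.get(node, []):
            if nxt != failed and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def smallest_block(adjacency, failed):
    """Return the neighbour of the failed node whose block has the fewest nodes."""
    blocks = {}
    for neighbour in adjacency.get(failed, []):
        blocks[neighbour] = reachable_set(adjacency, neighbour, failed)
    # The smallest block is the one with the smallest number of nodes.
    return min(blocks.items(), key=lambda kv: len(kv[1]))

# Example: node 'F' fails and splits the network into {A, B} and {C, D, E};
# the smallest block is the one led by neighbour 'A'.
adj = {'F': ['A', 'C'], 'A': ['F', 'B'], 'B': ['A'],
       'C': ['F', 'D'], 'D': ['C', 'E'], 'E': ['D']}
print(smallest_block(adj, 'F'))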

3. Substitution of faulty node and children movement


In this step, the faulty node is substituted by the extra actor in order to restore the network quickly. When the node failure is detected through the heartbeat messages, the extra actor node moves to that particular location and takes care of the restoration, i.e., it controls the actor movements. It determines which nodes are affected by the failure and informs those nodes of the positions to which they have to move. After restoration, the extra actor goes back to its original position.

Fig: 2 Implementation Flow Chart



RESULTS
Here the system performance analysis is done based on the number of nodes involved in restoration, the PDF (packet delivery fraction), the end-to-end delay and the overhead, which are explained below.
Fig. 3 shows the comparison of the end-to-end delay in the network for the existing and proposed methods. The X-axis represents the protocol and the Y-axis represents the delay in seconds. The existing LeDiR method has a delay of 1.5 s, whereas the proposed ALeDiR method has a delay of only 0.2 s.

Fig. 3: Delay comparison between LeDiR & ALeDiR


Fig. 4 represents the number of nodes involved in the restoration of the network. Here the X-axis represents the protocol and the Y-axis represents the number of nodes. It can clearly be observed that in LeDiR six nodes are involved in restoration, which creates more disturbance in the network, whereas in ALeDiR only three nodes are involved in the restoration.

Fig. 4: Number of nodes moved in LeDiR & ALeDiR


Fig. 5: PDF comparison between LeDiR & ALeDiR


Fig. 5 represents the packet delivery fraction of the network. Here the X-axis represents the protocol and the Y-axis represents the percentage of packets delivered. In the multi-node failure case, LeDiR shows non-uniform packet delivery, but the proposed ALeDiR method shows almost uniform packet delivery.
Fig. 6 represents the overhead of the network in LeDiR and ALeDiR. Here the X-axis represents the protocol and the Y-axis represents the overhead packet size. Compared to LeDiR, ALeDiR incurs less overhead.

Fig. 6: Overhead comparison between LeDiR & ALeDiR

CONCLUSION
Inter-actor network connectivity is essential in most WSAN applications in order to perform collaborative actions in an efficient manner. Therefore, maintaining strong inter-actor connectivity throughout the network operation is crucial. This paper presents a local, distributed and movement-efficient protocol which can handle the failure of any node in a connected WSAN. Simulation results confirmed that the new approach performs very close to the optimal solution in terms of delay, PDF, overhead and the number of nodes involved, while keeping the approach local and thus minimizing the message complexity. In addition, this approach outperformed LeDiR, which requires 2-hop knowledge at each node, in terms of travel distance.
In the future, the travel distance performance can be improved by adopting a distributed dynamic programming approach when determining the closest dominatee.

REFERENCES:
[1] M. Younis and K. Akkaya, "Strategies and techniques for node placement in wireless sensor networks: A survey," J. Ad Hoc Netw., vol. 6, no. 4, pp. 621-655, Jun. 2008.
[2] A. Abbasi, M. Younis, and K. Akkaya, "Movement-assisted connectivity restoration in wireless sensor and actor networks," IEEE Trans. Parallel Distrib. Syst., vol. 20, no. 9, pp. 1366-1379, Sep. 2009.
[3] M. Younis, S. Lee, S. Gupta, and K. Fisher, "A localized self-healing algorithm for networks of moveable sensor nodes," in Proc. IEEE GLOBECOM, New Orleans, LA, Nov. 2008, pp. 1-5.
[4] Muhammad Imran, Mohamed Younis, Abas Md Said, and Halabi Hasbullah, "Localized motion-based connectivity restoration algorithms for wireless sensor and actor networks," Journal of Network and Computer Applications, vol. 35, pp. 844-856, 2012.
[5] K. Akkaya, F. Senel, A. Thimmapuram, and S. Uludag, "Distributed recovery from network partitioning in movable sensor/actor networks via controlled mobility," IEEE Trans. Comput., vol. 59, no. 2, pp. 258-271, Feb. 2010.
[6] Azadeh, Z., "A hybrid approach to actor-actor connectivity restoration in wireless sensor and actor networks," in Proceedings of the 8th IEEE International Conference on Networks (ICN 2009), Cancun, Mexico, March 2009.
[7] Tamboli, N. and Younis, M., "Coverage-aware connectivity restoration in mobile sensor networks," in Proceedings of the IEEE International Conference on Communications (ICC 2009), Dresden, Germany, June 2009.
[8] Ameer A. Abbasi, Mohamed F. Younis, and Uthman A. Baroudi, "Recovering from a node failure in wireless sensor-actor networks with minimal topology changes," IEEE Transactions on Vehicular Technology, vol. 62, no. 1, pp. 256-271, January 2013.
[9] Abbasi, A.A., Akkaya, K., and Younis, M., "A distributed connectivity restoration algorithm in wireless sensor and actor networks," in Proceedings of the 32nd IEEE Conference on Local Computer Networks (LCN 2007), Dublin, Ireland, October 2007.
[10] Akkaya, K., Thimmapuram, A., Senel, F., and Uludag, S., "Distributed recovery of actor failures in wireless sensor and actor networks," in Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC 2008), Las Vegas, NV, March 2008.
[11] K. Akkaya and M. Younis, "COLA: A coverage and latency aware actor placement for wireless sensor and actor networks," in Proc. IEEE VTC, Montreal, QC, Canada, Sep. 2006, pp. 1-5.
[12] F. Akyildiz and I. H. Kasimoglu, "Wireless sensor and actor networks: Research challenges," Ad Hoc Netw. J., vol. 2, no. 4, pp. 351-367, Oct. 2004.
[13] Younis, M., Lee, S., and Abbasi, A.A., "A localized algorithm for restoring inter-node connectivity in networks of moveable sensors," IEEE Transactions on Computers, 2010, 99(12).
[14] Akkaya, K., Senel, F., Thimmapuram, A., and Uludag, S., "Distributed recovery from network partitioning in movable sensor/actor networks via controlled mobility," IEEE Transactions on Computers, 2010, 59(2):258-271.
[15] Imran, M., Younis, M., Said, A.M., and Hasbullah, H., "Volunteer-instigated connectivity restoration algorithm for wireless sensor and actor networks," in Proceedings of the IEEE International Conference on Wireless Communications, Networking and Information Security (WCNIS 2010), Beijing, China, June 2010.


Cognitive Radio Networks: Defense against PUEA


M. Mohamed Faisal
Asst. Prof / CSE, Anna University, Chennai, faisalcsesrvec@gmail.com, 9688110199.

Abstract - Cognitive radio is a recent communication technology in which an unlicensed user can use a vacant channel in the spectrum band of a licensed user. The Primary User Emulation Attack (PUEA) is one of the major threats to spectrum sensing, as it reduces the spectrum access probability. This paper considers primary user emulation attacks in cognitive radio networks operating in an unlicensed digital TV band. A reliable AES-assisted DTV scheme is used, in which an AES-encrypted reference signal is generated at the TV transmitter and used as the sync bits of the DTV data frames. By allowing a shared secret between the transmitter and the receiver, the reference signal can be regenerated at the receiver and used to achieve accurate identification of the authorized primary users. When combined with the analysis of the autocorrelation of the received signal, the presence of the malicious user can be detected accurately whether or not the primary user is present. With the AES-assisted DTV scheme, the primary user, as well as the malicious user, can be detected with high accuracy and a low false alarm rate under PUEA.

Keywords - Cognitive Radio Network, Primary User Emulation Attack, DTV, AES-encrypted, DSA, CR networks, HDTV.

1. Introduction
In a cognitive radio network, a licensed user is called the primary user, whereas an unlicensed user is named the secondary user
(1). If secondary users sense that primary users do not transmit, they can then use the spare spectrum for communications; otherwise,
secondary users detect the presence of primary users and will refrain from transmitting. In this way, secondary users can make use of
precious spectrum without interfering with the transmission of primary users. The existence of cognitive networks is justified by the
fact that many spectra are not fully used by their dedicated users, and therefore allowing secondary users access will give the
opportunity to fully use the bandwidths and provide more spectrums to users. This is particularly true when part of the bandwidth is
reserved for applications that have not yet been developed. The time necessary for such applications to come on to market may be
long or may simply never occur and precious bandwidth may simply be wasted for a substantially long period.
Cognitive networks are radio networks where each band of frequency is occupied by two groups of users: the primary users that
form the primary network and the secondary users that form the secondary network. The primary users are supposed to have priority
over the secondary users: i.e. the performance of the primary network should be protected against the traffic of the secondary network.
By protection we mean that the performance of the primary network should be guaranteed independently of the demand from the
secondary network(2). Besides, the throughput and possession of the secondary network should vanish when the traffic load of the
primary network increases. In other words the secondary users are only allowed to take the blank periods left by the primary users.
The problem is that the protocol used by the primary users, in short the primary protocol, often comes out of a standardization process that ignores the secondary users. The implication is that the design of the secondary protocol is sometimes harder and more costly than the design of the primary protocol, because the secondary protocol must embed the features of the primary protocol in order to properly give priority to the primary users.
The cognitive radio technology enables the users to (1) determine which portions of the spectrum are available and detect the presence of licensed users when a user operates in a licensed band (spectrum sensing), (2) select the best available channel (spectrum management), (3) coordinate access to this channel with other users (spectrum sharing), and (4) vacate the channel when a licensed user is detected (spectrum mobility).


The main functions of Cognitive Radios are:


(i) Spectrum Sensing:
It refers to detecting the vacant spectrum and sharing it without harmful interference to other users. It is an important requirement of the Cognitive Radio network to sense spectrum holes; detecting primary users is the most efficient way to detect spectrum holes.
(ii) Spectrum Management:
It is the task of capturing the best available spectrum to meet user communication requirements. Cognitive radios should decide on the best spectrum band to meet the Quality of Service requirements over all available spectrum bands; therefore spectrum management functions are required for Cognitive radios. These management functions can be classified as:
Spectrum analysis
Spectrum decision
(iii) Spectrum Mobility: It is defined as the process by which a cognitive radio user changes its frequency of operation. Cognitive radio networks aim to use the spectrum in a dynamic manner by allowing the radio terminals to operate in the best available frequency band, maintaining seamless communication requirements during the transition to a better spectrum.
(iv) Spectrum Sharing: It refers to providing a fair spectrum scheduling method; spectrum sharing is one of the major challenges in open spectrum usage.

2. AES-assisted DTV scheme


In the AES-assisted DTV scheme, the primary user generates an AES-encrypted (pseudo-random) reference signal. It is used as the sync bits in the field sync segments, which remain unchanged, for channel estimation purposes. At the receiving end, the reference signal is regenerated for the detection of the primary user and the malicious user.

2.1 DTV Transmitter


The DTV transmitter obtains the reference signal as follows: first, a pseudo-random (PN) sequence is generated; then the sequence is encrypted with the AES algorithm. Note that the pseudo-random sequence is generated using a Linear Feedback Shift Register (LFSR) with a secure initialization vector (IV). Once the sequence is generated, it is used as the input to the AES encryption algorithm, and a 256-bit secret key is used for the AES encryption so that the maximum possible security is achieved (3). Denoting the PN sequence by x, the output of the AES algorithm is used as the reference signal, which can be expressed as:

s = E(k, x) ..........(1)

Here k is the key, and E(·, ·) denotes the AES encryption operation. The transmitter then places the reference signal s in the sync bits of the DTV data segments.
The secret key can be generated and distributed to the DTV transmitter and receiver by a trusted third party that is independent of the DTV and the CR user. The third party serves as the authentication centre for both the primary user and the CR user, and can carry out key distribution. To prevent impersonation attacks, the key should be time varying.
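A minimal sketch of this reference-signal generation is given below. It assumes the PyCryptodome package for the AES step; the LFSR taps, the IV, the all-zero 256-bit key and the sequence length are illustrative placeholders rather than the parameters of the actual DTV scheme.

from Crypto.Cipher import AES   # pip install pycryptodome (assumed)

def lfsr_bits(iv_bits, taps, nbits):
    """Generate a pseudo-random bit sequence from an LFSR seeded with the IV."""
    state = list(iv_bits)
    out = []
    for _ in range(nbits):
        out.append(state[-1])
        feedback = 0
        for t in taps:
            feedback ^= state[t]
        state = [feedback] + state[:-1]
    return out

def reference_signal(key, iv_bits, nbytes=1504):
    """s = E(k, x): AES-encrypt the PN sequence x to obtain the reference signal s."""
    bits = lfsr_bits(iv_bits, taps=(0, 2, 3, 5), nbits=nbytes * 8)
    x = bytes(int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8))
    cipher = AES.new(key, AES.MODE_ECB)   # 32-byte key -> AES-256
    return cipher.encrypt(x)              # placed in the sync bits of the data frame

key = bytes(32)                    # placeholder 256-bit key from the trusted third party
iv = [1, 0, 1, 1, 0, 0, 1, 0] * 2  # placeholder IV (16 bits)
s = reference_signal(key, iv)      # nbytes is an illustrative frame length, multiple of 16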


Fig. 1. DTV Transmitter (the IV seeds the LFSR; the PN sequence is AES-encrypted with the key to produce the reference signal s).

2.2 DTV Receiver


The receiver regenerates the encrypted reference signal using the secret key and IV that are shared between the transmitter and the receiver (3). A correlation detector is employed: for primary user detection, the receiver evaluates the cross-correlation between the received signal r and the regenerated reference signal s; for malicious user detection, the receiver further evaluates the auto-correlation of the received signal r.
The cross-correlation of two random variables x and y is defined as:

Rxy = <x, y> = E{xy} ..........(2)

Under PUEA, the received signal can be modeled as:

r = αs + βm + n ..........(3)

where s is the reference signal, m is the malicious signal, n is the noise, and α and β are binary indicators for the presence of the primary user and the malicious user, respectively. More specifically, α = 0 or 1 means the primary user is absent or present, respectively; and β = 0 or 1 means the malicious user is absent or present, respectively.
Fig. 2. DTV Receiver (the reference signal is regenerated from the key and IV and fed, together with the received signal, to the correlation detector).


1) Detection of the Primary User: To detect the presence of the primary user, the receiver evaluates the cross-correlation between the received signal r and the reference signal s:

Rrs = <r, s> = α<s, s> + β<m, s> + <n, s> = ασs² ..........(4)

where σs² is the primary user's signal power, and s, m, n are assumed to be independent of each other and of zero mean. Depending on the value of α implied by Rrs, the receiver decides whether the primary user is present or absent.
2) Detection of the Malicious User: For malicious user detection, the receiver further evaluates the auto-correlation of the received signal r:

Rrr = <r, r> = α²<s, s> + β²<m, m> + <n, n> = α²σs² + β²σm² + σn² ..........(5)

where σm² and σn² denote the malicious user's signal power and the noise power.
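The two correlation tests in (4) and (5) can be sketched as follows with NumPy. The thresholding rule, the signal lengths and the toy reference signal are illustrative assumptions; the paper itself only states that the decisions depend on the values of α and β implied by the correlations.

import numpy as np

def detect(r, s, sigma_s2, sigma_n2, thresh=0.5):
    """Estimate alpha (primary present) and beta (malicious present) from r.

    r        : received samples, modeled as r = alpha*s + beta*m + n
    s        : regenerated AES-encrypted reference signal (zero mean)
    sigma_s2 : known primary signal power
    sigma_n2 : estimated noise power
    """
    N = len(r)
    # Eq. (4): cross-correlation <r, s> ~ alpha * sigma_s^2
    R_rs = np.dot(r, s) / N
    alpha = 1 if R_rs > thresh * sigma_s2 else 0

    # Eq. (5): auto-correlation <r, r> ~ alpha^2*sigma_s^2 + beta^2*sigma_m^2 + sigma_n^2
    R_rr = np.dot(r, r) / N
    residual = R_rr - alpha * sigma_s2 - sigma_n2   # remaining power attributed to the attacker
    beta = 1 if residual > thresh * sigma_n2 else 0
    return alpha, beta

# Toy example: primary present, malicious absent.
rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], size=10000)             # stand-in for the reference signal
r = s + 0.1 * rng.standard_normal(10000)
print(detect(r, s, sigma_s2=1.0, sigma_n2=0.01))    # expected result: (1, 0)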

3. Primary user emulation attack(PUEA):


One of the major technical challenges regarding spectrum sensing is the problem of accurately distinguishing primary user signals
from secondary user signals. In cognitive radio networks, primary users possess the priority to access the channel, while secondary
users must always relinquish access to the channel over to the primary user and ensure that no interference is generated. Consequently,
if a primary user begins to transmit across a frequency band occupied by a secondary user, the secondary user is required to leave that
specific spectral band immediately. Conversely, when there is no primary user activity present within a frequency range, all the
secondary users possess equal opportunity to the unoccupied frequency channel (5). Based on this principle, there exists the potential
for malicious secondary users to mimic the spectral characteristics of the primary users in order to gain priority access to the wireless
channels occupied by other secondary users. This scenario is referred to in the literature as primary user emulation (PUE).

3.1 PUE Example


In this network, there are three normal secondary users, named D1, D2 and D3. They are communicating with each other using
the white space channels. D1 and D2 are using Channel 1, D2 and D3 are using Channel 2, and D1 and D3 are using Channel 3. At
this time, a malicious secondary user, i.e., a primary user emulator appears on Channel 3. Since this malicious secondary user mimics
the spectral characteristics of the primary users, D1 and D3 think that there is a primary user transmitting on this channel. According
to the criteria of dynamic spectrum access networks, D1 and D3 have to leave Channel 3 immediately. However, the other two channels are both occupied by other users right now, so they cannot find any other available channels to continue their communication, and the connection is terminated.

Fig. 3. Primary user emulation attack (secondary users D1, D2 and D3, with a PUE attacker appearing on Channel 3).

3.2 Impact of PUE on DSA Networks:


There are several outcomes that can be incurred by PUE attacks in a dynamic spectrum access network:
Unstable Connections: When a network is frequently attacked by a primary user emulator, the secondary users in this network always have to leave their current channels and seek new channels. However, it is very likely that other channels are also occupied, so their connections have to be terminated.
Spectrum Under-utilization: The original purpose of dynamic spectrum access is to address the problem of spectrum scarcity caused by the FCC's fixed spectrum allocation.
In DSA, the secondary users can temporarily borrow unoccupied licensed spectrum (6). However, if there are several primary user emulators in the network, it is possible that all the available licensed channels are occupied by them, so the normal SUs cannot find any channels to borrow. If that is the case, then the problem of spectrum scarcity is not solved at all.
Denial of Service: When a secondary user wants to transmit some data, it has to go through a request and acknowledgement process.
However, if all the channels are occupied by the primary user emulators, the normal SU cannot even find a channel to send a request,
so their service will be denied.
Interference with Primary Users: Although PUE attacks are solely aimed at secondary users, and the primary user emulators are supposed to obey the rule that they will not cause any interference with the primary users, in a dynamic spectrum access network the PUs and SUs exist in the same network, so any user's activities would have some impact on the others. Especially since primary user emulators mimic the spectral characteristics of the primary users, their transmission power is usually higher than that of the normal secondary users, so they can cause interference with the primary users.
It is noted that PUE is different from traditional jamming in wireless networks. The malicious users do not aim to cause
significant interference to the secondary users. The objective of the malicious users is to cause the secondary users to vacate the
spectrum by having them believe that primary transmission is in progress. Thus, when PUE is successfully detected, the secondary
users do not suffer degradation in the quality of their communication due to the transmission from the malicious users.

3.3 Classification of Attackers


Since the security problem caused by PUE attacks was identified, different types of PUE attacks have been studied (7).
We now introduce different types of PUE attackers associated with their classification criteria.

Selfish & Malicious Attackers: A selfish attacker aims at stealing bandwidth from legitimate SUs for its own transmissions.
Power-Fixed & Power-Adaptive Attackers: The ability to emulate the power level of a primary signal is crucial for PUE attackers, because most SUs employ an energy detection technique in spectrum sensing. A power-fixed attacker uses an invariable predefined power level regardless of the actual transmitting power of the PUs and the surrounding radio environment.
Static & Mobile Attackers: The location of a signal source is also a key characteristic for verifying the identity of an attacker. A static attacker has a fixed location that does not change across rounds of attacks.

3.3.1 Impact of PUE attacks on CR Networks


The presence of PUE attacks causes a number of problems for CR networks. The potential consequences of PUE attacks are:


Bandwidth waste: The ultimate objective of deploying CR networks is to address the spectrum under-utilization that is caused by the current fixed spectrum usage policy.
By dynamically accessing the spectrum holes, SUs are able to retrieve these otherwise wasted spectrum resources. However, PUE attackers may steal the spectrum holes from the SUs, leading to spectrum bandwidth waste again.
QoS degradation: The appearance of a PUE attack may severely degrade the Quality-of-Service (QoS) of the CR network by
destroying the continuity of secondary services. For instance, a malicious attacker could disturb the on-going services and force the
SUs to constantly change their operating spectrum bands. Frequent spectrum handoff will induce unsatisfying delay and jitter for the
secondary services.
Connection unreliability: If a real time secondary service is attacked by a PUE attacker and finds no available channel when
performing spectrum handoff, the service has to be dropped. This real time service is then terminated due to the PUE attack. In
principle, the secondary services in CR networks inherently have no guarantee that they will have stable radio resource because of the
nature of dynamic spectrum access. The existence of PUE attacks significantly increases the connection unreliability of CR networks.
Denial of Service: Consider PUE attacks with high attacking frequency; then the attackers may occupy many of the spectrum
opportunities. The SUs will have insufficient bandwidth for their transmissions, and hence, some of the SU services will be
interrupted. In the worst case, the CR network may even find no channels to set up a common control channel for delivering the
control messages. As a consequence, the CR network will be suspended and unable to serve any SU. This is called Denial of Service
(DoS) in CR networks.
Interference with the primary network: Although a PUE attacker is motivated to steal the bandwidth from the SUs, there exists the
chance that the attacker generates additional interference with the primary network.

4. DEFENCE AGAINST PUE ATTACK:


A reliable AES-encrypted DTV scheme is employed, in which an AES-encrypted reference signal is produced and used as the sync bytes of each DTV data frame. With the help of this shared secret between the transmitter and the receiver, the reference signal can be regenerated at the receiver. It can then be used to accomplish precise detection of the authorized PUs. This proposal requires no modification of hardware or system structure except for a plug-in AES chip. It can also be applied to today's DTV system directly to diminish PUEA and achieve efficient spectrum sharing.
In the DTV system, the generated AES-encrypted reference signal is also used for synchronization purposes at the authorized receivers. The proposed scheme diminishes PUEA, enables robust system operation, and guarantees efficient spectrum sharing. The effectiveness of the proposed approach is verified through mathematical derivations. The PU generates a pseudorandom AES-encrypted reference signal, thereby ensuring that synchronization is preserved in the proposed model.
4.1 EVALUATION FOR PRIMARY USER DETECTION

The system performance for primary user detection, under H0 and H1, is characterized through the evaluation of the false alarm rate Pf and the miss detection probability Pm. The false alarm rate Pf is the conditional probability that the primary user is considered to be present when it is actually absent:

Pf = Pr(H1 | H0) ..........(6)

The miss detection probability Pm is the conditional probability that the primary user is considered to be absent when it is present:

Pm = Pr(H0 | H1) ..........(7)

4.2 EVALUATION FOR MALICIOUS USER DETECTION

The system performance for malicious user detection, under H0 and H1, is evaluated through the false alarm rate and the miss detection probability for malicious user detection (6), (7). Define Pf,0 and Pf,1 as the false alarm rates when α = 0 and α = 1, respectively:

Pf,0 = Pr(H01 | H00) ..........(8)
Pf,1 = Pr(H11 | H10) ..........(9)

The overall false alarm rate is given by:

Pf = P0 Pf,0 + (1 - P0) Pf,1 ..........(10)

where P0 is the probability that α = 0, i.e.,

P0 = (1 - Pf) P(α = 0) + Pm P(α = 1) ..........(11)

Similarly, the miss detection probabilities can be defined as Pm,0 and Pm,1, when the primary user is absent and present, respectively:

Pm,0 = Pr(H00 | H01) ..........(12)
Pm,1 = Pr(H10 | H11) ..........(13)

The overall malicious node miss detection probability is defined as:

Pm = P0 Pm,0 + (1 - P0) Pm,1 ..........(14)
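A small sketch of how the conditional rates in (8)-(14) combine into the overall false alarm and miss detection probabilities is given below; the numerical values are illustrative only and are not results from the paper.

def overall_rates(pf, pm, pf0, pf1, pm0, pm1, p_alpha0):
    """Combine conditional rates into overall false alarm / miss detection (eqs. 10-14)."""
    # Eq. (11): probability that the primary user is decided absent
    p0 = (1 - pf) * p_alpha0 + pm * (1 - p_alpha0)
    # Eq. (10): overall false alarm rate for the malicious-user test
    Pf = p0 * pf0 + (1 - p0) * pf1
    # Eq. (14): overall miss detection probability for the malicious-user test
    Pm = p0 * pm0 + (1 - p0) * pm1
    return Pf, Pm

# Illustrative numbers only (not results from the paper).
print(overall_rates(pf=0.01, pm=0.02, pf0=0.03, pf1=0.05,
                    pm0=0.04, pm1=0.06, p_alpha0=0.5))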

EVALUATION OF PUE ATTACK

Fig. 4. Evaluation of the primary user emulation attack (false alarm rate Pf and miss detection probability Pm).


5. CONCLUSION
A reliable AES-assisted DTV scheme was proposed for robust primary and secondary system operation under primary user emulation attacks. In the proposed scheme, an AES-encrypted reference signal is generated at the TV transmitter and used as the sync bits of the DTV data frames. By allowing a shared secret between the transmitter and the receiver, the reference signal can be regenerated at the receiver and used to achieve accurate identification of authorized primary users. Moreover, when combined with the analysis of the auto-correlation of the received signal, the presence of the malicious user can be detected accurately whether or not the primary user is present. The scheme is practically feasible in the sense that it can effectively combat PUEA with no change in hardware or system structure except for a plug-in AES chip. Potentially, it can be applied directly to today's HDTV systems for more robust spectrum sharing. It would be interesting to explore PUEA detection over each sub-band in multi-carrier DTV systems.

REFERENCE:
[1] Ahmed Alahmadi, Mai Abdelhakim, Jian Ren, and Tongtong Li, "Defense against primary user emulation attacks in cognitive radio networks using advanced encryption standard," IEEE IFS, May 2014, pp. 772-781.
[2] Deepa Das, "Primary user emulation attack in cognitive radio networks: A survey," IRACST-IJCNWC, June 2013, pp. 312-318.
[3] Ms. Shikha Jain, "Emulation attack in cognitive radio networks: A study," IRACST-IJCNWC, Apr. 2014, pp. 169-172.
[4] FCC, "Spectrum policy task force report," Federal Commun. Commission, Columbia, SC, USA, Tech. Rep. ET Docket No. 02-135, Nov. 2002.
[5] I. F. Akyildiz, W.-Y. Lee, M. C. Vuran, and S. Mohanty, "NeXt generation/dynamic spectrum access/cognitive radio wireless networks: A survey," Comput. Netw., Int. J. Comput. Telecommun. Netw., vol. 50, no. 13, pp. 2127-2159, Sep. 2006.
[6] M. Thanu, "Detection of primary user emulation attacks in cognitive radio networks," in Proc. Int. Conf. CTS, May 2012, pp. 605-608.
[7] R. Chen and J.-M. Park, "Ensuring trustworthy spectrum sensing in cognitive radio networks," in Proc. IEEE Workshop Netw. Technol. Softw. Defined Radio Netw., Sep. 2006, pp. 110-119.
[8] R. Chen, J.-M. Park, and J. Reed, "Defense against primary user emulation attacks in cognitive radio networks," IEEE J. Sel. Areas Commun., vol. 26, no. 1, pp. 25-37, Jan. 2008.
[9] Z. Yuan, D. Niyato, H. Li, and Z. Han, "Defense against primary user emulation attacks using belief propagation of location information in cognitive radio networks," in Proc. IEEE WCNC, Mar. 2011, pp. 599-604.
[10] Z. Jin, S. Anand, and K. P. Subbalakshmi, "Detecting primary user emulation attacks in dynamic spectrum access networks," in Proc. IEEE Int. Conf. Commun., Jun. 2009, pp. 1-5.
[11] R. Chen, J. M. Park, and J. H. Reed, "Defense against primary user emulation attacks in cognitive radio networks," IEEE J. Sel. Areas Commun.: Spl. Issue on Cognitive Radio Theory and Applications, vol. 26, no. 1, pp. 25-37, Jan. 2008.
[12] Z. Jin, S. Anand, "Mitigating primary user emulation attacks in dynamic spectrum access networks using hypothesis testing," Mobile Computing and Communications Review, vol. 13, no. 2, pp. 74-85.
[13] S. Anand, Z. Jin, and K. P. Subbalakshmi, "An analytical model for primary user emulation attacks in cognitive radio networks," in Proc. IEEE Symp. New Frontiers in Dynamic Spectrum Access Networks (DySPAN 2008), Oct. 2008.


Harmonic Mitigation of a Solar Fed Cascaded H-Bridge Inverter using Artificial Neural Network
Priya.G, Ramani.G, Revathy.G
PG Scholar, Nandha Engineering College, Erode
Associate Professor, Nandha Engineering college, Erode
Assistant Professor, Erode Sengunthar Engineering College, Erode

Abstract- This paper presents the application of an Artificial Neural Network (ANN) for estimating the switching angles in an 11-level full-bridge cascaded multilevel inverter with optimal pulse width modulation, powered by five varying dc input sources. A solar panel is connected to each cascaded inverter cell. For a given modulation index, the optimal switching angles with the lowest THD are generated by a trained neural network that replaces the look-up table. The odd harmonics (5th, 7th, 11th) of the inverter are eliminated by using the trained network. The theoretical concepts have been validated by simulation results obtained with the artificial neural network technique, which show the high performance and technical advantages of the developed system.
Keywords- Active power filter, distributed energy resources (DERs), harmonics injection, optimal pulse width modulation (OPWM), selective harmonics compensation (SHC), Artificial Neural Network (ANN)
INTRODUCTION
The development of different types of distributed generation, such as fuel cells, photovoltaics, and wind turbines, has provided the motivation for medium- and high-power inverter implementations. The switching losses and electromagnetic interference caused by high dv/dt are problems that arise in this type of system. Thus, to overcome these problems, selective harmonic elimination (SHE) based optimal PWM (OPWM) is used in multilevel inverters to minimise the switching frequency and the total harmonic distortion (THD) [1]-[14].
Nowadays, multilevel power inverters are widely used in AC motor drives, uninterruptible AC power supplies (UPS), and high-voltage, high-power applications due to their lower switching frequency, lower switching losses, higher voltage rating and lower electromagnetic interference (EMI) compared with conventional two-level inverters [1]-[3]. In most cases, low-distortion sinusoidal output voltage waveforms are required with controllable magnitude and frequency. Numerous topologies and modulation strategies have been introduced and studied extensively for utility and drive applications in the recent literature. In the family of multilevel inverters, topologies based on series-connected H-bridges are particularly attractive because of their modularity and simplicity of control [1], [2]. Several switching algorithms, such as pulse width modulation (PWM), sinusoidal pulse width modulation (SPWM), space-vector modulation (SVM), selective harmonic eliminated pulse width modulation (SHEPWM) or programmed-waveform pulse width modulation (PWPWM), are applied extensively to control and determine the switching angles needed to achieve the desired output voltage [4]-[5]. Among the mentioned techniques, only the SHE method is able to eliminate low-order harmonics completely. In the SHE method, mathematical techniques such as iterative methods or the mathematical theory of resultants can be applied to calculate the optimum switching angles such that the lower order dominant harmonics are eliminated [3], [4]. The application of ANNs has recently been growing in the power electronics and drives area. In the control of dc-ac inverters, ANNs have been used in the voltage control of inverters for ac motor drives. A feedforward ANN basically implements a nonlinear input-output mapping. For any chosen objective function, the optimal switching pattern depends on the desired modulation index.
In this paper, a new training approach is developed which is used as an alternative to the switching-angle look-up table to generate the optimum switching angles of multilevel inverters. The advantages of this method are a simple control circuit, continuous control of the magnitude of the output voltage versus the modulation index, and no need for any lookup table after training the ANN. Without using a real-time solution of the nonlinear harmonic elimination equations, an ANN is trained off-line using the desired switching angles given by solving the harmonic elimination equations with the classical method, i.e., the Newton-Raphson method. The Back Propagation training Algorithm (BPA) is most commonly used in the training stage. After the termination of the training phase, the obtained ANN can be used to generate the control sequence of the inverter. The simulation results are presented using the MATLAB/Simulink software package for a single-phase seven-level cascaded multilevel inverter to validate the accuracy of the estimated switching angles generated by the proposed ANN system.


Fig. 1. 1-phase ML cascade inverter topology and ANN-based angle control


SELECTIVE HARMONIC ELIMINATION AND POWER GENERATION
The proposed model with the 11-level cascaded H-bridge (CHB) inverter and its control is shown in Fig. 1. It has five full bridges connected in series, with five solar panels as its input dc supplies, which may have different voltage levels.
A. Solar Cell Modeling
A suitable model was designed to simulate the PV module so that it reflects the curves of the solar panel with relative exactness. The single-diode model shown in Fig. 2 is used to simulate the PV module under different irradiance and temperature levels; the suitability of the model is application dependent. The PV cell model used in this work is an intuitive model based on the single-diode cell (Fig. 2). The inputs are taken from the PV module data sheets. This model greatly reduces the modeling task once the iterations and nonlinear equations are solved. Equation (1) is the basic formula, and the solar panel's data sheet provides the parameters needed to solve for the unknowns:

I = IPV - I0 [exp((V + Rs I)/(Vt a)) - 1] - (V + Rs I)/Rp ..........(1)

where
I - PV module output current;
V - PV module output voltage;
IPV - PV current;
I0 - saturation current;
Vt - thermal voltage;
Rs, Rp - equivalent series and parallel resistances;
a - diode ideality constant.
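Equation (1) is implicit in I, so it is normally solved iteratively. The following is a minimal sketch under assumed, illustrative module parameters (they are not the data-sheet values used in the paper):

import math

def pv_current(V, Ipv, I0, Vt, a, Rs, Rp, iterations=50):
    """Solve I = Ipv - I0*(exp((V + Rs*I)/(Vt*a)) - 1) - (V + Rs*I)/Rp by fixed-point iteration."""
    I = Ipv                        # initial guess: the photogenerated current
    for _ in range(iterations):
        I = Ipv - I0 * (math.exp((V + Rs * I) / (Vt * a)) - 1.0) - (V + Rs * I) / Rp
    return I

# Illustrative single-module parameters (assumed, not from the paper's data sheet).
params = dict(Ipv=8.2, I0=1e-9, Vt=26.0e-3 * 54, a=1.3, Rs=0.2, Rp=300.0)
for V in (0.0, 15.0, 25.0, 30.0):
    print(V, round(pv_current(V, **params), 3))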
B. SHE and Unequal DC Sources

The harmonic content of the output voltage is given by equation (2). The module voltages VPV1-VPV5 are associated with their particular switching angles θ1-θ5. The equation contains only odd harmonics; the reason lies in the assumed waveform symmetry, which cancels out the even components. The target harmonics can be arbitrarily set, a new data set can be found, and a new ANN can be trained for the system; the selection of the target harmonics depends on the application requirements. Equation (2) is the main equation and the starting point for SHE. The target harmonics in (2) define the set of transcendental equations to be solved. It is desired to solve (2) so that the fundamental is maintained and the lowest harmonics (in this case, the 5th, 7th, 11th, and 13th) are cancelled:

Vab(ωt) = Σ (n = 1, 5, 7, 11, ...) [4/(nπ)] [VPV1 cos(nθ1) + VPV2 cos(nθ2) + VPV3 cos(nθ3) + VPV4 cos(nθ4) + VPV5 cos(nθ5)] sin(nωt) ..........(2)

Fig. 2. PV cell single-diode model representation


The output voltages of dc sources such as solar panels, fuel cells, etc. vary depending on the sunlight intensity, the load, or other factors. Either a dc-dc converter or the modulation index of the grid-interface inverter is used to regulate this dc voltage in a grid connection. For example, the solar panel output voltage may differ based on the amount of energy available during the day, and the grid-interface system should be able to respond to this variation in the switching angles to keep the fundamental regulated at its reference value and the low-order harmonics minimized. The approach in this work is to maintain the fundamental at the desired level by choosing the low-frequency switching angles in (2), as shown in Fig. 1. This paper uses a nondeterministic approach to solve for the angles instead of using an analytic method to determine the angles offline. This method gives a solution where an analytical solution cannot proceed.
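A minimal sketch of how the harmonic magnitudes in (2) can be evaluated for a candidate set of switching angles is shown below; the panel voltages and angles are illustrative placeholders, not solutions produced by the ANN described in this paper.

import math

def harmonic_magnitude(n, angles, voltages):
    """Amplitude of the n-th odd harmonic of the staircase waveform in eq. (2)."""
    return (4.0 / (n * math.pi)) * sum(
        v * math.cos(n * th) for v, th in zip(voltages, angles))

# Illustrative values only: five panel voltages (V) and five switching angles (rad).
V_panels = [58.0, 59.0, 60.0, 57.5, 58.5]
thetas   = [0.08, 0.28, 0.39, 0.66, 1.03]

fundamental = harmonic_magnitude(1, thetas, V_panels)
for n in (5, 7, 11, 13):
    h = harmonic_magnitude(n, thetas, V_panels)
    print(f"H{n}: {100 * abs(h) / fundamental:.2f}% of the fundamental")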
ANN
ANNs are computational models that were motivated by biological neurons. An ANN has a series of nodes with interconnections, and mathematical functions are used for the input/output mapping. The ANN is suitable here because of its flexibility to operate inside its training domain and outside it, as well as its ability to handle the nonlinear nature of the problem. Although the data set presented to the ANN is not complete and not all combinations were obtained by the GA, the ANN has enough flexibility to interpolate and extrapolate the results. These features make ANNs appropriate for problems commonly encountered in power electronics such as fault detection and harmonic detection. If properly trained, the network is fast to run and easily parallelized.
The fundamental network is shown in Fig. 3. The network is multilayer, with one input stage, two hidden layers, and one output layer. The computational model of a biological neuron is highlighted in Fig. 3, and the interconnections are also shown in the network. Its inputs are the five voltage magnitudes measured at the terminals, and its output is the input for all the neurons in the next layer. Each neuron aj computes a weighted sum of its n inputs Vk, k = 1, 2, . . . , n, and generates an output as shown in (3):
aj = tgsig( Σ (k = 1 to n) wk Vk + bias ) ..........(3)

The output is given by the tangent sigmoid of the final weighted sum, which usually has a bias associated with it; the bias can be considered as an additional input.
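A minimal NumPy sketch of the neuron computation in (3), extended to a small feedforward network, is given below; the layer sizes and the random weights are illustrative assumptions and are not the trained weights of the paper.

import numpy as np

def tansig(x):
    """Tangent-sigmoid activation used in eq. (3) (equivalent to tanh)."""
    return np.tanh(x)

def forward(voltages, weights, biases):
    """Propagate the five measured panel voltages through the layers.

    weights/biases: one (matrix, vector) pair per layer; the last layer would
    correspond to the five switching angles (here left in the tansig range,
    to be rescaled to radians in practice).
    """
    a = np.asarray(voltages, dtype=float)
    for W, b in zip(weights, biases):
        a = tansig(W @ a + b)   # each neuron: tansig(sum_k w_k * V_k + bias)
    return a

# Illustrative 5-20-20-5 network with random, untrained weights.
rng = np.random.default_rng(1)
shapes = [(20, 5), (20, 20), (5, 20)]
weights = [0.1 * rng.standard_normal(s) for s in shapes]
biases  = [np.zeros(s[0]) for s in shapes]
print(forward([58.0, 59.0, 60.0, 57.5, 58.5], weights, biases))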
A. Knowledge From Data:
A network has to produce the desired output for the training data and should also have the ability to generalize to points inside the hypercube space determined by the data. The network weights are updated according to the given data so that the network generalizes over the data set; this is how the computational neuron learns.

Fig. 3. Multilayer feedforward perceptron neural network model.


Performance is measured by calculating the mean-square error (mse) as shown in (4):

e = (1/p) Σ (i = 1 to p) || y(i) - d(i) ||² ..........(4)

where
p - number of training data entries;
y - ANN output vector (current ANN output);
d - desired output vector (switching angles).
To minimize the error obtained in (4), the ANN back-propagation training algorithm is used. A well-trained ANN would output switching angles that are very close to the desired values, giving an error near zero in (4) for a given set of input voltages. The desired switching angles are those that minimize the harmonic components.
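The performance measure in (4) can be computed directly. The sketch below uses a few placeholder ANN output vectors together with desired angles taken from Table I; it is an illustration only.

import numpy as np

def mse(y, d):
    """Mean-square error of eq. (4): y are ANN outputs, d the desired angles."""
    y, d = np.asarray(y), np.asarray(d)
    return np.mean(np.sum((y - d) ** 2, axis=1))

# Placeholder data: 3 training entries, 5 switching angles each.
y = np.array([[0.08, 0.27, 0.40, 0.66, 1.02],
              [0.07, 0.25, 0.41, 0.66, 1.00],
              [0.09, 0.30, 0.38, 0.65, 1.05]])
d = np.array([[0.0807, 0.2840, 0.3864, 0.6586, 1.0320],
              [0.0740, 0.2539, 0.4048, 0.6599, 0.9988],
              [0.0870, 0.3118, 0.3692, 0.6576, 1.0630]])
print(mse(y, d))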

B. SHE Data Set:


The possible number of data-set entries for ANN training depends on the H-bridge topology. For a two-full-bridge case (five levels), a training data set with four voltage levels per bridge would generate a table of 4^2 rows. In a five-H-bridge converter with ten points equally spaced between 50 and 60 V, this would generate 10^5 different combinations. To reduce the size of the data set, the problem is treated as a combination problem instead of a permutation problem. In this way, the data set can be greatly reduced.
C. ANN Training:
The data set was divided into three subsets: training, validation, and test. The first subset is used to train the ANN with the scaled conjugate gradient algorithm. A validation subset is used to stop the training in order to avoid loss of generalization: if the validation error starts to increase, the network is overfitting the data. A third subset is used to verify that the data are not poorly divided; when this error reaches a low value in a different iteration than the validation and training subsets, it might be an indication of poor data division. The proportions adopted in this work were 55% for training, 30% for validation, and 15% for test. All 32 different networks were trained 50 times each, and their performance values are shown in Fig. 5. The ANN that was implemented is shown in Figs. 1 and 4; it is a feedforward multilayer perceptron with hidden layers of 20 neurons, and it was configured with single- and multiple-hidden-layer ANNs. The two-hidden-layer performance is shown in Fig. 5. The two-hidden-layer configuration was chosen because of its better performance, training time, memorization, and learning ability.

Fig. 5. ANN performance results for different numbers of hidden layer neurons.
SIMULATION RESULTS:
The simulation results clearly show that the 5th, 7th, 11th and 13th harmonics are strongly suppressed. The switching angles obtained for various values of the modulation index using the ANN for the 11-level inverter are shown in Table I.

Table I - Switching angles generated by the ANN for the 11-level inverter

Modulation Index (M)   θ1 (rad.)   θ2 (rad.)   θ3 (rad.)   θ4 (rad.)   θ5 (rad.)
0.60                   0.0330      0.0665      0.5189      0.6717      0.7935
0.65                   0.0423      0.1094      0.4929      0.6686      0.8402
0.70                   0.0510      0.1494      0.4686      0.6658      0.8840
0.75                   0.0591      0.1868      0.4458      0.6635      0.9249
0.80                   0.0668      0.2216      0.4246      0.6615      0.9631
0.85                   0.0740      0.2539      0.4048      0.6599      0.9988
0.90                   0.0807      0.2840      0.3864      0.6586      1.0320
0.95                   0.0870      0.3118      0.3692      0.6576      1.0630
1.00                   0.0929      0.3377      0.3532      0.6568      1.0919

The FFT spectrum for the 11-level inverter is shown in Fig. 6.


Fig. 6. FFT analysis of the 11-level inverter


The THD analysis of 11-level inverter is shown in Fig. 7.

Fig.7. THD analysis of 11 level inverter.


CONCLUSION:
In this paper, an ANN is proposed to solve the selective harmonic elimination problem in inverters. A multilevel inverter that generates a staircase waveform by estimating the optimum switching angles with a feedforward neural network was successfully demonstrated. Voltage control and suppression of a selected set of harmonics are successfully achieved using this technique. The switching angles for the eleven-level inverter are calculated based on the SHE strategy in order to cancel the 5th, 7th, 11th and 13th harmonics. Then, an ANN is trained offline to reproduce these switching angles, without constraint, for any value of the modulation index. After the training process it is sufficient to use the obtained network for real-time control. Simulation results for an eleven-level inverter validate the accuracy of the proposed approach in calculating the optimum switching angles, which produce the lowest THD.
REFERENCE
[1] Faete Filho, Leon M. Tolbert, and Burak Ozpineci, "Real-time selective harmonic minimization for multilevel inverters connected to solar panels using artificial neural network angle generation," IEEE Transactions on Industry Applications, vol. 47, no. 5, September 2011.
[2] Mitali Shrivastava, "Artificial neural network based harmonic optimization of multilevel inverter to reduce THD," 2012.
[3] J. Rodriguez, J. Lai, and F. Z. Peng, "Multilevel inverters: A survey of topologies, control and applications," IEEE Trans. Ind. Electron., vol. 49, no. 4, pp. 724-738, Aug. 2002.
[4] A. Pandey, B. Singh, B. N. Singh, A. Chandra, K. Al-Haddad, and D. P. Kothari, "A review of multilevel power converters," Inst. Eng. J. (India), vol. 86, pp. 220-231, Mar. 2006.
[5] J. R. Wells, P. L. Chapman, and P. T. Krein, "Generalization of selective harmonic control/elimination," in Proc. IEEE Power Electron. Spec. Conf., Jun. 2005, pp. 1358-1363.
[6] B. Ozpineci, L. M. Tolbert, and J. N. Chiasson, "Harmonic optimization of multilevel converters using genetic algorithms," IEEE Power Electron. Lett., vol. 3, no. 3, pp. 92-95, Sep. 2005.
[7] J. N. Chiasson, L. M. Tolbert, K. J. McKenzie, and Z. Du, "A unified approach to solving the harmonic elimination equations in multilevel converters," IEEE Trans. Power Electron., vol. 19, no. 2, pp. 478-490, Mar. 2004.
[8] J. N. Chiasson, L. M. Tolbert, K. J. McKenzie, and Z. Du, "Elimination of harmonics in a multilevel converter using the theory of symmetric polynomials and resultants," IEEE Trans. Control Syst. Technol., vol. 13, no. 2, pp. 216-223, Mar. 2005.
[9] Z. Du, L. M. Tolbert, J. N. Chiasson, and H. Li, "Low switching frequency active harmonic elimination in multilevel converters with unequal DC voltages," in Conf. Rec. IEEE IAS Annu. Meeting, Oct. 2005, pp. 92-98.
[10] Z. Du, L. M. Tolbert, and J. N. Chiasson, "Active harmonic elimination for multilevel converters," IEEE Trans. Power Electron., vol. 21, no. 2, pp. 459-469, Mar. 2006.
[11] D. Ahmadi and J. Wang, "Selective harmonic elimination for multilevel inverters with unbalanced DC inputs," in Proc. IEEE Veh. Power Propulsion Conf., Sep. 2009, pp. 773-778.
[12] M. Dahidah and V. G. Agelidis, "Selective harmonic elimination multilevel converter control with variant DC sources," in Proc. IEEE Conf. Ind. Electron. Appl., May 2009, pp. 3351-3356.
[13] M. G. H. Aghdam, S. H. Fathi, and G. B. Gharehpetian, "Elimination of harmonics in a multi-level inverter with unequal DC sources using the homotopy algorithm," in Proc. IEEE Int. Symp. Ind. Electron., Jun. 2007, pp. 578-583.
[14] T. Tang, J. Han, and X. Tan, "Selective harmonic elimination for a cascade multilevel inverter," in Proc. IEEE Int. Symp. Ind. Electron., Jul. 2006, pp. 977-981.


Multiple Harmonics Elimination in Hybrid Multilevel Inverter Using Soft


Computing Technique
R. Ragunathan, PG Scholar; A. Rajeswari, Assistant Professor, Nandha Engineering College
Email: ragunathanme@gmail.com, rajeswarianngappan@gmail.com

Abstract - Multilevel inverters are extensively used in various voltage applications. Harmonic elimination in a multilevel inverter is a complex optimization problem. A new nine-level hybrid multilevel inverter using harmonic elimination is presented in this paper. This method involves a smaller number of switches for a larger number of voltage levels. The stages with a higher DC link have advantages such as low commutation, reduced switching losses, increased efficiency, and low input stages with more output levels. In this method the bacterial foraging optimization (BFO) algorithm is used to determine the switching angles of the inverter. The proposed method employs a multicarrier pulse width modulation technique, which eliminates harmonic components from the inverter output. It can be easily implemented using a PIC microcontroller. Simulation results reveal the quality of the feasible result.

Keywords - Hybrid multilevel inverter, Harmonics elimination, MCPWM, Bacterial Foraging Optimization (BFO).
I.INTRODUCTION
Harmonic elimination is widely applied to multilevel inverters. Multilevel inverters are suited to medium-voltage
and high-power applications because of their lower switching losses and higher efficiency compared with ordinary inverters. The desired output
voltage of such an inverter is synthesized from several levels of dc voltage [1]. To control the output voltage and reduce the undesired
harmonics, different sinusoidal pulse width modulation (PWM) [8] and space-vector PWM schemes have been suggested for multilevel
inverters; however, PWM techniques are not able to eliminate low-order harmonics completely. Another approach is to choose the
switching angles [1] so that specific lower-order dominant harmonics are suppressed. This method is known as harmonic elimination.
The harmonic elimination technique is used in high-power inverters [3]; it offers
enhanced operation at low switching frequency while reducing the cost and size of bulky filters. It has been successfully adopted in
different multilevel inverter topologies [6],[7]. Numerical iterative techniques, such as the Newton–Raphson method, are applied to
solve the harmonic elimination equations; however, such techniques need a good initial guess that is very close to the exact solution.
Although the Newton–Raphson method works properly if a good initial guess is available, providing a good guess is very difficult
in most cases. This is because the search space of the harmonics problem is unknown: one does not know whether a solution exists and,
if it does, whether a given starting point is a good initial guess. A systematic approach to solve the harmonics problem based on the
resultant theory method has been proposed, in which the transcendental equations that describe the harmonics problem are transformed into a
corresponding set of polynomial equations [1],[2],[5], and then the resultant theory method is used to find all possible sets of
solutions to this equivalent problem.
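For an equal-source staircase waveform with s switching angles θ1 < θ2 < ... < θs < π/2, the n-th odd harmonic amplitude is proportional to the sum of cos(n·θk), so harmonic elimination reduces to a set of transcendental equations. The short Python sketch below (using scipy.optimize.fsolve) illustrates the kind of Newton-type numerical solution discussed above for an assumed four-angle case; the modulation index, the targeted 5th/7th/11th harmonics and the initial guess are illustrative assumptions, and the paper itself ultimately uses BFO rather than this iterative approach.

import numpy as np
from scipy.optimize import fsolve

s, M = 4, 0.8                                      # number of switching angles, modulation index (assumed)

def she_equations(theta):
    # Fundamental forced to s*M, 5th/7th/11th harmonics forced to zero
    eqs = [np.sum(np.cos(theta)) - s * M]
    for n in (5, 7, 11):
        eqs.append(np.sum(np.cos(n * theta)))
    return eqs

guess = np.deg2rad([10.0, 25.0, 45.0, 70.0])       # the "good initial guess" referred to above
angles = fsolve(she_equations, guess)
print("Switching angles (degrees):", np.rad2deg(angles))

The solver converges only when the guess is close enough to a solution, which is exactly the limitation that motivates the stochastic search methods discussed next.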
Another approach to the harmonics problem is based on modern stochastic search techniques such as the genetic
algorithm (GA), particle swarm optimization (PSO) [8],[10] and bacterial foraging optimization (BFO) [9]. However, as the
number of switching angles increases, the complexity of the search space increases dramatically and both methods can become trapped in local optima of the
search space. Evolutionary algorithms cannot, of course, guarantee the exact solution for the switching angles;
moreover, as the number of switching angles increases, the chance of finding the optimum switching angles decreases.
Recently, a new active harmonic elimination method has also been proposed to eliminate higher-order harmonics in multilevel inverters. In
this method, first the switching angles that eliminate the lower-order harmonics of the staircase voltage waveform are calculated, which is
called the fundamental switching frequency method. Then, the residual higher-order harmonics are eliminated by additional PWM switching
patterns. Unfortunately, this method uses very high switching frequencies to eliminate the higher-order harmonics, and it also needs a very
complicated control procedure to generate the gate signals for the power switches.
The main drawbacks of the existing harmonic elimination methods are their mathematical complexity and heavy computational load,
resulting in the high cost of the hardware needed for real-time implementation. The latter problem is commonly circumvented by preliminary
off-line computation of the switching angles and the subsequent creation [4] of lookup tables stored in the microcontroller's
internal memory for real-time fetching. In the following, Section II deals with the hybrid multilevel inverter, Section III describes the
bacterial foraging algorithm, and Section IV describes the multicarrier pulse width modulation technique and the procedure for generating the
firing angles. Section V presents simulation results and analysis, while Section VI reports the experimental results and

the conclusions. Good agreement is noted between the experimental and simulation results, confirming the accuracy of the proposed
technique.

II.HYBRID MULTILEVEL INVERTER


There are several topologies, such as the neutral point clamped (diode clamped) multilevel inverter, the flying capacitor based
multilevel inverter, the cascaded H-bridge multilevel inverter and the hybrid H-bridge multilevel inverter. The main disadvantage of the diode
clamped multilevel inverter topology is its restriction to high-power operation. The first topology introduced was the series H-bridge
design [7],[12], from which several configurations have been derived. This topology consists of series power conversion cells which
form the cascaded H-bridge multilevel inverter, and its power level may be scaled easily. An apparent disadvantage of this topology is the
requirement for a large number of isolated voltage sources. The proposed multilevel inverter topology provides a high number of steps
with a low number of power switches. In addition, a procedure for calculating the dc voltage sources required to produce the levels at the
output voltage is proposed. A further advantage of the hybrid multilevel inverter is its modularized structure.
The general structure of the hybrid multilevel inverter is shown in Fig. 1. Each of the separate voltage
sources (1Vs, 2Vs, 4Vs) is connected in series with the other sources via a special circuit associated with it. Each stage of the circuit
consists of only one active switching element and one bypass diode, which make the output voltage positive with several levels
[12]. The basic operation of the modified hybrid multilevel inverter is to turn on switch S1 (with S2 and S3 turned off) to produce an output
voltage of +1Vdc, and to turn on S2 (with S1 and S3 turned off) to produce an output voltage of +2Vdc. Similarly, the other levels can be
achieved by turning on the appropriate switches at particular intervals. It can be inferred that only one H-bridge is needed to obtain both
positive and negative polarity. The main advantage of the modified hybrid multilevel inverter is the high number of levels with a reduced
number of switches.
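As a simple illustration of the level-to-switch mapping described above, the sketch below (Python) assumes binary-weighted sources 1Vs, 2Vs and 4Vs with one bypass switch per stage; the switch names S1–S3 follow the text, but the encoding itself is only an assumed reading of the topology, and the H-bridge polarity selection is omitted.

def stage_switches(level):
    """Return the (S1, S2, S3) on/off states that insert 'level' * Vs into the series stack."""
    assert 0 <= level <= 7, "a 1Vs/2Vs/4Vs stack can synthesise levels 0..7"
    return (level & 1, (level >> 1) & 1, (level >> 2) & 1)   # 1 = source inserted, 0 = bypassed

for level in range(8):
    print(f"+{level}Vs -> S1,S2,S3 = {stage_switches(level)}")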
The figure below shows the typical output voltage waveform of a hybrid multilevel inverter with three separate dc sources.

Fig 1. Typical output waveform for Hybrid Multilevel Inverter

III.BACTERIAL FORAGING OPTIMIZATION


The Bacterial Foraging Algorithm (BFA), proposed by Passino, is inspired by the social foraging behavior of Escherichia coli.
BFA has been widely accepted as a global optimization algorithm of current interest for distributed optimization. BFA has already
drawn the attention of researchers because of its efficiency in solving real-world optimization problems in several application
domains. This optimization technique is relatively new to the family of nature-inspired optimization algorithms [9]. Soft computing
tools like the genetic algorithm (GA) and simulated annealing have become standard procedures for designing optimized antennas where
analytical optimization becomes difficult and does not provide satisfactory results. GA dominated antenna optimization
problems for several years, but the search for new computationally efficient algorithms to handle large and complex problems
continues. Apart from modifications of GA, new paradigms have been developed: Particle Swarm Optimization (PSO),
Ant Colony Optimization and the Bacteria Foraging Algorithm (BFA). Among them, BFA is the latest, and it is efficient in
optimizing the parameters of such structures. Nowadays the bacteria foraging technique is gaining importance in optimization problems
because:

Philosophy says that biology provides highly automated, robust and effective organisms
The search strategy of bacteria is salutary (like that of common fish) in nature
Bacteria can sense, decide and act, and so adopt social foraging (foraging in groups)

Above all, the search and optimal foraging decision making of animals can be used for solving engineering problems. To perform
social foraging an animal needs communication capabilities, and it gains the advantage of being able to exploit the sensing capabilities

of the group, so that the group can gang up on larger prey, individuals can obtain protection from predators while in a group, and in a
certain sense the group can forage with a type of collective intelligence.

Working Principle
In this algorithm, the population comprises a set of individuals, where each individual is called a bacterium. Bacteria
search for nutrients in a manner that maximizes the energy obtained per unit time [9]. Each bacterium also communicates with the others by
sending signals. The flow chart in Fig. 2 shows the working of the BFA optimization algorithm.

Fig 2. Flow chart (BFA algorithm)


A bacterium passes through the following phases during its lifetime:
1. Chemotaxis
2. Swarming
3. Reproduction
4. Elimination-Dispersal
Chemotaxis:
The process, in which a bacterium moves by taking small steps while searching for nutrients, is called chemotaxis. The key
idea of BFOA is to mimic the chemotactic movement of virtual bacteria in the problem search space[9]. During chemotaxis, a
bacterium moves in one of the two different manners: swim, or tumble. If the bacterium changes its direction of motion, it is called
tumbling. If the bacterium moves ahead in the same direction, it is called swimming. Whereas, if it changes its direction of motion, it
is called tumbling. During movement, a bacterium aims at moving towards a nutrient gradient and tries to avoid noxious environment.
Generally a bacterium moves for a longer distance in a friendly environment. Let _i(j, k, l) represents ith bacterium at jth chemotactic,
kth reproduction and lth elimination-dispersal step. C(i) is the size of the step taken in a random direction specified by the tumble.
i(j + 1, k, l) = i(j, k, l) + C(i)(j)
(1)
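A minimal software sketch of this chemotactic step (Eq. (1)) is given below in Python; the cost function f, the step size C(i) and the swim limit are illustrative assumptions rather than the exact settings used in the paper.

import numpy as np

def chemotaxis_step(theta, f, step_size=0.05, max_swims=4):
    """One tumble followed by up to max_swims swims of bacterium position theta."""
    phi = np.random.uniform(-1.0, 1.0, size=theta.shape)
    phi /= np.linalg.norm(phi)                     # random unit tumble direction phi(j)
    best_cost = f(theta)
    for _ in range(max_swims):
        candidate = theta + step_size * phi        # theta(j+1) = theta(j) + C(i)*phi(j)
        if f(candidate) < best_cost:               # keep swimming while the nutrient improves
            theta, best_cost = candidate, f(candidate)
        else:
            break                                  # otherwise tumble again on the next call
    return theta, best_cost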
Swarming:
An interesting group behavior has been observed for several species of bacteria, including E. coli and S. typhimurium, where
intricate and stable spatio-temporal patterns (swarms) are formed by the cells in a semisolid nutrient medium. A group of E. coli cells
arranges itself in interesting patterns around the nutrient gradient. The cells release an attractant named aspartate, which helps
them aggregate into groups and move as concentric patterns of swarms with high bacterial density. This cell-to-cell signaling in E.
coli is termed swarming.

Reproduction
When bacteria get enough food for growth, they increase in length, and in the presence of a suitable temperature they break in
the middle to form exact replicas of themselves. This phenomenon inspired Passino to introduce an event of reproduction in BFOA. In the

algorithm, the least healthy bacteria eventually die, while each of the healthier bacteria (i.e., those with better fitness values) splits into
two bacteria, which are then placed in the same location. In this way, the population size is kept constant. Thus, the reproduction step
consists of sorting the bacteria in the population θi(j, k, l), i = 1, ..., S, based on their objective function values f(θi(j, k, l)) and
eliminating the half with the worst fitness. The remaining half is duplicated so as to maintain a fixed population size.
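The sorting-and-duplication rule can be sketched in a few lines of Python; the health measure used here is simply the objective value, which is an assumption (BFOA normally accumulates cost over the chemotactic steps).

import numpy as np

def reproduction(population, costs):
    """Keep the healthier half of the population and duplicate it, so the size S stays constant."""
    order = np.argsort(costs)                          # best (lowest cost) bacteria first
    survivors = population[order[: len(population) // 2]]
    return np.concatenate([survivors, survivors.copy()], axis=0)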

Elimination and Dispersal


Gradual or sudden changes in the local environment, where a bacterium population lives, may occur due to adverse situations
like rise in temperature [8],[9]. Events may take place in such a way that all the bacteria in a region may be killed or a group may be
dispersed into a new location. To simulate this phenomenon in BFOA, some bacteria are liquidated at random with a very small
probability while the new replacements are randomly initialized over the search space [13],[15].
IV.MULTICARRIER PWM TECHNIQUE
Multicarrier PWM technique is the widely adopted modulation strategy for multilevel inverter. It is similar to that of the
sinusoidal PWM strategy except for the fact that several carriers are used [11]. Multicarrier PWM is one in which several triangular
carrier signals are compared with one sinusoidal modulating signal. The number of carriers required to produce m level output is m 1.
The reference waveform has peak to peak amplitude of Am and a frequency fm .The reference is continuously compared with each of
the carrier signals and whenever the reference is greater than the carrier signal, pulse is generated[11],[14]. Frequency modulation
ratio is defined as the ratio of carrier frequency and modulating frequency. Amplitude modulation ratio is defined as the ratio of
amplitude of modulating signal and amplitude of carrier signals.
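The comparison rule can be sketched as follows in Python for the Phase Disposition arrangement (the m − 1 carriers are level-shifted copies of one triangle); the carrier and modulating frequencies and the nine-level count are illustrative assumptions.

import numpy as np

m, f_m, f_c = 9, 50.0, 2000.0                    # levels, modulating frequency, carrier frequency (assumed)
t = np.arange(0.0, 1.0 / f_m, 1e-6)              # one fundamental period
reference = np.sin(2 * np.pi * f_m * t)          # modulating signal, amplitude Am = 1

def triangle(t, f):
    """Unit-amplitude triangular carrier in [0, 1]."""
    return 2.0 * np.abs(f * t - np.floor(f * t + 0.5))

n_car = m - 1                                    # number of carriers = m - 1
lower_edges = np.linspace(-1.0, 1.0, n_car + 1)[:-1]
carriers = [edge + (2.0 / n_car) * triangle(t, f_c) for edge in lower_edges]
level = sum(reference > c for c in carriers)     # output level 0 .. m-1 at every sample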

Types of Multicarrier PWM Method

There are a few types of the multicarrier PWM method. They are:
1. Alternate Phase Opposition Disposition (APOD)
2. Phase Opposition Disposition (POD)
3. Phase Disposition (PD)

Alternate Phase Opposition Disposition (APOD)


The carrier waves are displaced from each other by 180 degrees alternately, as shown in Figure 3. In this modulation,
the inverter switching frequency and the device switching frequency are given by … and …, respectively. This scheme can be
implemented in all types of multilevel inverter, but it is best suited to the NPC topology, because each carrier signal can be related to
a particular semiconductor device. It is not suitable for the CHB topology, as there is an uneven distribution of power, because each
vertical shift relates a carrier and a level to a particular bridge.

Fig 3. Alternate Phase Opposition Disposition PWM

Phase Opposition Disposition (POD)


The carrier waveforms above and below the zero reference value are all in phase; however, there is a 180-degree phase shift
between the ones above and below zero, as shown in Figure 4.


Fig 4. Phase Opposition Disposition PWM

Phase Disposition (PD)


For the phase disposition technique, a zero point is set as the reference. The carrier signals are set to be in phase
above and below the reference point (zero line). Figure 5 shows the phase disposition multicarrier PWM.

Fig 5. Phase Disposition PWM

V.SIMULATION RESULTS
The simulation model of proposed system is shown in the Figure 6. The nine level hybrid multilevel inverter to validate the
computational results for switching angles, a simulation is carried out in MATLAB/SIMULINK software tool for a nine level hybrid
multilevel inverter. The proposed method is used to get sinusoidal waveform and reduced harmonics with minimum number of
components. The three different dc input are used, It is in the order of V1, 2V1, 3V1. Given input dc voltage are V1=60v, 2V1=120v,
3V1=180v. Therefore the efficiency of the multilevel inverter is increased. Soft computing method gives the switching angles. The
low order harmonics are reduced significantly.

Fig 6. Simulation Diagram of Proposed System



Fig 7. Output waveform

Fig 8. FFT Analysis of a nine Level Hybrid Inverter

Fig 9. Percentage of Harmonics of a nine Level Hybrid Inverter

VI.CONCLUSION
In this paper, a nine-level hybrid multilevel inverter is used to obtain a sinusoidal waveform and to increase the efficiency of
the inverter. The simulation results of the nine-level hybrid multilevel inverter have been illustrated using MATLAB software.
A multilevel inverter is generally used to obtain high resolution and produces a near-sinusoidal output waveform using a reduced number
of switches and with low switching losses. The harmonic reduction achieved is greater than with other conventional inverters. The
basic structural details and operating characteristics of the hybrid multilevel inverter have been described using a nine-level
configuration, which also extends the design flexibility. The proposed control schemes have been demonstrated through simulation.

REFERENCES:
[1] Concettina Buccella, Carlo Cecati, Maria Gabriella Cimoroni, and Kaveh Razi, Analytical method for pattern generation in five-level cascaded H-bridge inverter using selective harmonic elimination, IEEE Trans. Ind. Electron., vol. 61, no. 11, pp. 1963–1971, Nov. 2014.
[2] J. Napoles, A. Watson, J. Padilla, J. Leon, L. Franquelo, W. Patrick, and M. Aguirre, Selective harmonic mitigation technique for cascaded H-bridge converters with non-equal DC link voltages, IEEE Trans. Ind. Electron., vol. 60, no. 5, pp. 1963–1971, May 2013.
[3] J. Napoles, J. I. Leon, R. Portillo, L. G. Franquelo, and M. A. Aguirre, Selective harmonic mitigation technique for high-power converters, IEEE Trans. Ind. Electron., vol. 57, no. 7, pp. 2315–2323, Jul. 2010.
[4] L. G. Franquelo, J. Napoles, R. Portillo, J. I. Leon, and M. A. Aguirre, A flexible selective harmonic mitigation technique to meet grid codes in three-level PWM converters, IEEE Trans. Ind. Electron., vol. 54, no. 6, pp. 3022–3029, Dec. 2007.
[5] C. Buccella, C. Cecati, and M. G. Cimoroni, Investigation about numerical methods for selective harmonics elimination in cascaded multilevel inverters, in Proc. Int. Conf. ESARS Propulsion, Bologna, Italy, Oct. 2012, pp. 1–6.
[6] B. Sanzhong and S. M. Lukic, New method to achieve AC harmonic elimination and energy storage integration for 12-pulse diode rectifiers, IEEE Trans. Ind. Electron., vol. 60, no. 7, pp. 2547–2554, Jul. 2013.
[7] S. R. Pulikanti, G. Konstantinou, and V. G. Agelidis, Hybrid seven-level cascaded active neutral-point-clamped-based multilevel converter under SHE-PWM, IEEE Trans. Ind. Electron., vol. 60, no. 11, pp. 4794–4804, Nov. 2013.
[8] F. Filho, H. Z. Maia, T. H. A. Mateus, B. Ozpineci, L. M. Tolbert, and J. O. P. Pinto, Adaptive selective harmonic minimization based on ANNs for cascade multilevel inverters with varying DC sources, IEEE Trans. Ind. Electron., vol. 60, no. 5, pp. 1955–1962, May 2013.
[9] R. Salehi, B. Vahidi, N. Farokhnia, and M. Abedi, Harmonic elimination and optimization of stepped voltage of multilevel inverter by bacterial foraging algorithm, J. Elect. Eng. Technol., vol. 5, no. 4, pp. 545–551, Dec. 2010.
[10] M. T. Hagh, H. Taghizadeh, and K. Razi, Harmonic minimization in multilevel inverters using modified species-based particle swarm optimization, IEEE Trans. Power Electron., vol. 24, no. 10, pp. 2259–2267, Oct. 2009.
[11] H. L. Liu, G. H. Cho, and S. S. Park, Optimal PWM design for high power three-level inverter through comparative studies, IEEE Trans. Power Electron., vol. 10, no. 1, pp. 38–47, Jan. 1995.
[12] J. Kumar, B. Das, and P. Agarwal, Selective harmonic elimination technique for a multilevel inverter, in Proc. 15th NPSC, Bombay, India, Dec. 2008, pp. 608–613.
[13] L. Li, D. Czarkowski, Y. Liu, and P. Pillay, Multilevel selective harmonic elimination PWM technique in series-connected voltage inverters, IEEE Trans. Ind. Appl., vol. 36, no. 1, pp. 160–170, Jan./Feb. 2000.
[14] V. G. Agelidis, A. Balouktsis, and I. Balouktsis, On applying a minimization technique to the harmonic elimination PWM control: The bipolar waveform, IEEE Power Electron. Lett., vol. 2, no. 2, pp. 41–44, Jun. 2004.
[15] V. G. Agelidis, A. Balouktsis, and C. Cosar, Multiple sets of solutions for harmonic elimination PWM bipolar waveforms: Analysis and experimental verification, IEEE Trans. Power Electron., vol. 21, no. 2, pp. 415–421, Mar. 2006.


Implementation of Wireless Patient Body Monitoring System using RTOS


Gunalan M.C.1, Satheesh A.2
1PG Scholar, 2Prof. & Dean/EEE
1M.E - Embedded System Technologies, Department of Electrical and Electronics Engineering
Nandha Engineering College, Erode
guh1820@gmail.com1, asatheeshnec@rediffmail.com2

Abstract - In the past decades, the requirements of the health care field have been rising rapidly, and therefore well-equipped,
efficient monitoring systems are needed for health care centers. In most hospitals, manual inspection is done in order to collect
records of a patient's condition, and continuous, frequent monitoring of patients is required based on their health state. This leads to
disadvantages like long measurement times, low monitoring precision and the deployment of more manpower; this paper therefore provides a
fully automated, wireless monitoring system.
In this paper, a wireless network is created for remotely monitoring a patient's health parameters such as temperature, ECG, heartbeat,
coma recovery and saline level indication. All these parameters are continuously measured using appropriate, efficient, low-cost
modules designed for each parameter. The measured data from the patients are transferred to a central monitoring station
via Zig-bee. A PC acts as the central monitoring station, running LabVIEW to monitor the parameters.
Index Terms - RTOS, Wireless, Zig-bee, LabVIEW, body monitoring, patient monitoring
1. Introduction
In the present scenario, automation of patient health parameter measurement is advancing rapidly. When automated measurements are
implemented, each patient is given a dedicated system, which does not work in a centralized mode of operation. If a patient is admitted to
the ICU, rigorous monitoring of health parameters is carried out, but if a patient is admitted to a normal ward, such advanced
measurement systems do not exist. In such cases a nurse goes to the ward and measures the patient's body parameters at
certain intervals of time. During this manual measurement there is a chance of losing accuracy due to inefficient
nursing, and the measurement records taken by the nurses are analyzed by doctors as the reference for disease diagnosis.
If the measurement goes wrong, the diagnosis fails or is misleading. This conventional approach not only wastes massive nursing
manpower; the aggregation and query analysis of the measurement results is also cumbersome, and it cannot provide feedback in
time when a patient develops a special condition, which can delay treatment. Through analysis we can see that this
kind of approach has major limitations, especially for patients with infectious diseases, whom monitoring personnel find
inconvenient to contact.
So, aiming at this problem, using sensor technology, single-chip microprocessor technology, etc., we design a wireless
remote monitoring system. This system uses wireless communication (Zig-bee) technology, which eliminates the manual
measurements. By monitoring each patient sub-system in real time, as well as communicating with the central monitoring station, we can
increase work efficiency, data reliability, etc.
2. SYSTEM ANALYSIS
2.1 EXISTING SYSTEMS
In this section we describe the existing and proposed systems as follows.
2.1.1 In-Home Wireless Monitoring Of Physiological Data for Heart Failure Patients
This system proposes an integrated system (hardware and software) for real-time, wireless, remote acquisition of cardiac and other
physiologic information from HF patients while in the home environment. Transducers for measurement of the electrocardiogram (ECG),
heart rate variability (HRV) and acoustical data are embedded into the patient's clothing for unobtrusive monitoring and early, sensitive
detection of changes in physiologic status. The sampling rate for this system is 1 kHz per channel. Signal conditioning is performed in

hardware by the patient-wearable system, after which the information is wirelessly transmitted to a central server located elsewhere in the
home for signal processing, data storage and data trending. The dynamic frequency ranges for the ECG and heart sounds (HS) are
0.05-160 Hz and 35-1350 Hz, respectively. The range of operation for the current patient-wearable physiologic data capture design is
10010 feet with direct line-of-sight to the home server station. Weight measurements are obtained directly by the in-home medical
server using a digital scale. Physiologic information (ECG, HRV, HS and weight) is dynamically analyzed using a combination of
LabVIEW (National Instruments, Inc.; Austin, TX) and MATLAB (MathWorks, Inc.; Natick, MA) software strategies.
Software-based algorithms detect out-of-normal or alarm conditions for HR and weight as defined by
the health care provider, information critical for HF patients. Health care professionals can remotely access vital data for improved
management of heart failure.
2.1.2 A wireless surface electromyography system
Surface electromyography (SEMG) systems are utilized throughout the medical industry to study abnormal electrical activity of the
human muscle. Historically, SEMG systems employ surface (skin) mounted sensors that transmit electrical muscle data to a computer
base via an umbilical cord. A typical SEMG analysis may exercise multiple sensors, each representing a unique data channel,
positioned about the patient's body. Data transmission cables are linked between the surface mounted sensor nodes and a backpack
worn by the patient. As the number of sensors increases, the patient's freedom of mobility decreases due to the lengthy data cables
linked between the surface sensors and the backpack. An N-channel wireless SEMG system has been developed based on the ZigBee
wireless standard. The system includes N-channels, each consisting of a wireless ZigBee transmitting modem, an 8-bit
microcontroller, a low-pass filter and a pre-amplifier. All channels stream data to a central computer via a wireless receiving modem
attached directly to the computer. The data is displayed to the user through graphical development software called LabView. The
wireless surface electromyography (WSEMG) system successfully transmits reliable electrical muscle data from the patient to a
central computer. The wireless EMG system offers an attractive alternative to traditional wired surface electromyography systems as
patient mobility is less compromised
2.1.3 Automatic Mental Health Assistant: Monitoring and Measuring Nonverbal Behavior of the Crew During Long-Term
Missions
This system presents a method for monitoring the mental state of small isolated crews during long-term missions (such as space
mission, polar expeditions, submarine crews, meteorological stations, and etc.) The research is done as a part of Automatic Mental
Health Assistant (AMHA) project which aims to develop set of techniques for automatic measuring of intra- and inter- personal states
in working groups. The method is focused on those aspects of psychological and sociological states that are crucial for the
performance of the crew. In particular, we focus on measuring of emotional stress, initial signs of conflicts, trust, and ability to
collaborate. The developed method is also currently tested by usage of a web-based platform.
2.1.4 DRAWBACKS
The above-mentioned three systems have some drawbacks, as follows:
Only one of the parameters is taken and measured
Long measurement time
Low monitoring precision
Difficulty in monitoring the patient
Connection of many instruments is a tedious process
Difficulty in monitoring patient body temperature by thermometer
Heart beat is measured manually

Coma patients should be monitored closely in person


No indication of saline level
2.2 PROPOSED SYSTEM
This paper uses a wireless medium for communication between the sub-systems and the main monitoring station. In this system the following
parameters are measured:
Body temperature
Saline level indication
Coma level indication
Heartbeat counter
EMG (Electromyogram)
ECG (Electrocardiogram)
These parameters are measured continuously at specific intervals of time, and the data are collected by the monitoring sub-system. The data
are then sent from the sub-system to the main monitoring station via the Zig-bee network. The data are fetched and processed by the
software (LabVIEW). If a parameter goes beyond the predefined values, an SMS is immediately sent to the concerned doctor indicating that
the patient is in a serious state.
The block is as follows

Fig 2.1 System Block Diagram
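In the proposed system this decision logic runs inside LabVIEW on the central monitoring PC; the Python fragment below is only an illustration of the threshold-and-SMS idea, and the limit values and the send_sms() helper are hypothetical placeholders.

LIMITS = {"temperature_C": (35.0, 38.5), "heartbeat_bpm": (50, 120)}   # assumed thresholds

def check_patient(sample, send_sms):
    """Compare one set of measurements against the limits and raise an SMS alert if needed."""
    alerts = [name for name, (lo, hi) in LIMITS.items()
              if name in sample and not (lo <= sample[name] <= hi)]
    if alerts:
        send_sms("Patient in serious state: " + ", ".join(alerts))
    return alerts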


2.2.1 ADVANTAGES
Eliminates the manual measurement and monitoring processes
Temperature measurement has high accuracy as an LM35 sensor is used
The patient's status is sent effectively to the doctor via SMS
The status of the patient is monitored almost instantly with high accuracy
All the parameters are embedded into a single system which is easy to handle by a normal person
2.3 FEASIBILITY STUDY

In our analysis, the methodology we use is more feasible than the existing methods of measuring patient body parameters.
2.3.1 ECONOMICAL FEASIBILITY

The existing methods are not cheap because they have many disadvantages; for example, each system is designed to measure
only a specific parameter. The existing systems are also costly, and they must be kept at a certain
temperature for perfect working, so air conditioners are used to maintain these systems, which consume much electricity
and increase the electricity bill considerably.
2.3.2 OPERATIONAL FEASIBILITY

When compared with the existing methods, the proposed system is not as complex, because no manual
operations are carried out. All the equipment is controlled from a PC by software. No manual attention is needed until the
emergency alarm rings, and an alert SMS is sent to the concerned doctor; for that we simply have to feed in the doctor's mobile
number. If anything bad occurs, the doctor is informed through a message.
2.3.3 TECHNICAL FEASIBILITY

The existing methods require a trained person to operate the system; not everyone can operate it easily, and if a problem occurs at the user
end it is not easy to solve. Our system is very easy to operate, and an ordinary person who knows how to operate a PC can use the
software very easily for monitoring purposes.
3 SOFTWARE DESCRIPTION
We are using two software packages in this paper; they are explained below.
3.1 NI LabVIEW
LabVIEW (Laboratory Virtual Instrumentation Engineering Workbench) is a platform and development environment for a visual
programming language from National Instruments. The purpose of such programming is to automate the usage of decision-making
and measuring equipment in a laboratory setup. The graphical language, named "G", was originally released for Apple Mac
systems. LabVIEW is commonly used for data acquisition, complex processing, instrument control, industrial automation, etc., on
various platforms including Microsoft Windows, UNIX, Linux and Mac OS X. Recent versions of LabVIEW provide more
features and interface modules.
4. Paper Descriptions

4.1 PROBLEM DEFINITION

Long measurement time
Low monitoring precision
Difficulty in automatic monitoring of the patient
Connection of many instruments is a tedious process
Difficulty in monitoring patient body temperature by thermometer
Heart beat is measured manually
Coma patients should be monitored closely in person
No indication of saline level

4.2 OVERVIEW OF THE PAPER


Nowadays every instrument is automated. In the medical field also, automation is developing very rapidly. Large hospitals and
medical research centres are moving towards automation, but the cost of implementing automated systems is very high: for each patient
an individual monitoring system has to be kept.
This drastically increases the implementation cost and also the space occupied by the system. To overcome this problem, this paper has been
framed with methodologies which can be used for this monitoring system.
This paper presents a wireless patient body monitoring system in which Zig-bee is used for wireless communication. The sub-systems are
integrated with the main monitoring server through a mesh network formed using Zig-bee communication.
4.3 MSP430F5438
The MSP430F5438 microcontroller belongs to the MSP430 family of ultralow-power microcontrollers, a product of Texas
Instruments. This device offers several different sets of peripherals targeted at various applications. The architecture supports five
low-power modes, optimized to extend battery life in portable high-precision monitoring and control applications. This microcontroller
has a powerful 16-bit RISC CPU and constant generators that provide maximum code efficiency.
The digitally controlled oscillator (DCO) allows the microcontroller to wake up from low-power modes to active high-performance
mode in milliseconds. The MSP430F5438 microcontroller integrates a high-performance analog-to-digital (A/D) converter,
universal serial communication interfaces (USCI), three 16-bit timers, a real-time clock module with alarm capabilities, a hardware
multiplier, DMA and up to 87 I/O pins.

Applications include analog and digital sensor systems, digital timers, digital motor control, thermostats,
hand-held meters, remote controls, etc.
4.4 GSM MODEM (900/1800 MHZ)
A GSM modem can accept any GSM network operator SIM card and act just like a mobile phone with its own unique mobile number.
The advantage of using this modem is that its RS232 port can be used to communicate and develop embedded applications.
Applications such as SMS control, data transfer, remote control and logging can be developed easily.
This modem can either be connected to a PC serial port directly or to any microcontroller through RS232. It can be used for sending
and receiving SMS and calls. It can be used in GPRS mode to interface with the internet and perform applications such as data logging,
decision making and control. In GPRS mode it can also connect to any remote FTP server and upload files for data logging.
This modem is a plug-and-play, highly flexible, quad-band SIM900A GSM modem for direct and easy integration into RS232
applications.
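A hedged sketch of how such a modem is typically driven over RS232 is shown below (Python with the pyserial package, standard AT+CMGF/AT+CMGS text-mode commands); the port name, phone number and timing are placeholders and error handling is omitted.

import time
import serial  # pyserial

def send_sms(port, number, text):
    """Send one text-mode SMS through a SIM900A-class modem on the given serial port."""
    with serial.Serial(port, 9600, timeout=2) as modem:
        modem.write(b"AT+CMGF=1\r")                      # select SMS text mode
        time.sleep(0.5)
        modem.write(f'AT+CMGS="{number}"\r'.encode())    # start a message to the given number
        time.sleep(0.5)
        modem.write(text.encode() + b"\x1a")             # message body terminated by Ctrl-Z
        time.sleep(3)
        return modem.read_all()                          # modem response, e.g. +CMGS / OK

# Example with placeholder values: send_sms("/dev/ttyUSB0", "+911234567890", "Patient in serious state")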
4.4.1 Applications
SMS based Remote Control & Alerts
Security Applications
Sensor Monitoring
GPRS Mode Remote Data Logging
4.4.2 Features
Status of Modem Indicated by LED
Simple to Use & Low Cost
On board switching type power supply regulator
RS232 output
4.5 MicroC/OS-II (µC/OS-II)
µC/OS-II is a completely portable, ROMable, scalable, preemptive, real-time multitasking kernel. µC/OS-II is written in ANSI C
and contains a small portion of assembly language code to adapt it to different processor architectures. To date, µC/OS-II has been
ported to many different processor architectures.
µC/OS-II is based on µC/OS, The Real-Time Kernel, which was created first. Millions of people around the world are using µC/OS and
µC/OS-II in all kinds of applications, such as cameras, highway telephone call boxes, avionics, high-end audio equipment, medical
instruments, musical instruments, network adapters, ATM machines, industrial robots, engine controls, and more. Numerous colleges
and universities have also used µC/OS and µC/OS-II to teach students about real-time systems.
µC/OS-II is upward compatible with µC/OS v1.11 (the last released version of µC/OS) but provides many improvements. If you
currently have an application that runs with µC/OS, it should run virtually unchanged on µC/OS-II. All of the services (i.e., function calls)
provided by µC/OS have been preserved for backward compatibility. You may, however, have to change include files and product build files
to point to the new filenames.
4.5.1 Features
Portable
ROMable
Scalable

Preemptive
Multitasking
Deterministic Execution times
Task Stacks
Interrupt Management
Robust and Reliable

4.6 PAPER SNAPS
4.6.1 PATIENT SIDE CIRCUIT
4.6.2 PC SIDE CIRCUIT
4.6.3 PC LABVIEW SNAP

5 CONCLUSION
The patient body monitoring system implemented with an RTOS gives more promising results than the other conventional methods; it
works more effectively in terms of automation than the existing methods. However, there is room for improvement in this project.
In the future, the system will be integrated with the WWW (World Wide Web) so that patient data can be accessed over the internet from any
part of the world. As a result, medical prescriptions and precautions can be given more easily. In a nutshell, this project has high potential
for application in ICU monitoring.

REFERENCES:
[1] N. P. Jain, P. N. Jain, and T. P. Agarkar, "An embedded, GSM based, multiparameter, real-time patient monitoring system and control - an implementation for ICU patients," 2012 World Congress on Information and Communication Technologies (WICT), pp. 987–992, Oct.–Nov. 2012.
[2] R. Watrous and G. Towell, "A patient-adaptive neural network ECG patient monitoring algorithm," Computers in Cardiology 1995, pp. 229–232, 10–13 Sept. 1995, doi: 10.1109/CIC.1995.482614.
[3] U. Varshney, "Enhancing wireless patient monitoring by integrating stored and live patient information," 19th IEEE International Symposium on Computer-Based Medical Systems (CBMS 2006), pp. 501–506, 2006, doi: 10.1109/CBMS.2006.84.
[4] D. Niyato, E. Hossain, and S. Camorlinga, "Remote patient monitoring service using heterogeneous wireless access networks: architecture and optimization," IEEE Journal on Selected Areas in Communications, vol. 27, no. 4, pp. 412–423, May 2009, doi: 10.1109/JSAC.2009.090506.
[5] Ping Wang, "The real-time monitoring system for in-patient based on Zigbee," Second International Symposium on Intelligent Information Technology Application (IITA '08), vol. 1, pp. 587–590, 20–22 Dec. 2008.


VLSI Implementation of Nakagami Variate Generator


Santhosh Kumar
Department of Electronics and Communication Engineering,
University College of Engineering Kakinada, JNTUK, Kakinada
sanjntu@yahoo.co.in
Abstract - The major objective of simulating radio propagation channels is to substantiate the design and performance of wireless
communication systems. In the early stages of analysis and performance evaluation of wireless transceiver designs, fading channel
simulators play a vital role. Software simulators are painless to design, whereas various hardware simulators have been shown to offer
distinct speed advantages over software-based simulators. While Rayleigh and Rician fading channels have already been realized using
field-programmable gate arrays, hardware-based simulators of Nakagami fading channels have received far less attention. Hence this
paper considers the implementation of a Nakagami fading simulator on a single field programmable gate array.
Keywords Field Programmable gate arrays, fading channels, channel simulators, Rayleigh fading, Rician fading, Nakagami fading,

software simulation, Hardware Simulation

INTRODUCTION

Wireless communication systems are required to operate over radio channels in many types of environments and weather
conditions, which makes prototyping a modern wireless communication system difficult. Good voice quality and faultless,
high-rate data transmission are the basic prerequisites of a good wireless communication system. In order to meet these
system specifications, it must be able to yield good results in divergent environments where the radio propagation characteristics vary
considerably. In order to check the system performance we resort to analysis, simulation, prototyping and testing; but testing
a wireless communication system is a laborious process.
Simulation is a strikingly powerful tool widely adopted in virtually all fields of science to help develop a better understanding of
some phenomenon under investigation. In engineering, it is used, for instance, to test equipment, algorithms and
techniques and, to some extent and whenever applicable, to avoid or minimize time-consuming, costly and exhausting field trials.
Wireless communications are no exception, and in this challenging, lively and demanding area, with systems becoming increasingly more
complex, both industry and academia engage themselves in developing simulators. Simulators for wireless communications almost
certainly include a block for the fading channel. The fading channel can be described by a number of models, and among those
available, the general models, namely the Gaussian, Rayleigh, Rician and Nakagami distributions, have been applied to model and
simulate a variety of different channels [1]. Hence we resort to channel simulation. Simulation plays a prominent role in wireless
communication system design and substantiation; in order to evaluate and verify wireless transceiver
designs, fading channel simulators are used [2]. The advantage of simulation is that it allows less expensive testing of designs. While
software simulation of fading channels is simple to develop, hardware-based simulators have exhibited several orders of
magnitude of speed-up over software-based fading channel simulators. Speed is an especially important factor when
simulating the different fading scenarios supported by recent wireless standards. Compared to a software compilation,
the hardware compilation process is less flexible, since each change of the system requires synthesis of the design from a Register
Transfer Level (RTL) model and the place-and-route operations on the FPGA [2]. But, once this is done, the simulation can run at a very
high speed and a precise BER evaluation can be obtained. The testing, verification and evaluation of wireless systems is an important
but challenging endeavour. Such long-running simulations are an ideal target for FPGAs. Hardware channel simulators are
desirable for testing baseband processing units for performance analysis, and also for system design and verification [1].

IMPORTANCE OF NAKAGAMI FADING


The wireless environment is highly unstable, and fading is due to multipath propagation. Multipath propagation leads to rapid
fluctuations of the phase and amplitude of the signal. The presence of reflectors in the environment surrounding a transmitter and
receiver creates multiple paths that a transmitted signal can traverse. As a result, the receiver sees the superposition of multiple copies
of the transmitted signal, each traversing a different path. Each signal copy will experience differences in attenuation, delay and phase
shift while travelling from the source to the receiver [3]. Fading (small-scale signal power fluctuation) is a fundamental characteristic
of wireless channels. Due to the existence of a great variety of fading environments, several statistical distributions have been

proposed for channel modelling of fading envelopes under short-term, long-term and mixed fading conditions. Short-term fading
models include the well-known Rayleigh, Rice, Hoyt and Nakagami distributions [3]. Among the distributions that describe the
statistics of the mobile radio signal well, the Nakagami-m distribution has been given special attention for its ease of manipulation and
wide range of applicability. More importantly, the Nakagami-m distribution has been found to be a very good fit for the mobile radio
channel [7].
The Nakagami-m distribution has found many applications in the technical sciences. It has been shown by extensive empirical
measurement that this distribution is an appropriate model for radio links, and it has been used in many engineering
applications. It is a statistical distribution which can accurately model a variety of fading environments, and it has greater flexibility in
matching some empirical data than the Rayleigh, Lognormal or Rice distributions owing to its characterization of the received signal.
Rayleigh fading is a special case of Nakagami fading, and the Nakagami model possesses good autocorrelation properties. It is used to
study moderate to severe fading channels using distinct values of the parameter m; Nakagami fading reduces to Rayleigh fading
when m = 1. The sum of numerous independent and identically distributed (i.i.d.) Rayleigh-fading signals
yields a signal with Nakagami-distributed amplitude [3]. The Nakagami distribution matches some empirical data better than
other distributions. In order to analyse the statistics and capability of the fading channel environment in complicated media like the urban
environment, we use the Nakagami distribution [7].
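For reference, the Nakagami-m envelope PDF that such a generator must reproduce is the standard form (see, e.g., [7]):

f(r) = [2 m^m / (Γ(m) Ω^m)] · r^(2m−1) · exp(−m r² / Ω),   r ≥ 0, m ≥ 1/2,

where Ω = E[r²] is the mean power of the envelope; setting m = 1 recovers the Rayleigh distribution mentioned above.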

DESIGN APPROACH OF NAKAGAMI FADING CHANNEL SIMULATOR


In order to generate Nakagami variates, we first generate correlated Rayleigh variates. The hardware model
first generates correlated Rayleigh fading variates, and then a sequence of logarithmic- and linear-domain [1]
segmentations, along with piece-wise linear approximations, is used to precisely implement the nonlinear numerical
functions that transform the correlated Rayleigh fading process into Nakagami-m variates [1]. The Rayleigh variates are produced by a
Rayleigh variate generator; distinct types of variate generators can be derived from uniform
random number generators. To produce the random numbers we use an LFSR (Linear Feedback Shift Register); here the LFSR is
used as the random number generator.
GENERATION OF RANDOM NUMBERS USING LFSR
Random numbers are generated by various methods. The two types of generators used for random number generation are the
pseudo random number generator (PRNG) and the true random number generator (TRNG). The generated numbers are random because
no polynomial-time algorithm can describe the relation among the different numbers of the sequence. Numbers may be generated by a true
random number generator (TRNG) or a cryptographically secure pseudo random number generator (CSPRNG). The sources of
randomness in a TRNG are physical phenomena like lightning, radioactive decay, thermal noise, etc. The source of randomness in a
CSPRNG is the algorithm on which it is based.
A linear feedback shift register (LFSR) is a shift register whose input bit is a linear function of its previous state. The only linear
function of single bits is XOR; thus it is a shift register whose input bit is driven by the exclusive-or (XOR) of some bits of the overall
shift register value [5].

Fig.1. Schematic of 12-bit LFSR [4]


Here we are using a 12-bit LFSR; hence the number of random values generated is 4K (i.e. 4096 numbers). In order to generate an
LFSR with a long sequence we have to choose an appropriate feedback polynomial, such as those presented by Xilinx. The construction of
a 12-bit LFSR using simple D flip-flops is shown in Fig. 1.
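A software sketch of such a generator is shown below in Python; the feedback taps (12, 6, 4, 1) follow a commonly quoted maximal-length polynomial for a 12-bit LFSR and should be treated as an assumption rather than the exact hardware of Fig. 1.

def lfsr12(seed=0xACE, count=8):
    """Emit 'count' successive 12-bit states of a Fibonacci LFSR with taps 12, 6, 4, 1."""
    state, out = seed & 0xFFF, []
    for _ in range(count):
        bit = ((state >> 11) ^ (state >> 5) ^ (state >> 3) ^ state) & 1   # XOR of the tapped bits
        state = ((state << 1) | bit) & 0xFFF                              # shift left, feed the XOR back in
        out.append(state)                                                 # one 12-bit pseudo-random word per clock
    return out

print([hex(x) for x in lfsr12()])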

GENERATION OF RAYLEIGH VARIABLES


The generation of different types of random numbers using LFSRs is shown in Fig. 2. In order to generate different random
variables from a uniform random number generator, the Box–Muller algorithm and the inverse transformation method play a vital role. The
conclusion of the Box–Muller algorithm is as follows [6]: if U1 and U2 are two independent random variables uniformly distributed on the
interval (0,1), then consider the two random variables X1, X2 such that

X1 = √(−2 ln U1) · cos(2π U2)    (1)

X2 = √(−2 ln U1) · sin(2π U2)    (2)

Then (X1, X2) is a pair of independent random variables following the normal distribution with zero mean and unit variance. Coming to the
architectural design, the uniform random numbers are generated by 12-bit LFSRs, and a look-up table (LUT) is used to calculate and store
the values of X1 and X2, since the LFSR yields 4096 random numbers [4].
The LUT uses 4 RAMs such that each RAM can accommodate 1024 states (i.e. 1K × 16-bit) for each expression [4]. The value sigma is
the variance (σ²) of the Rayleigh distribution. A non-restoring divider is used to divide the value of X2 by the value of
lambda (λ) to yield the exponential distribution. The Rayleigh-distributed output has only a magnitude (r) and is converted into a
complex number (ci + j·cq) with two variates ci and cq having zero mean and equal variance, such that rR² = ci² + cq² is the squared modulus
of the complex number [2].
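The chain from uniform variates to a complex Rayleigh-enveloped sample can be sketched in software as below (Python); sigma = 2 follows the simulation section, while everything else is only an illustrative floating-point model of the fixed-point LUT hardware.

import numpy as np

def rayleigh_samples(n, sigma=2.0, rng=np.random.default_rng(0)):
    """Return n complex samples ci + j*cq and their Rayleigh-distributed envelopes."""
    u1 = 1.0 - rng.random(n)                                        # uniform in (0, 1], the LFSR's role
    u2 = rng.random(n)
    x1 = np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)      # Eq. (1)
    x2 = np.sqrt(-2.0 * np.log(u1)) * np.sin(2.0 * np.pi * u2)      # Eq. (2)
    c = sigma * (x1 + 1j * x2)                                      # zero-mean complex Gaussian
    return c, np.abs(c)                                             # the envelope |c| is Rayleigh distributed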

Fig2. Architecture of distinct random number generators [4]


DESIGN OF NAKAGAMI FADING CHANNEL SIMULATOR

Fig.3. Nakagami variate Generator [1]

A high-level block diagram of the Nakagami variate generator is shown in Fig. 3. First the Rayleigh variate generator block
generates a sequence of zero-mean unit-variance complex Gaussian random variates c = ci + j·cq, and the squared envelope rR² of
the corresponding Rayleigh process is calculated as rR² = ci² + cq². Then the transfer function g(rR²) is approximated as in [1], and the
in-phase and quadrature components are found by multiplying the transfer function by ci and cq, respectively.

GENERATION OF NAKAGAMI VARIATES


A uniform random variable U can be transformed into a Nakagami-m random variable nN using the nonlinear transformation of its
ICDF, as in the equation below [1]:

nN = FN⁻¹(U)    (3)

where FN⁻¹ is the inverse Nakagami CDF. In this process, first Rayleigh RVs with the desired ACF are generated. A sequence of Rayleigh
random variates rR can be transformed into samples with a uniform distribution and the same ACF between samples using the inverse
CDF transformation [1]:

U = FR(rR) = 1 − exp(−rR² / (2σ²))    (4)

Then the uniform random variates are transformed into Nakagami variates using Eq. (3). A useful rational proportional approximation for
FN⁻¹(U) is proposed as follows [1]:

FN⁻¹(U) ≈ η(U) + [a1 η(U) + a2 η(U)² + a3 η(U)³] / [1 + b1 η(U) + b2 η(U)²]    (5)

where η(U) = …    (6)

For a given value of m, the five coefficients a1, a2, a3, b1 and b2 are calculated to minimize the approximation error; suitable
coefficient values for different values of m are given in [7]. The generated Nakagami variates are then

nN = g(rR²) · (ci + j·cq)    (7)

where the transfer function is

g(rR²) = FN⁻¹(1 − exp(−rR² / (2σ²))) / √(rR²)    (8)
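A floating-point software sketch of this transformation is given below in Python; it uses the exact inverse Nakagami CDF (obtained through the Gamma distribution in scipy) in place of the rational approximation of Eq. (5), and the m, Ω and σ values are illustrative.

import numpy as np
from scipy.stats import gamma

def nakagami_from_rayleigh(c, sigma=1.0, m=2.0, omega=1.0):
    """Map complex Rayleigh samples c = ci + j*cq to Nakagami-m faded samples (Eqs. 3-8)."""
    r2 = np.abs(c) ** 2                                   # squared Rayleigh envelope
    u = 1.0 - np.exp(-r2 / (2.0 * sigma ** 2))            # Eq. (4): Rayleigh CDF
    n_env = np.sqrt(gamma.ppf(u, m, scale=omega / m))     # exact F_N^{-1}(u) via the Gamma distribution
    g = n_env / np.sqrt(r2)                               # Eq. (8): transfer function g(r_R^2)
    return g * c                                          # Eq. (7): Nakagami variate keeping the Rayleigh phase and ACF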


RESULTS AND SIMULATIONS


This architecture of the random number generator and the Nakagami variate generator was implemented practically in Xilinx ISE.
The following figures show the practical results of the various random number generators, the LFSR and the Nakagami variate generator.
The complete architecture was implemented on a Xilinx device. The simulation results for sigma = 2 are shown in Fig. 4
and Fig. 5. It is clear from the simulation results that numbers for all the distributions are generated in every clock cycle.

Fig.4. Simulation Results of 12-bit LFSR

Fig5. Simulation results of Rayleigh and Nakagami random variate generators.

Fig. 6. The contrast between the standard Nakagami PDF (software simulation) and the hardware-simulated Nakagami PDF

Fig. 6 shows the similarity between the hardware-simulated Nakagami PDF and the standard Nakagami PDF.

CONCLUSION
This discussion has presented the design approach of an adequately performing random variate generator with an accurate Nakagami
distribution, together with a Nakagami fading channel simulator. In order to study and investigate wireless
communication systems thoroughly, we use both the Rayleigh and Nakagami distributions. The Rayleigh variates are generated
from the uniform random number generator; here we use LFSRs as the random number generator, and by processing the results of the
Box–Muller algorithm through the look-up table (LUT) circuits and using the value of sigma, the Rayleigh variates are obtained. The
Nakagami simulator proposed here transforms the time-correlated Rayleigh variates into Nakagami variates.

REFERENCES:
[1] A. Alimohammad, S. Fouladi Fard, and B. F. Cockburn, Hardware implementation of Nakagami and Weibull variate generators, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 20, no. 7, Jul. 2012.
[2] A. Alimohammad and B. F. Cockburn, Modeling and hardware implementation of Rayleigh and Rician fading channel simulators, IEEE Trans. Veh. Technol., vol. 57, no. 4, pp. 2055–2069, Jul. 2012.
[3] Sarmad Fakhrulddin Ismael and Basil Shukr Mahmood, Architectural design of random number generators and their hardware implementations, March 2014.
[4] Efficient Shift Registers, LFSR Counters, and Long Pseudo-Random Sequence Generators, http://www.xilinx.com/support/documentation/application_notes/xapp052.pdf
[5] G. Box and M. Muller, A note on the generation of random normal deviates, Annals of Mathematical Statistics, vol. 29, 1958, pp. 610–611.
[6] Elina Pajala, Tero Isotalo, Abdelmonaem Lakhzouri, and Elena Simona Lohan, An improved simulation model for Nakagami-m fading channels for satellite positioning applications, in Proceedings of the 3rd Workshop on Positioning, Navigation and Communication (WPNC'06), p. 81.
[7] A. Papoulis and S. U. Pillai, Probability, Random Variables and Stochastic Processes, 4th ed. New York: McGraw-Hill, 2002.
[8] Xilinx Inc., San Jose, CA, Xilinx UG190: Virtex-5 FPGA User Guide, 2009.
[9] J. G. Proakis, Digital Communications, 4th ed. New York: McGraw-Hill, 2001.
[10] G. L. Stüber, Principles of Mobile Communication. New York: Kluwer Academic Publishers, 2001.
[11] David Bishop, Fixed point package user's guide, http://www.vhdl.org/vhdl-200x/vhdl-200x-ft/packages/files.html, 2006.
[12] W. C. Jakes, Microwave Mobile Communications. Piscataway, NJ: Wiley-IEEE Press, 1994.


Computer Aided Design of Power Transmission System


Dr. Mahmoud M. A. SAYED
Assist. Prof. - Mechanical Engineering Department,
Canadian International College, Cairo, Egypt.
Phone: +202/ 24728989
Mobile: +20 1001028826 / +201120664464
m_m_sayed@cic-cairo.com

ABSTRACT - It is widely recognized that today's demands on mechanical power transmission systems (MPTS) are
characterized, on a worldwide basis, by an increasing variety of system combinations. This calls for the development
and use of computer programs for designing such systems quickly and accurately.
An important reason for using computer-aided design (CAD) integrated in the design of MPTS is that it offers the
opportunity to develop the components, units and drives constituting the MPTS. It is the goal of CAD of MPTS not only to
automate the design of these components and drive units individually, but also to automate the design of the integrated
MPTS as a whole. The expert system for CAD of MPTS proposed in this work is designed in a modular way in order to
make it applicable both in an integrated form and in a stand-alone mode; it is capable of choosing the suitable units and
drives constituting the MPTS according to the prespecified design data and of designing them.

KEY WORDS
Computer - aided design (CAD), computer - aided manufacturing (CAM), integrated systems (IS), computer integrated
design, (CID), computer aided construction (CACON), computer - aided calculation (CACAL) , computer - aided data
program (CADA), speed ratio (SR), machine design, and mechanical power transmission system, (MPTS).

INTRODUCTION
In general, the Mechanical Power Transmission System (MPTS) is used in many fields to fulfill the requirements of
industrial as well as civilian or military applications, namely to transmit mechanical power from the prime mover
(PM), as the power source, to the load, as the power consumer; see Fig. (1).
Fig. (1) Functional location of the mechanical power transmission system (MPTS): power flows from the PM, through the MPTS, to the load.

The mechanical power transmission system may be a single unit (a belt, chain or gear drive) or a combination of these
different types of drives and others. Gears are the essential components in the general area of power transmission,
although belts, chains or sometimes couplings are also used to transmit the power according to the prespecified design
data, namely the center distance, the speed ratio and the angle between the input and output shafts. One should carefully
evaluate the merits and disadvantages of gear drives as compared to belt and chain drives before incorporating either into
a mechanical power transmission system.
Design work is the process of translating the original description of yet non-existing equipment into a form defining how
it can be built. The design process involving man-machine interaction is referred to as Computer-Aided Design.
CAD/CAM has been utilized in engineering practice in many ways, including drafting, design, simulation, analysis and
manufacturing [1]. Generally, CAD is the use of the computer to do the following jobs:
- Designing machine and structural elements,
- Optimizing the design and modifying it in a relatively short time in an easy way, and
- Modeling and simulation of a system.

For many people, the words Computer-Aided Design conjure up an image of an engineer sitting at a graphic display terminal creating and analyzing the physical behaviour of new designs, or performing their corresponding calculations. While this is certainly an important or even indispensable application, this type of activity represents a relatively small fraction of all computer-related work in mechanical design and in fact does little to address the most critical problem facing most designers, namely the synthesis, analysis, construction and design of the MPTS feeding multi-loads with mechanical power from a PM. It has been asserted that engineering systems may be modeled as modular subsystems with identifiable inputs and outputs to be interconnected in network fashion. Mathematical concepts of this type are studied as graph theory. It is not surprising, therefore, to see substantial use of graph-theoretical concepts in computer aided design of integrated systems, CAD of IS, and it seems natural to use graph-theoretical concepts as a basis for understanding the data structure of any MPTS. It is the goal of the CAD of MPTS not only to automate the design of these components and drive units individually, but also to automate the design of the integrated MPTS.
This system should not only enable the user to solve special design problems, but also include data and algorithms necessary for different purposes. CAD of MPTS should, therefore, be designed in a modular way in order to make it applicable both in an integrated form and in a stand-alone mode.
For this reason, a CAD program has been developed as an expert system, ES. The developed expert system for the design of mechanical power transmission systems is capable of choosing the suitable units or drives which are necessary to be used in a mechanical power transmission system, according to the prespecified design data, namely the power to be transmitted, single or multiple speed ratios, center distance and the angle between the input and output shafts in the drive, see Fig. (2). Also, the presented expert system solves the design problem of gear boxes through power-path and speed-stage analysis and finally through geometrical analysis and check calculations of tooth root and surface strength against bending and wear failure, respectively, for each gear stage in the gear box under investigation.
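To make the selection step concrete, the sketch below shows one way such prespecified design data (power, speed ratio, centre distance, shaft angle) could drive a simple rule-based choice between belt, chain and gear drives. The numeric thresholds and the function select_drive are illustrative assumptions only, not the rules implemented in the present expert system.

```python
# Illustrative rule-based selection of a drive type from prespecified design data.
# The threshold values are assumptions chosen purely for demonstration.

def select_drive(power_kw, speed_ratio, centre_distance_m, shaft_angle_deg):
    """Return a candidate drive type for one stage of an MPTS."""
    if shaft_angle_deg != 0:
        # Non-parallel shafts: bevel/worm gearing is the usual choice.
        return "gear drive (bevel/worm)"
    if speed_ratio > 6:
        # Large ratios are normally split over several gear stages.
        return "multi-stage gear box"
    if centre_distance_m > 1.5 and power_kw < 50:
        # Long centre distances at moderate power favour belts or chains.
        return "belt or chain drive"
    return "single-stage gear drive"

if __name__ == "__main__":
    print(select_drive(power_kw=10, speed_ratio=4.9,
                       centre_distance_m=0.4, shaft_angle_deg=0))
```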


Fig. (2) An Expert System for the Design of Mechanical Power Transmission Systems, ES-D-MPTS. The design data (power to be transmitted, layout of input and output shafts, constant or variable speed, speed of the input shaft, speed or speeds of the output shaft, center distance between input and output shafts, number of design data for the necessary stages in the GB, specifications for each gear stage, and operating conditions) are fed to the expert system, which performs the analysis and design of the MPTS and returns the suitable units or drives included in the MPTS.

AUTOMATION OF THE DESIGN PROCESS


An important reason for using CAD of multi-loads MPTS is that it offers the opportunity to develop the components, units and drives necessary for the respective MPTS. In the conventional design of such transmission systems, the synthesis and analysis as well as the MPTS construction were carried out by the designer, which was both time consuming and involved duplication of effort by design personnel. The most effective way to improve the synthesis, analysis, construction and design of the suitable components, particularly for the multi-loads MPTS, is to completely eliminate the interactive component selection from the CAD of the multi-loads MPTS by selecting these components suitably and entirely by the computer, using an overall program aimed at automating the requisition cycle of the multi-loads MPTS. The maximum benefit of CAD of multi-loads MPTS is only possible by an integration of the synthesis,

analysis, construction and design of the suitable components necessary for this MPTS. This work presents a methodology for automatically performing the complete design process for multi-loads MPTS from the loads data matrix, which can select, construct and design the suitable mechanical components in this MPTS, satisfying both loads layout and speed requirements, without any human intervention. Computer Integrated Design, CID, systems have emerged as a means of upgrading the complete design process (synthesis, analysis, construction and design calculations) to assist the design personnel in improving the overall design process. CACON/CACAL implies an integrated process where the computer technology is incorporated in the construction and design of multi-loads MPTS.

The system should not only enable the user to solve special design problems, but also include data and algorithms for different purposes. CAD of IS should therefore be designed in a modular way in order to make it applicable both in an integrated form and in a stand-alone one. Any component should be programmed as an independent unit and used whenever it is required. In an integrated CAD system, direct links are established between the different components and drive units constructing the transmission system, as well as between the different design phases. It is the goal of CAD of IS not only to automate the design of the components and drive units individually, but also to automate the design of the integrated system as a whole.
Expert systems are software facilities used to display the modules (modular subsystems with identifiable inputs and outputs) interconnected in a network fashion describing the engineering system. MPTS, by virtue of such a nature, are good candidates for Expert System applications. An Expert System combines the designer's experience, gained in the selection and design of MPTS, with the computer facilities (speed, memory and computational ability), which reduces the required skill, improves the overall efficiency as compared to a manual or even semi-automated design process, eliminates the duplication of data and saves a great deal of effort and cost.

DESCRIPTION OF COMPUTER PROGRAM


The strategy for automated design of MPTS, or Computer Integrated Design, CID, systems, illustrated in Fig. (3), comprises the following three systems:
- The first system is a computer aided construction system, CACON, which is aimed at developing subprograms for the selection of the mechanical components in the MPTS.
- The intermediate system is a computer aided data program, CADA, which proposes a technique aimed at filling the gap between the foregoing CACON and the next CACAL.
- The last system is a computer aided calculation system, CACAL, which proposes a technique aimed at providing computer aided calculation sequence planning applied to the components in the MPTS, which reduces the required skill and hence improves the overall efficiency as compared to the human activity.
Such a selection procedure is repeated if multi-loads are to be fed with power from a prime mover using the same setup. This work is incorporated within a computer program.
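A minimal sketch of how the three subsystems could be chained as independent, reusable modules is given below; the function names cacon_select, cada_generate and cacal_design are hypothetical placeholders chosen only to mirror the CACON/CADA/CACAL split described above, and the per-stage ratio limit is an assumption.

```python
# Hypothetical modular CID pipeline: CACON -> CADA -> CACAL.
# Each stage is an independent function so it can also be used stand-alone.

def cacon_select(load_spec):
    """CACON: pick the components needed for one load (illustrative)."""
    stages = []
    ratio = load_spec["input_rpm"] / load_spec["output_rpm"]
    while ratio > 1.0:
        stage_ratio = min(ratio, 6.0)      # assumed per-stage ratio limit
        stages.append({"type": "gear stage", "ratio": stage_ratio})
        ratio /= stage_ratio
    return stages

def cada_generate(stages, load_spec):
    """CADA: attach design data (input speed, power) to every component."""
    speed = load_spec["input_rpm"]
    for stage in stages:
        stage["input_rpm"] = speed
        stage["power_kw"] = load_spec["power_kw"]
        speed /= stage["ratio"]
    return stages

def cacal_design(stages):
    """CACAL: run the design calculations for each component (stubbed)."""
    return [{**s, "designed": True} for s in stages]

if __name__ == "__main__":
    load = {"input_rpm": 3000, "output_rpm": 20, "power_kw": 15}
    print(cacal_design(cada_generate(cacon_select(load), load)))
```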


Fig. (3) Computer Integrated Design, CID, systems for MPTS: the CACON system (selection of the necessary components in the MPTS), the CADA system (generation of the design data for each component in the MPTS) and the CACAL system (design of the necessary components in the suitable MPTS).

CASE STUDY
Based on the input data given for the MPTS, the required output speeds are 500, 200 and 20 rpm with an input speed of 3000 rpm, as shown in Table 1.

Table 1. Input data
Center distance (m) | Angle between input and output shafts | Number of output speeds

The speed ratios of the power paths, SR(J), those of the gear stages in each path, U(I,J), as well as the input speed of each stage are displayed in Table 2.

Table 2. Output data from the program

Path J                              J = 1     J = 2     J = 3
Overall speed ratio, SR(J)            -        15        150
Stage No. = 1   speed ratio           -        4.9       4.9
                max. speed (rpm)    3000      3000      3000
Stage No. = 2   speed ratio           -        3.06      7.5
                max. speed (rpm)      -       612.2     612.24
Stage No. = 3   speed ratio           -         -        4.08
                max. speed (rpm)      -         -        81.6
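As a quick check of the numbers in Table 2, the short script below recomputes the stage input speeds and the overall ratio of each path from the 3000 rpm prime mover speed; it is only a verification of the tabulated case, not part of the expert system itself.

```python
# Re-derive the Table 2 values: overall ratio split into stage ratios,
# and the input speed of each stage obtained from the 3000 rpm prime mover.

input_rpm = 3000.0
paths = {
    "SR = 15":  [4.9, 3.06],        # 4.9 * 3.06  is approximately 15
    "SR = 150": [4.9, 7.5, 4.08],   # 4.9 * 7.5 * 4.08 is approximately 150
}

for name, stage_ratios in paths.items():
    speed = input_rpm
    overall = 1.0
    print(name)
    for i, u in enumerate(stage_ratios, start=1):
        print(f"  stage {i}: ratio {u:5.2f}, input speed {speed:7.1f} rpm")
        speed /= u
        overall *= u
    print(f"  overall ratio {overall:6.2f}, output speed {speed:6.1f} rpm")
```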

CONCLUSIONS
An important reason for using CAD of integrated systems in the design of MPTS is that it offers the opportunity to develop the components, units, drives, etc. constructing the MPTS. It is the goal of the CAD of MPTS not only to automate the design of these components and drive units individually, but also to automate the design of the integrated MPTS as a whole. CAD of MPTS should, therefore, be designed in a modular way in order to make it applicable both in an integrated form and in a stand-alone mode. For this reason, a CAD program has been developed as an expert system, which is capable of choosing the suitable units and drives constructing the MPTS according to the prespecified design data as well as of designing them.

The structure of the problem under investigation, the design of the MPTS feeding multi-loads with mechanical power from a prime mover, can now be formulated and treated in three dimensions:
- One dimension contains the necessary components, units and drives constructing the MPTS for a certain load, arranged in the power flow direction.
- A second dimension is introduced to spread those for the rest of the loads fed from the same prime mover.
- The third dimension deals with the depth of the design process, to include the synthesis, analysis and construction as well as the calculations as applied to the multi-loads MPTS as a whole.

This work presents a methodology for automatically performing the complete design process for multi-loads MPTS, which can select, construct and design the suitable mechanical components in the MPTS, satisfying both load layout and speed requirements, without any human intervention. Computer Integrated Design, CID, systems have emerged as a means of upgrading the complete design process: synthesis, analysis, construction and design calculations. CACON/CACAL implies an integrated process, where the computer technology is incorporated in the construction and calculations of multi-loads MPTS.
The strategy for automated design of MPTS, or Computer Integrated Design, CID, systems, contains three systems:
- The first system is a computer aided construction system, CACON, which is aimed at developing two subprograms for the selection of the mechanical components in the MPTS.
- The intermediate system is a computer aided data program, CADA, which proposes a technique aimed at filling the gap between the foregoing CACON system and the next CACAL one.


- The third system is a computer aided calculation system, CACAL, which proposes a technique aimed at providing a computer aided calculation sequence planning applied to the components in the MPTS, which reduces the required skill and hence improves the overall efficiency as compared to the human activity.

REFERENCES
[1] Ibrahim Zeid, Mastering CAD/CAM. McGraw-Hill, New York, 2005.
[2] V. Dobrovolsky, Machine Elements. Mir Publishers, Moscow, 1968.
[3] Ferdrick E. Bricker, Automobile Guide. D. B. Taraporevala Sons & Co. Private Ltd., India, 1976.
[4] Robert H. Creamer, Machine Design. Addison-Wesley Publishing Company, Inc., 1976.
[5] Gustav Niemann, Machine Elements. Springer-Verlag, Berlin, Heidelberg, New York, 1978.
[6] Franklin D. Jones, Machinery Handbook. Industrial Press Inc., New York, 1979.
[7] M. Savage, D. P. Townsend, Optimal tooth numbers for compact standard spur gear sets. ASME Journal of Mechanical Design, vol. 104, No. 3, Oct. 1982.
[8] N. E. Anderson, S. H. Loewenthal, Design of spur gears for improved efficiency. ASME Journal of Mechanical Design, vol. 104, No. 3, Oct. 1982.
[9] T. Baumeister, E. A. Avallone, Standard Handbook for Mechanical Engineers. McGraw-Hill, New York, 1983.
[10] Kiyohiko Umezawa, Taichi Sato, Influence of gear error on rotation and vibration. Bulletin of JSME, vol. 28, 1985.
[11] J. E. Shigley, Mechanical Engineering Design. McGraw-Hill, New York, 1986.


The Vibrational Spectroscopic (FT-IR & FT-Raman, NMR, UV) study and HOMO & LUMO analysis of Phthalazine by DFT and HF Studies

C. C. Sangeetha 1, R. Madivanane 2, V. Pouchaname 3
1 Department of Physics, Manonmaniyam Sundaranar University, Tirunelveli, Tamil Nadu, India.
2 Department of Physics, Bharathidasan Government College for Women, Puducherry, India.
3 Department of Chemistry, Bharathidasan Government College for Women, Puducherry, India.

E-mail: carosangee@gmail.com
Mobile number: 09994313341

Abstract: In this work the FT-IR, FT-Raman, UV-Visible absorption and 1H NMR spectra of Phthalazine were recorded, assigned and analyzed. The spectra were interpreted with the aid of normal coordinate analysis based on the DFT/B3LYP and HF methods using the standard 6-311++G(d,p) basis set. After scaling there is good agreement between the observed and the calculated frequencies. Bond lengths, bond angles and dipole moments for the optimized structures of Phthalazine were also calculated. The calculated first order hyperpolarizability shows that the molecule is an attractive candidate for future applications in nonlinear optics. The calculated HOMO-LUMO energies show that charge transfer occurs within the molecule. Mulliken population analysis of the atomic charges is also reported. The study is extended to the thermodynamic properties of Phthalazine. The 1H nuclear magnetic resonance (NMR) chemical shifts of the molecule were calculated by the gauge independent atomic orbital (GIAO) method and compared with the experimental results. The UV-Vis spectrum of the compound was recorded and the electronic properties were computed. Finally, the calculated results were applied to simulate the infrared and Raman spectra of the title compound, which show good agreement with the observed spectra.

Keywords: Phthalazine, Vibrational analysis, FTIR, FT-Raman, DFT calculation, HOMO-LUMO analysis, thermodynamical properties, GIAO, UV-Vis spectrum, Gaussian 09.

INTRODUCTION

Phthalazine is a diazanaphthalene with two adjacent N atoms. Phthalazines are examples of nitrogen heterocycles that possess exciting biological properties [1-3]. Numerous studies have been published on their applicability in different areas, especially as drugs [4,5]. Phthalazines have been reported to possess anticonvulsant [6], cardiotonic [7], antimicrobial [8], antitumor [9-12], antihypertensive [13,14], antithrombotic [15], antidiabetic [16,17], antitrypanosomal [18], anti-inflammatory [18-22] and vasorelaxant activities [23]. Additionally, Phthalazines have recently been reported to potentially inhibit serotonin reuptake and are considered anti-depression agents [24]. This compound has a wide range of applications as therapeutic agents. Phthalazine and its derivatives

possessing a triazine nucleus have attracted great attention in recent years due to an extensive variety of biological activity, particularly anticonvulsant activity. They are used as intermediates in the synthesis of antimalarial drugs, and their derivatives are used as antimicrobial agents. Phthalazine shows antitumor activity. New Phthalazine-substituted urea and thiourea derivatives inhibited the hCA I and II enzyme activity; therefore, such compounds are likely to be adopted as candidates to treat Glaucoma. Amino derivatives of 2,3-dihydrophthalazine-1,4-dione are used in the treatment of various diseases in humans, in particular for their anti-inflammatory and immuno-correcting activity in the treatment of ulcer. The 4-Arylphthalazones bearing benzenesulfonamide are used as anti-inflammatory and anticancer agents. Thus, owing to the industrial and biological importance, extensive spectroscopic studies on Phthalazine were carried out by recording the FTIR, FT-Raman, NMR and UV-Visible spectra and subjecting them to normal coordinate analysis. A literature survey reveals that, to the best of our knowledge, no HF and B3LYP level calculations of Phthalazine have been reported so far. In the present work, the experimental and theoretical FT-IR and FT-Raman spectra of Phthalazine have been studied. HF and B3LYP calculations with the 6-311++G(d,p) basis set have been performed to obtain the ground state optimized geometries and the vibrational wave numbers of the different normal modes, as well as to predict the corresponding intensities of the different modes of the molecule. The present research work was undertaken to study the vibrational modes and also to carry out HOMO-LUMO, polarizability, hyperpolarizability, Mulliken charge density and thermodynamical property analyses for the title molecule.

2. Experimental setup and measurements:


2.1 Spectral details:
The fine sample of Phthalazine was provided by the Sigma Aldrich Chemical Co. (USA), with a stated purity greater than 98%, and was used as such for the spectral measurements. At room temperature, the Fourier Transform IR spectrum of the title compound was measured in the 3500-0 cm-1 region at a resolution of 2 cm-1 using a Bruker IFS-66V Fourier transform spectrometer. The FT-Raman spectrum of Phthalazine was recorded on the same instrument equipped with an FRA-106 FT-Raman accessory. The spectrum was recorded in the 3500-0 cm-1 region with an Nd:YAG laser operating at 200 mW power. The reported wave numbers are expected to be accurate within 2 cm-1. 1H nuclear magnetic resonance (NMR) spectra (400 MHz; CDCl3) were recorded on a Bruker HC400 instrument. Chemical shifts for protons are reported on the parts per million scale downfield from tetramethylsilane. NMR spectra were also recorded on a BRUKER 500 MHz AVANCE III instrument using CDCl3 as solvent, with TMS as an internal standard. The proton (1H) spectrum at 500 MHz was recorded at room temperature. The ultraviolet absorption spectrum of Phthalazine, dissolved in CDCl3 solution, was recorded in the range 200.00 to 400.00 nm using a SHIMADZU UV-1601 PC, UV-1700 Series spectrometer. The

theoretically predicted FT-IR and Raman spectra at three parameter hybrid functional Lee-Yang-Parr (B3LYP) using 6-311++G(d,p)
basis set level of calculations along with experimental FT-IR and FT-Raman spectra are shown in Figures.
2.2 Computational Details
Using Gaussian 09W (revision B.01), the DFT and HF calculations of the title compound were carried out on an Intel Core 2 Duo / 2.20 GHz processor. The Becke-3-Lee-Yang-Parr (B3LYP) functional was used to carry out the analysis with the standard 6-311++G(d,p) basis set. The normal coordinate analysis of the compound Phthalazine has been computed at the fully optimized geometry. For the simulated IR and Raman spectra, pure Lorentzian band shapes with a band width of 10 cm-1 were employed using Gabedit version 2.3.2. In order to improve the agreement of the calculated values with the experimental values, it is necessary to scale down the calculated harmonic frequencies. Hence, the vibrational frequencies calculated at the DFT level are scaled by 0.98 and those at the HF level by 0.96 [25, 26]. After scaling with the scale factor, the deviation from the experiments is less than 10 cm-1 with a few exceptions. The animation option of the Gauss View 05 graphical interface for the Gaussian programs was employed for the proper assignment of the modes of the title compound and to check whether a mode was pure or mixed. The idea of using multiple scale factors reported in the recent literature [28] has been adapted for this study, and it minimized the deviation between the computed and the experimental frequencies. Most of the scale factors are much closer to unity for the DFT and HF studies, which implies that the B3LYP/6-311++G(d,p) computations yield results much closer to the experimental values. UV-Vis spectral electronic transitions, vertical excitation energies, absorbance and oscillator strengths were computed with the TD-DFT method. Finally, the Nuclear Magnetic Resonance (NMR) chemical shifts were calculated using the Gauge Including Atomic Orbital (GIAO) method [29,30].
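As an illustration of the scaling step mentioned above, the short snippet below applies the uniform scale factors (0.98 for B3LYP, 0.96 for HF) to a few raw harmonic wavenumbers; the unscaled values used here are placeholders, not the actual Gaussian output for Phthalazine.

```python
# Apply uniform frequency scale factors to harmonic wavenumbers (cm-1).
# The raw values below are illustrative placeholders.

SCALE = {"B3LYP": 0.98, "HF": 0.96}

def scale_frequencies(raw_cm1, method):
    """Scale a list of harmonic wavenumbers with the factor for the given method."""
    return [round(w * SCALE[method], 1) for w in raw_cm1]

raw = [1646.0, 1453.0, 1024.0]
print("scaled B3LYP:", scale_frequencies(raw, "B3LYP"))
print("scaled HF   :", scale_frequencies(raw, "HF"))
```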
Prediction of Raman intensities:
The Raman activities (S_Ra) calculated with the GAUSSIAN 03 program [31] were converted to relative Raman intensities (I_Ra) using the following relationship derived from the intensity theory of Raman scattering [32,33]:

I_i = f (ν0 - νi)^4 S_i / { νi [1 - exp(-h c νi / k T)] }

where ν0 is the laser exciting wavenumber in cm-1 (here ν0 = 9398.5 cm-1), νi is the vibrational wavenumber of the i-th normal mode (cm-1), S_i is the Raman scattering activity of the normal mode νi, and f is a constant (equal to 10^-12), a suitably chosen common normalization factor for all peak intensities; h, k, c and T are the Planck constant, the Boltzmann constant, the speed of light and the temperature in Kelvin, respectively.
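A direct transcription of this conversion, under the assumptions stated above (ν0 = 9398.5 cm-1, T = 298.15 K, f = 10^-12), might look like the following; the sample activity and wavenumber passed to the function are arbitrary.

```python
import math

# Convert a Raman scattering activity S_i into a relative Raman intensity
# using the relation quoted in the text.

H = 6.62607015e-34      # Planck constant, J s
C = 2.99792458e10       # speed of light, cm/s
K = 1.380649e-23        # Boltzmann constant, J/K

def raman_intensity(nu_i, s_i, nu_0=9398.5, temperature=298.15, f=1.0e-12):
    """Relative Raman intensity for a mode at nu_i (cm-1) with activity s_i."""
    boltz = 1.0 - math.exp(-H * C * nu_i / (K * temperature))
    return f * (nu_0 - nu_i) ** 4 * s_i / (nu_i * boltz)

print(raman_intensity(nu_i=1403.0, s_i=113.464))
```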
3. Results and Discussion:

3.1 Molecular Structure:

Figure 1: Optimized molecular structure of Phthalazine.

Bond lengths (Å):
Parameter     HF/6-311++G(d,p)    DFT(B3LYP)/6-311++G(d,p)
N1-N2         1.3657              1.3471
N1-C10        1.3101              1.2818
N2-C3         1.3101              1.2818
C3-C4         1.423               1.4278
C3-H11        1.0875              1.0771
C4-C5         1.4137              1.4113
C4-C9         1.4148              1.39
C5-C6         1.377               1.3627
C5-H12        1.0846              1.0755
C6-C7         1.4147              1.4136
C6-H13        1.084               1.0751
C7-C8         1.377               1.3627
C7-H14        1.084               1.0751
C8-C9         1.4137              1.4113
C8-H15        1.0846              1.0755
C9-C10        1.423               1.4278
C10-H16       1.0875              1.0771

Bond angles (deg):
Parameter     HF/6-311++G(d,p)    DFT(B3LYP)/6-311++G(d,p)
N2-N1-C10     119.5093            120.1254
N1-N2-C3      119.5093            120.1254
N2-C3-C4      124.6255            124.0542
N2-C3-H11     115.4224            116.0187
C4-C3-H11     119.952             119.9271
C3-C4-C5      124.3686            124.1785
C3-C4-C9      115.8652            115.8205
C5-C4-C9      119.7663            120.001
C4-C5-C6      119.5889            119.3888
C4-C5-H12     119.577             119.7767
C6-C5-H12     120.834             120.8345
C5-C6-C7      120.6448            120.6101
C5-C6-H13     120.0181            120.1084
C7-C6-H13     119.3371            119.2815
C6-C7-C8      120.6448            120.6101
C6-C7-H14     119.3371            119.2815
C8-C7-H14     120.0181            120.1084
C7-C8-C9      119.5889            119.3888
C7-C8-H15     120.834             120.8345
C9-C8-H15     119.577             119.7767
C4-C9-C8      119.7663            120.001
C4-C9-C10     115.8652            115.8205
C8-C9-C10     124.3686            124.1785
N1-C10-C9     124.6255            124.0542
N1-C10-H16    115.4224            116.0187
C9-C10-H16    119.952             119.9271

Dihedral angles (deg), where values were reported:
Parameter         HF      DFT(B3LYP)
N2-N1-C10-H16     180     -180
N1-N2-C3-H11      -180    180
N2-C3-C4-C5       180     -180
H11-C3-C4-C9      180     180
C3-C4-C5-C6       180     180
C9-C4-C5-H12      180     180
C3-C4-C9-C8       180     180
C5-C4-C9-C10      -180    180
C4-C5-C6-H13      180     -180
H12-C5-C6-C7      180     180
C5-C6-C7-H14      180     180
H13-C6-C7-C8      180     180
C6-C7-C8-H15      180     180
H14-C7-C8-C9      180     180
C7-C8-C9-C10      180     180
H15-C8-C9-C4      -180    180
C4-C9-C10-H16     180     180
C8-C9-C10-N1      180     -180
Dihedral angles were also tabulated for C10-N1-N2-C3, N2-N1-C10-C9, N1-N2-C3-C4, N2-C3-C4-C9, H11-C3-C4-C5, C3-C4-C5-H12, C9-C4-C5-C6, C3-C4-C9-C10, C5-C4-C9-C8, C4-C5-C6-C7, H12-C5-C6-H13, C5-C6-C7-C8, H13-C6-C7-H14, C6-C7-C8-C9, H14-C7-C8-H15, C7-C8-C9-C4, H15-C8-C9-C10, C4-C9-C10-N1 and C8-C9-C10-H16.

Table 1: Comparison of the geometrical parameters of Phthalazine from DFT and HF studies.

The general molecular structure and atom numbering of the Phthalazine molecule under investigation are represented in Figure 1. The geometry of Phthalazine possesses C1 point group symmetry. There are 42 fundamental modes of vibration in the Phthalazine molecule. Determining the optimized molecular structure is the first task of the computational work. The numbering scheme of the atoms was obtained from the Gauss View program [34]. The optimized structural parameters such as bond lengths, bond angles and dihedral angles of the Phthalazine molecule were determined at the B3LYP and HF levels with 6-311++G(d,p) as the basis set and are depicted in Table 1. The C-C bond lengths are greater than the C-H bond lengths. The bond lengths related to the nitrogen atoms have values of about 1.31 Å. The density functional calculation gives almost the same bond angles. The dihedral angles show that the tested molecule is planar. The optimized bond lengths and bond angles are slightly smaller than the experimental values. This is due to the fact that all the theoretical calculations correspond to an isolated molecule in the gaseous state, whereas the experimental results correspond to the molecule in the solid state.
3.2 Vibrational Assignments:
The harmonic vibrational frequencies (unscaled and scaled) calculated at HF and B3LYP levels using the triple split valence basis set
along with the diffuse and polarization functions, 6-311++G(d,p) and observed FT-IR and FT-Raman frequencies for various modes

of vibrations have been presented in Table 2. The experimental and theoretical FTIR and FT-Raman spectra are shown in Figures 2 and 3. The functional group frequencies of the various bonds are discussed below.

Figure 2: Experimental (top) and theoretical (bottom) FTIR spectra of Phthalazine.


Figure 3: Experimental (top) and theoretical (bottom) FT-Raman spectra of Phthalazine.


S.no

Experimental

B3LYP/6311++G**

IR

Raman

intensity

Activity

HF/6-311++G**
Calculated
wavenumber

IR

RamanA
ctivity

Assignment

intensity

Calculated
wavenumber
IR

Rama

unscaled

scaled

unscale

n
1

scale
d

178

168

157

2.197

0.314

186

178

1.960

0.319

Ring deformation

181

198

173

173

0.001

195

195

0.012

CC+ C-C-C

353

350

354

354

0.397

0.017

378

359

0.383

0.013

N-C-C

360

382

386

382

5.975

0.335

450

451

451

0.004

435

435

4.295

1.037

C-C-C + NCC

476

480

480

18.290

0.043

493

478

0.228

C-C-C

516

516

0.393

14.144

534

517

26.657

0.001

CCC+ CCH +
NCC

528

522

0.653

8.098

546

524

0.511

9.362

C-C-C + CNC
+ NCC

564

564

0.718

7.315

CCC+ CC+
CCH+ CN

652

0.132

C-C-C

8.845

1.426

C-C-C + CH

0.623

C-C-C + S CH
+RING
BREATHING

0.132

CH +CCC RING
BREATHING+HN
CC

5
6

473

515

522

570

10

622

11

645

12

13

761

791

14

15

816

651

761

646

626

0.247

694

668

654

6.570

1.586

710

765

781

781

797

810

811

822

765

781

57.251

0.038

0.252

793

0.797

37.781

813

1.546

0.227


16

C-C-C and C-N-C

811

842

667

770

791

70.903

S CH + S CN

867

814

1.841

29.745

CH S +CC RING
BREATHING

0.316

CCC RING
BREATHING +
CH


868

878

879

879

0.006

872

837

3.626


17

953

18

969

19

1017

20

1008

21

1034

22

23

1095

24

1137

25

946
972

1013

0.186

CH + CN

1010

2.229

0.073

CCC+ CH

1024

1013

17.728

0.923

CH

0.005

1048

1006

0.060

CH

0.207

1040

1040

19.595

4.329

CCC + S CH

921

912

18.380

0.105

964

954

946

946

0.349

1042

964

964

13.912

0.073

971

971

1.885

983

973

3.432

1089

1008

1008

0.071

1102

1090

0.041

17.317

CCC + CC
+RING
DEFORMATION

1135

1038

1038

2.294

23.807

1097

1094

3.216

0.857

N-N +CCC
TRIGONAL BEND

1160

1136

0.957

2.674

1118

1097

0.668

CH + CCC

1156

1177

1153

4.227

1.510

S C=C + S CN

26

1208

1213

1232

1207

6.252

13.246

1221

1221

8.448

1.691

CH + S CN

27

1244

1229

1278

1239

4.815

4.149

1243

1243

0.552

3.550

CH + S C=C

28

1237

1259

1289

1237

16.075

2.356

S C-C

29

1277

1294

1328

1258

0.909

0.821

S CC

30

1302

1339

1301

0.001

17.275

CC+ CN+ CCN

31

1322

1318

1301

1318

1.889

9.909

CN

32

1370

1351

1367

1367

15.703

2.199

S CC

33

1400

1378

1378

1378

4.427

2.865

CN+ S CC

34

1432

1396

35

1451

36

1437
1483

1403

1403

12.938

113.464

1444

1429

11.479

55.260

S CC

1453

1438

5.656

69.15

1461

1431

2.974

3.279

S CC + S CN

1464

1464

0.595

2.732

1519

1488

3.306

10.866

37

1487

38

1532

39

1555

1575

1598

40

1601

1610

1613

41

1646

1618

1659

42

1740


S CC
1498

1483

7.121

147.524

S C=C

1587

1539

11.276

95.543

CC

5.356

0.510

1587

1555

0.322

0.193

C=C S + CC

1613

2.648

20.773

1646

1613

7.441

19.379

S C=C

1642

1.326

9.664

1582

S C=C + S C-C
1775


1739

11.458

0.831

CC+ HCC


43

1788

44

1869

45

2968

46
47

3026

48
49

1.901

35.034

RING+ CC+
HCC

1829

1829

0.431

3.150

RING

3143

2985

23.505

18.465

as CH

3014

3145

3019

0.420

204.529

s CH

3090

3167

3090

0.674

25.707

s CH

3143

3172

3162

0.774

135.782

s CH

3183

3055

16.728

59.697

3193

3129

11.877

305.432

3068
3123

51

3227
3230

53

1783

2784

50

52

1783

3300

3320

3120

20.448

9.837

as CH
as CH

3322

3211

0.855

123.563

as CH

3325

3222

1.795

43.672

as CH

3333

3227

2.714

146.144

as CH

54

3470

3345

3233

22.447

38.157

as CH

55

3674

3355

3355

13.568

263.702

as CH

Table 2: Experimental and calculated (B3LYP/6-311++G(d,p) and HF/6-311++G(d,p) levels) vibrational frequencies (cm-1) ,
IR Intensity (KM Mol-1) , Raman Activity (amu-1) of Phthalazine.
N-N vibrations:
The N-N stretching mode is reported at 1093 cm-1 experimentally for Phthalazine derivatives [35]. In the present study the N-N stretching vibration is observed at 1095 cm-1 in the FTIR spectrum, and the theoretically computed N-N vibration at 1098 cm-1 by the HF method shows good agreement with the experimental value.
C-N Vibrations:
The identification of C-N vibrations is a very difficult task, since mixing of several bands is possible in this region. However, with the help of theoretical calculations, the C-N vibrations are identified and assigned in this study. Silverstein et al. assigned the C-N stretching vibrations in the region 1382-1266 cm-1 for aromatic amines [36]. The frequencies 1598-1411 cm-1 in both the FT-IR and FT-Raman spectra have been assigned to C-N and C=N stretching vibrations, respectively [37]. Referring to the previous reference, the band at 1598-1468 cm-1 has been assigned to the C-N stretching mode. Hence, in the present investigation, the symmetric C-N vibrations are

observed in the FTIR spectrum at 1208, 1322, 1400 and 1451 cm-1 and in the Raman spectrum at 797, 1156, 1213, 1318, 1378 and 1437 cm-1, whereas the corresponding calculated scaled values are 793, 1153, 1207 and 1438 cm-1 in the DFT method and 1221, 1318 and 1378 cm-1 in the HF method. The in-plane bending vibration of C-N is found at 953 cm-1 in the IR and at 946 cm-1 in the Raman spectrum.
C-C/C=C vibrations:
The ring carbon-carbon stretching vibrations usually occur in the region 1600-1400 cm-1 [38,39]. The ring carbon-carbon stretching vibrations in the benzene ring occur in the region 1625-1430 cm-1. For aromatic six-membered rings, e.g. benzene and pyridines, there are two or three bands in this region due to skeletal vibrations, the strongest usually being at about 1500 cm-1. In the case where the ring is further conjugated, a band at about 1580 cm-1 is also observed [38]. Socrates [38] mentioned that the presence of a conjugated substituent such as C=C causes a heavy doublet formation around the region 1625-1575 cm-1. In the present work the symmetric C-C stretching vibrations are found at 1244, 1237, 1277, 1370, 1400, 1432, 1451, 1535, 1555 and 1740 cm-1 in the FTIR spectrum and at 1259, 1294, 1351, 1396, 1437, 1483, 1575 and 1788 cm-1 in the Raman spectrum. Likewise, the symmetric C=C stretching vibrations are found at 1244 and 1601 cm-1 in the FTIR and at 1156, 1229 and 1610 cm-1 in the Raman spectrum. Out-of-plane bending vibrations of C-C are observed at 1089 cm-1 in the Raman and at 1302 cm-1 in the FTIR spectrum. The bands occurring at 515, 622, 645, 761, 969, 1034 and 1137 cm-1 in the infrared and at 522, 570, 651, 761, 781, 878, 972 and 1089 cm-1 in the Raman spectrum are assigned to the CCC in-plane bending modes of Phthalazine.

C-H vibrations:
Aromatic compounds commonly exhibit multiple weak bands in the region 3100-3000 cm-1 [40] due to aromatic C-H stretching vibrations. All the C-H stretching vibrations are very weak in intensity. The bands due to C-H in-plane bending vibrations are observed in the region 1390-990 cm-1 [41]. The bands due to the C-H in-plane deformation vibrations, which usually occur in this region, are very useful for characterization and are indeed very strong [42]. When there is in-plane interaction above 1200 cm-1, a carbon and its hydrogen usually move in opposite directions [43]. The out-of-plane C-H vibrations are strongly coupled and occur in the region 1000-700 cm-1 [44]. In the present Phthalazine molecule the symmetric and asymmetric C-H stretching vibrations are found between 3026 and 3674 cm-1 in the FTIR spectrum and between 3014 and 3300 cm-1 in the Raman spectrum, which is in correlation with the literature survey. The in-plane bending vibrations of C-H are observed at 1008, 1095, 1137, 1208 and 1244 cm-1 in the FTIR and at 1013, 1135, 1213 and 1229 cm-1 in the Raman spectrum. The out-of-plane bending vibrations are found at 645, 761, 791 and 969 cm-1

in FTIR and at 651,761,781,878 and 972 cm-1 in Raman spectrum. For most of the remaining ring vibrations, the overall assignments
are satisfactory. Small changes in frequencies observed for these modes are due to the changes in force constants or reduced mass ratio
resulting mainly due to the extent of mixing between ring and substituent.
3.3 Non linear optical effects:
NLO effects arise from the interactions of electromagnetic fields in various media to produce new fields altered in phase, frequency,
amplitude or other propagation characteristics from the incident fields [45]. NLO is at the forefront of current research because of its
importance in providing the key functions of frequency shifting, optical modulation, optical switching, optical logic, and optical
memory for the emerging technologies in areas such as telecommunications, signal processing, and optical interconnections [46-49].
Dipole moment, polarizability and hyperpolarizabilities of organic molecules are important response properties. There has
been an intense investigation for molecules with large non-zero hyperpolarizabilities, since these substances have potential as the
constituents of non-linear optical materials. In presence of an applied electric field, the energy of a system is a function of the electric
field. The first hyper polarizability is a third-rank tensor that can be described by a 3 3 3 matrix. The 27 components of the 30
matrix can be reduced to 10 components due to the Kleinman symmetry[50]. The components of 0 are defined as the coefficients in
the Taylor series exponents the energy in the external electric field. When the external electric field is weak and homogeneous, this
expansion becomes
E=E0-F - 1/2FF - 1/6FFF+.
where E0 is the energy of the unperturbed molecules, F is the field at the origin and , and are the components of
dipole moment, polarizability and the first-order hyperpolarizabilities respectively.
In present study, the electronic dipole moment, molecular polarizability, anisotropy of polarizability and molecular first
hyperpolarizabiliy of present compound were investigated. The polarizability and hyperpolarizability tensors [xx ,xy, yy, xz, yz, zz
and xxx, xxy, xyy, yyy, xxz, xyz, yyz, xzz, yzz, zzz) can be obtained by a frequency job output file of Gaussian. The total static dipole
moment (), the mean polarizability(0), the anisotropy of the polarizability ( ) and the mean first-order hyperpolarizability (0),
using the x, y, z components they are defined as follows:
total = 0 = 1/3 (xx+yy+zz)
= [(xx - yy)2 + (yy - zz)2 +(zz - xx)2 + 6 2xz+6 2xy+6 2yz]1/2

235

www.ijergs.org

International Journal of Engineering Research and General Science Volume 2, Issue 6, October-November, 2014
ISSN 2091-2730

0 = (x2 + x2+ x2)1/2


=

[(xxx + xyy + xzz) 2+ (yyy + xxy + yzz ) 2+( zzz + xxz+ yyz )2]1/2

=[(xx-yy)2+(yy-zz)2+(zz-xx)2/2]1/2
The and values of the Gaussian 05 output are in atomic units (a.u) and these calculated values converted into
electrostatic unit (e.s.u) ( : 1 a.u = 0.148210

-24

esu; for : 1 a.u =8.63910

-33

esu; ) and these above polarizability values of

Phthalazine are listed in Table (3). The total dipole moment can be calculated using the following equation.
= (x2 + y2 + z2)1/2
To study the NLO properties of molecule the value of urea which is prototypical molecule is used as threshold value for the purpose
of comparison. Urea is the prototypical molecule used in the study of the NLO properties of the molecular systems.
The total molecular dipole moment and first order hyperpolarizability are 5.3032Debye and 0.249683130510 -30 cm5/esu,
respectively and are depicted in Table 3. Total dipole moment of title molecule is approximately four times greater than that of urea
and first order hyperpolarizability is 0.1 times lesser than that of urea ( and of urea are 1.3732 Debye and 0.3728 x10 -30 cm5/esu
obtained by HF/6-311G(d,p) method [51]. These results indicate that the title compound is a good candidate of NLO material.
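The quantities defined above can be evaluated directly from the tensor components. The short helper below does this for the B3LYP components reported in Table 3 (only the non-negligible β components are entered), reproducing α0, β0 in a.u. and esu, and the total dipole moment; it is a verification of the working equations, not part of the Gaussian calculation itself.

```python
import math

AU_TO_ESU_BETA = 8.639e-33   # 1 a.u. of beta in esu, as quoted in the text

def mean_polarizability(a):
    """alpha_0 = (a_xx + a_yy + a_zz) / 3, components in a.u."""
    return (a["xx"] + a["yy"] + a["zz"]) / 3.0

def first_hyperpolarizability(b):
    """beta_0 from the contracted Cartesian components, in a.u."""
    bx = b["xxx"] + b["xyy"] + b["xzz"]
    by = b["yyy"] + b["xxy"] + b["yzz"]
    bz = b["zzz"] + b["xxz"] + b["yyz"]
    return math.sqrt(bx * bx + by * by + bz * bz)

def total_dipole(mu):
    return math.sqrt(mu["x"] ** 2 + mu["y"] ** 2 + mu["z"] ** 2)

# B3LYP/6-311++G(d,p) components taken from Table 3 (near-zero entries set to 0).
alpha = {"xx": 54.7857473, "yy": 145.958031, "zz": 111.953395}
beta = {"xxx": 0.0, "xyy": 0.0, "xzz": 0.0,
        "yyy": -82.2554657, "xxy": 52.0398273, "yzz": -4.10579237,
        "zzz": 0.0, "xxz": 0.0, "yyz": 0.0}
mu = {"x": -5.3032, "y": 0.0, "z": 0.0}

b0_au = first_hyperpolarizability(beta)
print("alpha_0 (a.u.):", round(mean_polarizability(alpha), 4))   # ~104.2324
print("beta_0  (a.u.):", round(b0_au, 4))                        # ~34.3214
print("beta_0  (esu): ", b0_au * AU_TO_ESU_BETA)                 # ~2.97e-31
print("mu_total (D):  ", round(total_dipole(mu), 4))             # 5.3032
```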
Parameters (a.u.)     DFT B3LYP/6-311++G(d,p)     HF/6-311++G(d,p)
αxx                   54.7857473                   131.520057
αxy                   -0.000000489249679           0.00000000759618116
αyy                   145.958031                   103.675498
αxz                   0.0000000000379414449        -0.0000000549565194
αyz                   0.0000000217153642           0.0000000266667497
αzz                   111.953395                   53.9545493
α0                    104.2323911                  96.3833681
Δα                    79.80161726                  68.0577788
βxxx                  -0.00000248429697            -41.8501250
βxxy                  52.0398273                   -0.000000430891865
βxyy                  0.00000357237014             15.2046515
βyyy                  -82.2554657                  0.000000224286613
βxxz                  0.0000000197723763           -0.00000154879012
βxyz                  -0.000000163513452           0.000000761285762
βyyz                  0.00000170415196             -0.00000227357450
βxzz                  0.00000381123329             55.5473291
βyzz                  -4.10579237                  -0.0000000703065283
βzzz                  0.00000000865477438          -0.000000886088144
β0 (a.u.)             34.32143077                  28.9018556
β0 (esu)              2.965028404 x 10^-31         2.496831305 x 10^-31
μx (Debye)            -5.3032                      5.4630
μy (Debye)            0.0000                       0.0000
μz (Debye)            0.0000                       0.0000
μtotal (Debye)        5.3032                       5.4630

Table 3: The electric dipole moment, polarizability and first order hyperpolarizability of Phthalazine.


The theoretical calculation of the β components is very useful as it clearly indicates the direction of charge delocalization. The biggest values of the hyperpolarizability are noticed for the βxzz component, and subsequently the delocalization of the electron cloud is larger in that direction. The maximum value may be due to electron cloud movement from donor to acceptor, which makes the molecule highly polarized and makes the intramolecular charge transfer possible.
3.4 Frontier molecular orbital analysis:
To understand this phenomenon in the context of molecular orbital theory, we examined the molecular HOMO and LUMO of the title compound. Considering the first hyperpolarizability value, there is an inverse relationship between the first hyperpolarizability and the HOMO-LUMO gap, allowing the molecular orbitals to overlap and have a proper electronic communication conjugation, which is a marker of the intramolecular charge transfer from the electron donating group through the π-conjugation system to the electron accepting group [52,53].
The total energy, energy gap and dipole moment affect the stability of a molecule. Surfaces for the frontier orbitals were drawn to understand the bonding scheme of the present compound and are shown in Figure 4. The frontier orbital gap helps to characterize the chemical reactivity, kinetic stability, optical polarizability and chemical hardness or softness of a molecule [54].

Figure 4: Patterns of the principle highest occupied and lowest unoccupied molecular
orbitals of Phthalazine.


The highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) are known as the frontier molecular orbitals (FMOs). The HOMO is the orbital that primarily acts as an electron donor and the LUMO is the orbital that largely acts as the electron acceptor, and the gap between HOMO and LUMO characterizes the molecular chemical stability. The energy gap between the HOMO and the LUMO molecular orbitals is a critical parameter in determining molecular electrical transport properties because it is a measure of electron conductivity. The chemical activity of the molecule is also observed from the eigenvalues of the LUMO and HOMO and from the energy gap value calculated from them. The HOMO-LUMO separation is the result of a significant degree of intramolecular charge transfer (ICT) from the end-capping electron donor to the efficient electron acceptor group through the π-conjugated path [55,56].
The computed energy values of the HOMO and LUMO in the gas phase are -0.31251 eV and -0.19726 eV, respectively. The energy gap value is -0.11525 eV for the Phthalazine molecule. The energy values of the frontier orbitals are presented in Table 4. Using the HOMO and LUMO energy values of a molecule, the ionization potential and chemical hardness of the molecule were calculated using Koopmans' theorem [57]; the chemical hardness is given by

η = (IP - EA)/2, with IP ≈ -E(HOMO) the ionization potential (eV) and EA ≈ -E(LUMO) the electron affinity (eV),

and the energy gap is ΔE = E(LUMO) - E(HOMO).

The hardness has been associated with the stability of a chemical system. Considering the chemical hardness, a large HOMO-LUMO gap means a hard molecule and a small HOMO-LUMO gap means a soft molecule. One can also relate the stability of a molecule to its hardness, meaning that the molecule with the smallest HOMO-LUMO gap is the more reactive one. The electron affinity can be used in combination with the ionization energy to give the electronic chemical potential, μ = (E(LUMO) + E(HOMO))/2. The chemical softness, S = 1/η, describes the capacity of an atom or group of atoms to receive electrons and is the inverse of the global hardness [58]. Soft molecules are more polarizable than hard ones because they need only a small excitation energy. A molecule with a low energy gap is more polarizable, is generally associated with high chemical activity and low kinetic stability, and is termed a soft molecule [59]. A hard molecule has a large energy gap and a soft molecule has a small energy gap [60]. The calculations show that Phthalazine has a small value of the global hardness (0.057625) and a high value of the global softness (17.353579) and is thus expected to have a high inhibition efficiency. The global electrophilicity index, ω = μ^2/(2η), was also calculated and these values are listed in Table 4.
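A small numerical check of these descriptors, using the HOMO and LUMO energies quoted above, is sketched below; it simply evaluates η, S, μ and ω as defined in this section and reproduces the magnitudes listed in Table 4.

```python
# Global reactivity descriptors from the frontier orbital energies quoted in
# the text (E_HOMO = -0.31251, E_LUMO = -0.19726).

e_homo = -0.31251
e_lumo = -0.19726

gap = e_lumo - e_homo                     # energy gap
hardness = (e_lumo - e_homo) / 2.0        # eta = (IP - EA)/2 via Koopmans' theorem
softness = 1.0 / hardness                 # S = 1/eta
chem_potential = (e_lumo + e_homo) / 2.0  # mu
electrophilicity = chem_potential ** 2 / (2.0 * hardness)  # omega = mu^2 / (2 eta)

print(f"gap                = {gap:.5f}")            # 0.11525
print(f"hardness eta       = {hardness:.6f}")       # 0.057625
print(f"softness S         = {softness:.6f}")       # 17.353579
print(f"chem. potential mu = {chem_potential:.6f}") # -0.254885 (magnitude as in Table 4)
print(f"electrophilicity   = {electrophilicity:.7f}")  # ~0.5636994
```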


DFT-B3LYP/6Molecular properties

311++G(d,p)

ELUMO+1 (eV)

-0.17640

ELUMO (eV)

-0.19726

EHOMO (eV)

-0.31251

EHOMO-1 (eV)

-0.33801

EHOMO-LUMO (eV)

-0.11525

EHOMO-LUMO+1 (eV)

-0.13611

EHOMO-1 LUMO (eV)

-0.14075

EHOMO-1 LUMO+1 (eV)

-0.16161

Global hardness()

0.057625

Chemical softness(S)

17.353579

Electronic chemical potential()

0.254885

Global electrophilicity index()

0.5636994

Table 4: Calculated energy values of Phthalazine in its ground state.


3.5 Mulliken analysis:
In the application of quantum mechanical calculations to molecular systems, the calculation of effective atomic charges plays an important role. The results are given in Table 5. The charges of the two nitrogen atoms are close to zero. The carbon atomic charges, found to be either positive or negative, were noted to change from -0.1 to 1.11. All the hydrogen atoms have positive values.
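Since the molecule is neutral, the Mulliken charges should sum to approximately zero; the short check below does this for the B3LYP/6-311++G(d,p) column of Table 5 and is offered only as a consistency check on the tabulated values.

```python
# Sum the B3LYP/6-311++G(d,p) Mulliken charges of Table 5; a neutral molecule
# should give a total close to 0 (up to rounding of the printed values).

b3lyp_charges = {
    "N1": -0.02334, "N2": -0.02334,
    "C3": -0.60769, "C4": 1.115744, "C5": -0.96304, "C6": -0.143,
    "C7": -0.143, "C8": -0.96304, "C9": 1.115744, "C10": -0.60769,
    "H11": 0.228608, "H12": 0.181445, "H13": 0.211277,
    "H14": 0.211277, "H15": 0.181445, "H16": 0.228608,
}
print("total Mulliken charge:", round(sum(b3lyp_charges.values()), 5))
```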
S.No    Atom    HF/6-311++G(d,p)    B3LYP/6-311++G(d,p)
1       N1      0.000758            -0.02334
2       N2      0.000758            -0.02334
3       C3      -0.5852             -0.60769
4       C4      1.118006            1.115744
5       C5      -0.8961             -0.96304
6       C6      -0.1654             -0.143
7       C7      -0.1654             -0.143
8       C8      -0.8961             -0.96304
9       C9      1.118006            1.115744
10      C10     -0.5852             -0.60769
11      H11     0.20307             0.228608
12      H12     0.150086            0.181445
13      H13     0.17478             0.211277
14      H14     0.17478             0.211277
15      H15     0.150086            0.181445
16      H16     0.20307             0.228608

Table 5: Mulliken atomic charges of Phthalazine for B3LYP and HF with 6-311++G(d,p) basis sets.
3.6 Thermodynamic properties:


The values of some thermodynamic parameters (such as the zero point vibrational energy, thermal energy, specific heat capacity, rotational constants and rotational temperatures) of the title molecule computed by the DFT/B3LYP/6-311++G(d,p) and HF/6-311++G(d,p) methods are listed in Table 6. On the basis of the vibrational analysis, these parameters are reported in terms of the statistical thermodynamic functions: heat capacity (C), enthalpy change (H) and entropy (S) for the title compound. All thermodynamic calculations were done in the gas phase and are listed in Table 6.
Parameters                                      B3LYP/6-311++G(d,p)           HF/6-311++G(d,p)
Dipole moment (Debye)                           5.3032                        5.4630
Zero point energy (Joules/Mol)                  321641.8                      345011.5
Zero point energy (Kcal/Mol)                    76.87424                      82.45973
Entropy (Cal/Mol-Kelvin)
  Total                                         81.877                        80.108
  Translational                                 40.501                        40.501
  Rotational                                    28.871                        28.826
  Vibrational                                   12.504                        10.781
Rotational temperatures (Kelvin)                0.15705, 0.05938, 0.04309     0.15963, 0.06024, 0.04374
Rotational constants (GHz)                      3.27233, 1.23722, 0.89778     3.32623, 1.25524, 0.91133
Thermal energy (KCal/Mol)
  Total                                         81.081                        86.358
  Translational                                 0.889                         0.889
  Rotational                                    0.889                         0.889
  Vibrational                                   79.304                        84.581
Molar capacity at constant volume (Cal/Mol-Kelvin)
  Total                                         27.391                        25.057
  Translational                                 2.981                         2.981
  Rotational                                    2.981                         2.981
  Vibrational                                   21.430                        19.096

Table 6: Theoretically computed dipole moment (Debye), energy (a.u.), zero point vibrational energy (kcal mol-1), entropy (cal mol-1 K-1), rotational temperature (Kelvin), rotational constants (GHz), thermal energy (Kcal/Mol) and molar capacity at constant volume (Cal/Mol-Kelvin) of Phthalazine.
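As a consistency check on the units in Table 6, the zero point energies in J/mol can be converted to kcal/mol (1 kcal = 4184 J); the tabulated values are reproduced to within rounding.

```python
# Check the J/mol <-> kcal/mol consistency of the zero point energies in Table 6.

zpe_j_per_mol = {"B3LYP": 321641.8, "HF": 345011.5}
for method, zpe in zpe_j_per_mol.items():
    print(f"{method}: {zpe / 4184.0:.5f} kcal/mol")   # ~76.874 and ~82.460
```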
3.7 1H NMR analysis

The experimental and theoretical NMR chemical shifts are used to identify organic compounds and ionic species. It is recognized that accurate predictions of optimized molecular geometries are essential for reliable calculations of magnetic properties [61]. The GIAO (Gauge Including Atomic Orbital) procedure is somewhat superior since it exhibits a faster convergence of the calculated properties upon extension of the basis set used. Taking into account the computational cost and the effectiveness of the calculation, the GIAO method seems to be preferable from many aspects at the present state of this subject. On the other hand, the density functional

methodologies offer an effective alternative to the conventional correlated methods, due to their significantly lower computational
cost.
Application of the GIAO [62] approach to molecular systems was significantly improved by an efficient application of the method to ab initio SCF calculations, using techniques borrowed from analytic derivative methodologies. The GIAO method is one of the most common approaches for calculating isotropic nuclear magnetic shielding tensors [63]. The 1H NMR chemical shift calculations of the title compound have been carried out by using the B3LYP functional with the 6-311++G(d,p) basis set. The NMR spectra calculations were performed by using the Gaussian 03 program package. Experimental and theoretical chemical shifts of Phthalazine in the 1H NMR spectra were recorded and the obtained data are presented in Table 7. The combined use of experimental NMR and computer simulation methods offers a powerful way to interpret and predict the structure of large biomolecules.
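In the GIAO approach the chemical shift of a proton is obtained from isotropic shieldings as δ = σ(reference) - σ(molecule); the sketch below illustrates this step with assumed shielding values (both the TMS reference and the per-proton σ values are placeholders, not the shieldings computed for Phthalazine).

```python
# Chemical shift from GIAO isotropic shieldings: delta = sigma_ref - sigma.
# The shielding values below are illustrative placeholders.

SIGMA_TMS_1H = 31.88   # assumed reference shielding of the TMS protons (ppm)

def chemical_shift(sigma_iso, sigma_ref=SIGMA_TMS_1H):
    """Return the chemical shift (ppm) for a proton with shielding sigma_iso."""
    return sigma_ref - sigma_iso

for atom, sigma in {"H11": 22.18, "H12": 23.93}.items():
    print(atom, f"{chemical_shift(sigma):.2f} ppm")
```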
The theoretical and experimental 1H NMR spectra are shown in Figure 5.

Figure 5: Experimental (upper) and theoretical (bottom) 1H NMR spectrum of Phthalazine.

Atom    Experimental chemical shift (ppm)    Calculated chemical shift by B3LYP (ppm)
H11     9.562                                9.696
H12     7.966                                7.953
H13     7.956                                7.953
H14     7.950                                7.953
H15     7.945                                7.953
H16     9.549                                9.696

Table 7: The observed (in CDCl3) and predicted 1H NMR isotropic chemical shifts (with respect to TMS, all values in ppm) for Phthalazine.
3.8 UV spectrum and electronic properties:
The lowest singlet- singlet spin-allowed excited states were taken into account for the TD-DFT calculation in order to
investigate the properties of electronic absorption of Phthalazine molecule. The energies of four important molecular orbitals of
Phthalazine : the second highest and highest occupied MOs (HOMO and HOMO1), the lowest and the second lowest unoccupied
MOs (LUMO and LUMO + 1) were calculated and are presented in Table 4.The experimental max values are obtained from the
UV/Visible spectra recorded in CHCl3. The Figure.6. depicts the observed and the theoretical UVVisible spectra of Phthalazine. The
calculations were also performed with CHCl3 solvent effect. The calculated absorption wavelengths (max) and the experimental
wavelengths are also given in Table 8.


Figure 6: Experimental UV spectra of Phthalazine.


In the electronic absorption spectrum of Phthalazine, there are three absorption bands with a maximum 390.60,357.20,291.00
nm. The strong absorption band 357.20 nm is caused by the n* [64,65] and the other two moderately intense bands are due to
* transitions [66,67]. The * transitions are expected to occur relatively at lower wavelength, due to the consequence of the
extended aromaticity of the ring and high energy transitions.
Then in Phthalazine molecule the n* transition is more significant due to the presence of lone pair of electrons in the
nitrogen atoms. The 3D plots of important molecular orbitals are shown in Figure. 4. The energy gap between HOMO and LUMO is a
critical parameter in determining molecular electrical transport properties [68].The energy gap of HOMOLUMO explains the
eventual charge transfer interaction within the molecule, and the frontier orbital energy gap in case of Phthalazine is found to be
0.11525 eV obtained at TD-DFT method using 6-311++G(d,p) basis set.


Experimental                      Calculated by B3LYP/6-311++G(d,p)
λ (nm)     Log (ε)                States                  λ (nm)     E (eV)     f
390.60     0.005                  34 -> 35                475.01     2.6101     0.0000
357.20     0.011                  34 -> 36                352.63     3.5160     0.0029
291.00     0.328                  34 -> 35, 34 -> 37      271.77     4.5620     0.0000

Table 8: Theoretical electronic absorption spectra of Phthalazine: absorption wavelengths λ (nm), excitation energies E (eV) and oscillator strengths (f) using the TD-DFT/B3LYP/6-311++G(d,p) method in gas phase.
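The excitation energies and wavelengths in Table 8 are related by E(eV) ≈ 1239.84 / λ(nm); the snippet below verifies this relation for the three computed transitions and is intended only as a numerical cross-check of the tabulated values.

```python
# Verify E (eV) = hc / lambda, i.e. approximately 1239.84 / lambda(nm),
# for the Table 8 transitions.

transitions_nm_ev = [(475.01, 2.6101), (352.63, 3.5160), (271.77, 4.5620)]
for lam, e_reported in transitions_nm_ev:
    e_calc = 1239.84 / lam
    print(f"lambda = {lam:7.2f} nm -> E = {e_calc:.4f} eV (reported {e_reported:.4f})")
```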

4. CONCLUSION
In this study, we attempted to clarify the characterization of Phthalazine by means of both experimental and computational methods. Bond lengths and angles were calculated by using the DFT and HF methods and compared with each other. All compared data were shown to be in good agreement with each other. The nonlinear optical property of the compound was also evaluated via the hyperpolarizability values. Moreover, the frontier molecular orbitals, the electronic structure, the energy band gap and the Mulliken charges of the title molecule were investigated and interpreted. Atomic charges, thermodynamic properties, NMR spectra and UV-Vis spectra were also determined for the identification of the molecule. Theoretical 1H chemical shifts were found to be in good agreement with the experimental values. In conclusion, all the calculated data and simulations not only show the way to the characterization of the molecule but also help the application in pharmaceutical industries and fundamental research in chemistry and biology in the future.




Operation and control of Hybrid Microgrid


1V. Yamuna Parkavi (PG Scholar), 2R. Vijayalakshmi (Assistant Professor/EEE)
Nandha Engineering College, Email id: parkavi.yamuna@gmail.com

Abstract In recent years, the hybrid microgrid, which comprises dc and ac subgrids interconnected by power electronic interfaces, has emerged as an alternative to existing microgrids that are purely ac or dc. The main objective is to manage the power flow among all sources distributed throughout the different types of subgrids. The hybrid grid avoids multiple dc-ac-dc or ac-dc-ac conversions within an individual grid and reduces the number of converter stations needed for converting ac to dc or dc to ac power. The proposed grid can operate in both standalone and grid-connected modes. A proportional integral controller is used for smooth power transfer.
Keywords hybrid microgrid, proportional integral controller, converter station, standalone, grid connected mode.

I. INTRODUCTION
The increasing number of Renewable Energy Sources and Distributed Generation (DG) units requires new techniques for the operation and management of the electricity grid to ensure proper power sharing. In the present power scenario, distributed generation refers to small-scale generation used to meet various customer demands. The coordination of these small-scale generators, which may consist of photovoltaics, batteries, wind turbines and fuel cells, forms a micro-grid [1],[2]. Interest in microgrids is rapidly increasing because they are based on renewable energy sources and connect to the utility grid and various types of loads. In the grid-tied mode the microgrid is connected to the utility, whereas in the autonomous mode it is totally disconnected; it then becomes fully independent and fulfills the customers' demands from the renewable energy sources alone [3],[4]. Renewable energy sources provide a pollution-free environment and are abundantly available in nature. However, the integration of distributed generation and Renewable Energy Sources into the grid introduces power quality problems.
[Fig. 1 block diagram: an AC microgrid (PV and battery connected to the ac bus through DC/AC and AC/DC interfaces, serving the ac load and the external ac grid) and a DC microgrid (PV and battery connected to the dc bus through DC/DC converters, serving the dc load and the external dc grid), with AC/DC interfaces between the ac and dc buses.]

Fig. 1 Representation of ac dc hybrid microgrid


The main objective of constructing a microgrid is to provide reliable, high-quality electric power to digital societies in an environmentally friendly and sustainable way. One of the most important and advanced features of a smart grid is an advanced structure that can facilitate the connection of various ac and dc generation systems, energy storage options, and various ac and dc loads with the best possible utilization and operating efficiency [5]. In a microgrid, power electronics technology plays the most important role in interfacing the different sources and loads to the grid to achieve this goal.
AC microgrids have been proposed to tie renewable power sources to conventional ac systems. On the other hand, the dc output power from photovoltaic (PV) panels must be converted into ac to supply ac loads. This can be done using dc/dc boost converters and dc/ac inverters. In an ac grid, embedded ac/dc and dc/dc converters are necessary for supplying the different dc voltages required by various home and office appliances. AC/DC/AC converters [6],[7] are commonly used as drives for the speed control of ac motors in various industrial plants. In recent times, dc grids are resurging due to the development of renewable dc power sources and their inherent advantage for dc loads in industrial, commercial, and residential applications. The dc microgrid has been proposed to integrate various distributed generators [8]. Conversely, ac sources have to be converted into dc before being connected to a dc grid, and dc/ac inverters are mandatory for conventional ac loads [9].


In individual AC or DC grids, multiple reverse conversions (AC to DC and then DC to AC, and/or vice versa) are required, which adds extra loss to the system operation and makes the system more complex. In the proposed hybrid AC/DC microgrid, the number of such reverse conversions between the various AC and DC distributed generators (DGs) and loads is lower than in an individual AC or DC microgrid. Voltage and frequency control is also one of the most significant issues in microgrids, and control schemes are therefore designed to manage the voltage and frequency of the microgrid.

2. SYSTEM CONFIGURATION AND DESIGN
A simple hybrid microgrid, as shown in Fig. 2, is modeled using Simulink in MATLAB to simulate the system operation and controls. A 40 kW PV array is connected to the dc bus through a dc/dc boost converter to simulate the dc sources. A capacitor C is used to smooth the high-frequency ripple of the PV output voltage.

[Fig. 2 block diagram: PV sources connected to the dc bus through a boost converter and to the ac bus through a DC/AC converter, a bidirectional converter with a PI controller between the dc and ac buses, and connections to the dc load, the ac load and the utility grid.]
Fig. 2 Block diagram for proposed system


A hybrid microgrid system arrangement consists of a mixture of ac and dc sources and loads connected to the corresponding dc and ac networks. The dc and ac links are connected together through bidirectional converters and two transformers. The dc loads are connected to the dc bus and the ac loads to the ac bus, while the utility grid is connected to the ac network of the hybrid microgrid system. The hybrid microgrid can operate in two different modes: standalone mode and grid-connected mode.

Grid connected mode


In grid-connected mode, the main converter provides a stable dc bus voltage, exchanges power between the ac and dc buses, and also provides the required reactive power. Maximum power point operation is maintained by a boost converter. When the total load on the dc side is greater than the power generated on the dc side, the converter injects power from the ac side to the dc side. When the power demanded by the dc loads is less than that produced by the dc sources, the converter acts as an inverter and injects power from the dc side to the ac side. When the total load is less than the total generation in the hybrid grid, power is injected into the utility grid; otherwise, the hybrid grid receives power from the utility grid. In the grid-tied mode, the battery converter is not essential to system operation because the power is balanced by the utility grid. In standalone mode, both the power balance and the voltage stability depend mainly on the battery. Depending on the operating conditions, either the battery converter or the boost converter keeps the dc bus voltage stable, and the main converter is controlled to provide a stable, high-quality ac bus voltage. Based on the system operating requirements, the PV can operate in either MPPT or off-MPPT mode. Variable solar irradiation is applied to the arrays to simulate the variation of the ac and dc source power and to test the MPPT control algorithm.

Standalone mode
In autonomous mode, the hybrid microgrid is electrically cut off from the rest of the utility grid. The battery plays a vital role in balancing the power and maintaining voltage stability. Either the battery converter or the boost converter keeps the DC bus voltage stable, according to the operating conditions, and the main converter is controlled to provide a stable, high-quality ac bus voltage.

3. MODELING OF SOLAR PANEL


In a grid-connected PV system, the PV power varies with operating conditions such as irradiance, temperature, light incidence angle, reduction of sunlight transmittance through the module glass, and shading. A photovoltaic array is an interconnection of modules, each of which is made up of many PV cells in series or parallel. The power produced by a single module is not sufficient; hence the modules in a PV array are generally first connected in series to attain the desired voltage, and these series strings are then connected in parallel to allow the system to produce more current, as illustrated in the sketch below.
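As a quick numerical illustration of this series/parallel scaling, the following sketch (in Python; the module ratings and the function name pv_array_rating are illustrative assumptions, not values from the paper) computes the array voltage, current and power from the number of series modules and parallel strings.

    # Illustrative sketch: series modules add voltage, parallel strings add current.
    def pv_array_rating(v_module, i_module, n_series, n_parallel):
        """Return (array voltage, array current, array power)."""
        v_array = v_module * n_series      # series connection raises the voltage
        i_array = i_module * n_parallel    # parallel connection raises the current
        return v_array, i_array, v_array * i_array

    # Example: assumed 30 V / 8 A modules, 10 in series, 2 strings in parallel
    print(pv_array_rating(30.0, 8.0, 10, 2))   # (300.0, 16.0, 4800.0)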

Fig. 3 Equivalent circuit of pv cell


Generally, the equivalent circuit of a PV model consists of a photocurrent source, a diode, a parallel resistor expressing the leakage current, and a series resistor describing the internal resistance to the current flow [10]. The current output of a solar cell is given by
I = Iph - Is [ exp( q (V + I Rs) / (N k T) ) - 1 ] - (V + I Rs) / Rsh
Iph = [ Isc + Ki (T - Tref) ] S / 1000
Is = Irs (T / Tref)^3 exp[ (q Eg / (N k)) (1 / Tref - 1 / T) ]
where Iph is the photocurrent, Is the saturation current, Irs the reverse saturation current, Isc the short-circuit current, Ki the temperature coefficient, S the solar irradiance in W/m2, Eg the band-gap energy, N the diode ideality factor, q the electron charge, k the Boltzmann constant, T the cell temperature and Tref the reference temperature.
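A small numerical sketch of this single-diode model follows (Python; the cell parameter values are assumptions chosen only for illustration, and a simple fixed-point iteration is used because the current also appears inside the exponential).

    import math

    Q_E = 1.602e-19   # electron charge (C)
    K_B = 1.381e-23   # Boltzmann constant (J/K)

    def pv_cell_current(v, iph, i0, rs, rsh, n_ideal, t, iters=50):
        # Solves I = Iph - I0*[exp(q(V + I*Rs)/(N*k*T)) - 1] - (V + I*Rs)/Rsh
        # by fixed-point iteration (adequate here because Rs is small).
        i = 0.0
        for _ in range(iters):
            vd = v + i * rs
            i = iph - i0 * (math.exp(Q_E * vd / (n_ideal * K_B * t)) - 1.0) - vd / rsh
        return i

    # Assumed cell data: Iph = 8 A, I0 = 1e-9 A, Rs = 5 mOhm, Rsh = 100 Ohm, N = 1.3, T = 298 K
    print(round(pv_cell_current(0.5, 8.0, 1e-9, 0.005, 100.0, 1.3, 298.0), 3))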

4. MODELING OF BOOST CONVERTER


The dc-dc boost converter is a step up converter that steps up the input voltage by storing energy in an inductor for a certain
time period, and then uses this energy to boost the input voltage to a higher value.

Fig. 4 Circuit diagram of Boost Converter


The relationship between the input and output voltages is given by
Vin ton + (Vin - Vout) toff = 0

The boost converter is an average-value model, meaning that the switching action is not represented. This model uses controlled current and voltage sources, whose inputs are reference signals, instead of power electronic devices to generate the boosted voltage across the output terminals. The output of the boost converter is the input to the dc/ac inverter.
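As a quick numerical check of the volt-second balance Vin ton + (Vin - Vout) toff = 0, the sketch below (Python; the input voltage and duty cycle are assumed values, and an ideal lossless converter is assumed) solves for the output voltage from the duty cycle.

    def boost_output_voltage(v_in, duty):
        # With ton = D*T and toff = (1 - D)*T, the volt-second balance
        # Vin*ton + (Vin - Vout)*toff = 0 gives Vout = Vin / (1 - D).
        if not 0.0 <= duty < 1.0:
            raise ValueError("duty cycle must be in [0, 1)")
        return v_in / (1.0 - duty)

    # Example: an assumed 150 V PV string boosted with D = 0.5 gives 300 V (ideal case).
    print(boost_output_voltage(150.0, 0.5))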

5. MODELING OF BATTERY
Terminal voltage Vb and state of charge (SOC) are the two important factors to describe the state of a battery.

Vb = E0 - K Q / (Q - it) + A exp(-B it) - R i
where it is the charge drawn from the battery (the integral of the current i over time), E0 the battery constant voltage, K the polarization constant, Q the battery capacity, A and B the exponential-zone constants and R the internal resistance.

SOC = 100 ( 1 - it / Q )
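A minimal numerical sketch of this generic battery model follows (Python; the parameter values are assumptions for illustration only, and the equations are the reconstructed forms written above).

    import math

    def battery_state(e0, k, q, a, b, r, i, it):
        # it = charge drawn so far (Ah), i = present current (A)
        vb = e0 - k * q / (q - it) + a * math.exp(-b * it) - r * i   # terminal voltage
        soc = 100.0 * (1.0 - it / q)                                 # state of charge (%)
        return vb, soc

    # Example with assumed parameters: 100 Ah pack discharging at 20 A after 25 Ah drawn
    print(battery_state(e0=52.0, k=0.05, q=100.0, a=3.0, b=0.06, r=0.02, i=20.0, it=25.0))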

6. PI CONTROLLER
The proportional integral (PI) controller is one of the most widely used controllers in industrial applications. The PI controller output in the time domain is defined by the following equation:
u(t) = Kp e(t) + Ki ∫ e(t) dt, with the integral taken from 0 to t
where e(t) is the error signal, Kp the proportional gain and Ki the integral gain.

Fig 5: Block diagram of PI control system


The major advantage of adding the integral part to the proportional controller is that it eliminates the steady-state error in the controlled variable. On the other hand, one of the main drawbacks of the integral action is that, if the error does not change direction, the integrator saturates after a while. This can be avoided by introducing a limiter at the integral part of the controller, before its output is added to the output of the proportional part. The input to the PI controller is the speed error (e), while the output of the PI controller is used as the input to the reference current block, as shown in Fig. 5. A small sketch of such an anti-windup PI controller is given below.
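The following discrete-time sketch (Python; the gains, sample time and limiter bounds are illustrative assumptions, not the paper's design values) shows a PI controller whose integral term is clamped by a limiter, as described above.

    def make_pi_controller(kp, ki, dt, i_min, i_max):
        state = {"integral": 0.0}
        def step(error):
            # integral part with a limiter (anti-windup) to avoid saturation
            state["integral"] += ki * error * dt
            state["integral"] = max(i_min, min(i_max, state["integral"]))
            return kp * error + state["integral"]
        return step

    # Example: one control step for an error of 0.8 (assumed gains and limits)
    pi = make_pi_controller(kp=0.8, ki=20.0, dt=1e-4, i_min=-1.0, i_max=1.0)
    print(pi(0.8))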

7. THE CONTROL STRATEGY


The different sources and converters have to be controlled in coordination with the utility grid to deliver continuous, high-efficiency and high-quality power to the variable ac and dc loads under variable solar irradiation, both when the hybrid grid operates in grid-tied mode and in standalone mode. This section presents the control algorithms for those converters.

A. Grid-Connected Mode
When the hybrid microgrid operates in grid-connected mode, the main purpose of the boost converter is to track the maximum power point of the PV array by regulating its terminal voltage. The bidirectional converter is controlled to regulate the current flow so as to achieve MPPT and to coordinate with the ac grid. Excess energy of the hybrid grid can be sent to the utility system. Excess energy can also be stored in the battery, but since the utility grid balances the surplus power the battery is less essential in this mode; its main purpose is to avoid frequent power transfer between the ac and dc links. The dc/dc converter of the battery can therefore be controlled as an energy buffer. The main converter is designed to operate as a bidirectional converter to integrate the complementary characteristics of the solar sources. The main control objectives of the bidirectional converter are to maintain a stable dc-link voltage under variable load conditions and to synchronize with the ac link and the utility system [11],[12].
The power flow equations at the dc and ac links are,
Ppv + Pac = PdcL + Pb
Ps = Pw - PacL - Pac
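A tiny sketch of how the dc-link balance above can be used to compute the power exchanged through the main converter follows (Python; the sign convention and the numerical values are assumptions for illustration).

    def main_converter_power(p_pv, p_dcl, p_b):
        # From Ppv + Pac = PdcL + Pb: Pac is the power drawn from the ac side by the dc link.
        # A negative result means the dc side exports power to the ac side.
        return p_dcl + p_b - p_pv

    # Example: 40 kW PV, 20 kW dc load, 5 kW into the battery -> 15 kW exported to the ac side
    print(main_converter_power(40e3, 20e3, 5e3))   # -15000.0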

B. Standalone mode
In isolated mode, the bidirectional dc to dc converter operates either in charging or discharging mode based on power balance
in the system [13]. By using either battery or boost converter the dc-link voltage is maintained based on different system operating
conditions.
Ppv + Pw = PdcL + Ploss + Pb


Fig.6 Control diagram for boost and main converter

Fig.7 Equivalent circuit representation for the three converters.


The time-averaged equivalent circuit representation of the boost and main converters is shown in Fig. 7, based on the basic operating principles of the boost converter and the inverter, respectively [16].

8. SIMULATION RESULT
This section presents the simulation results of the conventional and proposed methods in order to verify the performance of the control scheme. The software simulation has been carried out using the MATLAB/Simulink package. The full diagram of the control methodology and the modulation is shown.
Fig. 8 shows the terminal voltage of the solar panel. Fig. 9 shows that the voltage drops at 0.25 s and is quickly recovered by the controller. Fig. 11 shows the voltage (scaled by 0.2 for comparison) and current responses at the ac side of the main converter under variable solar radiation. Fig. 12 shows the voltage and current response when the dc load increases from 20 kW to 40 kW at 0.25 s with a fixed irradiation level of 750 W/m2. It can be seen from the current direction that power is injected from the dc to the ac grid before 0.25 s and reversed after 0.25 s. Fig. 13 shows the voltage response at the dc side of the bidirectional converter under the same conditions.

Fig.8 The terminal voltage of the solar panel


Fig.9 PV output power

Fig. 10 Solar radiation


Different resource conditions and load capacities are tested to validate the control methods. The simulation results show that
the hybrid grid can operate stably in the grid-tied or isolated mode. Stable ac and dc bus voltage can be guaranteed when the operating
conditions or load capacities change in the two modes. The power is smoothly transferred.

Fig.11. AC side voltage and current of the main converter with variable solar radiation

Fig.12. AC side voltage and current of the main converter with constant solar radiation


Fig.13 Voltage transient response.

9. CONCLUSION
A hybrid microgrid has been proposed and briefly studied. The hybrid microgrid reduces the dc/ac and ac/dc conversion processes required in an individual ac or dc grid. Efficient models and control schemes are proposed for the converters to maintain stable system operation under various load and resource conditions. Various control methods have been incorporated to harvest the maximum power from the dc and ac sources and to coordinate the power exchange between the dc and ac grids. The simulation results show that the hybrid grid is stable in both the grid-tied and standalone modes, that stable ac and dc bus voltages are assured in both modes when the operating conditions change, and that smooth power transfer is obtained when the load condition changes.
There are some practical problems in implementing the hybrid grid on the current ac-dominated infrastructure. The total system efficiency depends mainly on the reduction of the various conversion losses and on the cost of an additional dc link. The hybrid grid is mainly applicable where small customers want to install their own rooftop PV systems and wish to use LED lighting systems. The hybrid microgrid is also practicable for isolated industrial plants with a PV system as their major power supply.

REFERENCES:
[1] Y. Ito, Z. Yang and H. Akagi, "DC micro-grid based distribution power generation system," in Proc. IEEE Int. Power Electron. Motion Control Conf., Aug. 2004, vol. 3, pp. 1740-1745.
[2] H. Zhou, T. Bhattacharya, D. Tran, T. S. T. Siew, and A. M. Khambadkone, "Composite energy storage system involving battery and ultracapacitor with dynamic energy management in microgrid applications," IEEE Trans. Power Electron., vol. 26, no. 3, pp. 923-930, Mar. 2011.
[3] T. L. Vandoorn, B. Renders, L. Degroote, B. Meersman, and L. Vandevelde, "Active load control in islanded microgrids based on the grid voltage," IEEE Trans. Smart Grid, vol. 2, pp. 139-151, Mar. 2011.
[4] A. Ghazanfari, M. Hamzeh, and H. Mokhtari, "A control method for integrating hybrid power source into an islanded microgrid through CHB multilevel inverter," in Proc. IEEE Power Electron., Drive Syst. Technol. Conf., Feb. 2013, pp. 495-500.
[5] D. J. Hammerstrom, "AC versus DC distribution systems-did we get it right?," in Proc. IEEE Power Eng. Soc. Gen. Meet., Jun. 2007, pp. 1-5.
[6] Y. Ito, Z. Yang, and H. Akagi, "DC micro-grid based distribution power generation system," in Proc. IEEE Int. Power Electron. Motion Control Conf., Aug. 2004, vol. 3, pp. 1740-1745.
[7] A. Sannino, G. Postiglione, and M. H. J. Bollen, "Feasibility of a DC network for commercial facilities," IEEE Trans. Ind. Appl., vol. 39, no. 5, pp. 1409-1507, Sep. 2003.
[8] A. Sannino, G. Postiglione, and M. H. J. Bollen, "Feasibility of a DC network for commercial facilities," IEEE Trans. Ind. Appl., vol. 39, no. 5, pp. 1409-1507, Sep. 2003.
[9] X. Liu, P. Wang and P. C. Loh, "A hybrid AC/DC microgrid and its coordination control," IEEE Trans. Smart Grid, vol. 2, no. 2, pp. 278-286, 2011.
[10] V. Madhavi and J. V. R. Vithal, "Modeling and Coordination Control of Hybrid AC/DC Microgrid," International Journal of Emerging Technology and Advanced Engineering, vol. 4, issue 8, August 2014.
[11] S. A. Daniel and N. Ammasai Gounden, "A novel hybrid isolated generating system based on PV fed inverter-assisted wind-driven induction generators," IEEE Trans. Energy Conv., vol. 19, no. 2, pp. 416-422, Jun. 2004.
[12] C. Wang and M. H. Nehrir, "Power management of a stand-alone wind/photovoltaic/fuel cell energy system," IEEE Trans. Energy Conv., vol. 23, no. 3, pp. 957-967, Sep. 2008.
[13] L. A. C. Lopes and H. Sun, "Performance assessment of active frequency drifting islanding detection methods," IEEE Trans. Energy Conv., vol. 21, no. 1, pp. 171-180, Mar. 2006.
[14] L. Jong-Lick and C. Chin-Hua, "Small-signal modeling and control ZVT-PWM boost converters," IEEE Trans. Power Electron., vol. 18, no. 1, pp. 2-10, Jan. 2003.
[15] Y. Sozer and D. A. Torrey, "Modeling and control of utility interactive inverters," IEEE Trans. Power Electron., vol. 24, no. 11, pp. 2475-2483, Nov. 2009.

An Advanced Topology for Cascade Multilevel Inverter Based on Developed H-Bridge
1Jesline Naveena. A (PG Scholar), 2Mr. B. Ramraj (Assistant Professor)
Nandha Engineering College, Erode.
Email ID: ajesline@gmail.com1, rajuvcet@gmail.com2

Abstract In this paper, an advanced topology for a cascaded multilevel inverter using developed H-bridges is proposed. The proposed topology requires fewer dc voltage sources and power switches, which results in decreased complexity and total cost of the inverter. Moreover, a Bee Algorithm (BA) to determine the magnitudes of the dc voltage sources is proposed; it is used to solve the transcendental equations for finding the switching angles. This algorithm can be used for any number of voltage levels without complex analytical calculations. Simulation results for a 15-level inverter verify the validity and effectiveness of the proposed algorithm.

Keywords Cascaded multilevel inverter, Developed H-bridge, Multilevel inverter, Bee Algorithm, Multicarrier PWM technique, Voltage source inverter.
I.INTRODUCTION

Basically, an inverter is a device that converts DC power into AC power at the desired output voltage and frequency. The demerits of the conventional inverter are low efficiency, high cost, and high switching losses. To overcome these demerits, and because of other advantages such as high power quality, lower order harmonics, lower switching losses, and better electromagnetic interference performance [1],[2], the multilevel inverter is adopted [3]. The multilevel inverter produces a staircase output voltage waveform that approximates a sinusoidal waveform. The multilevel inverter output voltage has fewer harmonics compared to the conventional bipolar inverter output voltage [4]. As the number of output levels N increases, the harmonic content of the output voltage tends towards zero. Multilevel inverters are mainly classified as diode-clamped, flying-capacitor and cascaded multilevel inverters [5]. The control of the cascaded multilevel inverter is very easy compared to the other multilevel inverters because it does not require any clamping diodes or flying capacitors. Moreover, abundant modulation techniques have been developed for the cascaded multilevel inverter to reduce the power losses. The most attractive features of multilevel inverters are as follows.
1. They can generate output voltages with extremely low distortion and lower order harmonics.
2. They draw input current with very low distortion.
3. In addition, using sophisticated modulation methods, common-mode (CM) voltages can be eliminated.
4. They can operate with a lower switching frequency.
II.CASCADE MULTILEVEL INVERTER
The concept of this inverter is based on connecting H-bridge inverters in series to obtain a sinusoidal voltage output. The output voltage is the sum of the voltages generated by each cell, and the number of output voltage levels is 2n+1, where n is the number of cells. The switching angles can be chosen in such a way that the total harmonic distortion is minimized. One of the advantages of this type of multilevel inverter is that it needs fewer components compared to the diode-clamped or the flying-capacitor inverter, so the price and the weight of the inverter are less than those of the other two types. Figure 1 shows the power circuit for one phase leg of a three-level and a five-level cascaded inverter. In a 3-level cascaded inverter, each single-phase full-bridge inverter generates three voltages at the output: +Vdc, 0, -Vdc (positive dc voltage, zero, and negative dc voltage), which is made possible by the connected capacitors. The resulting output ac voltage swings from -Vdc to +Vdc with three levels, and from -2Vdc to +2Vdc in the five-level case. The output voltage of an M-level inverter is the sum of all the individual inverter outputs. Each of the H-bridge active devices switches only at the fundamental frequency, and each H-bridge unit generates a quasi-square waveform by phase-shifting its positive and negative phase legs with appropriate switching timings. Further, each switching

Figure.1. Single Phase Structures Of Cascaded Inverter (A)3-Level,(B)5-Level


device always conducts for 180 degrees (or 1/2 cycle) regardless of the pulse width of the quasi-square wave, so this switching method equalizes the current stress in each active device. This inverter topology is suitable for high-voltage and high-power inversion because of its ability to synthesize waveforms with a better harmonic spectrum at a low switching frequency. Considering the simplicity of the circuit and its advantages, the cascaded H-bridge topology is chosen for the presented work; a numerical sketch of its level count and staircase synthesis is given below. A multilevel inverter has four main advantages over the conventional bipolar inverter; first, the voltage stress on each switch is decreased due to the series connection of the switches. The major advantage of this topology and its algorithms is its ability to generate a considerable number of output voltage levels using a small number of dc voltage sources and power switches, but the wide variety in the magnitudes of the dc voltage sources is its most remarkable disadvantage.
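As a numerical illustration of the 2n+1 level count and of the fundamental-frequency staircase synthesis described above, the sketch below (Python; the cell count, dc voltage and switching angles are assumptions chosen for illustration, not the paper's values) sums the contributions of equal-voltage H-bridge cells at a given electrical angle.

    def chb_output(vdc, angles_deg, theta_deg):
        # Each H-bridge cell contributes +Vdc, 0 or -Vdc. With fundamental-frequency
        # switching, cell i conducts for angle_i <= theta <= 180 - angle_i in the
        # positive half cycle (mirrored in the negative half cycle).
        theta = theta_deg % 360.0
        sign = 1.0 if theta < 180.0 else -1.0
        phi = theta if theta < 180.0 else theta - 180.0
        return sign * vdc * sum(1 for a in angles_deg if a <= phi <= 180.0 - a)

    angles = [10.0, 30.0, 55.0]                    # assumed switching angles for n = 3 cells
    print("output levels:", 2 * len(angles) + 1)   # 2n+1 levels
    print(chb_output(100.0, angles, 90.0))         # all three cells conduct at the crest: 300.0 V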

Fig. 2. Proposed seven-level inverters: (a) first proposed topology, (b) second proposed topology.

TABLE I
OUTPUT VOLTAGES OF THE PROPOSED SEVEN- LEVEL INVERTERS


In this paper, in order to increase the number of output voltage levels and reduce the number of power
switches, driver circuits, and the total cost of the inverter, a new topology of cascaded multilevel inverters is
proposed[2],[4],[7] and [8].Then, to determine the magnitude of the dc voltage sources, a new algorithm is proposed.
Moreover, the proposed topology is compared with other topologies from different points of view such as the number of
IGBTs, number of dc voltage sources, the variety of the values of the dc voltage sources, and the value of the
blocking voltages per switch. Finally, the performance of the proposed topology in generating all voltage levels
through a 15-level inverter is confirmed by simulation using power system computer aided design (PSCAD)
software.
III. ADVANCED TOPOLOGY
The advanced topology can be obtained by adding two unidirectional power switches and one dc voltage source to the H-bridge inverter structure. In other words, the proposed inverters are comprised of six unidirectional power switches (Sa, Sb, SL,1, SL,2, SR,1, and SR,2) and two dc voltage sources (VL,1 and VR,1). In this paper, these topologies are called the developed H-bridge. As shown in Fig. 2, the simultaneous turn-on of SL,1 and SL,2 (or SR,1 and SR,2) causes the voltage sources to be short-circuited, so the simultaneous turn-on of the mentioned switches must be avoided. In addition, Sa and Sb should not be turned on simultaneously. The difference between the topologies illustrated in Fig. 2 is in the polarity of the connection of the dc voltage sources. Table I shows the output voltages of the proposed inverters for different states of the switches. The values of the dc voltage sources should be different in order to generate more voltage levels without increasing the number of switches and dc voltage sources.
For the advanced topology, the number of output voltage levels (Nstep), the number of switches (Nswitch), the number of dc voltage sources (Nsource), and the maximum magnitude of the generated voltage are calculated as follows, respectively:
Nstep = 2^(2n+1) - 1    (1)
Nswitch = 4n + 2    (2)
Nsource = 2n    (3)

IV.BEE ALGORITHM
The Bee Algorithm is an optimization algorithm based on the natural foraging behaviour of honeybees, used to find the optimal solution [6]. A bee colony consists of three kinds of bees: employed bees, on-looker bees, and scout bees. Employed bees carry information about the place and amount of nectar of a particular food source. They transfer this information to the on-looker bees through a dance in the hive, and the duration of the dance indicates the amount of nectar in the food source. An on-looker chooses a food source based on the amount of nectar in it, so a good food source attracts more on-looker bees to itself. Scout bees search the solution space and find new food sources. Scout bees control the exploring process, while employed and on-looker bees play an exploiting role. In this algorithm, food sources are considered as possible solutions to the problem [5]. A food source is a D-dimensional vector, where D is the number of optimization variables, and the amount of nectar in a food source determines its fitness value. The basic flowchart of the BA is shown in Fig. 3.


Fig.3.Basic Flowchart of BA
In step 1, random initial food sources are generated; the number of initial food sources is half of the bee colony. In step 2, employed bees are sent to the food sources to determine the amount of nectar and to calculate its fitness. For each food source there is only one employed bee, so the number of food sources is equal to the number of employed bees. In addition, each employed bee modifies the solution saved in memory by searching in the neighbourhood of its food source, and saves the new solution if its fitness is better than the old one. The employed bees then go back to the hive and share the solutions with the on-looker bees. In step 3, the on-looker bees, which form the other half of the colony, select the best food sources using a probability-based selection process; food sources with more nectar attract more on-looker bees. The on-looker bees are sent to the selected food sources, improve the chosen solutions and calculate their fitness. Similar to the employed bees, an on-looker bee saves a new solution if its fitness is better than the old solution. In step 4, food sources that are not improved for a given number of iterations are abandoned; the corresponding employed bee is sent out as a scout bee to find a new food source, and the abandoned food source is replaced by the new one. Finally, in step 5, the best food source is memorized. The maximum number of iterations is set as the termination criterion and is checked at the end of each iteration; if it is not met, the algorithm returns to step 2 for the next iteration. A compact sketch of this procedure is given below.
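The following compact sketch (Python) is an illustrative implementation of the employed/on-looker/scout phases described above for minimizing a generic cost function; it is not the authors' code, and the cost function, colony size, trial limit and bounds are all assumptions. In the paper's context the cost function would encode the selective-harmonic-elimination equations.

    import random

    def bee_algorithm(cost, dim, bounds, n_food=10, limit=20, max_iter=100):
        lo, hi = bounds
        def rand_source():
            return [random.uniform(lo, hi) for _ in range(dim)]
        def fitness(x):
            c = cost(x)
            return 1.0 / (1.0 + c) if c >= 0 else 1.0 + abs(c)
        def neighbour(i):
            k, j = random.randrange(n_food), random.randrange(dim)
            cand = foods[i][:]
            cand[j] += random.uniform(-1.0, 1.0) * (foods[i][j] - foods[k][j])
            cand[j] = min(hi, max(lo, cand[j]))
            return cand
        foods = [rand_source() for _ in range(n_food)]
        fits = [fitness(x) for x in foods]
        trials = [0] * n_food
        best = min(foods, key=cost)
        for _ in range(max_iter):
            # employed-bee phase: local search around every food source
            for i in range(n_food):
                cand = neighbour(i)
                if fitness(cand) > fits[i]:
                    foods[i], fits[i], trials[i] = cand, fitness(cand), 0
                else:
                    trials[i] += 1
            # on-looker phase: fitness-proportional selection of sources to exploit
            total = sum(fits)
            for _ in range(n_food):
                r, acc, i = random.uniform(0.0, total), 0.0, 0
                for idx, f in enumerate(fits):
                    acc += f
                    if acc >= r:
                        i = idx
                        break
                cand = neighbour(i)
                if fitness(cand) > fits[i]:
                    foods[i], fits[i], trials[i] = cand, fitness(cand), 0
                else:
                    trials[i] += 1
            # scout phase: abandon exhausted sources and explore new ones
            for i in range(n_food):
                if trials[i] > limit:
                    foods[i], trials[i] = rand_source(), 0
                    fits[i] = fitness(foods[i])
            best = min([best] + foods, key=cost)
        return best

    # Example: minimize a simple quadratic as a stand-in for the harmonic-elimination cost
    print(bee_algorithm(lambda x: sum((xi - 1.0) ** 2 for xi in x), dim=3, bounds=(0.0, 3.14)))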

V.ADVANCED ALGORITHM TO DETERMINE MAGNITUDES OF DC VOLTAGE SOURCES


In this paper, the following algorithm is applied to determine the magnitudes of the dc voltage sources. It is important to note that all voltage levels (even and odd) can be generated.

A. Proposed Seven-Level Inverter


The magnitudes of the dc voltage sources of the 7-level inverter shown in Fig. 2(b) are determined as follows:
VL,1 = Vdc    (4)
VR,1 = 2Vdc    (5)
Considering (4), (5) and Table I, the proposed 7-level inverter can generate 0, ±Vdc, ±2Vdc and ±3Vdc at the output.

B. Proposed 15-Level Inverter


The magnitudes of the dc voltage sources of the proposed 15-level inverter are recommended as follows:
VL,1 = Vdc    (6)
VR,1 = 2Vdc    (7)
VL,2 = 5Vdc    (8)
The proposed inverter can generate all negative and positive voltage levels from 0 to 15Vdc with steps of Vdc.

C. Proposed General Multilevel Inverter


The magnitudes of the dc voltage sources of the proposed general multilevel inverter can be obtained as follows:
VL,j = 5^(j-1) Vdc,  for j = 1, 2, 3, . . . , n    (9)
VR,j = 2 × 5^(j-1) Vdc,  for j = 1, 2, 3, . . . , n    (10)
Considering (4) and (16), the values of Vo,max and Vblock,n of the proposed general multilevel inverter are as follows, respectively:
Vo,max = VL,n + VR,n = 3 × 5^(n-1) Vdc    (11)
Vblock,n = 4(VL,n + VR,n) = 12 × 5^(n-1) Vdc    (12)
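A short numerical sketch of this dc-source sizing rule follows (Python; the function name and the choice of Vdc are illustrative assumptions). It evaluates equations (2), (3), (9)-(12) for a chosen number of units n, which can be useful when checking a design.

    def developed_hbridge_design(n, vdc=1.0):
        # dc source magnitudes per unit j: VL,j = 5^(j-1) Vdc, VR,j = 2*5^(j-1) Vdc
        vl = [5 ** (j - 1) * vdc for j in range(1, n + 1)]
        vr = [2 * 5 ** (j - 1) * vdc for j in range(1, n + 1)]
        n_switch = 4 * n + 2                  # equation (2)
        n_source = 2 * n                      # equation (3)
        vo_max = vl[-1] + vr[-1]              # equation (11): 3 * 5^(n-1) * Vdc
        vblock_n = 4 * (vl[-1] + vr[-1])      # equation (12): 12 * 5^(n-1) * Vdc
        return vl, vr, n_switch, n_source, vo_max, vblock_n

    # Example: one developed H-bridge unit (n = 1) uses Vdc and 2Vdc and gives Vo,max = 3Vdc
    print(developed_hbridge_design(1))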

VI. SIMULATION RESULTS


In order to verify the correct performance of the proposed multilevel inverter in generating all output voltage levels (even and odd), a 15-level inverter based on the topology shown in Fig. 2 has been used for the simulation, and Table I shows the switching states of the 15-level inverter. The simulation is done using PSCAD software. The simulated output voltage and current waveforms are shown in Fig. 4. As Fig. 4(a) shows, the proposed topology is able to generate the 15 levels (15 positive levels, 15 negative levels, and 1 zero level) with a maximum voltage of 225 V. Comparing the output voltage and current waveforms indicates that the output current waveform is more similar to the ideal sinusoidal waveform than the output voltage, because the RL load acts as a low-pass filter. In addition, there is a phase difference between the output voltage and current waveforms,

Fig.4.Proposed 15-level inverter.(a) Output voltage waveform.


(b) Output current waveform.


which is caused by the inductive nature of the load. The total harmonic distortions of the output voltage and current are both approximately below 3%. Considering the magnitude of the blocking voltage of the switches, the relations associated with the maximum voltage stress of the switches are well confirmed. Figs. 5 and 6 show the simulation results of the implemented 15-level inverter.
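For reference, the THD of a simulated waveform is typically computed from its harmonic spectrum as in the generic sketch below (Python; the sampling and test waveform are assumptions, and this is not the PSCAD post-processing actually used by the authors).

    import math

    def thd(samples):
        # Direct-summation Fourier coefficients over one fundamental period of samples.
        n = len(samples)
        def mag(h):
            re = sum(s * math.cos(2 * math.pi * h * k / n) for k, s in enumerate(samples))
            im = sum(s * math.sin(2 * math.pi * h * k / n) for k, s in enumerate(samples))
            return 2.0 * math.hypot(re, im) / n
        v1 = mag(1)                                   # fundamental magnitude
        harmonics = [mag(h) for h in range(2, 50)]    # harmonics 2..49
        return math.sqrt(sum(v ** 2 for v in harmonics)) / v1

    # Example: a sine with 3% fifth harmonic gives a THD of about 0.03
    sine = [math.sin(2 * math.pi * k / 256) + 0.03 * math.sin(10 * math.pi * k / 256) for k in range(256)]
    print(round(thd(sine), 4))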

Fig.5.Simulation Output Of 15 level Inverter

Fig.6.Proof for THD Result



VII.CONCLUSION
In this paper, advanced topologies have been proposed for multilevel inverters to generate seven voltage levels at the output. The basic topologies can be developed to any number of output levels, and the 7-level, 15-level and general topologies are presented accordingly. In the proposed system, the efficiency is about 97.2% and the THD is below 3%. In addition, a Bee Algorithm to determine the magnitudes of the dc voltage sources has been proposed. The advanced topology was compared with different kinds of topologies presented in the literature from different points of view. According to the comparison results, the advanced topology requires fewer MOSFETs, power diodes, driver circuits, and dc voltage sources. Moreover, the magnitude of the blocking voltage of the switches is lower than that of conventional topologies.

REFERENCES:
[1] Ebrahim Babaei, Somayeh Alilu, and Sara Laali, "A New General Topology for Cascaded Multilevel Inverters With Reduced Number of Components Based on Developed H-Bridge," IEEE Trans. Ind. Electron., vol. 61, Aug. 2014.
[2] E. Babaei and S. H. Hosseini, "Charge balance control methods for asymmetrical cascade multilevel converters," in Proc. ICEMS, Seoul, Korea, 2007, pp. 74-79.
[3] K. Wang, Y. Li, Z. Zheng, and L. Xu, "Voltage balancing and fluctuation suppression methods of floating capacitors in a new modular multilevel converter," IEEE Trans. Ind. Electron., vol. 60, no. 5, pp. 1943-1954, May 2013.
[4] J. Ebrahimi, E. Babaei, and G. B. Gharehpetian, "A new topology of cascaded multilevel converters with reduced number of components for high voltage applications," IEEE Trans. Power Electron., vol. 26, no. 11, pp. 3109-3118, Nov. 2011.
[5] M. Manjrekar and T. A. Lipo, "A hybrid multilevel inverter topology for drive application," in Proc. APEC, 1998, pp. 523-529.
[6] A. Kavousi, Behrooz Vahidi, Naeem Farokhnia and S. Hamid Fathi, "Application of the Bee Algorithm for Selective Harmonic Elimination Strategy in Multilevel Inverters," IEEE Trans. Power Electron., vol. 27, no. 4, pp. 625-636, April 2012.
[7] A. Rufer, M. Veenstra, and K. Gopakumar, "Asymmetric multilevel converter for high resolution voltage phasor generation," presented at Proc. EPE, Lausanne, Switzerland, 1999.
[8] S. Laali, K. Abbaszades, and H. Lesani, "A new algorithm to determine the magnitudes of dc voltage sources in asymmetrical cascaded multilevel converters capable of using charge balance control methods," in Proc. ICEMS, Incheon, Korea, 2010, pp. 56-61.


Optimized implementation of ANN control strategy for parallel operation of single phase voltage source inverter
Dinesh Kumar.V1 PG Scholar, Dr. Balachandran.M2 Associate Professor, Nirubha.M3 Assistant Professor,
Department of EEE, Nandha Engineering College, Erode, India.
dineshv10@gmail.com, balachandran_pm@yahoo.com, nirubha.m@gmail.com

Abstract This paper describes an efficient control strategy for the parallel operation of inverters on a common alternating current (AC) bus, which is used in real-time applications such as uninterrupted power supplies (UPS), medical equipment and many others. The control system for the parallel operation of the inverters consists of two main loops: the first loop is the parallelism control, which uses the feedback of the inductor current to modify the input voltage to the filter; the second loop is the voltage control over the output voltage of each individual voltage source inverter connected to the common AC bus. Here the control system is implemented with a neuro-fuzzy control strategy, which tolerates imprecise information and provides flexible control with better stability. The proposed control strategy ensures that the voltage source inverters (VSI) operating on the common AC bus share the load current properly, provide redundancy, and do not need any communication exchange between the inverter modules. The parallel operation of inverters on a common AC bus with the proposed control system is verified by both theoretical and experimental results.

Keywords Alternating Current(AC),Uninterrupted Power Supply(UPS),Voltage Source Inverters(VSI),Direct Current


(Dc),Average Load Sharing(ALS),Master Slave(MS).

INTRODUCTION
The need for highly reliable power supplies has increased in recent years with the proliferation of critical loads such as computers, medical equipment, satellite systems, telecommunications, and other electronics-dependent equipment, which are intensely in demand in present-day society. Notwithstanding, the role of the uninterruptible power supply (UPS) has also increased tremendously in sustaining highly reliable power to those critical loads. One of the ways to achieve higher reliability is by paralleling two or more UPS units, because a paralleled system has wide advantages over a single-unit UPS. These advantages include increased power capability, enhanced availability from the fault tolerance provided by more than one module, and ease of maintenance through redundancy [1].
In very large uninterrupted power supply (UPS) applications, the parallel operation of inverters on a common AC bus is common, and it is a special feature of high-performance uninterrupted power supply systems. The parallel connection of UPS inverters is a challenging problem that is more complex than paralleling DC sources, since every inverter must share the load while staying synchronized. In theory, if the output voltage of every inverter had the same amplitude, frequency, and phase, the load current could be distributed equally; in practice, however, mismatches in the interconnections make dedicated load-sharing control necessary. The first approach is based on active load-sharing techniques, most of which are derived from control schemes of parallel-connected dc-dc converters, such as centralized, master-slave (MS), average load sharing (ALS), and circular chain control (3C). Although these control schemes achieve both good output-voltage regulation and equal current sharing, they need critical intercommunication lines among the modules that can reduce system reliability and expandability [2].
The parallel operation of voltage source inverters (VSIs) is a configuration that allows the processed load power to be shared among the converters, creating redundant systems and making power expansion flexible. These characteristics have led to the use of this configuration in uninterruptible power supplies (UPS), mainly to build redundant systems.

THE CONCEPTION OF CONTROL STRATEGY


Review
The control strategy includes two main loops for each voltage source inverter (VSI) connected to the common AC bus: the parallelism control loop and the voltage control loop. The parallelism control loop employs the feedback of the inductor current from the
output filter to modify the input voltage of the same filter and, therefore, to control the power flow of each inverter to the load. The

second loop voltage control is responsible for controlling the output voltage of the LC filter, which coincides with the output voltage
of the VSI.

Fig 1.Circuit of VSI with two control loops

In each VSI, a control loop is added, called the parallelism control, which is placed in cascade with the voltage controller Cv, as shown in Fig. 1. This second loop enables the inverter to work in parallel. This new control modifies the output signal of the voltage controller, Vvc, with the aim of changing the Vpc signal applied to the PWM modulator and, consequently, altering the VAB voltage of the VSI. The parallelism control employs the feedback of the inductor current of L from the LC filter, called Vi, to modify the Vvc signal. Therefore, each VSI utilizes only the feedback of its own inductor current to ensure its proper parallel operation. A relevant point is that the proposed strategy is based on a single reference voltage (Vref) for all VSIs; thus the output voltages of all inverters have only small deviations, which are caused by parametric variations in the control and power components of the inverters. Therefore, the parallelism control has the function of equalizing these small deviations to ensure power sharing among the inverters [3].
Proposed Control Strategy
We consider a multi-input, single-output dynamic system whose state at any instant can be defined by n variables X1, X2, ..., Xn. The control action that drives the system to a desired state can be described by the well-known concept of if-then rules, where the input variables are first transformed into their respective linguistic variables, a step called fuzzification [4],[5]. Then the conjunction of these rules, called the inferencing process, determines the linguistic value of the output. This linguistic value of the output, also called the fuzzified output, is then converted to a crisp value using a defuzzification scheme. All rules in this architecture are evaluated in parallel to generate the final output fuzzy set, which is then defuzzified to obtain the crisp output value. The conjunction of the fuzzified inputs is usually done by either the min or the product operation (we use the product operation), and for generating the output the max or the sum operation is generally used. For defuzzification, we have used the simplified reasoning method, also known as the modified center of area method. For simplicity, triangular fuzzy sets are used for both input and output. The whole working and analysis of the fuzzy controller depends on the following constraints on the fuzzification, defuzzification and knowledge base of an FLC, which give a linear approximation of most FLC implementations [6].
CONSTRAINT 1: The fuzzification process uses the triangular membership function.
CONSTRAINT 2: The width of a fuzzy set extends to the peak value of each adjacent fuzzy set and vice versa. The sum of the
membership values over the interval between two adjacent sets will be one. Therefore, the sum of all membership values over the
universe of discourse at any instant for a control variable will always be equal to one. This constraint is commonly referred to as fuzzy
partitioning.
CONSTRAINT 3: The defuzzification method used is the modified center of area method. This method is similar to obtaining a
weighted average of all possible output values.
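To make these constraints concrete, the following small sketch (Python) shows a two-input fuzzy inference step with triangular membership functions, product conjunction, and weighted-average (modified centre of area) defuzzification. The rule table, set positions and output centres are illustrative assumptions, not the authors' design.

    def tri(x, a, b, c):
        # triangular membership function peaking at b, zero outside (a, c)
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuzzy_step(error, derror):
        # two linguistic sets per input (Negative, Positive) forming a fuzzy partition on [-1, 1]
        e_neg, e_pos = tri(error, -2, -1, 1), tri(error, -1, 1, 2)
        d_neg, d_pos = tri(derror, -2, -1, 1), tri(derror, -1, 1, 2)
        # four rules with product conjunction; each rule points to a crisp output centre
        rules = [
            (e_neg * d_neg, -1.0),
            (e_neg * d_pos, -0.3),
            (e_pos * d_neg,  0.3),
            (e_pos * d_pos,  1.0),
        ]
        num = sum(w * c for w, c in rules)
        den = sum(w for w, c in rules)
        return num / den if den else 0.0   # weighted average = modified centre of area

    # Example: small positive error that is decreasing gives a mild positive correction
    print(fuzzy_step(0.4, -0.2))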
An example of a very simple neuro fuzzy controller with just four rules is depicted in figure 2. This architecture can be readily
understood as a neural-like architecture. At the same time, it can be easily interpreted as a fuzzy logic controller. The modules X1
and X2 represent the input variables that describe the state of the system to be controlled. These modules deliver crisp input values to
the respective membership modules (m-modules) which contain definitions of membership functions and basically fuzzify the input.
Now, both inputs are in the form of linguistic variables with memberships associated with the respective linguistic variables. The m-modules are further connected to R-modules which represent the rule base of the controller, also known as the knowledge base. Each
m-module gives to its connected R-modules, the membership value m (xi) of the input variable Xi associated with that particular
linguistic variable or the input fuzzy set. The R-modules use either min-operation or product-operation to generate conjunction of their
respective inputs and pass this calculated value forward to one of n-modules. The n-modules basically represent the output fuzzy sets

or store the definition of output linguistic variables. If there are more than two rules affecting one output variable then either their sum
or the max is taken and the fuzzy set is either clipped or multiplied by that resultant value. These n-modules pass on the changed
output fuzzy sets to C-module where the defuzzification process is used to get the final crisp value of the output [7].

Fig 2. Architecture of four rule fuzzy controller from neural networks point of view
The architecture of a fuzzy logic controller given in Fig. 2 resembles a feed-forward neural network. The X-, R-, and C-modules can be viewed as the neurons of a layered neural network, and the m- and n-units as the adaptable weights of the network. The X-module layer can easily be identified as the input layer of a multi-input neural network, whereas the C-module layer can be seen as the output layer. The R-module layer serves as the hidden or intermediate layer that constitutes the internal representation of the network. The fact that one n-module can be connected to more than one R-module is equivalent to connections in a neural network that share a common weight. This is of key importance for keeping the structural integrity of the fuzzy controller intact.

EXPERIMENTAL ANALYSIS
The parallel operation of voltage source inverters connected to a common AC bus is used in many applications such as uninterrupted power supplies and medical equipment. The output voltages of all inverters connected to the common AC bus should be the same, and all should share the load equally. The AC source input is given to an AC/DC converter, the DC output is fed into a single-phase voltage source inverter, and the inverter output is then filtered by an LC filter through an isolation transformer placed between the inverter and the filter. The filter output is connected to the common AC bus, where the similar AC outputs of the other modules are merged. The output voltage at the filter output is controlled by taking it as feedback and comparing it with the reference voltage from the reference voltage bus; the controller is a classic PID control strategy in the existing system and a fuzzy logic controller in the proposed system. The error and the change in error are analysed in the controller, the current signal from the controller is given to the parallelism control where it is compared with the LC-filter current, and the resulting signal is given to the PWM gate driver circuit of the inverter to vary its switching. The block representation is shown in Fig. 3.
TABLE I
SPECIFICATION OF THE 5 kVA VSI

Vref    5 V (peak)
Vo      220 Vrms / 60 Hz
KiL     0.015
Vac     220 Vrms / 60 Hz
        1.63
Lm      348 mH
Ll      370 µH
Le      730 µH
        1100 µH
        36 µF

Fig 3. Circuit of VSIs modules connected in parallel


This methodology defines voltage and current equations for each inverter in relation to the components of all VSIs. Thus, the model allows the study of the power distribution, the load sharing, and the circulation of currents among the inverters in the case of parametric variations of the components, load variations, and controller modifications. The specifications of the two voltage source inverters connected in parallel on the common AC bus, shown in block representation in Fig. 3, are detailed in Table I.

SIMULATION RESULTS
The pulse output from the pulse width modulation (PWM) generator, after processing by the control strategy, is shown in Fig. 4.

Fig 4 Pulse output from PWM generator


Fig 5 Square output voltage from inverter


The output current measured from voltage source inverter through isolation transformer is shown in fig 6

Fig 6.Output current measured from the inverter


The output voltage measured from the common AC bus where inverters modules of various voltage ratings are connected is shown in
fig 7

Fig 7.Output voltage measured from common AC bus

CONCLUSION
Uninterrupted power supplies (UPS) and medical equipment frequently need the parallel operation of inverters with proper load-sharing capability and redundancy. Hence, a proper control strategy should be implemented for the independent operation of the voltage source inverters, making them work under any load condition, including no load. The control strategy allows the connection and/or disconnection of one inverter from the common ac bus with a smooth transient response in the load voltage, and it also enables the parallel operation of inverters of different power ratings. The ANN control strategy, an emerging control approach for many applications, is more efficient and satisfies all the requirements of control strategies for the parallel operation of voltage source inverters. Simulation results have shown that fast dynamic response, proper output regulation, and equal current distribution can be achieved in the proposed multi-module inverter system.


REFERENCES:
[1] Yeong Jia. C and E. K. K. Sng, "A novel communication strategy for decentralized control of paralleled multi-inverter systems," IEEE Trans. Power Electron., vol. 21, no. 1, pp. 148-156, Jan. 2006.
[2] Guerrero J. M, L. Hang, and J. Uceda, "Control of distributed uninterruptible power supply systems," IEEE Trans. Ind. Electron., vol. 55, no. 8, pp. 2845-2859, Aug. 2008.
[3] Telles B. Lazzarin, Guilherme A. T. Bauer, and Ivo Barbi, "A Control Strategy for Parallel Operation of Single-Phase Voltage Source Inverters: Analysis, Design and Experimental Results," IEEE Trans. Ind. Electron., vol. 60, no. 6, June 2013.
[4] Gaurav and Amrit Kaur, "Comparison between Conventional PID and Fuzzy Logic Controller for Liquid Flow Control: Performance Evaluation of Fuzzy Logic and PID Controller by Using MATLAB/Simulink," June 2012.
[5] Zhongyi. H and X. Yan, "Distributed control for UPS modules in parallel operation with RMS voltage regulation," IEEE Trans. Ind. Electron., vol. 5, no. 8, pp. 2860-2869, Aug. 2008.
[6] Qing-Chang Zhong, "Robust Droop Controller for Accurate Proportional Load Sharing Among Inverters Operated in Parallel," IEEE Trans. Ind. Electron., vol. 60, no. 4, April 2013.
[7] Pascual M, G. Garcera, E. Figueres, and F. Gonzalez-Espin, "Robust model-following control of parallel UPS single-phase inverters," IEEE Trans. Ind. Electron., vol. 55, no. 8, pp. 2870-2883, Aug. 2008.


Isolated Quasi-Switched-Capacitor DC/DC Converter using solar PV


Sathish Kumar. T1, PG Scholar; Dr. Balachandran. M2, Associate Professor
Department of EEE, Nandha Engineering College, Erode, India
sathishkumar.ts.t7@gmail.com, balachandran_pm@yahoo.com

Abstract: This paper proposes to utilize the DC source obtained from a solar PV array. The DC/DC converter provides unidirectional power flow between the solar PV and the load, whereas the existing converter is bidirectional. The proposed converter is made compact by using wide-band-gap devices, which provide high efficiency and reduce the heat generated in the circuit. PSO is used for the switching control of the converter. The converter can step up and step down the voltage; a current doubler and a synchronous rectifier are also present. The load can be a DC motor.
Keywords: unidirectional, solar PV, PSO, DC motor

INTRODUCTION
Nowadays the use of fossil fuels is contributing to global warming and, with conventional fuels lagging behind demand, renewable sources play a major role. Therefore solar PV is used as the source for the converter unit. In the EV/HEV
power system, a dc/dc converter is required to serve as an auxiliary power supply, connecting the high-voltage (HV) battery and the
low voltage (LV) dc bus [2]. The LV dc bus is linked to the 12-V battery and the LV electronic loads such as the head lamps, stereo
system, and various electronic control modules.
In normal operation, the converter delivers power from the HV battery to the LV electronic loads, while the 12-V battery is used
only for stabilizing the voltage and starting up the vehicle. The converter must incorporate galvanic isolation to protect the LV
electronic system from potentially hazardous high voltage [3]. For this application, full-bridge and half-bridge-based topologies, such as the phase-shift and resonant converters, are primarily considered as standard approaches [4]-[12]. In these topologies, the voltage stresses on the HV-side switches and the transformer HV-side winding are equal to the HV-dc-bus voltage. The traditional flyback converter is not a suitable topology for this application, mainly because of the dc-offset current in the transformer. In [13] and [14], new dc/dc converters derived from the flyback topology were proposed, but the voltage stress on the HV-side switches is higher than the HV-dc-bus voltage. In HEVs/EVs, the HV-battery voltage is typically rated at about 350 V, and it can reach up to 450 V during fluctuations. In heavy-duty vehicles, the HV-battery voltage can be even higher. As the HV-battery voltage increases, the raised voltage stresses on
components will lead to a lower efficiency of the converter and less component selection choices. Three-level dc/dc converters have
been proposed in [15] and [16], having the ability to reduce the voltage stresses on HV-side switches by half. However, such
converters require multiple components and complex control. Another approach to reduce the voltage stresses on switches is to apply
the switched-capacitor circuit. In [17]-[20], new dc/dc converters based on the switched-capacitor circuit are proposed. However, in the case of a higher HV-dc-bus voltage, these converters can only operate within a limited duty-ratio range and will suffer severe current stresses in the components.
This paper proposes unidirectional power flow between the solar PV and the load, aiming at good efficiency and at serving the converter for specific applications; here, the converter is proposed to drive DC motor loads. The features of the proposed converter include: 1) the voltage stresses on the HV-side switches are reduced to two-thirds of the HV-dc-bus voltage; 2) the voltage stress on the transformer is reduced to one-third of the HV-dc-bus voltage; 3) the transformer turns ratio is reduced; and 4) it has soft-switching capability and high efficiency. The block diagram of the converter is shown in Fig. 1.


Figure 1: Block diagram

II. BLOCK MODEL-DESCRIPTION


A. solar PV
Photovoltaic cells are devices that absorb sunlight and convert that solar energy into electrical energy. Solar cells are commonly made of silicon, one of the most abundant elements on Earth. When photons (sunlight) hit a solar cell, their energy frees electron-hole pairs. The built-in electric field sends the free electrons to the N side and the holes to the P side. This disturbs the electrical neutrality and, if an external current path is provided, electrons will flow through the path back to their original side (the P side) to unite with the holes that the electric field sent there, doing work along the way. The electron flow provides the current, and the cell's electric field provides the voltage.
Three solar cell types are currently available: mono-crystalline, polycrystalline, and thin film, discerned by material, efficiency,
and composition.
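To make the current-voltage behaviour described above concrete, the short sketch below evaluates the widely used single-diode PV cell model, I = Iph - I0*(exp(V/(n*Vt)) - 1). This model and the parameter values are illustrative assumptions, not taken from this paper, and serve only to show how a cell's I-V and P-V curves can be generated numerically.

import math

def pv_cell_current(v, i_ph=5.0, i_0=1e-9, n=1.3, v_t=0.02585):
    # Single-diode PV cell model (illustrative parameter values).
    # v    : terminal voltage in volts
    # i_ph : photo-generated current (depends on irradiance), A
    # i_0  : diode saturation current, A
    # n    : diode ideality factor
    # v_t  : thermal voltage at about 25 degC, V
    return i_ph - i_0 * (math.exp(v / (n * v_t)) - 1.0)

if __name__ == "__main__":
    # Sweep the voltage and report the point of maximum power.
    best_v, best_p = 0.0, 0.0
    for step in range(0, 700):
        v = step * 0.001            # 0 .. 0.7 V
        i = pv_cell_current(v)
        if i <= 0.0:                # beyond the open-circuit voltage
            break
        p = v * i
        if p > best_p:
            best_v, best_p = v, p
    print("Approximate MPP: %.3f V, %.3f W" % (best_v, best_p))

Scaling the single-cell result by the number of series and parallel cells gives the array-level curve that the MPPT described next has to track.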
B. MPPT
The Maximum Power Point Tracker (MPPT) is needed to optimize the amount of power obtained from the photovoltaic array to
the power supply. MPPT is designed to withstand the harsh, fast changing environmental conditions of solar PV.

Figure 2: I-V Curve

Design of the customized MPPT will ensure that the system operates as closely as possible to the Maximum Power Point (MPP) while being subjected to varying lighting and temperature. The inputs of the MPPT consist of the photovoltaic voltage and current outputs, and the adjusted voltage and current output of the MPPT charges the power supply.
A microcontroller is normally utilized to regulate the integrated circuits (ICs) and calculate the maximum power point, given the output from the solar array; hardware and software integration was necessary for the completion of this component. Here, PSO is used as an advanced substitute for the microcontroller.
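As a point of reference for what the tracker has to do, the sketch below implements a plain perturb-and-observe (P&O) loop, one of the most common baseline MPPT algorithms. It is shown only to make the tracking idea concrete; the paper itself replaces this kind of fixed-step logic with PSO, and the read_panel function and step size are illustrative assumptions.

def perturb_and_observe(read_panel, v_ref=0.5, step=0.01, iterations=100):
    # Baseline P&O MPPT loop (illustrative, not this paper's PSO method).
    # read_panel(v_ref) -> (v, i): measures panel voltage and current at the
    # operating point set by v_ref. The reference is nudged in the direction
    # that last increased the measured power.
    v_prev, i_prev = read_panel(v_ref)
    p_prev = v_prev * i_prev
    direction = 1.0
    for _ in range(iterations):
        v_ref += direction * step
        v, i = read_panel(v_ref)
        p = v * i
        if p < p_prev:           # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v_ref

A fixed-step P&O tracker oscillates around the MPP and can be trapped by partial-shading local maxima, which is one motivation for swarm-based searches such as the PSO described in the next subsection.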
C. PSO
PSO searches for the best solution vector in the search space. A single candidate solution is called a particle. Each particle has a fitness/cost value that is evaluated by the function to be minimized, and each particle has a velocity that directs the "flying" of the

particles. The particles fly through the search space by following the optimum particles. The algorithm is initialized with particles at
random positions, and then it explores the search space to find better solutions. In every iteration, each particle adjusts its velocity to
follow two best solutions. The first is the cognitive part, where the particle follows its own best solution found so far. This is the
solution that produces the lowest cost (has the highest fitness). This value is called pBest (particle best). The other best value is the
current best solution of the swarm, i.e., the best solution by any particle in the swarm. This value is called gBest (global best). Then,
each particle adjusts its velocity and position with the following equations:
v' = v + c1*r1*(pBest - x) + c2*r2*(gBest - x)
x' = x + v'
where v is the current velocity, v' the new velocity, x the current position, x' the new position, and pBest and gBest are as stated above; r1 and r2 are uniformly distributed random numbers in the interval [0, 1], and c1 and c2 are acceleration coefficients. c1 is the factor that influences the cognitive behavior, i.e., how much the particle follows its own best solution, and c2 is the factor for social behavior, i.e., how much the particle follows the swarm's best solution.
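The velocity and position updates above translate almost line for line into code. The sketch below is a minimal PSO loop, assuming an illustrative one-dimensional cost function (the negative of PV power, so that minimizing the cost maximizes the power); the function name, bounds and coefficient values are assumptions for demonstration, not values reported in this paper.

import random

def pso_minimize(cost, lo, hi, n_particles=20, iterations=50, c1=2.0, c2=2.0):
    # Minimize cost(x) over [lo, hi] using the PSO update rules given above.
    x = [random.uniform(lo, hi) for _ in range(n_particles)]   # positions
    v = [0.0] * n_particles                                     # velocities
    pbest = x[:]                                                # personal bests
    pbest_cost = [cost(xi) for xi in x]
    g = min(range(n_particles), key=lambda k: pbest_cost[k])
    gbest, gbest_cost = pbest[g], pbest_cost[g]

    for _ in range(iterations):
        for k in range(n_particles):
            r1, r2 = random.random(), random.random()
            # v' = v + c1*r1*(pBest - x) + c2*r2*(gBest - x)
            v[k] = v[k] + c1 * r1 * (pbest[k] - x[k]) + c2 * r2 * (gbest - x[k])
            # x' = x + v', clamped to the search interval
            x[k] = min(max(x[k] + v[k], lo), hi)
            c = cost(x[k])
            if c < pbest_cost[k]:
                pbest[k], pbest_cost[k] = x[k], c
                if c < gbest_cost:
                    gbest, gbest_cost = x[k], c
    return gbest, gbest_cost

if __name__ == "__main__":
    # Illustrative cost: negative power of a toy P-V curve.
    neg_power = lambda vt: -(vt * max(5.0 - 40.0 * (vt - 0.6) ** 2, 0.0))
    v_mpp, _ = pso_minimize(neg_power, 0.0, 0.8)
    print("PSO estimate of the maximum power voltage: %.3f V" % v_mpp)

In the converter context the position of a particle would encode the quantity being optimized (for example the duty ratio or the voltage reference), and the cost would be evaluated from the measured PV power.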
D. WBG
Wide-band-gap devices operate at higher switching frequencies, which eases filtering of the harmonics and also helps minimize the size of the passive components, particularly the switched capacitors. It has been documented extensively in the literature that WBG devices have lower switching loss, better reverse-recovery characteristics, and a lower figure of merit in terms of the product of Rds(on) and total gate charge Qg [22], [23].

Wide-band-gap devices therefore provide good efficiency.

In the existing system the wide-band-gap devices are designed to withstand the harsh conditions caused by the temperature developed in vehicles [1]. Such protection is not necessary for the proposed converter.

III. OPERATION PRINCIPLE OF PROPOSED ISOLATED QSC CONVERTER


A. Circuit Description:

Figure 3: Proposed isolated quasi-switched-capacitor converter

The power supply is obtained from the solar PV. L1 and L2 are connected in shunt with Lm, and the inductance of Lm is much greater than that of L1 and L2, so the transformer can operate without any air gap. The proposed converter can operate in various modes according to the duty cycle applied to the circuit. Based on the duty-cycle ratio the converter modes are as follows:
Duty ratio < 50% - buck mode
Duty ratio > 50% - boost mode
B. Buck mode operation:

The buck-mode operation can be explained in terms of hard-switching and soft-switching operation.
(i) Hard switching operation:
Switches S1-S4 and S2-S3 form the two complementary pairs.

Figure 4: hard switching operation of buck mode.

Mode 1 (t1-t2): The input voltage source, C2, C3, Ls, L1, and L2 are connected in series. C2 and C3 are charged. L1 and Ls store energy, while L2 releases its energy to the load.
Mode 2 (t2-t3): Ls releases its energy back to C2 and C3, via S3, S4, and the body diode of S2. L1 and L2 release their energy to the load.
Mode 3 (t3-t4): The energy of Ls is completely released at t3. The body diode of S2 is blocked. L1 and L2 continue to release their energy to the load.


Mode 4 (t4-t5): C2 and C3 are connected in parallel and they are discharged. L1 and L2 continue to release their energy to the load. Both VL1 and VL2 are clamped to the output voltage, so ILs increases quickly, transferring the current flowing through S3 to S4. The body diode of S3 conducts because the current IL2 is larger than ILs.
Mode 5 (t5-t6): The current IL2 is equal to ILs at t5, so the current of S3 reaches 0 at t5. After t5, the body diode of S3 is blocked.
Mode 6 (t6-t7): Ls releases its energy back to the input voltage source and C1, via S3, S4, and the body diode of S1; L1 and L2 release their energy to the load.
Mode 7 (t7-t8): The energy of Ls is completely released. The body diode of S1 is blocked. L1 and L2 continue to release their energy.
Mode 8 (t8-t9): The input voltage source, C2, C3, Ls, L1, and L2 are connected in series. Both VL1 and VL2 are clamped to the output voltage, so ILs increases quickly, transferring the current flowing through S4 to S3. During this mode, the body diode of S4 conducts because the current IL1 is larger than ILs. At t9, the current of S4 reaches 0 and its body diode is blocked.

(ii) Soft switching operation:


The converter has the capability to realize ZVS in the front stage circuit and ZCS in the post-stage circuit. To achieve the ZVS ON
of S1 and S2, the freewheeling current must continue to flow till the dead-band transients are over, which means the switching Mode 3
and Mode 7 must be bypassed. Fig. 5 shows the major waveforms of the buck-mode, soft-switching operation.

Figure 5: soft switching operation of buck mode.


In Fig. 5, T5 is the freewheeling transient, where Ls releases and stores energy. During T5, S1 or S2 is turned ON to enable synchronous rectification, which reduces the conduction loss. Based on the volt-second balance of L1, T5 is derived as,

C. Boost mode operation:


Soft switching operation:
In boost mode, the duty ratios of S1 and S2 are fixed at 50%. The duty ratios of S3 and S4 need to be extended beyond 50%, to overlap in advance with those of S1 and S2, respectively. During the overlapped transients, the transformer leakage inductance stores and releases the energy required for boost-mode power delivery. Fig. 6 shows the major waveforms of the boost-mode, soft-switching operation. Similarly, based on the voltage and current of Ls at different times, Po of the boost-mode operation can be derived as,


Figure 6: soft switching operation of boost mode.

IV. SIMULATION RESULTS


The simulation is carried out using MATLAB software. The converter is fed from the solar PV, which is controlled by the MPPT technique to maintain constant power to the converter; PSO serves within the MPPT technique to obtain an accurate result for the converter.
Experimental results:
1. The converter efficiency is increased.
2. The voltage stress on the switches is reduced.
3. A constant DC output voltage is obtained even under different solar intensities.
4. The converter is made compact in size.
CONCLUSION
This paper proposes an isolated quasi-switched-capacitor DC/DC converter for effective utilization of the unidirectional power flow between the solar PV and a DC motor. With this converter the voltage stress is reduced, and the wide-band-gap devices give better converter performance and limit the heat generated in the converter. Thus the efficiency of the converter is higher than that of the existing model, reaching 96.4%, and the converter supports higher switching frequencies.

REFERENCES
[1] Xuan Zhang, Chengcheng Yao, Cong Li, Lixing Fu, and Feng Guo, "A Wide Bandgap Device-Based Isolated Quasi-Switched-Capacitor DC/DC Converter", IEEE Trans. Power Electron., vol. 29, no. 5, May 2014.
[2] A. Emadi, S. S. Williamson, and A. Khaligh, "Power electronics intensive solutions for advanced electric, hybrid electric, and fuel cell vehicular power systems", IEEE Trans. Power Electron., vol. 21, no. 3, pp. 567-577, May 2006.
[3] A. Gorgerino, A. Guerra, D. Kinzer, and J. Marcinkowski, "Comparison of High Voltage Switches in Automotive DC-DC Converter", in Proc. Power Convers. Conf. - Nagoya, Apr. 2007, pp. 360-367.
[4] F. Z. Peng, H. Li, G. Su, and J. S. Lawler, "A new ZVS bidirectional DC-DC converter for fuel cell and battery application", IEEE Trans. Power Electron., vol. 19, no. 1, pp. 54-65, Jan. 2004.
[5] F. Krismer and J. W. Kolar, "Efficiency-optimized high-current dual active bridge converter for automotive applications", IEEE Trans. Ind. Electron., vol. 59, no. 7, pp. 2745-2760, Jul. 2012.
[6] J. Lee, Y. Jeong, and B. Han, "A two-stage isolated/bidirectional DC/DC converter with current ripple reduction technique", IEEE Trans. Ind. Electron., vol. 59, no. 1, pp. 644-646, Jan. 2012.
[7] T. Wu, Y. Chen, J. Yang, and C. Kuo, "Isolated bidirectional full-bridge DC-DC converter with a flyback snubber", IEEE Trans. Power Electron., vol. 25, no. 7, pp. 1915-1922, Jul. 2010.
[8] M. Pahlevaninezhad, P. Das, J. Drobnik, P. K. Jain, and A. Bakhshai, "A novel ZVZCS full-bridge DC/DC converter used for electric vehicles", IEEE Trans. Power Electron., vol. 27, no. 6, pp. 2752-2769, Jun. 2012.
[9] L. Zhu, "A novel soft-commutating isolated boost full-bridge ZVS-PWM DC-DC converter for bidirectional high power applications", in Proc. IEEE 35th Annu. Power Electron. Spec. Conf., Jun. 2004, vol. 3, pp. 2141-2146.
[10] S. Han and D. Divan, "Bi-directional DC/DC converters for plug-in hybrid electric vehicle (PHEV) applications", in Proc. Twenty-Third Annu. IEEE Appl. Power Electron. Conf. Expo., Feb. 2008, pp. 784-789.

[11] G. Pledl, M. Tauer, and D. Buecherl, "Theory of operation, design procedure and simulation of a bidirectional LLC resonant converter for vehicular applications", in Proc. IEEE Vehicle Power Propulsion Conf., Sep. 2010, pp. 1-5.
[12] B. Lin and C. Chao, "Soft switching converter with two series half-bridge legs to reduce voltage stress of active switches", IEEE Trans. Ind. Electron., vol. 60, no. 6, pp. 2214-2224, Jun. 2013.
[13] E. V. de Souza and I. Barbi, "Bidirectional current-fed flyback-push-pull DC-DC converter", in Proc. Brazilian Power Electron. Conf., Sep. 2011, pp. 8-13.
[14] T. Bhattacharya, V. S. Giri, K. Mathew, and L. Umanand, "Multiphase bidirectional flyback converter topology for hybrid electric vehicles", IEEE Trans. Ind. Electron., vol. 56, no. 1, pp. 78-84, Jan. 2009.
[15] P. Li, W. Li, Y. Zhao, H. Yang, and X. He, "ZVS three-level phase-shift high step-down DC/DC converter with two transformers", in Proc. 14th Eur. Conf. Power Electron. Appl., Aug. 2011, pp. 1-10.
[16] B.-R. Lin and C.-H. Chao, "Analysis of an interleaved three-level ZVS converter with series-connected transformers", IEEE Trans. Power Electron., vol. 28, no. 7, pp. 3088-3099, Jul. 2013.
[17] A. A. Fardoun, E. H. Ismail, A. J. Sabzali, and M. A. Al-Saffar, "Bidirectional converter with low input/output current ripple for renewable energy applications", in Proc. Third Annu. IEEE Energy Convers. Congr. Expo., Sept. 2011, pp. 3322-3329.
[18] A. A. Fardoun and E. H. Ismail, "SEPIC converter with continuous output current and intrinsic voltage doubler characteristic", in Proc. IEEE Int. Conf. Sustainable Energy Technol., Nov. 2008, pp. 431-436.
[19] H. Nomura, K. Fujiwara, and M. Yoshida, "A new DC-DC converter circuit with larger step-up/down ratio", in Proc. 37th IEEE Power Electron. Spec. Conf., Jun. 2006, pp. 1-7.
[20] B. Axelrod, Y. Berkovich, and A. Ioinovici, "Switched capacitor/switched-inductor structures for getting transformerless hybrid DC-DC PWM converters", IEEE Trans. Circuits Syst. I: Reg. Papers, vol. 55, no. 2, pp. 687-696, Mar. 2008.
[21] Y.-K. Lo, J.-Y. Lin, and C.-Y. Lin, "Asymmetrical zero-voltage-switching PWM DC-DC converter employing two transformers and current doubler rectification", IET Power Electron., vol. 1, no. 3, pp. 408-418, Sep. 2008.
[22] B. Callanan, "Application considerations for silicon carbide MOSFETs", 2011. [Online]. Available: http://www.cree.com//media/Files/Cree/Power/Application%20Notes/CPWRAN08.pdf
[23] D. Reusch and J. Strydom, "Understanding the effect of PCB layout on circuit performance in a high frequency gallium nitride based point of load converter", IEEE Trans. Power Electron., vol. 29, no. 4, pp. 2008-2015, Apr. 2014.


Analysis of ETL Process in Data Warehouse


N. Nataraj1, Dr. R. V. Nataraj2
1PG Scholar, 2Professor, Department of Information Technology, Bannari Amman Institute of Technology, Sathyamangalam
nataraj.se13@bitsathy.ac.in

Abstract: ETL is responsible for the extraction of data, their cleaning, conforming and loading into the target. ETL is a critical layer in the DW setting. It is widely recognized that building ETL processes is expensive in terms of time, money and effort. In this paper we first review commercial ETL tools and prototypes coming from the academic world. After that we review design works in the ETL field and ETL maintenance issues. We then review works connected with optimization and incremental ETL, and finally the challenges and research opportunities around ETL processes.
Keywords: ETL, data warehouse, ETL modelling, ETL maintenance
INTRODUCTION
Enterprises and organizations invest in DW projects in order to enhance their activity and measure their performance. A data warehouse aims to improve the decision process by supplying unified access to several sources. ETL is the integration layer in the DW environment [1]. ETL tools pull data from several sources (database tables, flat files, ERP, the internet, and so on) and apply complex transformations to them. ETL is a critical component in the DW environment; indeed, it is widely recognized that building ETL processes during a DW project is expensive in terms of time and money.
A. Data Warehouse layers
Sources: They encompass all types of data sources; they are the data providers. The two most common types are databases and flat files. Note that sources are autonomous or semi-autonomous.
ETL: It is the integration layer in the DW environment. ETL tools pull data from several sources and apply complex transformations to them; in the end, the data are loaded into the target, which is the data warehouse store in the DW environment.
Data Warehouse: It is a central repository to store the data produced by the ETL layer. The DW is a database that includes fact tables and dimension tables; together these tables are combined in a specific schema that may be a star schema or a snowflake schema.
Reporting and Analysis: Collected data are served to end users in several formats; for example, data are formatted into reports and histograms.
Extraction: This step extracts data from a set of sources, which may be local or distant. Logically, data sources come from operational applications, but there is an option to use external data sources for enrichment; an external data source means data coming from an external entity. Thus, during the extraction step, ETL tries to access the available sources, pull out the relevant data, and reformat such data into a specified format.
Transformation: This step is the most laborious one, where ETL adds value. It is associated with two words: clean and conform. On the one hand, cleaning data aims to fix erroneous data and to deliver clean data to end users (decision makers); dealing with missing data and rejecting bad data are examples of data cleaning operations. On the other hand, conforming data aims to make data correct and compatible with other master data; checking business rules, checking keys and looking up referential data are examples of conforming operations.
Loading: This step, conversely to the previous step, has the problem of storing data to a set of targets. During this step, ETL loads data into the targets, which are the fact tables and dimension tables in the DW context.
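The three steps just described can be made concrete with a very small pipeline. The sketch below extracts rows from a CSV file, cleans and conforms them, and loads them into a SQLite table standing in for a fact table; the file name, column names and cleaning rules are illustrative assumptions, not taken from any tool discussed in this survey.

import csv
import sqlite3

def extract(path):
    # Extraction: pull raw records out of a flat-file source.
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    # Transformation: clean (drop bad rows) and conform (cast types, trim text).
    clean = []
    for r in rows:
        if not r.get("amount"):          # reject rows with missing measures
            continue
        clean.append((r["customer_id"].strip(), r["date"].strip(), float(r["amount"])))
    return clean

def load(records, db_path="dw.sqlite"):
    # Loading: write the conformed records into the target fact table.
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS fact_sales "
                "(customer_id TEXT, sale_date TEXT, amount REAL)")
    con.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)", records)
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(extract("sales.csv")))

Real ETL tools add scheduling, logging, error handling and surrogate-key management around exactly this extract-transform-load skeleton.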

Commercial ETL Tools: We have two types of ETL tools. On the one hand, there is the subfamily of paid ETL tools such as DataStage and Informatica [7], [8]. On the other hand, the second subfamily of commercial ETL tools comes with no charge [9].
Informatica: Informatica is a broadly used ETL tool for extracting the source data and loading it into the target after applying the required transformations. ETL developers map the extracted data from source systems and load it into target systems after applying the required transformations [9].
Data Stage: Its basic element for data manipulation is called a "stage"; thus, for this tool an ETL process is a combination of stages. We therefore speak of transformation stages and of stages for extracting and loading data (called connectors since a later release), which are interconnected via links.
SSIS: SSIS imposes two levels of task combination. The first level is called "Control Flow" and the second level, controlled by the first, is called "Data Flow". The first level is dedicated to preparing the execution environment (deletion, control, moving files, etc.) and supplies tasks for this purpose. The second level (data flow), which is a particular task of the first level, performs the classical ETL mission: the Data Flow task offers various tasks for data extraction, transformation and loading.
SIRIUS: It develops a metadata-oriented approach that allows the modelling and execution of ETL processes [13]. It is based on the SIRIUS meta-model component, which represents metadata describing the operators or features necessary for implementing ETL processes. In other words, SIRIUS provides functions to describe the sources, the targets and the mapping between these two parts [12].
ARKTOS: ARKTOS is another framework that focuses on the modelling and execution of ETL processes. ARKTOS provides primitives to capture frequently used ETL tasks. More precisely, to describe a given ETL process, this framework offers three ways: a GUI and two languages, XADL (an XML variant) and SADL (an SQL-like language).
DWPP: DWPP is a set of modules designed to solve the typical problems that occur in any ETL project. DWPP is not a tool but a platform: it is a library of C functions shared under the UNIX operating system for the implementation of ETL processes. Consequently, DWPP provides a set of useful features for data manipulation.
II. MODELLING AND DESIGN OF ETL

ETL is an area with high added value but is labelled costly and risky. In addition, software engineering dictates that any project eventually switches to maintenance mode. For these reasons, it is essential to complete the ETL modelling phase with elegance in order to produce simple and understandable models. The method is spread over four steps: 1. identification of sources; 2. distinction between candidate sources and active sources; 3. attribute mapping; 4. annotation of the diagram (conceptual model) with execution constraints.
A. Meta-data-based ETL design: the designer needs to: 1. analyse the structure of the sources; 2. describe the mapping rules between sources and targets. The meta-model-based approach provides a graphical notation to meet this need [13].
III. ETL PROCESS MAINTENANCE:
When changes happen, analyzing the impact of the change is mandatory to avoid errors and mitigate the risk of breaking existing treatments. Consequently, without a helpful tool and an effective approach for change management, the cost of the maintenance task will be high, particularly for ETL processes, which have already been judged expensive and costly [14], [15]. In ETL terminology, most previous research efforts focus on the target, unlike the proposal which focuses on changes in the sources. These proposals dealing with change management in ETL are interesting and offer a solution for detecting the impact of changes on ETL processes; however, change incorporation is not addressed.


IV. RESEARCH OPPORTUNITIES


1. Many conceptual models enrich the ETL design field; however, no proposal has become a standard or is widely accepted by the research community, unlike multi-dimensional modelling in the data warehouse area.
2. Mapping rules are an important deliverable in ETL design.
3. Big data technologies arrive with exciting research opportunities; in particular, the performance issue seems solvable with this novelty.
4. Tests are a fundamental aspect of software engineering; in spite of this importance, they are neglected with regard to ETL.
5. Metadata and unstructured data [2].
V. CONCLUSION
ETL is identified with two tags: complexity and cost. Due to its importance, this paper focused on ETL, the backstage of the DW, and presented the research efforts and opportunities connected with these processes. It is widely acknowledged that building ETL processes is expensive in terms of time, money and effort; it consumes up to 70% of resources. Therefore, in the current survey, we first gave a review of open source and commercial ETL tools, along with some ETL prototypes coming from the academic world, namely SIRIUS and ARKTOS. Before concluding, we gave a picture of the performance issue along with a review of some works dealing with it, particularly ETL optimization and incremental ETL. Finally, this survey ends with a presentation of the main challenges and research opportunities around ETL processes.

REFERENCES:
[1] W. Inmon D. Strauss and G.Neushloss, DW 2.0 The Architecture for the next generation of data warehousing,
Morgan Kaufman, 2007.
[2] A. Simitisis, P. Vassiliadis, S.Skiadopoulos and T.Sellis, DataWarehouse Refreshment, Data Warehouses and OLAP:
Concepts, Architectures and Solutions, IRM Press, 2007, pp 111-134.
[3] R. Kimball and J. Caserta. The Data Warehouse ETL Toolkit: Practical Techniques for Extracting, Cleaning,
Conforming, and Delivering Data, Wiley Publishing, Inc, 2004.
[4] A. Kabiri, F. Wadjinny and D. Chiadmi, "Towards a Framework for Conceptual Modelling of ETL Processes ",
Proceedings of The first international conference on Innovative Computing Technology (INCT 2011), Communications in
Computer and Information Science Volume 241, pp 146-160.
[5] P. Vassiliadis and A. Simitsis, "Extraction, Transformation, and Loading", http://www.cs.uoi.gr/~pvassil/publications/2009_DB_encyclopedia/Extract-Transform-Load.pdf

[6] J. Adzic, V. Fiore and L. Sisto, Extraction, Transformation, and Loading Processes, Data Warehouses and OLAP:
Concepts, Architectures and Solutions, IRM Press, 2007, pp 88-110.
[7] W. Eckerson and C. White, Evaluating ETL and Data Integration Platforms, TDWI REPORT SERIES,
101communications LLC, 2003.
[8] IBM InfoSphereDataStage, http://www-01.ibm.com/software/data/infosphere/datastage/

[9] Informatica, http://www.informatica.com


[10] C. Thomsen and T. B. Pedersen, "A Survey of Open Source Tools for Business Intelligence", DB Tech Reports, September 2008. Homepage: www.cs.aau.dk/DBTR.
[11] C. Thomsen and T. B. Pedersen, "A Survey of Open Source Tools for Business Intelligence", International Journal of Data Warehousing and Mining, Volume 5, Issue 3, 2009.
[12] Talend Open Studio, www.talend.com
[13] A. Vavouras, "A Metadata-Driven Approach for Data Warehouse Refreshment", PhD Thesis, Universität Zürich, Zürich, 2002.

[14] J.F. Roddick et al, "Evolution and Change in Data Management - Issues and Directions", SIGMOD Record 29, Vol.
29, 2000, pp 21-25


Review Paper on Energy Audit of a Boiler in Thermal Power Plant


Gaurav T. Dhanre, Urvashi T. Dhanre, Krunal Mudafale
Mtech Scholar, DBACER, gauravdhanre@gmail.com, Mob. no. 9028290903
Mtech Scholar, KITS, urvashidhanre@gmail.com
Assistant Professor, DBACER, krunalp.mudafale@gmail.com

Abstract: The world over, energy resources are getting scarcer and increasingly exorbitant with time. In India, bridging the ever-widening gap between energy demand and supply by increasing supply is an expensive option. Reducing the share of energy costs in total production costs can therefore improve profit levels in all industries. This reduction can be achieved by improving the efficiency of industrial operations and equipment. Energy audits play an important role in identifying energy conservation opportunities in the industrial sector; while they do not provide the final answer to the problem, they do help to identify the potential for energy conservation and induce companies to concentrate their efforts in this area in a focused manner.

Keywords Thermal Power Plant, Boiler, Boiler efficiency, Audit, Direct Method, Indirect Method, Coal.
INTRODUCTION
About 70% of India's energy generation capacity is from fossil fuels. Coal accounts for 40% of India's total energy consumption, followed by crude oil and natural gas at 24% and 6% respectively. India is dependent on fossil fuel imports to meet its energy demand, and energy imports are expected to exceed 53% of the country's total energy consumption. In 2009-10, 159.26 million tonnes of crude oil were imported, which amounts to 80% of its domestic crude oil consumption; oil imports make up 31% of the country's total imports. The growth of electricity generation has been hindered by domestic coal shortages, because of which India's coal imports for electricity generation increased by 18% in 2010. India has one of the world's fastest growing energy markets due to rapid economic expansion and is expected to be the second largest contributor to the increase in global energy demand by 2035. India's energy demand is increasing while its domestic fossil fuel reserves are limited. The country has ambitious plans to expand its renewable energy resources and to install nuclear power industries. India has the world's fifth largest wind power market and plans to add about 20 GW of solar power capacity. India plans to increase the contribution of nuclear power to overall electricity generation capacity from 4.2% to 9%. The country has five nuclear reactors under construction. India now ranks third in the world in generating electricity from nuclear power and plans to construct 18 additional nuclear reactors by 2025, after which it will rank second.
M. J. Poddar, Mrs. A.C.Birajdar (2013) [7]:
As per the study carried out by M. J. Poddar and Mrs. A. C. Birajdar, reducing the share of energy costs in total production costs can improve profit levels in all industries. This can be achieved by improving the efficiency of industrial operations and equipment. Energy audits play an important role in identifying energy conservation opportunities in the industrial sector; while they do not provide the final answer to the problem, they do help to identify the potential for energy conservation and induce companies to concentrate their efforts in this area in a focused manner. The energy audit is a vital link in the entire energy management chain. The overall program includes other managerial and operational activities and responsibilities; however, the audit process is the most important part of the program and is essential to the program's implementation. In this project, the study is mainly targeted at identifying sustainable and economically viable energy cost saving opportunities in the boiler section of Unit-III of Parli Thermal Power Station, Parli-Vaijanath. The study shows that there are significant cost saving opportunities, and recommendations have been made to realize this potential. In the methodology, the types of energy audit are described.
Factors affecting the operating efficiency of the boiler: The factors affecting the operating efficiency of the boiler are discussed, the first being the coal, which is available with wide variations in specification from the design values; the effects of these variations are highlighted in the paper. The next major factor is the total air quantity. With a reduction in total air, indicated by a percentage increase in carbon dioxide, the stack losses reduce and the air temperature at the air heater outlet falls. The fan power (ID and FD) decreases, but the unburnt material increases, and beyond a certain point unburnt gas may appear, leading to an increase in loss. The variation in coal characteristics does not have much effect on the optimum percentage of carbon dioxide, but the variation in load does, primarily because the mixing of the fuel and air is not good.
Typical losses in the boiler: Using various formulae, losses such as the dry flue gas loss, the wet flue gas loss including the moisture-in-fuel loss, and the moisture-in-combustion-air loss are evaluated. In this way the efficiency evaluation of the FD, PA and ID fans is done at two different loads, i.e. at 185 MW and 180 MW, and a comparative study has been made. From the audit it is concluded that the major reasons for the lower efficiency are the poor quality of coal and air leakages. The efficiency of the boiler is increased by 0.27% by reducing the air leakage in the air heater by about 6%.

Moni Kuntal Bora & S. Nakkeeran(2014) [10]:


As per the study carried out by Moni Kuntal Bora and S. Nakkeeran, the coal-fired boiler is one of the most important components of any thermal power plant. The prominent performance parameter of a boiler is the boiler efficiency. Boiler efficiency affects the overall performance of the electricity generation process as well as the plant economy. Boiler efficiency is affected by many factors; it reduces with time due to various heat losses such as the loss due to unburnt carbon in waste, the loss due to dry flue gas, the loss due to moisture in fuel, the loss due to radiation, the loss due to blow-down, and the loss due to the burning of hydrogen. Boiler efficiency tests help us to calculate deviations of the boiler efficiency from the design value and identify areas for improvement. The paper puts forward an effective methodology for the efficiency estimation of a coal-fired boiler, compares it with its design value and lists some of the factors that affect the performance of a boiler. This study will help to increase the overall boiler efficiency and, as a result, the annual monetary savings of the thermal power plant.
Basically, boiler efficiency can be tested by the following methods:
A. Direct Method or Input-Output Method.
B. Indirect Method or Heat Loss Method.

A. Direct Method or Input Output Method:


The direct method compares the energy gain of the working fluid (water and steam) to the energy content of the fuel. It is also known as the input-output method because it needs only the useful output (steam) and the heat input (i.e. fuel) for evaluating the efficiency:

Boiler efficiency (%) = [SFR x (SE - FWE)] / (FFR x GCV) x 100

where,
SFR = steam flow rate in kg/hr,
SE = steam enthalpy in kCal/kg,
FWE = feed water enthalpy in kCal/kg,
FFR = fuel firing rate in kg/hr,
GCV = gross calorific value of coal in kCal/kg.
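As a worked illustration of the direct method, the short function below evaluates the input-output formula above for one set of example readings; the numbers are placeholders for demonstration, not measurements from the plants discussed in this review.

def boiler_efficiency_direct(sfr, se, fwe, ffr, gcv):
    # Direct (input-output) method: useful heat in steam / heat in fuel, in %.
    # sfr : steam flow rate, kg/hr
    # se  : steam enthalpy, kCal/kg
    # fwe : feed water enthalpy, kCal/kg
    # ffr : fuel firing rate, kg/hr
    # gcv : gross calorific value of coal, kCal/kg
    return 100.0 * sfr * (se - fwe) / (ffr * gcv)

if __name__ == "__main__":
    # Example readings (placeholders): 100 t/hr of steam, typical enthalpies,
    # 20 t/hr of coal with a GCV of 4000 kCal/kg.
    eta = boiler_efficiency_direct(sfr=100000, se=665, fwe=85, ffr=20000, gcv=4000)
    print("Boiler efficiency (direct method): %.1f %%" % eta)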

B. Indirect Method or Heat Loss Method:


In the heat loss method, the efficiency is obtained by subtracting the sum of the measured losses from the energy input. In the indirect method, the efficiency can be estimated easily by measuring all the losses occurring in the boiler using the principles described below. The weaknesses of the direct method can be overcome by this method, which calculates the various heat losses associated with the boiler; the efficiency is arrived at by subtracting the heat loss percentages from 100. An important advantage of this method is that errors in measurement do not cause a significant change in the efficiency. The indirect method does not account for standby losses, blow-down loss, the energy loss in soot blowing, and the energy used to run auxiliary equipment such as burners, fans, and pumps.
The losses associated with a coal-fired boiler are:
1. Heat loss due to dry flue gas as sensible heat (L1).
2. Heat loss due to moisture in the coal (L2).
3. Heat loss due to moisture from burning of hydrogen in coal (L3).
4. Heat loss due to moisture in air (L4).
5. Heat loss due to formation of carbon Monoxide- partial combustion (L5).
6. Unburnt losses in fly ash as carbon (L6).

7. Unburnt losses in bottom ash as carbon (L7).


8. Loss due to surface radiation and convection (L8).

Boiler efficiency by indirect method (%) = 100 - (L1 + L2 + L3 + L4 + L5 + L6 + L7 + L8), where each loss is expressed as a percentage of the heat input.
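A minimal sketch of the heat loss method is given below: it simply sums the eight loss percentages listed above and subtracts them from 100. The example loss values are placeholders chosen only to demonstrate the calculation, not data from any audited boiler.

def boiler_efficiency_indirect(losses_percent):
    # Indirect (heat loss) method: 100 minus the sum of loss percentages L1..L8.
    return 100.0 - sum(losses_percent)

if __name__ == "__main__":
    # Placeholder values for L1..L8 (dry flue gas, moisture in coal, H2 burning,
    # moisture in air, CO, fly-ash carbon, bottom-ash carbon, radiation).
    example_losses = [5.2, 1.8, 4.1, 0.3, 0.2, 1.0, 0.5, 0.9]
    eta = boiler_efficiency_indirect(example_losses)
    print("Boiler efficiency (indirect method): %.1f %%" % eta)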

Maintenance of boilers:
Effective maintenance can highlight potential problems quickly and enable corrective action to be taken before there is a major impact on performance, and this will improve the performance of boilers. Performing regular servicing, analysing flue gas, removing soot, minimizing limescale build-up, producing a maintenance plan, manual and logbook, and considering boiler replacement are some of the ways of ensuring good maintenance of a boiler. A replacement feasibility study should examine all implications of long-term fuel availability and business growth plans, and all financial and engineering factors should be considered; since boiler plants traditionally have a useful life of well over 25 years, replacement must be carefully studied.
This paper concentrates on the diverse aspects of operating the boiler efficiently. Efficient operation of the boiler is likely to play a very big role in the years to come. Industries all over the world are facing increased competition and increased automation of plants, and the cost of outages of such systems is expected to be very high. The paper makes it clear that, to meet this challenge, advanced technology and management skills have to be used in all spheres of activity so that the boiler performs its effective role in the turnover of the company.
Mr. Nilesh R. Kumbhar & Mr.Rahul R. Joshi(2013) [9]:
As per the study carried out by Mr. Nilesh R. Kumbhar and Mr. Rahul R. Joshi, growing concerns have arisen in recent years in India about energy consumption and its adverse environmental impact, which cause manufacturers to establish energy management groups. Energy auditing is the key to the successful running of an industry, saving energy and contributing toward preserving the national energy resources. Managing energy is not just a technical challenge but one of how best to implement those technical measures within economic limits and with a minimum of disruption. In this paper the importance of energy auditing and the process of an energy audit are discussed. Energy auditing is a formal method of finding out the energy conservation opportunities (ECOs); it is a formal survey/study of the energy consumption, processing and supply aspects of an industry or organization. The purpose of energy auditing is to recommend steps to be taken by management for improving energy efficiency, reducing energy cost and saving money on the energy bills.
Methods of energy auditing:
Energy audits can be carried out in different ways. Depending on the time span invested, audits can be classified as:
i) Walk-Through Audit
ii) Intermediate Audit
iii) Detailed / Comprehensive Audit

Basic components of every auditing:


The energy audit process starts by collecting information about the facility's operation and its past record of utility bills. This data is then analysed to get a picture of how the facility uses, and possibly wastes, energy, as well as to help the auditor learn which areas to examine to reduce energy costs. Specific changes, called Energy Conservation Opportunities (ECOs), are identified and evaluated to determine their benefits and their cost effectiveness. These ECOs are assessed in terms of their costs and benefits, and an economic comparison is made to rank the various ECOs. Finally an action plan is created in which certain ECOs are selected for implementation, and the actual process of saving energy and money begins.
Auditor's tool box: To obtain the best information for a successful energy cost control program, the auditor must make some measurements during the audit visit.
Preparation for the audit visit: Some preliminary work must be done before the auditor makes the actual energy audit.

Conducting the audit: Once the information on energy bills, facility equipment and facility operations has been obtained, the audit equipment can be gathered up and the actual visit started. The following are some important steps in the audit. Introductory meeting: the audit team should meet the facility manager and maintenance manager to brief them about the purpose of the audit. Audit interview: getting correct information on facility equipment and operation is important if the audit is going to be most successful in identifying ways to save money on energy bills; the auditor must interview the floor supervisors and equipment operators to understand building and process problems. Walk-through audit: a walk-through tour of the facility or plant should be arranged by the facility/plant manager so that the auditor or audit team can see the major operational and equipment features of the facility. Post-audit analysis: after the visit, the data collected should be examined, organized and reviewed for completeness; any missing data items should be obtained from the facility or a re-visit.
Energy audit report: The next step in the energy auditing process is to prepare a report which details the final results and recommendations.
Energy action plan: The last step in the audit process is to recommend an action plan for the facility. An energy audit is an effective tool for identifying and pursuing a comprehensive energy management program. A careful audit of any type will give the industry a plan with which it can effectively manage the industrial energy system at minimum energy cost. This approach can be useful for an industry in combating rising energy costs and also reaps several other benefits such as improved production, better quality, higher profit and, most importantly, the satisfaction of contributing to world energy saving.
Raviprakash Kurkiya, Sharad Chaudhary(2012) [8]:
As per the study carried out by Raviprakash Kurkiya and Sharad Chaudhary, energy analysis helps designers to find ways to improve the performance of a system in many ways. Most of the conventional energy-loss optimization methods are iterative in nature and require the interpretation of the designer at each iteration. Typical steady-state plant operating conditions were determined based on the available trending data and the resulting condition of the operating hours. The energy losses from the individual components in the plant are calculated based on these operating conditions to determine the true system losses. First-law-of-thermodynamics analysis was performed to evaluate efficiencies and the various energy losses. In addition, an increase in the percentage of carbon in the coal content increases the overall efficiency of the plant, which shows the economic optimization of the plant. In a boiler, efficiency has a great influence on heating-related energy savings; it is therefore important to maximize the heat transfer to the water and minimize the heat losses in the boiler. The thermal power plant is based on a simple Rankine cycle: steam is used as the working fluid, and steam is generated from saturated liquid water (feed water). This saturated steam flows through the turbine, where its internal energy is converted into mechanical work to run an electricity generating system. Not all the energy from the steam can be utilized for running the generating system because of losses due to friction, viscosity, bend-on-blade effects, and heat losses from the boiler, i.e. hot flue gas losses, radiation losses, blow-down losses, etc. An energy analysis of a thermal power plant is reported in this paper. It provides the basis to understand the performance of a fluidized-bed coal-fired boiler, feed pump, turbine and condenser. The various energy losses of the plant through the different components are calculated, which indicates that the maximum energy losses occur in the turbine.
The following conclusions can be drawn from this study: the coal type affects the first-law efficiency of the system considerably. It has also been analysed that a part of the energy loss occurs through the flue gases. The carbon content in the coal has to be proper, and the presence of moisture has a detrimental effect on the overall efficiency. If a heat recovery system is used to recover the heat lost through the flue gases, it will be more useful. With the growing need for coal, which is a non-renewable source of energy depleting at a very fast pace, it is desirable to have optimal techniques (better quality of coal) which can reduce the energy losses in the coal-fired boiler and improve its performance; these have an impact on production and on the optimized use of energy sources. In addition, this study shows that a better quality of coal gives higher plant performance, and the consumption of coal is reduced, which improves the economics of the overall plant.
Shashank Shrivastava, Sandip Kumar, Jeetendra Mohan Khare(2013)
As per the study carried out by Shashank Shrivastava, Sandip Kumar and Jeetendra Mohan Khare, a frequent criticism of energy audits is that they overestimate the savings potential available to the customer. This paper addresses several problem areas which can result in
over-optimistic savings projections, and suggests ways to prevent mistakes. Performing an energy and demand balance is the initial
step a careful energy analyst should take when starting to evaluate the energy use at a facility. These balances allow one to determine
what the largest energy users are in a facility, to find out whether all energy uses have been identified, and to check savings
calculations by determining whether more savings have been identified than are actually achievable.
Method description
Detailed energy auditing is carried out in three phases: Phase I, II and III.
Phase I - Pre Audit Phase
Phase II - Audit Phase

Phase III - Post Audit Phase


From industry to industry, the methodology of energy audits needs to be flexible. The following steps form the adopted methodology for a detailed energy audit.
Step 1: In this step a study of the process and energy uses is obtained from the employees; this understanding helps in planning the resources available and the time required for conducting the energy audit.
Step 2: In this step the importance of energy use is discussed with the section officers so that awareness can be built; this will also help future cooperation (kick-off meeting).
Step 3: In this step the plant data and the electricity bills are collected to find the areas of major energy use for the different processes; nameplate data are reviewed and some data are collected with the help of measurement devices.
Step 4: In this step measurements are taken with the help of portable instruments such as a lux meter, tachometer, power analyzer, etc. The energy is mainly used in pumping and other processes for the purification of water. These data are compared with the operating design data and the baseline energy use is determined.
Step 5: In this step all the performance data (standard parameters) involved in the process are calculated and the present performance data are compared with the baseline (design) data. Based on technology availability and comparison, recommendations are proposed to save/conserve energy. These recommendations are of investment grade (payback period). A reduction in energy consumption will take place after the implementation of the recommendations.
Step 6: In this step follow-up of the methodology and technical advice on the plant will quickly secure the best results.
Energy auditing is not an exact science, but a number of opportunities are available for improving the accuracy of the
recommendations. Techniques which may be appropriate for small-scale energy audits can introduce significant errors into the
analyses for large complex facilities. We began by discussing how to perform an energy and demand balance for a company. This
balance is an important step in doing an energy use analysis because it provides a check on the accuracy of some of the assumptions
necessary to calculate savings potential. We also addressed several problem areas which can result in over-optimistic savings
projections, and suggested ways to prevent mistakes. Finally, several areas where additional research, analysis, and data collection are
needed were identified. Once this additional information is obtained, we can all produce better and more accurate energy audit results.

CONCLUSION
From the overall literature review it is concluded that energy auditing offers many ways to reduce energy consumption, energy cost, etc. Hence there is a need to carry out an energy audit of every plant once a year. From the research it is found that it is also possible to audit at different load conditions and, by comparison, to obtain the actual consumption as well as the wastage.
Hence it is decided to carry out an energy audit of a boiler at the SLCM captive power plant to assess the energy and coal wastage.


REFERENCES:
[1] S.C.ARORA AND S. DOMKUNDWAR A COURSE IN POWER PLANT ENGINEERING. DHANPHAT RAI & CO. (P) LTD.
EDUCATIONAL & TECHNICAL PUBLICATION. PP 40.1-40.9.
[2] RONALD A ZEITZ ENERGY EFFICIENCY HANDBOOK COUNCIL OF INDUSTRIAL BOILER OWNERS (CIBO) BURKE. PP 19-21,
35-42 AVAILABLE AT WWW.CIBO.ORG
[3] SCHNEIDER ELECTRIC SPA ET AL, STANDARD ENERGY AUDIT PROCEDURE GREEN@HOSPITAL 2012.VIPUL SHAH A PAPER
ON ENERGY AUDIT. PP 1-21.
[4] AMITKUMAR TYAGI HAND BOOK OF ENERGY AUDITS AND MANAGEMENT. TERI PUBLISHER . PP 16-28.
[5] M. LEI, ENERGY-SAVING MANUALS FOR POWER GENERATION. BEIJING: CHINA ELECTRIC POWER, 2005, PP. 13.
[6] V S VERMA POWER ON DEMAND BY 2012, AVAILABLE AT AVAILABLE AT WWW.TERIIN.ORG
[7] CHRISTOPHER B MILAN A GUIDE BOOK FOR PERFORMING WALKTHROUGH ENERGY AUDITS OF INDUSTRIAL FACILITIES
BONNEVILLE POWER ADMINISTRATION, PP 17-21, 37-39 AVAILABLE AT WWW.BPA.GOV LABORATORY OCTOBER 2010.
[8] M. J. PODDAR, MRS. A. C. BIRAJDAR (2013), ENERGY AUDIT OF A BOILER - A CASE STUDY, THERMAL POWER PLANT, UNIT-III, PARLI(V), MAHARASHTRA, INTERNATIONAL JOURNAL OF ENGINEERING RESEARCH & TECHNOLOGY (IJERT), ISSN: 2278-0181, VOL. 2, ISSUE 6, PP. 1660-1666.
[9] RAVIPRAKASH KURKIYA, SHARAD CHAUDHARY(2012), ENERGY ANALYSIS OF THERMAL POWER PLANT, INTERNATIONAL
JOURNAL OF SCIENTIFIC & ENGINEERING RESEARCH, ISSN 2229-5518,VOLUME 3, ISSUE 7, JULY-2012,PP 1-7.
[10] MR. NILESH R. KUMBHAR, MR. RAHUL R. JOSHI, AN INDUSTRIAL ENERGY AUDITING: BASIC APPROACH, INTERNATIONAL
JOURNAL OF MODERN ENGINEERING RESEARCH (IJMER) , ISSN: 2249-6645, VOL.2, ISSUE.1, PP-313-315 .
[11] MONI KUNTAL BORA AND S. NAKKEERAN(2014), PERFORMANCE ANALYSIS FROM THE EFFICIENCY ESTIMATION OF COAL
FIRED BOILER, INTERNATIONAL JOURNAL OF ADVANCED RESEARCH , ISSN 2320-5407, VOLUME 2, ISSUE 5,PP.561-574.
[12] RAHUL DEV GUPTA, SUDHIR GHAI, AJAI JAIN. ENERGY EFFICIENCY IMPROVEMENT STRATEGIES FOR INDUSTRIAL BOILERS:
A CASE STUDY. JOURNAL OF ENGINEERING AND TECHNOLOGY. VOL 1. ISSUE 1. JAN-JUNE 2011


Excoecaria agallocha: a potential bioindicator of heavy metal pollution

Shankhadeep Chakraborty1, Sufia Zaman2 and Abhijit Mitra3
1Department of Oceanography, Techno India University, Salt Lake Campus, Kolkata-700091, India
2Department of Marine Science, University of Calcutta, 35 B.C. Road, Kolkata 700 019, West Bengal, India; also attached to Techno India University, Salt Lake Campus, Kolkata-700 091, India
3Department of Oceanography, Techno India University, Salt Lake Campus, Kolkata-700091, India

Abstract: We analyzed the concentrations of zinc, copper and lead in the root, stem and leaf of Excoecaria agallocha collected from
12 stations on the north east coast of the Bay of Bengal during April 2013. The region is extremely polluted due to the presence of
industries, tourism units, fish landing stations and trawler repairing units. In all the selected stations, the metals accumulated in the
vegetative parts as per the order root > stem > leaf. In the root region of E. agallocha, the concentration of zinc ranged from 17.57 ppm dry wt. (at Bagmara) to 91.84 ppm dry wt. (at Nayachar island). In the stem region, the values ranged from 10.05 ppm dry wt. (at
Bagmara) to 74.61 ppm dry wt. (at Nayachar island), whereas in the leaf region the values ranged from 8.80 ppm dry wt. (at
Bagmara) to 35.04 ppm dry wt. (at Nayachar island). In case of copper, the values in the root region ranged from 11.01 ppm dry wt.
(at Bagmara) to 35.25 ppm dry wt. (at Nayachar island). In the stem region, the values ranged from 8.99 ppm dry wt. (at Bagmara) to
30.23 ppm dry wt. (at Nayachar island), and in the leaf region the values ranged from 7.46 ppm dry wt. (at Bagmara) to 19.85 ppm
dry wt. (at Nayachar island). The concentrations of lead were lowest in all the vegetative parts and also in all the stations. The values
ranged from 3.33 ppm dry wt. (at Bagmara) to 15.61 ppm dry wt. (at Nayachar island) in root, 2.56 ppm dry wt. (at Bagmara) to 8.92
ppm dry wt. (at Nayachar island) in stem and 1.98 ppm dry wt. (at Bagmara) to 6.54 ppm dry wt. (at Nayachar island) in leaf.
Simultaneous analyses of dissolved heavy metals in the surface water of the selected stations revealed highest values of zinc followed
by copper and lead. Among the 12 selected stations, highest concentrations of dissolved zinc, copper and lead were observed in
Nayachar island (540.2 ppb, 159.8 ppb and 42.04 ppb respectively). Station 12 (at Bagmara) exhibited lowest concentrations of
dissolved heavy metals viz. 299.47 ppb for zinc, 98.59 ppb for copper and 15.75 ppb for lead.
Keywords: Excoecaria agallocha, zinc, copper, lead, bioindicator.
Introduction
Rapid urbanization and industrialization near the coastal mangrove areas within numerous parts of the world have posed a threat to
wetland ecosystems. Bioaccumulation of anthropogenic chemicals and non-essential nutrients through the food chain has recently
become a matter of concern for several researchers (Alberic et al., 2006). Mangrove ecosystems act as sinks or buffer and they tend
to remove or immobilize metals. Numerous studies have utilized mangrove species and their sediments as reliable bio-indicators for
heavy metal pollution and contamination (Burchett et al., 2003). Due to close proximity to urban development, mangroves are exposed to significant direct contaminant input, including heavy metals (MacFarlane, 2002). The value of mangrove

communities and particularly of mangrove forest sediments, as a buffer between potential sources of metalliferous pollutants and
marine ecosystems has been noted previously (Harbison, 1986; Saenger et al., 1991). Several literatures on the response of
mangrove species to heavy metal exposure (Chakraborty et al., 2014; Montgomery and Price, 1979; Peterson et al., 1979; Walsh et

al., 1979; Thomas and Ong, 1984; Chiu et al., 1995; Chen et al., 1995; Wong et al., 1997) have been published, but detailed studies
of heavy metals in mangrove forest of Indian Sundarbans are rare.
The north east coast of Bay of Bengal is exposed to heavy metal pollution from several point and non-point sources (Mitra, 1998;
Mitra et al., 2011; Banerjee et al., 2012; Mitra and Ghosh, 2014). Few studies have been conducted in India, particularly in the Sundarban mangroves, on the status of heavy metals and pollution in these mangrove settings (Untawale et al., 1980; Seralathan, 1987). Excoecaria agallocha, a dominant mangrove species in the Indian Sundarbans, can withstand a wide range of salinity, from 4 psu to 28 psu (Mitra et al., 2010). Several heavy metals like zinc, copper, iron, manganese, cobalt, nickel and lead have been reported in the environment of the north east coast of the Bay of Bengal, but from the viewpoint of abundance the levels of zinc, copper and lead are the highest (Mitra, 1998). Hence, the present study focused on the bioaccumulation pattern of these three heavy metals in the selected mangrove species, with the aim of evaluating the efficiency of the species as an indicator of the selected dissolved heavy metals to which this region is exposed.
Materials and methods
Sampling of E. agallocha
Twelve stations were selected in the north east coast of Bay of Bengal in and around Indian Sundarbans mangrove ecosystem (Table
1). E. agallocha was collected at ebb tide within a 500 meter coastal stretch (from the low tide line) of the selected stations during 5-15 April, 2013. The collected samples were segregated into root, stem and leaf, washed with ambient sea water and brought to the laboratory. The segregated samples were washed with double distilled water, dried with tissue paper and stored at -20°C for further analysis.

Analysis of dissolved Zn, Cu and Pb


Surface water samples were collected using 10-l Teflon-lined Go-Flo bottles, fitted with Teflon taps and deployed on a rosette or on a Kevlar line, with additional surface sampling carried out by hand. Shortly after collection, samples were filtered through Nuclepore filters (0.4 µm pore diameter), and aliquots of the filtrate were acidified with sub-boiling distilled nitric acid to a pH of about 2 and stored in cleaned low-density polyethylene bottles. Dissolved heavy metals were separated and pre-concentrated from the seawater using dithiocarbamate complexation and subsequent extraction into Freon TF, followed by back extraction into HNO3, as per the procedure of Danielsson et al. (1978). Extracts were analyzed for Zn, Cu and Pb by Atomic Absorption Spectrophotometer (Perkin Elmer Model 3030). The accuracy of the dissolved heavy metal determination is indicated by the good agreement between our values and those reported for the certified reference seawater material (CASS-2) (Table 2).

Analysis of tissue Zn, Cu and Pb


Inductively Coupled Plasma Mass Spectrometry (ICP-MS) is nowadays accepted as a fast, reliable means of multi-elemental analysis for a wide variety of biological sample types. A Perkin-Elmer Sciex ELAN 5000 ICP mass spectrometer was used for analysis of the selected heavy metals in the root, stem and leaf tissues of E. agallocha. A standard torch for this instrument was used with an outer argon gas flow rate of 15 L/min and an intermediate gas flow of 0.9 L/min. The applied power was 1.0 kW. The ion

settings were the standard settings recommended when a conventional nebulizer/spray is used with a liquid sample uptake rate of 1.0 mL/min. A Moulinex Super Crousty microwave oven (2450 MHz magnetron, 1100 W maximum power) and a polytetrafluoroethylene (PTFE) reactor of 115 ml volume and 1 cm wall thickness with hermetic screw caps were used for the digestion of the root, stem and leaf samples. All reagents used were of the highest purity available and of analytical reagent grade. High purity water was obtained with a Barnstead Nanopure II water-purification system. All glassware was soaked in 10% (v/v) nitric acid for 24 h and washed with deionised water prior to use.

Data analysis
Statistical software SPSS 14.0 was used to determine the inter-relationships between heavy metal concentrations in the vegetative parts of E. agallocha and dissolved heavy metals in the aquatic environment through Pearson correlation coefficient analysis.
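The correlation step itself is straightforward to reproduce outside SPSS; the following is a minimal, illustrative Python sketch for one metal/tissue combination, where the per-station values are placeholders and not the measured data of this study:

import numpy as np
from scipy import stats

# Hypothetical per-station values (placeholders): dissolved Zn in water (ppb) and Zn in root tissue (ppm dry wt.)
dissolved_zn = np.array([540.2, 480.0, 455.3, 430.1, 410.7, 395.2,
                         380.4, 366.9, 350.2, 335.8, 315.6, 299.47])
root_zn      = np.array([91.84, 80.2, 74.5, 69.1, 63.8, 58.4,
                         52.9, 47.3, 41.0, 34.6, 25.9, 17.57])

# Pearson correlation coefficient and two-tailed p-value
r, p = stats.pearsonr(dissolved_zn, root_zn)
print(f"Root Zn x Dissolved Zn: r = {r:.6f}, p = {p:.4g}")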

Results
Dissolved heavy metal
The dissolved heavy metals were observed to follow the trend Zn > Cu > Pb at all the stations. Dissolved Zn ranged from
299.47 ppb (at Bagmara) to 540.2 ppb (at Nayachar island). Dissolved Cu ranged from 98.59 ppb (at Bagmara) to 159.8 ppb (at
Nayachar island), whereas, dissolved Pb ranged from 15.75 ppb (at Bagmara) to 42.04 ppb (at Nayachar island) (Figure 1).

Bioaccumulation pattern
In E. agallocha samples, the heavy metals varied as per the order Zn > Cu > Pb. This sequence is uniform in all the twelve selected
stations during the study period. In the present study, the concentration of Zn in root ranged from 17.57 ppm (at Bagmara) to 91.84
ppm (at Nayachar island). Cu ranged from 11.01 ppm (at Bagmara) to 35.25 ppm (at Nayachar island) and Pb ranged from 3.33 ppm
(at Bagmara) to 15.61 ppm (at Nayachar island) (Figure 2).
The concentration of Zn in stem ranged from 10.05 ppm (at Bagmara) to 74.61 ppm (at Nayachar island). Cu ranged from 8.99 ppm
(at Bagmara) to 30.23 ppm (at Nayachar island) and Pb ranged from 2.56 ppm (at Bagmara) to 8.92 ppm (at Nayachar island)
(Figure 3).
The concentration of Zn in leaf ranged from 8.80 ppm (at Bagmara) to 35.04 ppm (at Nayachar island) and Cu ranged from 7.46
ppm (at Bagmara) to 19.85 ppm (at Nayachar island). Concentration of Pb ranged from 1.98 ppm (at Bagmara) to 6.54 ppm (at
Nayachar island) (Figure 4).

Discussion

Heavy metal pollution in the estuarine sector is the result of land run-off, mining, activities like shipping and dredging, and other anthropogenic inputs (Panigrahy et al., 1997). The main sources of zinc in the present geographical locale are the galvanization units, paint manufacturing units and pharmaceutical processes, whereas the main sources of copper in the coastal waters are antifouling paints, particular types of algaecides used in different aquaculture farms, paint manufacturing units, pipeline corrosion and oil sludges (32 to 120 ppm). Ship bottom paint has been found to produce very high concentrations of Cu in sea water and sediment in harbours of Great Britain and southern California (Bellinger and Benham, 1978; Young et al., 1979). The most toxic of these three heavy metals is lead, which finds its way into coastal waters through the discharge of industrial waste waters, such as from painting, dyeing, battery manufacturing units and oil refineries. Antifouling paints, used to prevent the growth of marine organisms on the bottoms of boats and designed to constantly leach toxic metals into the water to kill organisms that may attach to them, are ultimately transported to the sediment and aquatic compartments. The study area is exposed to all these activities, being proximal to the highly urbanized cities of Kolkata and Howrah and the newly emerging Haldia port-cum-industrial complex (Mitra and Choudhury, 1993; Mitra, 1998; Mitra et al., 2011; Mitra et al., 2012).
The bioaccumulation pattern in plant parts like leaves, bark and roots may vary depending on the concentration of heavy metals in
the sediment, the types of heavy metals and also the tolerance of the species and its parts towards the heavy metals (Baker and
Walker, 1990; De Lacerda and Abrao, 1986). Heavy metal concentration in plant tissues is influenced by the metabolic
requirements for essential micronutrients such as Cu and Zn, while non-essential metals like Pb tend to be excluded or compartmentalized (Baker and Walker, 1990). Zinc is an essential micro-nutrient of numerous enzyme systems, respiration
enzyme activators, and the biosynthesis of plant growth hormones (Ernst et al., 1992). Copper is an essential micronutrient
required in mitochondria and chloroplast reactions, enzyme systems related to photosystem II electron transport, cell wall
lignification, carbohydrate metabolism, and protein synthesis (Verkleij and Schat, 1990). The mobility of metals (in the order of
Zn > Cu > Pb) is thus justified given the role played by essential metals like Zn and Cu in plant metabolism, while the non-essential metal Pb is rather restricted in translocation to the upper plant parts. The order of the heavy metals in the vegetative parts of E. agallocha reflects that of the dissolved heavy metals, which clearly confirms the use of the species as an effective indicator of dissolved Zn, Cu and Pb in the northeast coast of the Indian subcontinent, particularly in the inshore region of the Bay of Bengal. The heavy metals from the ambient environment enter the coastal vegetation very frequently, as it is inundated twice daily during high tide. The roots are exposed to water and sediment during most of the period, whereas the leaves come in contact with water only during the high tide phase. This may be one of the reasons for the maximum concentrations of heavy metals being in the root, followed by the stem and leaf. Several researchers have also observed variation in metal levels in the vegetative parts of coastal vegetation, particularly mangroves.
Acknowledgements
The authors are grateful to the financial support and analyses facilities offered by Progressive Organisation Of Rural Service For
Health, Education, Environment (PORSHEE).
REFERENCES:

1. Alberic P, Baillif P, Baltzer F, Cossa, Lallier-Verges E. 2006. Heavy metals distribution in mangrove sediments along the mobile coastline of French Guiana, Marine Chemistry, 98, 1-17.


2. Baker AJ and Walker PI. 1990. Ecophysiology of metal uptake by tolerant plants. In: Shaw AJ (ed.), Heavy metal tolerance in plants: evolutionary aspects, CRC Press, Florida, 155-178.
3. Banerjee K, Roy Chowdhury M, Sengupta K, Sett S and Mitra A. 2012. Influence of anthropogenic and natural factors on the mangrove soil of Indian Sundarban Wetland, Architectural Environmental Science, 6, 80-91.
4. Bellinger E and Benham B. 1978. The levels of metals in dockyard sediments with particular reference to the contributions from ship bottom paints. Environmental Pollution Assessment, 15(1), 71-81.
5. Burchett MD, MacFarlane GR and Pulkownik A. 2003. Accumulation and distribution of heavy metals in the grey mangrove Avicennia marina (Forsk.) Vierh.: biological indication potential, Environmental Pollution, 123, 139-151.
6. Chakraborty S, Trivedi S, Fazli P, Zaman S and Mitra M. 2014. Avicennia alba: an indicator of heavy metal pollution in Indian Sundarban estuaries, Journal of Environmental Science, Computer Science and Engineering & Technology, 3(4), 1796-1807.
7. Chen GZ, Miao SY, Tam N, Wong FYS, Li SH and Lan CY. 1995. Effects of synthetic wastewater on young Kandelia candel plants growing under greenhouse conditions. Hydrobiologia, 295, 263-273.
8. Chiu CY, Hsiu FS, Chen SS and Chou CH. 1995. Reduced toxicity of Cu and Zn to mangrove seedlings in saline environments. Botanical Bulletin Academy, Singapore, 36, 19-24.
9. Danielsson LG, Magnusson B and Westerlund S. 1978. An improved metal extraction procedure for the determination of trace metals in seawater by atomic absorption spectrometry with electrothermal atomization. Analytica Chimica Acta, 98, 45-57.
10. De Lacerda LD and Abrao JJ. 1986. Heavy metal accumulation by mangrove and saltmarsh intertidal sediments. Marine Pollution Bulletin, 17, 246-250.
11. Ernst WHO, Verkleij JAC and Schat H. 1992. Metal tolerance in plants, Acta Botanica Neerlandica, 41, 229-248.
12. Harbison P. 1986. Mangrove muds, a sink and a source for trace metals, Marine Pollution Bulletin, 17, 246-250.
13. MacFarlane GR. 2002. Leaf biochemical parameters in Avicennia marina (Forsk.) Vierh as potential biomarkers of heavy metal stress in estuarine ecosystems, Marine Pollution Bulletin, 44, 244-256.
14. Mitra A. 1998. Status of coastal pollution in West Bengal with special reference to heavy metals, Journal of Indian Ocean Studies, 5(2), 135-138.
15. Mitra A, Chakraborty R, Sengupta K and Banerjee K. 2011. Effects of various cooking processes on the concentrations of heavy metals in common finfish and shrimps of the River Ganga, National Academy of Science Letters, 34(3 & 4), 161-168.
16. Mitra A and Choudhury A. 1993. Heavy metal concentrations in oyster Crassostrea cucullata of Sagar Island, India. Indian Journal of Environmental Health, NEERI, 35(2), 139-141.
17. Mitra A, Chowdhury R, Sengupta K and Banerjee K. 2010. Impact of salinity on mangroves of Indian Sundarbans, Journal of Coastal Environment, 1(1), 71-82.

18. Mitra A, Choudhury R and Banerjee K. 2012. Concentrations of some heavy metals in commercially important finfish and shellfish of the River Ganga. Environmental Monitoring and Assessment, 184, 221-223 (Springer, DOI 10.1007/s10661-011-2111-x).
19. Mitra A and Ghosh R. 2014. Bioaccumulation pattern of heavy metals in commercially important fishes in and around Indian Sundarbans, Global Journal of Animal Science Research, 2(1).
20. Montgomery JR and Price MT. 1979. Release of trace metals by sewage sludge and the subsequent uptake by members of a turtle grass mangrove ecosystem. Environmental Science & Technology, 13, 546-549.
21. Panigrahy PK, Nayak BB, Acharya BC, Das SN, Basu SC and Sahoo RK. 1997. Evaluation of heavy metal accumulation in coastal sediments of northern Bay of Bengal, 139-146. In: C.S.P. Iyer (ed.), Advances in Environmental Science, Educational Publishers and Distributors, New Delhi.
22. Peterson PJ, Burton MA, Gregson M, Nye SM and Porter EK. 1979. Accumulation of tin by mangrove species in West Malaysia, Science of the Total Environment, 11, 213-221.
23. Saenger P. 2002. Mangrove ecology, silviculture and conservation. Kluwer Academic Publishers, 360.
24. Seralathan P. 1987. Trace element geochemistry of modern deltaic sediments of the Cauvery River, east coast of India, Indian Journal of Marine Science, 16, 235-239.
25. Thomas C and Ong JE. 1984. Effect of heavy metals zinc and lead on Rhizophora mucronata Lam. and Avicennia alba Bl. seedlings. In: Soepadmo E, Rao AN and McIntosh DJ (Eds.), Proceedings of the Asian Symposium on Mangrove Environments - Research and Management, UNESCO, 568-574.
26. Untawale AG, Wafar S and Bhosle NB. 1980. Seasonal variation in heavy metal concentration in mangrove foliage, Mahasagar - Bulletin of the National Institute of Oceanography, 13(3), 215-223.
27. Verkleij JAC and Schat H. 1990. Mechanisms of metal tolerance in plants. In: Shaw AJ (ed.), Heavy metal tolerance in plants: evolutionary aspects, CRC Press, Florida, 179-193.
28. Walsh GER, Ainsworth KA and Rigby R. 1979. Resistance of red mangrove (Rhizophora mangle L.) seedlings to lead, cadmium and mercury. Biotropica, 11, 22-27.
29. Wong YS, Tam NFY and Lan CY. 1997. Mangrove wetlands as wastewater treatment facility: a field trial, Hydrobiologia, 352, 49-59.
30. Young DR, Alexander GV and McDermott-Ehrlich D. 1979. Vessel related contamination of southern California harbours by copper and other metals. Marine Pollution Bulletin, 10, 50-56.


Table 1. Coordinates of selected stations

Stations            Latitude            Longitude
Canning             22°18'37"N          88°40'36"E
Gosaba              22°15'45"N          88°39'46"E
Diamond Harbour     22°11'30"N          88°11'4"E
Nayachar island     21°45'24"N          88°15'24"E
Kakdwip             21°52'06"N          88°11'12"E
Chemaguri           21°38'25.86"N       88°08'53.55"E
Sagar South         21°38'51.55"N       88°02'20.97"E
Jambu island        21°35'42.03"N       88°10'22.76"E
Frasergunge         21°33'47.76"N       88°15'33.98"E
Digha               21°42'50.03"N       87°05'20.12"E
Bali                22°04'35.17"N       88°44'55.70"E
Bagmara             21°39'4.45"N        89°04'40.59"E

Table 2. Analysis of reference material for near-shore seawater (CASS-2)

Element    Certified value (µg l-1)    Laboratory results (µg l-1)
Zn         1.97 ± 0.12                 2.01 ± 0.14
Cu         0.675 ± 0.039               0.786 ± 0.058
Pb         0.019 ± 0.006               0.029 ± 0.009


Figure 1. Spatial variations of dissolved heavy metal concentrations (in ppb)

Figure 2. Spatial variations of heavy metal concentrations in root (ppm dry wt.) of E. agallocha


Figure 3. Spatial variations of heavy metal concentrations in stem (ppm dry wt.) of E. agallocha

Figure 4. Spatial variations of heavy metal concentrations in leaf (ppm dry wt.) of E. agallocha


Table 3. Inter-relationships between heavy metal concentrations in E. agallocha and the ambient environment

Study period: April, 2013

Combination                  r-value      p-value
Root Zn x Dissolved Zn       0.960899     <0.01
Stem Zn x Dissolved Zn       0.976413     <0.01
Leaf Zn x Dissolved Zn       0.793961     <0.01
Root Cu x Dissolved Cu       0.968539     <0.01
Stem Cu x Dissolved Cu       0.962121     <0.01
Leaf Cu x Dissolved Cu       0.922790     <0.01
Root Pb x Dissolved Pb       0.880778     <0.01
Stem Pb x Dissolved Pb       0.947862     <0.01
Leaf Pb x Dissolved Pb       0.961542     <0.01


Automation of Inter-Networked Banking and Teller Machine Operations

Prof. Mazher Khan1, Dr. Sayyad Ajij2, Mr. Majed Ahmed Khan3
Marathwada Institute of Technology, Aurangabad (MS)

Abstract - In this article about biometric systems, the general idea is to use facial recognition to reinforce security on one of the oldest and most secure pieces of technology still in use to date, the Automatic Teller Machine. The main use of any biometric system is to authenticate an input by identifying and verifying it against an existing database. Security in ATMs has changed little since their introduction in the late 70s. This puts them in a very vulnerable state, as technology has brought in a new breed of thieves who use the advancement of technology to their advantage. With this in mind, it is high time something is done about the security of this technology; besides, there cannot be too much security when it comes to people's money.

Keywords Biometrics, Facial Recognition, GSM Standards, Biometric Standards, Automatic Teller Machine Technology, Biometric
Predecessors.

1. INTRODUCTION
Face detection and recognition are challenging tasks due to variation in illumination, variability in scale, location, orientation (upright, rotated) and pose (frontal, profile). Facial expression, occlusion and lighting conditions also change the overall appearance of
face. Face detection and recognition has many real world applications, like human/ computer interface, surveillance, authentication
and video indexing. Face detection using artificial neural networks was done by Rowley [7]. It is robust but computationally
expensive as the whole image has to be scanned at different scales and orientations. Feature-based (eyes, nose, and mouth) face
detection is done by Yow et al.[15]. Statistical model of mutual distance between facial features are used to locate face in the image
[4]. Markov Random Fields have been used to model the spatial distribution of the grey level intensities of face images [1]. Some of
the eye location technique use infrared lighting to detect eye pupil [2]. Eye location using genetic algorithm has been proposed
byWechsler [3]. Skinpaper, motion information is used to reduce the search space for face detection. It is known that eye regions are
usually darker than other facial parts, therefore probable eye pair regions are extracted by thresholding the image. The eyes pair region
gives the scale and orientation of face, and reduces the search space for face detection across different scales and orientations.
Correlation between averaged face template and the test pattern is used to verify whether it is a face or not.
Recognition of human face is also challenging in human computer interaction [6, 10, 11, 14]. The proposed system for face
recognition is based on Eigen analysis of edginess representation of face, which is invariant to illumination to certain extent [8, 9]. The
paper is organized as follows: Section 2 describes the face detection process. The method of obtaining edginess image and
eigenedginess of a faces are discussed in Sections 3 and 4, respectively. Experimental results are presented in Section 5. color is used
extensively to segment the image, and localize the search for face [13, 12]. The detection of face using skin color fails when the source
of lighting is not natural
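As a rough, hypothetical sketch of the two detection cues mentioned above (thresholding for dark candidate eye-pair regions, then correlating a candidate window with an averaged face template), the following Python fragment illustrates the idea; the percentile threshold and correlation cut-off are illustrative assumptions, not the authors' parameters:

import numpy as np

def dark_region_mask(gray: np.ndarray, percentile: float = 10.0) -> np.ndarray:
    """Keep only the darkest pixels, where eye regions are expected."""
    return gray < np.percentile(gray, percentile)

def is_face_candidate(window: np.ndarray, avg_template: np.ndarray, corr_thresh: float = 0.6) -> bool:
    """Verify a candidate window by normalized correlation with an averaged face template."""
    w = (window - window.mean()) / (window.std() + 1e-9)
    t = (avg_template - avg_template.mean()) / (avg_template.std() + 1e-9)
    corr = float(np.mean(w * t))   # normalized cross-correlation coefficient
    return corr > corr_thresh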

2. FACE RECOGNITION TECHNIQUES


The method for acquiring face images depends upon the underlying application. For instance, surveillance applications may best be
served by capturing face images by means of a video camera while image database investigations may require static intensity images
taken by a standard camera.
Some other applications, such as access to top security domains, may even necessitate the forgoing of the nonintrusive quality of face
recognition by requiring the user to stand in front of a 3D scanner or an infra-red sensor.
Therefore, depending on the face data acquisition methodology, face recognition techniques can be broadly divided into three
categories: methods that operate on intensity images, those that deal with video sequences, and those that require other sensory data
such as 3D information or infra-red imagery. The following discussion sheds some light on the methods in each category and attempts
to give an idea of some of the benefits and drawbacks of the schemes mentioned therein in general.
2.1 EIGENEDGINESS
Edginess is a strong feature extraction method and has proved to be better than other edge representations [2]. The reason behind this is that edginess is based on one-dimensional processing of images. The traditional 2D operators smooth the image in all directions, resulting in the smearing of edge information. To extract the edginess map, the image is first smoothed using a 1D Gaussian filter along the horizontal (or vertical) direction to reduce noise. The 1D Gaussian smoothing filter is given by

g(x) = (1 / (sqrt(2*pi) * sigma1)) * exp(-x^2 / (2*sigma1^2))          ---------- (1)

where sigma1 is the standard deviation of the Gaussian function. This filter is applied along a particular scan line of the image in one direction. A differential operator (the first derivative of the 1D Gaussian function) is then applied in the orthogonal direction, i.e., along the vertical (or horizontal) scan lines, to detect the edges. The first order derivative of the 1D Gaussian is given by

g'(x) = (-x / (sqrt(2*pi) * sigma1^3)) * exp(-x^2 / (2*sigma1^2))      ---------- (2)

The resulting image obtained by applying equations (1) and (2) produces the horizontal components of edginess (strength of an edge) in the image. Similarly, the vertical components of edginess are derived by applying the above filters on the original image in directions orthogonal to those used in obtaining the horizontal components. Finally, the total magnitude of the partial edge information obtained in the horizontal and vertical edge components gives the edginess map of the original image. Figures 1a and 1b show a plot of the Gaussian mask and its derivative. Figures 2a-2f show the various steps in creating an edginess image from a gray scale image. The edginess of a pixel in an image is identical to the magnitude of the gradient of the gray level function [12, 13], which corresponds to the amount of change across the edge. The edginess image of an example face is shown in Figure 2f.

Fig 1. 1(a) Gaussian Function (smoothing filter), 1(b) First derivative of Gaussian (differential operator).
It is visually clear that the edginess image carries more information than the edge map of an image. The intuitive reason for this is that the edginess gives a very low output when it operates on completely smooth regions with no useful information.
However, unlike the edge detection process, the edginess maintains an output in regions having even a low amount of texture. Again, the 1-D and orthogonal processing of the Gaussian and its derivative is less affected by the trade-off between smoothing out the noise and smoothing the image features. Thus, as seen from the face images, the smooth regions of the face that carry no discriminative information, and may cause class overlap in the classification, are removed. However, regions with even a small amount of discriminative texture are visible in the output. This is the intuitive motivation behind this research, where we need to know whether this information at the output of the edginess filter really consists mainly of the discriminative information of the face.

Fig 2. 2(a) Gray Scale Image, 2(b) Image after smoothing in horizontal direction, 2(c) Image after applying the differential operator to 2(b)in vertical
direction, 2(d) Image after smoothing in vertical direction, 2(e) Image after applying the differential operator to 2(d) in vertical direction, 2(f)
Edginess Image.
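As a rough illustration of the orthogonal 1D processing shown in Figure 2, the steps can be sketched in Python with SciPy's 1D Gaussian filters; the sigma values here are illustrative assumptions, not the authors' settings:

import numpy as np
from scipy.ndimage import gaussian_filter1d

def edginess_map(gray: np.ndarray, sigma_smooth: float = 1.0, sigma_diff: float = 1.0) -> np.ndarray:
    """Edginess via 1D orthogonal processing: smooth along one axis, differentiate along the other."""
    img = gray.astype(float)
    # One partial component: smooth along rows (axis=1), derivative of Gaussian along columns (axis=0)
    h = gaussian_filter1d(img, sigma_smooth, axis=1, order=0)
    h = gaussian_filter1d(h, sigma_diff, axis=0, order=1)
    # The other partial component: smooth along columns, derivative of Gaussian along rows
    v = gaussian_filter1d(img, sigma_smooth, axis=0, order=0)
    v = gaussian_filter1d(v, sigma_diff, axis=1, order=1)
    # Total edginess magnitude from the two partial components
    return np.hypot(h, v)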

2.1.1 EXPERIMENTAL RESULTS



In order to establish the performance of Edginess-SVM in comparison with Euclidean distance based NN classification, we carried out experiments on a set of the CMU PIE database. All the images considered had a frontal pose and nearly the same expression, with wide changes in illumination conditions. We considered 24 images for one individual, and these were randomly distributed between the training and testing sets. The training and testing sets were chosen so that there is no overlap between them. Different experiments were performed with varying numbers of training images, and the recognition rates were recorded for both the Euclidean distance based NN classification scheme and the SVM based classification scheme. The testing set consists of 12 images chosen randomly. Figure 3 shows the comparison of recognition rates between the Eigenedginess-SVM method and the Eigenedginess-NN method [13, 14].

Fig 3. The graph shows a comparison of recognition rate between the SVM and NN classifiers for different numbers of training images

As observed from the graph in Figure 3, the performance of NN and SVM is nearly the same except for the cases when the number of training images is small. However, in [4] the authors have shown that classification by SVMs is more efficient than that by the nearest neighbour scheme for the face recognition problem with PCA as the feature extraction technique. The reason for this is that the nearest neighbour based scheme is very sensitive to noisy inputs and can easily get confused with neighbouring classes in the eigen space. The latter is due to the fact that it does not perform classification based on a discriminatory function, as SVMs do, but on the data points themselves. This leads us to conclude that edginess is possibly a strong feature extraction method and that the classification is not affected much by the classifier used at the back end [13].
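A hedged sketch of this kind of comparison (PCA features, then a 1-NN classifier versus a linear SVM) is given below; the synthetic data merely stands in for the CMU PIE eigenedginess features, and the dimensions and parameters are assumptions:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Placeholder data: 24 "images" per person for 10 people, flattened to 1024-dimensional vectors
X = rng.normal(size=(240, 1024)) + np.repeat(rng.normal(size=(10, 1024)), 24, axis=0)
y = np.repeat(np.arange(10), 24)

# 12 test images per person, as in the experiment described above
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=120, stratify=y, random_state=0)
feats = PCA(n_components=40).fit(X_tr)
Ztr, Zte = feats.transform(X_tr), feats.transform(X_te)

nn = KNeighborsClassifier(n_neighbors=1).fit(Ztr, y_tr)   # Euclidean-distance NN classifier
svm = SVC(kernel="linear").fit(Ztr, y_tr)                 # linear SVM on the same features
print("NN accuracy :", nn.score(Zte, y_te))
print("SVM accuracy:", svm.score(Zte, y_te))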
2.2 FACE RECOGNITION FROM INTENSITY IMAGES
Face recognition methods for intensity images fall into two main categories: feature-based and holistic. An overview of some of the well-known methods in these categories is given below.
2.2.1 FEATURE-BASED
Feature-based approaches first process the input image to identify and extract (and measure) distinctive facial features such as the
eyes, mouth, nose, etc., as well as other fiducial marks, and then compute the geometric relationships among those facial points, thus
reducing the input facial image to a vector of geometric features. Standard statistical pattern recognition techniques are then employed to match faces using these measurements. Early work carried out on automated face recognition was mostly based on these techniques.
One of the earliest such attempts was by Kanade [19], who employed simple image processing methods to extract a vector of 16 facial
parameters - which were ratios of distances, areas and angles (to compensate for the varying size of the pictures) - and used a simple
Euclidean distance measure for matching to achieve a peak performance of 75% on a database of 20 different people using 2 images
per person (one for reference and one for testing).
Another well-known feature-based approach is the elastic bunch graph matching method proposed by Wiskott et al. [22] . This
technique is based on Dynamic Link Structures [17]. A graph for an individual face is generated as follows: a set of fiducial points on
the face are chosen. Each fiducial point is a node of a fully connected graph, and is labeled with the Gabor filter responses applied to a window around the fiducial point. Each arc is labeled with the distance between the corresponding fiducial points. A representative
set of such graphs is combined into a stack-like structure, called a face bunch graph. Once the system has a face bunch graph, graphs
for new face images can then be generated automatically by Elastic Bunch Graph Matching. Recognition of a new face image is
performed by comparing its image graph to those of all the known face images and picking the one with the highest similarity value.
Using this architecture, the recognition rate can reach 98% for the first rank and 99% for the first 10 ranks using a gallery of 250

individuals. The system has been enhanced to allow it to deal with different poses (Fig. 3) [11], but the recognition performance on faces of the same orientation remains the same. Though this method was among the best performing ones in the most recent FERET evaluation [12, 13], it does suffer from the serious drawback of requiring the graph placement for the first 70 faces to be done manually before the elastic graph matching becomes adequately dependable [14]. Campadelli and Lanzarotti [15] have recently experimented with this technique, where they have eliminated the need to do the graph placement manually by using parametric models, based on the deformable templates proposed in [10], to automatically locate fiducial points. They claim to have obtained the same performance as the elastic bunch graph employed in [19].

Fig. 3. Grids for face recognition [21]. (1999 IEEE)


Other recent variations of this approach replace the Gabor features by a graph matching strategy [16] or by Histograms of Oriented Gradients (HOGs) [17]. Considerable effort has also been devoted to recognizing faces from their profiles [08-12] since, in this case, feature extraction becomes a somewhat simpler one-dimensional problem [17, 11]. Kaufman and Breeding [10] reported a recognition rate of 90% using face profiles; however, they used a database of only 10 individuals. Harmon et al. [18] obtained recognition accuracies of 96% on a database of 112 individuals, using a 17-dimensional feature vector to describe face profiles and utilizing a Euclidean distance measure for matching. More recently, Liposcak and Loncaric [01] reported a 90% accuracy rate on a database of 30 individuals, using subspace filtering to derive a 21-dimensional feature vector to describe the face profiles and employing a Euclidean distance measure to match them (Fig. 4).

Fig. 4. a) The twelve fiducial points of interest for face recognition; b) Feature vector has 21 components; ten distances D1-D10
(normalized with / (D4+D5)) and eleven profile arcs A1-A11 (normalized with /(A5+A6)) [21]. (Courtesy of Z. Liposcak and S.
Loncaric)

2.3 PCA ALGORITHM


Automatic face recognition systems try to find the identity of a given face image according to their memory. The memory of a face recognizer is generally simulated by a training set. In this project, our training set consists of the features extracted from known face images of different persons. Thus, the task of the face recognizer is to find the most similar feature vector among the training set to the feature vector of a given test image. Here, the identity of a person is recognized when an image of that person (the test image) is given to the system. PCA is itself a feature extraction algorithm.
In the training phase, the feature vectors for each image in the training set are extracted. Let A be a training image of person A with a pixel resolution of M x N (M rows, N columns). In order to extract the PCA features of A, the image is converted into a pixel vector by concatenating each of the M rows into a single vector. The length (or dimensionality) of this vector will be M x N. PCA is used as a dimensionality reduction technique that transforms the vector into a vector of dimensionality d, where d << M x N. For each training image i, these feature vectors wi are calculated and stored.
In the recognition phase (or testing phase), a test image j of a known person is given. Let j be the identity (name) of this person. As in the training phase, the feature vector of this person is computed using PCA to obtain wj. In order to identify j, the similarities between wj and all of the feature vectors wi in the training set are computed. The similarity between feature vectors can be computed using the Euclidean distance. The identity of the most similar wi will be the output of the face recognizer. If i = j, the person j has been correctly identified; otherwise, if i is not equal to j, the person j has been misclassified. A schematic diagram of the face recognition system that will be implemented is shown in Figure 2.
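A minimal NumPy sketch of the training and recognition phases described above, under the assumption of a tiny synthetic image set standing in for the stored face database:

import numpy as np

def pca_fit(train_vecs: np.ndarray, d: int):
    """Learn a d-dimensional PCA projection from row-vectorized training images."""
    mean = train_vecs.mean(axis=0)
    centered = train_vecs - mean
    # Right singular vectors give the principal directions (eigenfaces)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:d]

def pca_project(vec, mean, components):
    return components @ (vec - mean)

# Synthetic stand-in database: 5 persons, one 32x32 "image" each
rng = np.random.default_rng(1)
train_imgs = rng.random((5, 32, 32))
train_vecs = train_imgs.reshape(5, -1)              # each image -> pixel vector of length M*N
mean, comps = pca_fit(train_vecs, d=4)
train_feats = np.array([pca_project(v, mean, comps) for v in train_vecs])

test_vec = (train_imgs[2] + 0.05 * rng.random((32, 32))).reshape(-1)   # noisy image of person 2
test_feat = pca_project(test_vec, mean, comps)
identity = int(np.argmin(np.linalg.norm(train_feats - test_feat, axis=1)))  # Euclidean NN match
print("Recognized as person", identity)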

A mobile scanning device scans the SIM number through a GSM modem, and the collected data is given to the automated teller machine (ATM) for further processing. At the same time, a web camera captures the image and compares it, using digital signal processing, with the image stored in the database. Each processing step is announced by the voice annunciation module.
The power supply unit consists of a step-down transformer along with a rectifier unit to convert 230 V AC into the required 7 V DC. The 7 V DC supply is given to the microcontroller for its operation. It may be difficult for blind people to use the existing ATM, so a voice annunciator is added to indicate each and every process to blind users. It enables a visually and/or hearing impaired individual to conveniently and easily carry out financial transactions or banking functions.

If the images and the PIN number match, further processing is continued; otherwise an alarm is raised through the alert module. The data transfer unit consists of a microcontroller (AT89352) which transfers the data between the alert module, the voice annunciation module and the ATM machine. Automated teller machines (ATMs) are well known devices typically used by individuals to carry out a variety of personal and business financial transactions and/or banking functions. ATMs have become very popular with the general public for their availability and general user friendliness.
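A minimal control-flow sketch of the verification sequence described above (SIM lookup, face comparison, then PIN check, with an alarm on failure) is given below; every helper here is a hypothetical stand-in for the hardware and server modules, not an actual ATM API:

from dataclasses import dataclass

@dataclass
class Account:
    pin: str
    stored_image: object

# --- Hypothetical stand-ins for the server and hardware modules described above ---
def fetch_account_by_sim(sim_number: str) -> Account:    # server lookup keyed by SIM number
    return Account(pin="1234", stored_image="enrolled-face")

def capture_image():                                      # web camera capture at the ATM
    return "enrolled-face"

def face_matches(live, stored) -> bool:                   # face recognition module
    return live == stored

def read_pin() -> str:                                    # keypad entry
    return "1234"

def raise_alarm(reason: str) -> None:                     # alert module
    print("ALARM:", reason)

def continue_transaction(account: Account) -> None:       # normal ATM processing
    print("Transaction allowed")

def authenticate_and_serve(sim_number: str) -> None:
    account = fetch_account_by_sim(sim_number)
    if not face_matches(capture_image(), account.stored_image):
        raise_alarm("Face mismatch")
    elif read_pin() != account.pin:
        raise_alarm("Wrong PIN")
    else:
        continue_transaction(account)

authenticate_and_serve("8991000012345678")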
ATMs are now found in many locations having a regular or high volume of consumer traffic. For example, ATMs are
typically found in restaurants, supermarkets, Convenience stores, malls, schools, gas stations, hotels, work locations,
banking centers, airports, entertainment establishments, transportation facilities and a myriad of other locations. ATMs
are typically available to consumers on a continuous basis such that consumers have the ability to carry out their ATM
financial transactions and/or banking functions at any time of the day and on any day of the week. Existing ATMs are
convenient and easy to use for most consumers. Existing ATMs typically provide instructions on an ATM display screen
that are read by a user to provide for interactive operation of the ATM. Having read the display screen instructions, a user
is able to use and operate the ATM via data and information entered on a keypad.

CONCLUSION
This paper presents a system that creates a new generation of ATM which can be operated without the ATM card. With this system, the ATM can be operated using the SIM in the user's mobile phone. When the SIM is inserted in the reader unit of the ATM, the machine transfers the mobile number to the server. At the server, the information related to the mobile number (i.e., the user's account details, photo, etc.) is collected. The camera placed near the ATM captures the user's image and compares it with the user image stored on the server. Only when the images match does the machine ask for the PIN, and further processing starts; otherwise the process is terminated. By using this system, the need for an ATM card is completely eliminated and the ATM can be operated using the SIM itself; malfunctions can be avoided and transactions become much more secure. One more application can also be added to this system for helping blind people. In the existing system all transactions are done through the keyboard only, which may be difficult for blind people, so a voice annunciator can also be added to indicate each and every process to them. This enables a visually and/or hearing impaired individual to conveniently and easily carry out financial transactions or banking functions.
REFERENCES:
[1] S. C. Dass and A. K. Jain. Markov face models. In Proceedings, Eighth IEEE International Conference on Computer Vision (ICCV), pages 680-687, July 2001.
[2] C.-C. Han, H.-Y. M. Liao, G.-J. Yu, and L.-H. Chen. Fast face detection via morphology-based pre-processing. In Proceedings, Ninth International Conference on Image Analysis and Processing (ICIAP), volume 2, pages 469-476, 1998.
[3] J. Huang and H. Wechsler. Eye location using genetic algorithm. In Proceedings, Second International Conference on Audio and Video-Based Biometric Person Authentication, pages 130-135, March 1999.
[4] T. Leung, M. Burl, and P. Perona. Finding faces in cluttered scenes using labeled random graph matching. In Proceedings, Fifth International Conference on Computer Vision, pages 637-644, June 1995.
[5] P. Kiran Kumar, Sukhendu Das and B. Yegnanarayana. One-dimensional processing of images. In International Conference on Multimedia Processing Systems, Chennai, India, pages 181-185, August 13-15, 2000.
[6] P. N. Belhumeour, J. P. Hespanha and D. J. Kriegman. Eigenfaces vs. fisherfaces: Recognition using class specific linear projection. IEEE Trans. Pattern Analysis and Machine Intelligence, 19(7):711-720, July 1997.
[7] H. Rowley, S. Baluja, and T. Kanade. Neural network-based face detection. IEEE Trans. Pattern Analysis and Machine Intelligence, 20(1):23-38, January 1998.
[8] S. Ramesh, S. Palanivel, Sukhendu Das and B. Yegnanarayana. Eigenedginess vs. eigenhill, eigenface, and eigenedge. In Proceedings, Eleventh European Signal Processing Conference, pages 559-562, September 3-6, 2002.
[9] K. Suchendar. Online face recognition system. M.Tech Project Report, IIT Madras, January 2002.
[10] B. Tacacs and H. Wechsler. Face recognition using binary image metrics. In Proceedings, Third International Conference on Automatic Face and Gesture Recognition, pages 294-299, April 1998.

[11] M. A. Turk and A. P. Pentland. Face recognition using eigenfaces. In Proceedings, Eleventh International Conference on Pattern Recognition, pages 586-591, 1991.
[12] J. Yang and A. Waibel. A real-time face tracker. In Proceedings, Third IEEE Workshop on Applications of Computer Vision, pages 142-147, 1996.
[13] M. H. Yang and N. Ahuja. Detecting human face in color images. In Proceedings, IEEE International Conference on Image Processing, volume 1, pages 127-130, 1998.
[14] A. Yilmaz and M. Gokmen. Eigenhill vs. eigenface and eigenedge. Pattern Recognition, 34:181-184, 2001.
[15] K. C. Yow and R. Cipolla. Feature-based human face detection.
[16] A. Colmenarez, B. J. Frey, and T. S. Huang, "A probabilistic framework for embedded face and facial expression recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Vol. 1, Ft. Collins, CO, USA, 1999, pp. 1592-1597.
[17] Y. Shinohara and N. Otsu, "Facial Expression Recognition Using Fisher Weight Maps," in Sixth IEEE International Conference on Automatic Face and Gesture Recognition, Vol. 100, 2004, pp. 499-504.
[18] F. Bourel, C. C. Chibelushi, and A. A. Low, "Robust Facial Feature Tracking," in British Machine Vision Conference, Bristol, 2000, pp. 232-241.
[19] K. Morik, P. Brockhausen, and T. Joachims, "Combining statistical learning with a knowledge-based approach - A case study in intensive care monitoring," in 16th International Conference on Machine Learning (ICML-99), San Francisco, CA, USA: Morgan Kaufmann, 1999, pp. 268-277.
[20] S. Singh and N. Papanikolopoulos, "Vision-based detection of driver fatigue," Department of Computer Science, University of Minnesota, Technical report, 1997.
[21] D. N. Metaxas, S. Venkataraman, and C. Vogler, "Image-Based Stress Recognition Using a Model-Based Dynamic Face Tracking System," International Conference on Computational Science, pp. 813-821, 2004.
[22] M. M. Rahman, R. Hartley, and S. Ishikawa, "A Passive And Multimodal Biometric System for Personal Identification," in International Conference on Visualization, Imaging and Image Processing, Spain, 2005, pp. 89-92.


Analysis of Noise Signal Cancellation using Adaptive Algorithms


Abhishek Chaudhary1, Amit Barnawal2, Anushree Gupta3, Deepti Chaudhary4
Dept. of Electronics and Instrumentation, Galgotias College of Engg. And Tech. Greater Noida U.P India
deeptichaudhary2014@gmail.com4
Abstract - Noise is an unavoidable part of signal processing that we encounter every day. The study of noise reduction arises from the need to achieve stronger signal to noise ratios. Noise is any unwanted disturbance that hampers the desired response, and the aim is to remove it while keeping the source sound intact. The different sources may include speech, music played through a device such as a mobile phone, iPod or computer, or no sound at all. Active noise cancellation involves creating a supplementary signal that destructively interferes with the ambient output noise. The cancellation of noise can be efficiently accomplished by using adaptive algorithms. An adaptive filter is one that self-adjusts the coefficients of its transfer function according to an algorithm driven by an error signal. The adaptive filter uses feedback in the form of an error signal to refine its transfer function to match changing parameters. Adaptive filtering techniques can be used for a wide range of applications, including echo cancellation, adaptive channel equalization, adaptive line enhancement and adaptive beamforming. In the last few years, many algorithms have been developed for eradicating distortion from signals. This paper presents an analysis of two algorithms, namely Least Mean Square (LMS) and Normalized Least Mean Square (NLMS), and gives a comparative study on various governing factors such as stability, computational complexity, filter order, robustness and rate of convergence. It further presents the effect on the error of altering the amplitude of the noise signal while fixing the reference signal and the desired signal. The algorithms are developed in MATLAB.
Keywords - Anti-noise, Adaptive filter, LMS, NLMS, Rate of convergence, Noise cancellation, Filter
I. INTRODUCTION

Noise is any unpleasant, objectionable, unexpected, undesired distortion in sound. In electronics, it may be defined as a random fluctuation in an electrical signal. Noise is all around us, from radios and televisions to lawn mowers and washing machines. Normally, the sounds that we hear do not affect hearing, but sounds that are too loud may be harmful in the long run. Noise-free signals give better signal to noise ratios, as the absence of noise strengthens the signal to noise ratio. [1]
The technique employed to achieve this is active noise cancellation. The best approach to cancelling noise is to take the noise signal, invert it, and add the input and inverted signals such that they interfere destructively. It is a highly recommended method because it can block noise selectively and improves noise control. It offers potential benefits in size, cost, volume and effective attenuation of low frequency noise. The components of the noise signal, such as frequency, amplitude and phase, are non-stationary and time varying; hence the use of an adaptive filter helps us to deal effectively with these variations. Noise cancellation utilizes the principle of destructive interference: when two sinusoidal waves superimpose, the amplitude, frequency and phase difference of the two waves are the governing factors of the resulting waveform, and if the two waves, the original and its inverse, happen to meet at a point at the same instant, total cancellation occurs. [2]-[4]


Fig. 1 Noise Cancellation through Destructive Interference
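A tiny numerical sketch of the destructive-interference idea in Figure 1, assuming the anti-noise is a perfect inverted copy of the noise (in practice the anti-noise must be estimated adaptively, as discussed below):

import numpy as np

t = np.arange(0, 1, 1e-3)
desired = np.sin(2 * np.pi * 5 * t)              # source (desired) signal
noise = 0.5 * np.sin(2 * np.pi * 50 * t + 0.3)   # interfering tone
corrupted = desired + noise
anti_noise = -noise                              # ideal, perfectly inverted copy
residual = corrupted + anti_noise                # destructive interference
print("max residual error:", np.max(np.abs(residual - desired)))   # ~0 for perfect inversion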


Noise elimination from an input signal by direct filtering could produce disastrous results, marked by an increase in the average power of the output noise. However, when an adaptive process controls the filtration and reduction, it is possible to achieve superior system performance compared to direct filtering of the input signal. [5]
II. METHODS
The methods used for obtaining the results of this work are described below.

A. ADAPTIVE FILTER
An adaptive filter has the property of exhibiting self-modification of its frequency response with respect to time, allowing the filter to adapt its response as the input signal characteristics change, enhancing performance and construction flexibility.
An adaptive filter may be defined by the following four aspects:
1. The signal being processed by the filter.
2. The structure that defines how the output signal of the filter is computed from its input signal.
3. The parameters within this structure that can be iteratively changed to alter the filter's input-output relationship.
4. The adaptive algorithm that describes how the parameters are adjusted from one time instant to the next.

Fig.2. Block Diagram of General Adaptive Filter


The parameters of an adaptive filter are updated at each iterative step and hence the filter becomes data dependent. The adaptive filter is used when the parameters are not fixed or the specifications are unknown. This implies the nonlinear nature of the filter, as it fails to follow the principles of superposition and homogeneity. An adaptive filter is linear if the input-output relation obeys the above principles and the filter parameters are fixed. As the parameters change continuously in order to meet a performance requirement, adaptive filters are time varying in nature. In this sense, we can interpret an adaptive filter as a filter that performs the approximation step on-line. The performance criterion requires the existence of a reference signal that is usually hidden in the approximation step of fixed-filter design.
These filters are recommended because of their ease of stability and simplicity of implementation without any manual adjustment. Adaptive filtering concerns the choice of structures and algorithms for a filter that has its parameters (or coefficients) adapted in order to improve a prescribed performance criterion. [1]-[7]
The adaptive filter adjusts its coefficients to minimise the following cost function:
J(n) = E[e^2(n)]
where E[e^2(n)] is the expectation of e^2(n), and e^2(n) is the square of the error signal at time n.

The two algorithms used in the FIR Adaptive filter to control the adjustment of filter coefficients are:
1. Least Mean Square Algorithm (LMS)
2. Normalized Least Mean Square Algorithm (NLMS)

B. LEAST MEAN SQUARE:


Least Mean Squares (LMS) algorithms are a class of adaptive filter used to mimic a desired filter by finding the filter coefficients that produce the least mean square of the error signal, that is, of the deviation between the original signal and the desired signal. One feature of the LMS algorithm is its simplicity. Further, it requires neither measurement nor knowledge of the correlation function, nor does it require matrix inversion. Here μ is the step size, a scaling factor which controls the incremental change applied to the tap-weight vector of the filter from one iteration to the next. To ensure stability, μ should satisfy the following condition:
0 < μ < 2 / (L · Smax)
where L is the filter length and Smax is the maximum value of the power spectral density of the tap inputs x(n).
The goal of the LMS method is to find the filter coefficients that reduce the mean square error of the error signal. The error signal e(n) is the difference between the desired signal d(n) and the filter output y(n). The filter only conforms to the error at the current time. In this algorithm, the weights are initially assumed to be small (mostly zero), and at each iterative step the filter parameters are updated by finding the gradient of the mean square error. If the MSE gradient is positive, the error would keep increasing if the same weights were used for further iterations, which implies that we have to reduce the weights. Similarly, if the gradient is negative, we have to increase the weights. The mean square error is a quadratic function of the weights, which means it has only one extremum, the one that minimises the mean square error, namely the optimal weight. The LMS thus approaches this optimal weight by descending the curve of mean square error versus filter weight. [3]-[7]
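The paper's implementation is in MATLAB; as an illustrative, language-neutral sketch of the LMS loop described above (shown here in Python/NumPy, with default filter order and step size mirroring the values used later in Analysis I), the weight update w(n+1) = w(n) + μ·e(n)·x(n) can be written as:

import numpy as np

def lms_filter(x, d, order=251, mu=0.005):
    """LMS adaptive filter: x is the reference/input signal, d the desired signal."""
    x = np.asarray(x, dtype=float)
    d = np.asarray(d, dtype=float)
    w = np.zeros(order)                 # tap weights, initialised to zero
    y = np.zeros(len(x))                # filter output
    e = np.zeros(len(x))                # error signal
    for n in range(order, len(x)):
        x_vec = x[n - order:n][::-1]    # most recent 'order' input samples
        y[n] = w @ x_vec
        e[n] = d[n] - y[n]              # error between desired and output
        w = w + mu * e[n] * x_vec       # LMS weight update
    return y, e, w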

C. NORMALIZED LEAST MEAN SQUARE

The main drawback of the LMS algorithm is that it is sensitive to the scaling of its input signal. When the input power is large, the filter suffers from the gradient noise amplification problem. This makes it difficult to choose an appropriate step size for the filter to ensure stability. The Normalized Least Mean Square algorithm is an improvement over the conventional LMS and solves this problem by normalising with the power of the input. [8]
The step size of the NLMS filter is given as
μ(n) = α / (x^T(n) x(n))
where α is the adaptation constant, which is dimensionless and optimizes the rate of convergence by satisfying the condition 0 < α < 2.
The weights of the filter are updated at each step as
w(n+1) = w(n) + μ(n) e(n) x(n)
III. FACTORS DETERMINING THE BEHAVIOUR OF THE ALGORITHMS

A. Stability:

The term stability refers to the effects of finite precision on the algorithm that is used to find the solution of some
problem of interest.
B. Computational Requirement:

The basic requirements during computation are: the number of operations required to complete one iteration of the algorithm; the memory required for storing the data and the algorithm program; and the investment required to program the algorithm on a computer.
C. Rate of Convergence:

This is the number of iterations required for the algorithm, operating on stationary inputs, to converge close enough to the optimal Wiener solution in the mean square error sense. A fast rate of convergence permits the algorithm to adapt rapidly to a stationary environment of unknown statistics.
In the case of NLMS, the rate of convergence is fastest when the input vectors x(n) and x(n-1) are orthogonal to each other, i.e., the angle between them is 90°. The rate of convergence is slowest when x(n) and x(n-1) point in the same or in opposite directions, i.e., the angle between them is 0° or 180°.
D. Robustness:

The ability of the system to cope with errors is defined as the robustness of the system. For a robust adaptive filter, small disturbances result in only small errors. These disturbances can be present as a result of various internal or external factors of the filter. [9]-[12]


IV. RESULTS

A. ANALYSIS I

These algorithms (LMS and NLMS) were formulated in MATLAB and the following results were generated. The first figure shows the desired signal. The next figure represents the input signal, which is composed of the sinusoidal signal and added random noise.

Fig. 3 Desired Signal
Fig. 4 Input Signal


a. Results of LMS Algorithm
Figure 5 represents the filter output. The next figure gives the error generated. The filter order is 251 and the step size is 0.005.

Fig. 5 Adaptive Filter output

Fig. 6 Error signal

b. Results of NLMS Algorithm


Figure 7 represents the adaptive filter output. The next figure shows the error signal of the NLMS algorithm. The filter order is 7 and the adaptation constant is fixed at 0.5.


Fig. 7 Adaptive filter output
Fig. 8 Error signal

Table 1. Comparison between LMS and NLMS on various factors

Serial No.   Factors                     LMS                                 NLMS
1            Stability                   Highly stable                       Moderately stable
2            Computational Complexity    2N+1                                3N+1
3            Rate of Convergence         Low and has slow implementation     High and has faster implementation
4            Robustness                  Less robust                         More robust

B. ANALYSIS II

In this section, we have observed the fluctuation in the error signal with variation of the amplitude of the noise signal, keeping the reference signal and the desired signal fixed. The input signal changes because it is the summation of the reference signal and the varied noise signal.

Fig. 9 Desired Signal


This is the desired signal, of amplitude 1.5. We have fixed the order of the filter at 251.
1. When the scaling factor of the random noise is 0.3, the noise signal, input signal and error signal are generated as follows.

Fig. 10 Noise signal

Fig. 11 Input signal

Fig.12-Error Signal
2. When the scaling factor of random noise is 0.7, the noise signal, input signal and error signal are produced as below.


Fig. 13 Noise Signal Fig. 14 Input Signal

Fig. 15 Error Signal


3. When the scaling factor of random noise is 1 and 1.1, the error signals are given below in Figures 16 and 17 respectively.

Fig. 16 Error signal for scaling factor 1

Fig. 17 Error signal for scaling factor 1.1

V. ACKNOWLEDGEMENT
It gives us a great sense of pleasure to present this research paper on the basis of the B.Tech project undertaken during the B.Tech final year. We owe a special debt of gratitude to our supervisor Mr Navneet Mishra (Professor, EIE) and the Department of Electronics and Instrumentation Engineering, Galgotias College of Engineering & Technology, for their constant support and guidance throughout the course of our work. We are also thankful to the HOD of the Dept. of EIE, Professor Dr Monika Jain, and Project Co-ordinator Mr Gavendra Singh (Asst. Professor, Dept. of EIE). Their sincerity, thoroughness and perseverance have been a constant source of inspiration for us; it is only because of their cognizant efforts that our endeavours have seen the light of day. We also thank Mr Gulshan Kumar, who always helped us.
We are also thankful to Dr. R. Sundaresan, Director of Galgotias College of Engg. and Tech., for providing support and all the facilities for completing this project.
VI. CONCLUSION

In the first analysis, we see that LMS is preferred because of its stability and simplicity, and because no additional adjustment is required, but it has a slow rate of convergence; NLMS, though less stable, offers better robustness, converges faster than conventional LMS and has good performance because of its properties.
In the second analysis, we have observed that with an increase in the scaling factor of the random noise, the fluctuation in the error signal increases and it takes a little longer to optimize the error with respect to the desired signal. As verified above, when the scaling factor is taken as 0.3, 0.7, 1 and 1.1, the adaptive filter output initially deviates from the desired signal and gradually approaches it.

REFERENCES:
[1] Douglas, S.C., "Introduction to Adaptive Filters", Digital Signal Processing Handbook, Ed. Vijay K. Madisetti and Douglas B. Williams, Boca Raton: CRC Press LLC, 1999.
[2] Dr. D.C. Dhubkarya, Aastha Katara, Raj Kumar Thennua, "Simulation of adaptive noise canceller for an ECG signal analysis", ACEEE Int. J. on Signal & Image Processing, Vol. 03, No. 01, Jan 2012.
[3] Kuang-Hung Liu, Liang-Chieh Chen, Timothy Ma, Gowtham Bellala, Kifung Chu, "Active Noise Cancellation", Project Report, EECS 452, Winter 2008.
[4] Riggi Aquino and Jacob Lincoln, "Hardware and Software Study of Active Noise Cancellation", Project Report, California Polytechnic State University, 2012.
[5] Rainer Martin and Stefan Gustafsson, "An improved echo shaping algorithm for acoustic echo control", Aachen University of Technology, Proceedings of the European Signal Processing Conference, pp. 25-28, September 1996.
[6] R. Serizel, M. Moonen, J. Wouters and S.H. Jensen, "Integrated active noise control and noise reduction in hearing aids", IEEE Transactions on Audio, Speech, and Language Processing, Vol. 18, No. 6, August 2010.
[7] Ankush Goel, Anoop Vetteth, Kote Radhakrishna Rao, and Venkatanarayan Sridhar, "Active cancellation of acoustic noise using a self-tuned filter", IEEE Transactions on Circuits and Systems-I, Vol. 51, Issue 11, pp. 2148-2156, 2004.
[8] D.R. Morgan, S.M. Kuo, "Active Noise Control: Tutorial Review", Proceedings of the IEEE, Vol. 87, 2004.
[9] Paulo S.R. Diniz, "Adaptive Filtering: Algorithms and Practical Implementation", Springer, Third Edition, 2008.
[10] Alexander D. Poularikas and Zaved M. Ramadan, "Adaptive Filtering Primer with MATLAB", CRC Press, Taylor & Francis Group, 2006.
[11] Vinay K. Ingle and John G. Proakis, Northeastern University, "Digital Signal Processing using MATLAB V.4", PWS Publication, BookWare Companion Series, 2000.
[12] Mohinder S. Grewal and Angus P. Andrews, "Kalman Filtering: Theory and Practice using MATLAB", Wiley-Interscience, Second Edition, 2001.


DIAGNOSIS AND PROGNOSIS BREAST CANCER USING


CLASSIFICATION RULES
Miss Jahanvi Joshi, Mr. Rinal Doshi, Dr. Jigar Patel
jahanvijoshib@gmail.com, rinalhdoshi@gmail.com, drjigarvpatel@gmail.com

Abstract — Breast cancer is a highly heterogeneous disease. Breast cancer diagnosis and prognosis are two medical challenges for researchers in the field of clinical research. Breast self-exams and mammography can help in the early diagnosis of breast cancer, at situations or stages where treatment is still possible. Treatment may consist of radiation, lumpectomy, mastectomy and hormone therapy. The starting point of this research for diagnosing breast cancer is a lump in the breast, a change in size or shape of the breast, or a change in a nipple. Men can have breast cancer too, but the number of cases is small. The purpose of this research is to develop a novel prototype for the clinical problem of diagnosing and managing patients with breast cancer. The primary breast cancer dataset is taken from the UCI dataset repository for the experimental work. These experiments address the problem formulation of the clinical research using different classification techniques.

Keywords — Breast Cancer, Clinical Problem, Classification Rules, Data Mining, Health Care, Web Mining, Weka.
INTRODUCTION

Breast cancer has become a dangerous disease in today's era. The most common type of breast cancer is ductal carcinoma, which begins in the lining of the milk ducts. These are thin tubes that carry milk from the lobules of the breast to the nipple. Another type of breast cancer is lobular carcinoma, which begins in the lobules of the breast. Invasive breast cancer is breast cancer that has spread from where it began in the breast ducts or lobules to surrounding normal tissue. Breast cancer occurs in both men and women, although male breast cancer is rare.
According to a survey of the United States in 2014, there are 232,670 females and 2,360 males with new cases of breast cancer. Among them, 40,000 females and 430 males died during the period of this survey [1]. This survey is the origin and motivation of our research work.
Early warning signs of breast cancer may involve the detection of a new lump or a change in the breast skin. These are the signs and symptoms for the early detection of breast cancer. By performing monthly breast self-exams, a patient will be able to more easily identify any changes in her breast. If a patient finds abnormal changes in her breast, she should contact healthcare experts. In some situations women are encouraged to undergo further examination, such as for breast sensitivity issues, breast examination by doctors and measurement.
Data mining provides powerful tools and techniques for handling this task. Breast cancer research has been one of the important data mining research topics in medical science during recent years. The classification of breast cancer data can be useful to predict the outcome of some diseases or discover the genetic behaviour of tumours. There are many techniques to predict and classify breast cancer patterns. This paper empirically compares the performance of different classification rules that are suitable for direct interpretability of their results.


LITERATURE REVIEW
The author of [2] tried to improve website design, optimize website structure and build an intelligent website. To achieve these objectives, the author used machine learning approaches: a user identification algorithm, a session identification algorithm and the Apriori algorithm were applied to the preprocessed dataset, all processed in the WEKA open source data mining software tool, with the reported outcome based on the Apriori algorithm. The author leaves as a challenge the application of other data mining algorithms, whose comparative analysis may give a more optimal outcome [2]. The authors of [3] compared the performance of the supervised algorithms Naïve Bayes, support vector machine, radial basis function neural network, decision tree, J48 and simple CART. The work was conducted to discover the better-quality classifier for disease detection, processed in WEKA with the WBC, WDBC, Pima diabetes and breast tissue datasets. The results showed that the SVM with RBF kernel responded better than the others; the authors conclude that accuracy as well as complexity should be evaluated for feature expansion [3]. The authors of [4] performed a comparative study of classification methods on different datasets: PIMA Indian Diabetes, StatLog Heart Disease, BUPA Liver Disorders and Wisconsin Breast Cancer. The work targeted higher accuracy with a lower error rate, using 10-fold cross validation. The SVM method indicated a promising accuracy of 96.74% for the PIMA Indian Diabetes dataset and 99.25% for the StatLog Heart Disease dataset, while the C4.5 decision tree achieved an accuracy of 79.71% on the BUPA Liver Disorders dataset. Their findings with Bayes Net, SVM, kNN and RBF-NN on the Wisconsin Breast Cancer dataset suggested combining multiple techniques with different parameters [4]. The authors of [5] explored the comparative performance of classification and clustering algorithms on a heart disease dataset, targeting a higher level of prediction accuracy by comparing both families of techniques. The evaluation covered the classifiers of Bayes (Naïve Bayes, Naïve Bayes Updateable), functions (SMO), lazy (IB1, IBK), meta (MultiBoostAB, Multiclass Classifier), rules (Decision Table) and trees (NB Tree), and the clustering algorithms EM, Cobweb, Farthest First, Make Density Based Cluster and Simple K-means. The analysis led to the conclusion that the NB Tree has higher prediction accuracy than the clustering algorithms [5]. The authors of [6] focused on statistical and data mining tools and techniques for disease diagnosis. To achieve more accurate outcomes they applied hybridization of the selected methods, since hybrid data mining techniques produce more effective outcomes in the diagnosis of heart disease. Different hybrid techniques, such as a fuzzy artificial immune recognition system and K-nearest neighbour applied together, produced an accuracy of 87%, while a neural network gave a better accuracy of 89.01%; one case of a neural network combined with a genetic algorithm is also discussed, which produced a better result in determining heart disease [6]. The authors of [7] investigated the effectiveness of preprocessing and cost-sensitive algorithms on the Z-Alizadeh Sani dataset to achieve more accurate results. Cost-sensitive algorithms were used along with the base classifiers Naïve Bayes, sequential minimal optimization (SMO), K-nearest neighbours (KNN), support vector machine (SVM) and C4.5; the SMO algorithm achieved very high sensitivity (97.22%) and accuracy (92.09%) rates. As a final point, they state that the proposed cost-sensitive algorithm can be used on other diseases such as cancer [7]. The authors of [8] compared the performance of classification techniques such as sequential minimal optimization (SMO), IBK and BF Tree. The work was conducted to find the most accurate classifier for breast cancer detection, processed in WEKA with a UCI machine learning dataset; among all the techniques, SMO achieved the better outcome in terms of accuracy, low error rate and performance [8]. The authors of [9] explored the performance of classification techniques using the Wisconsin Prognostic Breast Cancer (WPBC) data from the UCI machine learning repository, targeting a high level of prediction accuracy by comparing different classification techniques.

The evaluation covered the performance of classifiers including Binary Logistic Regression (BLR), the C4.5 decision tree algorithm, Partial Least Squares for Classification (C-PLS), Classification Tree (C-RT), Cost-Sensitive Classification Tree (CS-CRT), the cost-sensitive decision tree algorithm (CS-MC4), SVM for classification (C-SVC), Iterative Dichotomiser (ID3), K-Nearest Neighbour (K-NN), Linear Discriminant Analysis (LDA), Logistic Regression, Multilayer Perceptron (MP), Multinomial Logistic Regression (MLR), Naïve Bayes Continuous (NBC), Partial Least Squares Discriminant/Linear Discriminant Analysis (PLS-DA/LDA), Prototype-Nearest Neighbour (P-NN), Radial Basis Function (RBF), Random Tree (Rnd Tree) and Support Vector Machine (SVM); the analysis led to the conclusion that Random Tree and Quinlan's C4.5 achieve 100% accuracy in classifying the Wisconsin Prognostic Breast Cancer dataset [9]. The authors of [10] focused on the performance of the supervised learning algorithms decision tree, random tree, ID3, CART, C4.5 and Naïve Bayes, processed in TANAGRA with the Wisconsin breast cancer dataset used for modelling breast cancer data; random tree gave the optimum result with the highest accuracy rate [10]. The authors of [11] investigated three data mining techniques, namely Naïve Bayes, the back-propagated neural network and the C4.5 decision tree algorithm, using the WEKA open source tool with the SEER dataset; the outcome showed that C4.5 performed better than the other two techniques [11]. The authors of [12] analyzed breast cancer using data mining classification. They used machine learning techniques such as decision tree (C4.5), artificial neural network and support vector machine for predicting breast cancer, processed in the WEKA toolkit with data from the Iranian Center for Breast Cancer; the work targeted higher accuracy, specificity and sensitivity, and the SVM technique indicated promising levels of 95.7%, 97.1% and 94.5% respectively [12]. The authors of [13] surveyed prediction with data mining techniques for heart disease, diabetes and breast cancer. For heart disease, data collected from a hospital information system was used with the machine learning algorithms Naïve Bayes, K-NN and Decision List; among these, the classification accuracy of the Naïve Bayes algorithm was better than that of the other algorithms. For breast cancer, based on a survey of the United States, prediction with data mining techniques such as C4.5, ANN and fuzzy decision tree showed that ANN gives better accuracy and good performance. For diabetes, based on the American Diabetes Association, prediction using a homogeneity-based genetic algorithm gave better accuracy. As future work they propose predicting different types of diseases using data mining techniques [13]. The authors of [14] focused on using three different machine learning techniques for predicting breast cancer recurrence. They used the Iranian Center for Breast Cancer (ICBC) dataset and implemented decision tree (C4.5), support vector machine (SVM) and artificial neural network (ANN), compared the performance of the techniques and measured sensitivity, specificity and accuracy; in conclusion, SVM provided better performance with the highest accuracy rate [14]. The authors of [15] predicted breast cancer and heart disease with public-use data consisting of 909 records for heart disease and 699 for breast cancer, using the two algorithms C4.5 and C5.0; in conclusion, C4.5 gave the better result over the techniques considered [15]. The authors of [16] compared the performance of the classification algorithms decision tree, Naïve Bayes, MLP, logistic regression, SVM and KNN, processed in WEKA; the outcome showed that SVM responded better than the others, and the authors conclude that accuracy as well as complexity should be calculated in future expansion [16]. The authors of [17] focused on the prediction of breast cancer using data mining techniques, processed in the WEKA data mining toolkit with the SEER dataset. They investigated three data mining techniques, namely Naïve Bayes, the back-propagated neural network and the C4.5 decision tree algorithm; the outcome showed that the C4.5 algorithm gives much better performance than the other two techniques [17]. The authors of [18] focused on data mining applications in medical research for predicting and discovering patterns based on detected symptoms of health conditions, taking mammography, dermatology, orthopaedic and thyroid data, performing data preprocessing, executing classification on clinical test data and loading test data for verification of the classifier for malady classification. They report that the decision tree generated by Quinlan's algorithm is smaller than the decision tree produced by the random tree classification technique [18]. The authors of [19] diagnosed breast cancer using clustering data mining techniques [19]. The authors of [20] developed a pattern knowledge discovery framework using data mining techniques; this framework is generic and relates different services for users [20].

PROPOSED WORK
The main area of the research work is web mining. Web mining has three categories: content, structure and usage. We use web usage mining for finding hidden patterns from the breast cancer dataset. The suggested model of the proposed work is shown below.

Figure 1 (below) shows the proposed prototype as a flow: web mining (structure, usage, content) → web dataset → breast cancer dataset → WEKA open source data mining tool → 37 classifier rules (Bayes, function, lazy, meta, rule and tree based, e.g. BayesNet, Naïve Bayes, NaiveBayesUpdateable, Logistic, MultilayerPerceptron, SGD, SimpleLogistic, SMO, Voted, IBK, KStar, LWL, AdaBoostM1, AttributeSelectedClassifier, Bagging, ClassificationViaRegression, CVParameterSelection, FilteredClassifier, LogitBoost, MultiClassClassifier, MultiScheme, RandomCommittee, RandomSubSpace, Stacking, Vote, InputMappedClassifier, DecisionTable, JRip, OneR, PART, ZeroR, DecisionStump, J48, LMT, RandomForest, RandomTree, REPTree) → prototype evaluation (healthy, sick) → predictive classifier → pattern discovery from the breast cancer dataset.

Figure 1: Prototype for Breast Cancer Pattern Discovery (PBCPD)


In Fig. 1 we present the novel prototype for breast cancer detection. The main work starts with web mining, which has three dimensions: content, usage and structure. In this prototype we select usage mining. In web usage mining, data is retrieved from a web dataset. We select the breast cancer dataset in the WEKA open source environment. In WEKA we use 37 classification rules for the diagnosis and prognosis of breast cancer among patients. Through experimental analysis we can obtain predictable patterns.

IMPLEMENTATION WORK:
To classify the breast cancer dataset with high accuracy and efficiency, different classifier rules are used for finding healthy patients. In this research the WEKA open source mining tool is used for modeling the breast cancer data. This tool offers several classification rules from exploratory data analysis, statistical learning and machine learning.
For the implementation work, the data is taken from UCI in order to address the research objective. To perform the experimental work this research takes WEKA as an open source data mining tool and then applies different classification algorithms for the diagnosis and prognosis of patients. The dataset attribute descriptions are given in Table 1:
Table 1: BREAST CANCER DATASET ATTRIBUTES

Attribute Name   Description
Age              Patient's age in years
Menopause        The period in a woman's life when menstruation ceases
Tumor-size       Patient's tumor size on her breast
Inv-nodes        Node size in the main portion of the breast
Node-caps        Whether a node is present in the cap of the breast
Deg-malig        Stage of breast cancer
Breast           Left breast, right breast or both
Breast-quad      Portion of the breast, e.g. left-up, left-low, right-up, right-low, central
Irradiate        Present or not (YES/NO)
Class            no-recurrence-events, recurrence-events (reduce the risk of breast cancer)

Table 2: BREAST CANCER DATASET CLASS

Class name   Description
Diagnosis    sick, healthy or (unpredictable) no class

For this research we use 37 different classification algorithms for the purpose of diagnosing healthy and sick patients. The results of these classification rules are tabulated in Table 3.
Table 3: RESULT ANALYSIS OF BREAST CANCER PATIENTS

Classification Technique        Healthy   Sick
BayesNet                        76        24
NaiveBayes                      75        25
NaiveBayesUpdateable            75        25
Logistic                        76        24
MultilayerPerceptron            76        24
SGD                             76        24
SimpleLogistic                  76        24
SMO                             76        24
Voted                           74        26
IBK                             98        2
KStar                           98        2
LWL                             79        21
AdaBoostM1                      76        24
AttributeSelectedClassifier     76        24
Bagging                         81        19
ClassificationViaRegression     76        24
CVParameterSelection            70        30
FilteredClassifier              76        24
LogitBoost                      78        22
MultiClassClassifier            76        24
MultiScheme                     70        30
RandomCommittee                 98        2
RandomSubSpace                  77        23
Stacking                        70        30
Vote                            70        30
InputMappedClassifier           70        30
DecisionTable                   71        29
JRip                            77        23
OneR                            73        27
PART                            80        20
ZeroR                           70        30
DecisionStump                   72        28
J48                             76        24
LMT                             76        24
RandomForest                    98        2
RandomTree                      98        2
REPTree                         74        26

Figure 2: CLASSIFIER PERFORMANCE ANALYSIS



From this result analysis, the BayesNet, Logistic, MultilayerPerceptron, SGD, SimpleLogistic, SMO, AdaBoostM1, AttributeSelectedClassifier, ClassificationViaRegression, FilteredClassifier, MultiClassClassifier, J48 and LMT classifiers give the most consistent and accurate result. According to these classifier rules, this research diagnoses 76% of patients as healthy and 24% as sick.
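The experiments above were run through the WEKA workbench. For readers who prefer a scripted workflow, an analogous comparison can be written in a few lines of Python with scikit-learn; this is only an illustration of the idea, and the file name, attribute encoding and classifier set are assumptions rather than the exact WEKA configuration used here.

import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Hypothetical CSV export of the UCI breast-cancer data (Table 1 attributes, assumed numeric).
data = pd.read_csv("breast_cancer.csv")
X, y = data.drop(columns=["Class"]), data["Class"]

classifiers = {
    "NaiveBayes": GaussianNB(),
    "Logistic": LogisticRegression(max_iter=1000),
    "DecisionTree": DecisionTreeClassifier(),
    "RandomForest": RandomForestClassifier(n_estimators=100),
}

for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=10).mean()   # 10-fold cross-validated accuracy
    print(f"{name:>12s}: {acc:.3f}")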

CONCLUSION
In this paper various classification rules are compared to identify the best classifier. We develop a new prototype for diagnosis and predictable pattern discovery of breast cancer. Experimental results show the effectiveness of the proposed method. The basis for this is knowledge discovery and data mining. The classifier is identified to determine the nature of the disease, which is highly important for identifying healthy breast cancer patients. This work is useful to uncover patterns hidden in the data that can help clinicians and doctors in decision making.

FUTURE WORK
In this research we used different classifier rules for diagnosing healthy patients, with the WEKA open source data mining tool for the study and the experimental work. This research can be extended by using different clustering, statistical modelling and machine learning algorithms. In future we will use the Tanagra and Orange data mining tools. The prototype can also be made generic for different domains such as e-commerce, electricity and many other areas.

REFERENCES:
[1] Breast cancer, from http://www.cancer.gov/cancertopics/types/breast, accessed on [02-09-2014].
[2] Xiu-yu Zhong, "The research and application of web log mining based on the platform weka", ScienceDirect, Elsevier, 2011.
[3] S. Aruna, Dr S.P. Rajagopalan and L.V. Nandakishore, "An empirical comparison of supervised learning algorithms in disease detection", IJITCS, August 2011.
[4] Shelly Gupta, Dharminder Kumar and Anand Sharma, "Performance analysis of various data mining classification techniques on healthcare data", International Journal of Computer Science & Information Technology (IJCSIT), August 2011.
[5] P. Santhi, V. Murali Bhaskaran, "Improving the performance of data mining algorithms in health care data", IJCST, September 2011.
[6] Shouman, M., Turner, T., Stocker, R., "Using data mining techniques in heart disease diagnosis and treatment", IEEE, 2012.
[7] Hosseini, M.J., Sani, Z.A., Ghandeharioun, A., "Diagnosis of coronary artery disease using cost-sensitive algorithms", IEEE, 2012.
[8] Vikas Chaurasia, Saurabh Pal, "A novel approach for breast cancer detection using data mining techniques", IJIRCCE, Vol. 2, Issue 1, January 2014.
[9] Shomona Gracia Jacob, R. Geetha Ramani, "Efficient classifier for classification of prognostic breast cancer data through data mining techniques", Proceedings of the World Congress on Engineering and Computer Science, Vol. I, October 2012.
[10] S. Syed Shajahaan, S. Shanthi, V. Mano Chitra, "Application of data mining techniques to model breast cancer data", IJETAE, Volume 3, November 2013.
[11] Abdelghani Bellaachia, Erhan Guven, "Predicting breast cancer survivability using data mining techniques", http://www.siam.org/meetings/sdm06/workproceed/Scientific%20Datasets/bellaachia.pdf?q=data-mining-techniques
[12] Samar Al-Qarzaie, Sara Al-Odhaibi, Bedoor Al-Saeed and Dr. Mohammedz Al-Hagery, "Using the data mining techniques for breast cancer early prediction", http://www.psu.edu.sa/megdam/sdma/Downloads/Posters/Poster%2011.pdf
[13] S. Vijiyarani, S. Sudha, "Disease prediction in data mining technique – a survey", International Journal of Computer Applications & Information Technology, January 2013.
[14] Abbas Toloie Eshlaghy, Ali Poorebrahimi, Mandana Ebrahimi, Amir R. Razavi and Leila Ghasem Ahmad, "Using three machine learning techniques for predicting breast cancer recurrence", OMICS, 2013.
[15] Mohammad Taha Khan, Dr. Shamimul Qamar and Laurent F. Massin, "A prototype of cancer/heart disease prediction model using data mining", International Journal of Applied Engineering Research, 2012.
[16] G. Ravi Kumar, Dr. G. A. Ramachandra, K. Nagamani, "An efficient prediction of breast cancer data using data mining techniques", International Journal of Innovations in Engineering and Technology (IJIET), August 2013.
[17] Abdelghani Bellaachia, Erhan Guven, "Predicting breast cancer survivability using data mining techniques".
[18] Shomona Gracia Jacob, R. Geetha Ramani, "Mining of classification patterns in clinical data through data mining algorithms", accessed from IEEE.
[19] Jahanvi Joshi, Rinal Doshi and Jigar Patel, "Diagnosis of breast cancer using clustering data mining approach", International Journal of Computer Applications, 101(10):13-17, September 2014.
[20] Rinal H. Doshi, Dr. Harshad B. Bhadka and Richa Mehta, "Development of pattern knowledge discovery framework using clustering data mining algorithm", International Journal of Computer Engineering & Technology (IJCET), Volume 4, Issue 3, pp. 101-112, 2013.


Forecasting e-waste amounts in India


Sirajuddin Ahmed 1, Rashmi Makkar Panwar 2, Anubhav Sharma 3

1 Department of Civil Engineering, Jamia Millia Islamia, Delhi, India
2 Research Scholar, Department of Civil Engineering, Jamia Millia Islamia, Delhi, India
3 G.B. Pant Polytechnic, DTTE, New Delhi, India

E-mail: rashmimakkarpanwar@gmail.com, Contact No.: 09811512801

Abstract: The increase in sales of electronic goods and their rapid obsolescence has resulted in the generation of electronic waste, popularly known as e-waste. Changing trends and the exponential growth of the electronics industry, the increase of electrical and electronic products, rising consumption rates and higher obsolescence rates lead to higher generation of e-waste. This paper presents a study of the amount of e-waste generated by different sectors and devices during the last few years and the trend it follows, and it predicts how this trend will continue. The amount of e-waste generated is increasing at a high annual rate, and if not treated properly it will have an adverse impact not only on the environment but also on human lives. The purpose of this study is to establish a set of baseline data for the management of e-waste by reviewing the e-waste problem in terms of quantity and the hazardous constituents present in it.

Keywords: e-waste, forecasting, sales, components, lifespan, desktops, laptops, mobile phones, televisions

1. Introduction
In this paper, the amount of e-waste created in India every year is analyzed based on the data provided by various agencies, and on that basis the e-waste from various appliances is estimated and predicted for the years to follow. Although no definite official data exist on how much waste is generated in India or how much is disposed of, there are estimates based on independent studies conducted by NGOs or government agencies. For effective e-waste management it is necessary to quantify and characterize the electronic waste stream, identify the major generators and assess the risk involved. It is also pertinent for the government to keep an inventory [1]. Reliable figures on quantity are crucial in order to evaluate compliance with regulations set by authorities. Reliable figures are also important for monitoring and further improvement of return schemes [2]. The difficulty of inventorization is one of the important barriers to safe e-waste management [3].
The main sources of electronic waste in India are the government, public and private (industrial) sectors, which account for almost 70 per cent of total waste generation. The contribution of individual households is relatively small at about 15 per cent, the rest being contributed by manufacturers. Though individual households are not large contributors to waste generated by computers, they consume large quantities of consumer durables and are, therefore, potential creators of waste [4]. India generated around 4 lakh tonnes of electronic waste in 2010, up from 1.47 lakh tonnes in 2005 [5].
India's e-waste market has been divided into various segments including IT and telecom, large household appliances and consumer electronics. Some of the key products generating most of the e-waste in the country include PCs, mobile phones, laptops, televisions, refrigerators, washing machines, etc. The following three categories of WEEE account for almost 90% of the generation
[6]:
[6] :
Large household appliances: 42%,
Information and communications technology equipment: 33.9% and
Consumer electronics: 13.7%.
Because electrical and electronic products come in such a wide range of varieties, such as household appliances, telecommunication
and information technology equipment, toys, lighting equipment, and medical equipment, it would be far too complicated to address
the problems arising from all electrical and electronic products here. Thus, this paper chiefly is concerned with four appliances/devices
which can be simultaneously categorized as IT, telecom and consumer electronic devices i.e., PCs(Desktops and laptops), Mobile
phones and Televisions (CRT, Plasma, LCD & LED).


1.1 Composition of E-waste


The composition of e-waste is diverse and falls under hazardous and non-hazardous categories. Broadly, it consists of ferrous and
non-ferrous metals, plastics, glass, wood and plywood, printed circuit boards, concrete, ceramics, rubber and other items. Iron and
steel constitute about 50% of the waste, followed by plastics (21%), non-ferrous metals (13%) and other constituents. Non-ferrous
metals consist of metals like copper, aluminium and precious metals like silver, gold, platinum, palladium and so on [7]. The presence
of elements like lead, mercury, arsenic, cadmium, selenium, hexavalent chromium, and flame retardants beyond threshold quantities
make e-waste hazardous in nature. It contains over 1000 different substances, many of which are toxic, and creates serious pollution
upon disposal [8].

1.2 Method used for Forecasting


The tool used for the prediction of future sales values of electronic equipment is Microsoft Excel. Using regression analysis [9], a trendline is extended in a chart beyond the actual data to predict future values. This method calculates, or predicts, a future value [10] of e-waste by using existing values.
The total lifespan of electronic products is equal to the amount of time they are in use. First, we searched for new and updated information on product lifespans and, as a result, developed separate commercial-sector lifespan assumptions for these devices, even though comprehensive, nationally representative data on the life spans of electronic products, the patterns of use across residential and commercial institutions, and the quantity of electronic products collected for recycling do not yet exist. We then applied the lifespan data to the sales data to estimate the number and weight of products in use and reaching end-of-life management in each year.
The present work approximates e-waste generation based on the variance in the distribution of product life span, in the absence of reliable information on actual WEEE arisings. After a certain time span (the average lifetime) the end-of-life goods are passed on for collection. It is assumed that in the consumption period no losses occur and no conversion of material takes place. Making such predictions helps informed decisions to plan and develop strategies for collection, storage, treatment, disposal and recycling services, in order to channel computer waste through an environmentally sound waste management system and prevent environmental and public health impacts.
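The trendline step corresponds to fitting a regression to the historical sales figures and evaluating it beyond the observed years, which is what Excel does when a chart trendline is extended. A rough Python equivalent, with purely illustrative sales numbers and an arbitrarily chosen polynomial degree, is sketched below.

import numpy as np

# Historical sales (million units) for some device -- illustrative values only.
years = np.array([2007, 2008, 2009, 2010, 2011, 2012, 2013])
sales = np.array([5.5, 5.3, 5.5, 6.0, 6.7, 6.8, 5.0])

# Fit a trendline (a 2nd-degree polynomial here; Excel offers the same family of
# linear/polynomial/exponential fits) and evaluate it beyond the observed data.
coeffs = np.polyfit(years, sales, deg=2)
future = np.arange(2014, 2018)
print(dict(zip(future, np.round(np.polyval(coeffs, future), 2))))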

1.3 Lifespan of electronic equipment

The useful life of consumer electronic products is relatively short, and decreasing as a result of rapid changes in equipment features and capabilities [11]. The lifespan of central processing units in computers dropped from 4-6 years in 1997 to 2 years in 2005 [12]. The average life for which each device will operate without becoming obsolete is normally different for every type of equipment [13], as given in Table 1.
Device          Avg lifespan in years
Desktop         5
Laptop          4
Television      10
Mobile phones   6
Printer         4

Table 1. Devices and their average life span

But in the Indian scenario the reality is different: the actual lifetime of equipment follows a distribution around the average lifetime, because equipment is often reused or refurbished [14].
Equipment         Years till the device operates (distribution of retirements)
Desktop, Laptop   10%, 25%, 25%, 50% (years 1-4)
Mobile phone      25%, 50%, 20%, 20%, 25%, 50%
Television        60%, 40% (years 10-12)

Table 2. Distribution around equipment average lifetime [15] and [16]

2. Sales and E-waste prediction of equipment

2.1. Desktop Computers


The electronics industry is driven mainly by the computer and computer component sectors, with as much as a fifth of its revenues coming from sales of personal computers. The huge scale of demand in the market can be observed from the sales of desktop PCs during the period 2007-2013, as shown in the table below. A shift in governance systems with e-governance initiatives adopted by the Central and State Governments, the telecom, banking and education sectors, Small and Medium Enterprises (SMEs) and IT-enabled services have been major factors leading to the vibrancy of consumption in the information technology market.
2.1.1. Sales (2007-13) and forecast (till 2016-17)
Year   Million units
2007   5.522
2008   5.28
2009   5.522
2010   6.016
2011   6.71
2012   6.778
2013   5.015
2014   4.212
2015   3.631
2016   3.284

Table 3. Sales and forecast for desktops [17]


Graph 1. Desktop sales (million units) versus year.

2.1.2 E-waste generated from Desktop computers


Based on the distribution of lifespan [Table 2], the sales data [Table 3] and taking the average weight of a desktop computer as 9.9 kg [18], the e-waste generated (in metric tonnes) from desktop computers is estimated up to the year 2020.

Year   Metric tonnes
2011   54667.8
2012   52272
2013   54667.8
2014   59558.4
2015   66429
2016   67102.2
2017   49648.5
2018   41698.8
2019   35946.9
2020   32511.6

Table 4. E-waste generated from desktops
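The tonnage figures follow from combining unit sales with the 9.9 kg average unit weight; in the simplest reading of Table 4, desktops sold in year t are retired about four years later. The sketch below reproduces the tabulated values under that single-lag simplification of the lifespan distribution in Table 2, which is an assumption of this illustration rather than the paper's exact calculation.

desktop_sales_mu = {2007: 5.522, 2008: 5.28, 2009: 5.522, 2010: 6.016,   # million units (Table 3)
                    2011: 6.71, 2012: 6.778, 2013: 5.015, 2014: 4.212,
                    2015: 3.631, 2016: 3.284}
AVG_WEIGHT_KG = 9.9   # average desktop weight used in the paper [18]

for sale_year, units_million in desktop_sales_mu.items():
    retire_year = sale_year + 4                      # simplified: retired ~4 years after sale
    tonnes = units_million * 1e6 * AVG_WEIGHT_KG / 1000
    print(retire_year, round(tonnes, 1), "t")        # 2011 -> 54667.8 t, matching Table 4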


Graph 2. E-waste (MT) from desktops versus year.

2.1.3. Hazardous substances in components of desktop computers waste


Description   Weight of material   Gross material by 2020 (MT)
Plastic       2.635 kg             8653
Lead          0.72 kg              2364
Mercury       0.252 gm             0.827
Arsenic       0.149 gm             0.489
Cadmium       1.077 gm             3.536
Chromium      0.72 gm              2.364
Barium        3.61 gm              11.855
Beryllium     1.79 gm              5.878
Copper        0.683 kg             2243

Table 5. [19]

2.2 Laptops
2.2.1. Laptop sales (2007-2016) and forecast (till 2018)
Year   Million units
2007   1.822
2008   1.516
2009   2.508
2010   3.284
2011   4.022
2012   4.421
2013   6.849
2014   7.936
2015   10.038
2016   12.698
2017   15.899
2018   20.09

Table 6. Laptop sales [17]

Graph 3. Laptop sales (million units) versus year.

The laptop market grew by 55 percent in 2013-14, and overall sales in the PC market were up by six percent owing to the negative growth of desktops [see Tables 3 & 6] [17].

2.2.2 E-waste generated from Laptops


Based on the average lifespan distribution [Table 2], the sales data [Table 6] and taking the average weight of a laptop PC as 3.5 kg [20], the e-waste generated (in metric tonnes) is estimated up to the year 2020.
Year   Metric tonnes
2012   7875
2013   9971
2014   12640
2015   15248
2016   20673
2017   25290
2018   32360
2019   40367
2020   50769

Table 7. E-waste from laptops

Graph 4. E-waste (MT) from laptops versus year.

By the end of the year 2020-21, there will be around 15 million computers (desktops + laptops) in India, and it will take nearly three decades at the current rate of penetration before there is one computer per capita across the nation [21]. It is clear from this analysis that by 2020, India's e-waste from old computers (desktop + laptop) will increase by 36% from the 2012 levels, with discarded laptops occupying the major chunk.

2.2.3 Hazardous substances in components of laptop waste


Description                       Weight of material   Gross material by 2020 (MT)
Glass                             0.382 kg             5541
PCB                               0.450 kg             6527
Battery/Transformer/Capacitors    0.273 kg             3960
Plastic parts                     0.760 kg             11024

Table 8. [22]

2.3 Mobile Phones


The mobile phone phenomenon is unique in the histories of both the telecommunication and consumer electronics markets. In less
than a decade, people have adopted mobile phones on a massive scale. This is about three times the size of the television or PC
markets. Growth has been fuelled by the spectacular evolution of mobile phone technologies, both in terms of performance and
miniaturization. As a result, unlike many other appliances, users change their mobile phones on average every two years.
Consequently, replacement handsets today represent about 80% of all mobile phone purchase [23].
This rapid growth has been possible due to various proactive and positive decisions of the Government and contribution of both by the
public and the private sector. The rapid strides in the telecom sector have been facilitated by liberal policies of the Government that
provide easy market access for telecom equipment and a fair regulatory framework for offering telecom services to the Indian
consumers at affordable prices.

2.3.1 SALES (2010-2013) AND FORECAST (TILL 2020)


The consistent growth in the smartphone market is driven by enhanced consumer preference for smart devices and
narrowing price differences. The smartphone penetration in India in quarter 1 of 2014 hovered at 10 per cent [24] and it
is expected to grow due to a variety of factors including greater availability of low-cost devices and additional sales
emphasis by top-flight vendors on less populous parts of the country. This rapid pace of growth in smartphones is
expected to continue in India and is estimated below in table 9.

Year   Million units
2008   94.6
2009   100.9
2010   166.5
2011   213
2012   231
2013   251
2014   278
2015   303
2016   326
2017   348
2018   368
2019   386
2020   403

Table 9. Sales forecast for mobile phones in India [24], [25]


Graph 5. Mobile phone sales (million units) versus year.

2.3.2 E-waste generated from Mobile phones


Component of a mobile phone   Constituents
Circuit boards                Copper, gold, lead, nickel, zinc, beryllium, tantalum and other metals
LCD                           Mercury, plastic and glass
Rechargeable battery          Ni-MH and Ni-Cd batteries contain nickel, cobalt, zinc, cadmium and copper; Li-ion batteries use lithium metallic oxide and carbon-based materials

Table 10. Constituents of different components of a mobile phone [26]

Based on the sales data [Table 9], the average lifespan [Table 2] and taking the average weight as 130 gm [27], the e-waste from mobile phones in India is estimated till the year 2020.


Year   Metric tonnes
2014   22919
2015   23101
2016   28827.5
2017   33475
2018   36172.5
2019   38870
2020   41925

Table 11. E-waste from mobile phones in India

2.3.3 Hazardous substances in components of mobile phone waste


Description                                              Weight of material   Gross weight by 2020 (MT)
Acrylonitrile butadiene styrene/polycarbonate (ABS-PC)   37.7 gm              12158
Cu and compounds                                         19.50 gm             6289
Epoxy plastics                                           11.70 gm             3773
Flame retardant                                          1.25 gm              403
Nickel                                                   1.35 gm              435
Zinc                                                     1.3 gm               419
Pb, Cd, Hg                                               1.3 gm               420

Table 12. Hazardous waste components in a mobile phone [28]
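Each "gross weight by 2020" figure in Table 12 is the per-handset mass of the substance multiplied by the cumulative number of handsets expected to be obsolete by 2020. The handset count used below (about 322.5 million) is back-calculated from the table itself and is therefore an assumption of this sketch, not a figure stated in the paper.

per_unit_g = {           # per-handset mass of each substance (grams), from Table 12
    "ABS-PC": 37.7, "Cu and compounds": 19.50, "Epoxy plastics": 11.70,
    "Flame retardant": 1.25, "Nickel": 1.35, "Zinc": 1.3, "Pb/Cd/Hg": 1.3,
}
UNITS_BY_2020 = 322.5e6   # cumulative obsolete handsets implied by the table (assumption)

for substance, grams in per_unit_g.items():
    tonnes = grams * UNITS_BY_2020 / 1e6          # grams per unit * units -> tonnes
    print(f"{substance:>18s}: {tonnes:8.0f} t")   # e.g. ABS-PC ~12158 t, Cu ~6289 t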

Graph 6. E-waste (MT) from mobile phones versus year.

2.4 Television
In the television segment, the advent of Liquid Crystal Display (LCD) and plasma screens have altered the concept of the television
for viewers. Better technology has meant improved picture quality and a diminishing price difference between the traditional CRT
(Cathode Ray Tube) television and the new flat screen LCD television. It has resulted in the popularity of the latter. Moreover,
increasing disposable income and the price decline influenced by robust demand has been factoring the growth in this segment.

2.4.1 Sales (2000-13) and forecast (2014-2017)


TVs are the largest contributors to the consumer durables segment. The introduction of HDTVs and liquid crystal displays (LCDs) is set to drive demand growth from affluent consumers. The price decline due to the relatively low import duty on LCD panels, higher penetration levels, and the introduction of small entry-size models are key growth drivers [29].
Year   Million units
2000   –
2001   5.4
2002   6.7
2003   8.15
2004   10
2005   10.781
2006   12
2007   13.28
2008   14.53
2009   15.77
2010   17.14
2011   18.45
2012   19.8
2013   21.1
2014   22.4
2015   23.8
2016   25.15
2017   26.6

Table 13. Year-wise TV sales in India [29]


Graph 7. TV sales (million units) versus year.

2.4.2 E-waste generated from TVs in India


Televisions are more likely to be disposed of in case of technological failures, as people prefer to exchange for a new set rather than repair non-functional equipment. The other driving forces for television replacement are matching the latest trends and technology, upgraded features and peer pressure [30]. In the case of televisions, a lot of relocation happens to nearby villages, towns and cities, resulting in repeated cycles of reuse [30]. As a result, there is a huge gap between the potential e-waste generated and the e-waste actually recycled.
Based on the sales data (see Table 13), the average lifespan distribution (see Table 2) and assuming the average weight of a television as 15 kg [15], the total e-waste generated from televisions in India is estimated till the year 2020-21.

Year   Metric tonnes
2012   90300
2013   105750
2014   130200
2015   145800
2016   168000
2017   184200
2018   202770
2019   221550
2020   241500

Table 14. E-waste generated in India from TVs


Graph 8. E-waste (MT) from TVs versus year.

2.4.3 Hazardous substances in components of television waste [31] [32]


Description   Weight of material   Gross material by 2020 (MT)
Lead          1.095 kg             17629.5
Copper        0.375 kg             6037.5
Zinc          4.5 gm               72.45
Cadmium       0.15 gm              2.415
Plastics      4.56 kg              73416

Table 15. Hazardous materials in television

3. Results
3.1 Forecast for total E-waste generated annually
The forecast (2014-2020) of the total e-waste and the gross annual e-waste generated from the four electronic devices can be summarized as below:

Device          E-waste generated annually (metric tonnes)
                2014     2015     2016      2017      2018      2019     2020
Desktop PCs     59558    66429    67102     49648.5   41699     35947    32511
Laptops         12640    15248    20673     25290     32360     40367    50769
Mobile phones   22919    23101    28827.5   33475     36172.5   38870    41925
Televisions     130200   145800   168000    184200    202770    221550   241500
Total           225317   250578   284602.5  292613.5  313001.5  336734   366705

Table 16. Forecast for total e-waste generated annually from 2014 to 2020


Graph 9. Total e-waste (metric tonnes) versus year, 2014-2020.
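The "Total" row of Table 16 is simply the column-wise sum of the four device streams for each year; a short check of that arithmetic:

streams = {
    "Desktop PCs":   [59558, 66429, 67102, 49648.5, 41699, 35947, 32511],
    "Laptops":       [12640, 15248, 20673, 25290, 32360, 40367, 50769],
    "Mobile phones": [22919, 23101, 28827.5, 33475, 36172.5, 38870, 41925],
    "Televisions":   [130200, 145800, 168000, 184200, 202770, 221550, 241500],
}
totals = [sum(vals) for vals in zip(*streams.values())]
print(dict(zip(range(2014, 2021), totals)))   # 2014: 225317 ... 2020: 366705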

As per the above research, the total e-waste generated [Tables 4, 7, 11, 14] from the four electronic devices under study in 2020 is estimated as below:

Equipment       Metric tonnes
Desktop PCs     32511.6
Laptops         40367
Mobile phones   41925
Televisions     241500

Table 16. Total E-waste generated in 2020

Graph 10. Total e-waste estimated for the year 2020, by electronic device.

3.2 Forecast for the total amount of hazardous material from E-waste in 2020


The total amount of hazardous material to be accumulated in 2020 [Tables 5, 8, 12, 15] due to obsolete desktops, laptops, mobile phones and televisions is estimated as below:

Material   Metric tonnes
Plastics   109024
Copper     14569.5
Lead       20133.5
Mercury    140.5
Cadmium    146
Zinc       491

Table 17. Total hazardous material (estimated) to be accumulated in 2020

Graph 11. Total hazardous material (metric tonnes) estimated for 2020, by substance.

4. Discussions
As is quite conspicuous from the above study, India is at a crossroads: there is tremendous growth in the electronics industry, but the country also faces exponential growth of electronic waste. The reasons discussed above for the rapid generation and obsolescence of e-waste include rapid economic growth, urbanization, industrialization and increased consumerism.

The materials listed above [Table 17] are toxic substances that have adverse impacts on human health and the environment if not handled properly. Mercury leaches when certain electronic devices, such as circuit breakers, are destroyed. When cadmium-containing plastics are landfilled, cadmium may leach into the soil and groundwater. Significant amounts of lead ions are also dissolved from broken lead-containing glass, such as the cone glass of cathode ray tubes, when it mixes with acidic waters, a common occurrence in landfills. The most dangerous form of burning e-waste is the open-air burning of plastics in order to recover copper and other metals [33].
High obsolescence of electronic products and the necessity of supporting upgrades compound the problem. Often, these hazards arise from the improper recycling and disposal processes that are in practice in India. Such practices can have serious consequences for those staying in proximity to places where e-waste is recycled or burnt. The real cause of the problem is that Indian people are yet to realize the associations between the generation of e-waste and its effects, including detrimental health and environmental effects. The government must also effectively implement the E-waste (Management and Handling) Rules, 2011 to address and counter the ever-growing pile of e-waste in India.

REFERENCES:
[1] Research Unit(LARRDIS), E-Waste in India, New Delhi: Rajya Sabha Secretariat, 2011
[2] www.norden.org , Methods to measure the amount of WEEE generated, Ttemanord, 2009
[3] www.itu.int/08e-waste , Discussion paper, International Telecommunication Union, GSR 2011
[4]Satish Sinha, 'Downside of the Digital Revolution', Toxics Link, 28 December2007
[5]. CAG report, E Waste in India, Rajya Sabha, New Delhi, 2011
[6] Santhanam Needhidasan, Melvin Samuel and Ramalingam Chidambaram, Journal of Environmental Health Science &
Engineering 2014, 12:36
[8]. Basel Action Network (BAN) and Silicon Valley Toxics Coalition (SVTC), Exporting Harm: The High-Tech Thrashing of Asia,
February 25, 2002
[9]. Dr. C. Lightner, Fayetteville State University, Forecasting Techniques, Chapter 16, Oct 2009
[10]. Cuddington J.T. and Khindanova I., Integrating Financial Statement Modeling and Sales Forecasting Using EViews, Journal
of Applied Business and Economics, 2011.
[11] Hai-Yong Kang and Julie M. Schoenung, Electronic waste recycling: A review of U.S. infrastructure and technology options,
2005
[12]. Babu R, Parande AK, Basha AC, Electrical and electronic waste: a global environmental problem, 2007
[13]. International Journal of Engineering Research and General Science Volume 2, Issue 4, June-July, 2014 ISSN 2091-2730
[14]. Dimitrakakis E et al, Creation of Optimum Knowledge Bank on E-Waste Management in India, EuropeAid Asia Pro Eco
programme, 2005
[15]. IRGSSA, Management, handling and practices of E-waste recycling in Delhi, India, 2004
[16].Neelu Jain and Pamela Chawla, Int. J. of Emerging Technology and Advanced Engineering, Vol.2, Issue 3, March 2012
[17].MAIT, IT industry performance annual reports 2007-2013, India.
[18]. Eugster, Hischier et al. Key Environmental Impacts of the Chinese EEE-Industry: A Life Cycle Assessment Study 2007.
[19] Silicon Valley Toxics Coalition, Toxic TVs and Poison PCs, June 1999
[20] SWICO Recycling Guarantee 2006 / ecoinvent vol 2.0
[21] Neelu Jain and Pamela Chawla, Int. J. of Environmental Technology and Management, Vol.17, No.2/3/4, pp.237 251, 2014
[22]. AEA Technology, WEEE & Hazardous waste Part 2 June 2006
[23]. Alcatel Telecommunications Review, Understanding the Mobile Phone Market Drivers, 2003-04
[24]. IDC Worldwide Quarterly Mobile Phone Tracker report, March, 2013
[25]. Indian Mobile Market Dynamics and Forecast (2008-2013), August 2009
[26] USEPA, The Life Cycle of a Cell Phone, 2003 www.ewasteguide.info
[27] MPPI (Mobile Phone Partnership Initiative), Conference of the Parties to the Basel Convention, 2002
[28]. Basel Action Network, Mobile Toxic Waste: Recent Findings on the Toxicity of End-of-Life Cell Phones, April 2004

[29]. Consumer Durables Report, August 2013, India


[30]. BIRD and gtz, e waste assessment in India: special focus on Delhi, Nov 2007.
[31] RIS International Ltd., Baseline Study of End-of-Life Electrical and Electronic Equipment in Canada, June 2003
[32] Matsuto T et al., Material and heavy metal balance in a recycling facility for home electrical appliances, Waste Management 24
(2004), 434
[33]. Anwesha Borthakur and Pardeep Singh, Electronic waste in India: Problems and policies, IJES, Vol.3, No.1, 2012


An Assessment on ASHA Workers' Awareness and Implementing a Low Cost Integrated Toolkit for Accredited Social Health Activist (ASHA) Using Android Device (Aakash Tab)

Dipanwita Debnath, Suman Deb, Kaushik Debnath, Subir Saha
Dept. of Computer Science & Engineering, National Institute of Technology, Agartala, Jirania, Tripura (W), India
ddebnath.nita@gmail.com

ABSTRACT: The objective of the Government of India is to provide comprehensive integrated health care to rural people under the umbrella of the National Rural Health Mission (NRHM). A village-level female community health worker, the Accredited Social Health Activist (ASHA), acts as an interface between the community and the public health system. The present study was therefore conducted to assess the socio-demographic profile of ASHA workers and to assess their knowledge, awareness and practice of their responsibilities. Mobile technologies have penetrated rural parts of the country unlike any other technology; this can be leveraged to provide primary maternity healthcare services. A low cost toolkit containing an AAKASH tab (Android smart phone) is designed in such a way that it helps to take decisions and supports decision making.
Keywords - Primary Maternity Healthcare, ASHA, NRHM, Awareness, Responsibility, Practice, PHC.
I. INTRODUCTION
With the objective of providing effective, efficient and affordable healthcare to the rural population in India, the National Rural Health Mission, India aims to appoint a female health activist known as ASHA (Accredited Social Health Activist) [1] in every village. Selected from the village itself and accountable to it, the ASHA workers are trained, based on the principles of the old learning process, to be an interface between the community and the public health system. Each ASHA receives reference material in the form of books and previous files during the learning programme. ASHAs, after taking up and completing this learning programme, will be equipped with the necessary knowledge and a kit containing necessary medicines and previous files containing records of the village. After they understand the previous files they have to maintain similar files and records. As an ASHA may have studied only up to the eighth standard or less, many times they do not maintain the data in an appropriate way, and many times these data are lost due to improper maintenance of files. Keeping this in mind, and for decision support, the AAKASH app is built around the concept of iconic data entry, i.e., data entry with a few clicks and minimum typing. An ASHA can easily be trained with the app, which removes the complete file work, helps the ASHA to take decisions and updates her with alerts such as the list of people due for vaccination. When real facts are asserted, the edge intelligence framework can return a decision/judgment to the application; in short, the edge intelligence framework embeds artificial intelligence in the application. ASHA workers often need to refer to the ASHA manual for assessing symptoms of women. The process of assessing patient symptoms based on hundreds of pages of hard-copy guidelines is quite cumbersome and error prone. Also, the ASHA and ANM workers have to make people aware about various health issues, such as what a pregnant woman should eat or what to give to a baby. It is often noticed that they forget to tell or do not point out some important issues. This can be overcome with multi-language resources containing videos which people easily understand.
II. CURRENT SYSTEM
After going through the training modules prepared by the National Rural Health Mission, ASHA workers are required to work in villages to provide primary healthcare services.
An ASHA's work consists mainly of five activities, as listed below:
The ASHA worker has to make at least five home visits to each pregnant woman for health promotion and preventive care.
She has to take pregnant women to the rural health center for immunization or other services.
In case of a medical emergency, the ASHA worker escorts pregnant women to a PHC. A PHC is manned by a doctor and 14 other paramedical staff; it manages patients coming from 6 rural sub-centers and has 4-6 beds for patients.
The ASHA worker holds village-level meetings with the Village Health and Sanitation Committee to increase health awareness and to plan health work services.
She maintains records to make her work more organized and easier.

ASHA workers need to refer to the training modules again and again and also need to remember various patient details to effectively discharge their duties. Currently, ASHA workers record patient details in a notebook and later send it to the PHC. An ASHA is supposed to maintain the following records in log books:
a) Village Health Register: this register contains records of the details of pregnant women and others to whom the ASHA provides primary healthcare services.
b) An ASHA diary: a record of the ASHA's work, which is also useful for tracking the performance of the ASHA worker by the medical supervisor positioned at the PHC.
c) Maintaining drug kit stocks: ASHA workers are provided with a drug kit so as to be able to treat minor ailments. The drug kit contains Paracetamol tablets, Albendazole tablets, Iron Folic Acid (IFA) tablets, Chloroquine tablets, Oral Rehydration Salts (ORS), and eye ointment. In addition, the kit may contain pregnancy testing kits, malaria testing kits, etc.
The following services are provided at the rural sub-center by the ANM or ASHA:
Early registration of pregnant women.
Regular weight check.
Blood test for anemia.
Urine test for protein and sugar.
Blood pressure measurement.
Treatment for anemia.
Two doses of Tetanus Toxoid (TT).
Nutrition counseling.
General danger signs.
Preparing for birth.
In addition, the ANM worker provides further services to pregnant women with the help of ASHA workers.
III. PROPOSED SYSTEM
We have developed a decision support system, named ASHA, to enable health workers to provide maternity healthcare services efficiently and transparently. The application can be used to register all the pregnancies in rural parts of the country and subsequently track pregnant women throughout the pregnancy for vaccination and periodic checkups; test results are updated with a few tick marks (yes/no). Once the details are updated in the remote database, doctors or experts can easily take decisions with the help of the data. Appointments for ultrasonography, etc., may also be scheduled via the mobile application. The ASHA is the first port of call for any health-related demands of deprived sections of the population, especially women and children, who find it difficult to access health services. Keeping this in mind, the application is designed with fewer clicks and minimum memory load.

Fig: Android application ASHA for data entry with few clicks

All villagers' data are maintained in the database, but the special decision support is for pregnant women and their children's health. With the ASHA application every villager's information can be stored, which helps keep track of the details of diseases, medicine stocks, and whether any extra facilities are needed. The ASHA can enter the symptoms of diseases, which supports decision making. The whole system can also be used for identification of a person, person counts, etc. All the data are stored in the local database of the Android device, and after a particular time frame the data are dumped to a remote server, such as a XAMPP server.
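As a rough illustration of this store-and-forward design (the actual application is an Android app; this is only a minimal Python sketch), records can be kept in a local SQLite table with a "synced" flag and periodically pushed to a remote endpoint. The table name, columns and URL below are assumptions made for the example, not part of the described system.

import json
import sqlite3
import urllib.request
from urllib.error import URLError

DB_FILE = "asha_local.db"
SYNC_URL = "http://192.168.1.10/asha/upload.php"   # hypothetical endpoint on the remote (e.g., XAMPP) server

def init_db():
    con = sqlite3.connect(DB_FILE)
    con.execute("""CREATE TABLE IF NOT EXISTS visit (
                       id INTEGER PRIMARY KEY AUTOINCREMENT,
                       villager TEXT, symptom TEXT, visit_date TEXT,
                       synced INTEGER DEFAULT 0)""")
    con.commit()
    return con

def record_visit(con, villager, symptom, visit_date):
    # Iconic data entry boils down to inserting one small row per tap/selection.
    con.execute("INSERT INTO visit (villager, symptom, visit_date) VALUES (?, ?, ?)",
                (villager, symptom, visit_date))
    con.commit()

def sync(con):
    # After a particular time frame, push all unsynced rows to the remote server and mark them synced.
    rows = con.execute("SELECT id, villager, symptom, visit_date FROM visit WHERE synced = 0").fetchall()
    if not rows:
        return 0
    payload = json.dumps([dict(zip(("id", "villager", "symptom", "visit_date"), r)) for r in rows]).encode()
    req = urllib.request.Request(SYNC_URL, data=payload, headers={"Content-Type": "application/json"})
    try:
        urllib.request.urlopen(req, timeout=5)
    except URLError:
        return 0                      # server unreachable: keep the rows for the next sync attempt
    con.executemany("UPDATE visit SET synced = 1 WHERE id = ?", [(r[0],) for r in rows])
    con.commit()
    return len(rows)

if __name__ == "__main__":
    con = init_db()
    record_visit(con, "R. Devi", "fever", "2014-10-05")
    print("records pushed:", sync(con))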
Multilingual videos containing health awareness material, stored on the Android device, can be used for spreading awareness about health concerns and promoting change in health-related practices. A free SMS facility to a few registered numbers is also attached to the ASHA application for the safety of the Accredited Social Health Activist (ASHA).

Fig: Free SMS app for the safety of the ASHA worker


IV. FUTURE WORK
Future work includes multilingual videos that ASHA workers can use for awareness, covering various dos and don'ts during pregnancy, important food nutrients, natural remedies, vaccinations, etc., and embedding those videos in the AAKASHA application. It also includes different types of low cost devices, such as an ECG module, a weight measuring device and a low cost solar charger, and integrating them with the Android device to make a complete toolkit.
V. CONCLUSION
Mobile communication networks have penetrated rural parts of developing countries, especially India, unlike any other technology. Android devices like the Aakash Tab come at a low cost, offer a rich set of resources, and support a local SQLite database. This phenomenal growth can be exploited to provide effective healthcare services in rural parts of the developing world where healthcare facilities are scarce. In India, a rural healthcare activist (the ASHA worker) is appointed in each village, with the necessary manuals on providing healthcare services, after training by the National Rural Health Mission (NRHM). In this paper, we have presented the AAKASHA application, built on an edge intelligence platform based on the CLIPS rules engine ported to the Android platform, which will enable these semi-literate ASHA workers to discharge their duties in an efficient and transparent manner. As the complete workflow of ASHA workers is automated, the data is stored in a backend application and can be used for further analysis, online advice by doctors, policy planning, forecasting disease spread, and measures to contain the spread.

REFERENCES:
[1] Mother and child protection card: http://hetv.org/pdf/protection-card/mcpc-english-a.pdf
[2] Health Infrastructure in India: http://childhealthfoundation.net/Health%20infrastructure%20in%20India.pdf
[3] A System to Provide Primary Healthcare Services to Rural India More Efficiently and Transparently, International Conference on Wireless Technologies for Humanitarian Relief (ACWR 2011)
[4] Richardson, Leonard; Ruby, Sam (May 2007), RESTful Web Services, O'Reilly, ISBN 978-0-596-52926-0
[5] Accredited Social Health Activist (ASHA), State Institute of Health & Family Welfare, Jaipur (SIHFW: an ISO 9001:2008 certified institution)
[6] CLIPS, a tool for building an expert system: http://clipsrules.sourceforge.net

[7] ASHA Worker: http://mohfw.nic.in/NRHM/asha.htm
[8] Aakash Tablet: http://www.akashtablet.com/
[9] Mahyavanshi DK, Patel MG, Kartha G, Purani SK, Nagar SS. A cross sectional study of the knowledge, attitude and practice of ASHA workers regarding child health (under five years of age) in Surendranagar district. Healthline 2011; 2(2): 50


Data Encryption Using DNA Sequences Based On Complementary Rules: A Review

Ms. Amruta D. Umalkar
Master of Engineering, Department of Information Technology
Sipna College of Engineering and Technology, Amravati, India
amruta.umalkar2014@gmail.com

Prof. Pritish A. Tijare
Department of Information Technology
Sipna College of Engineering and Technology, Amravati, India
pritishtijare@rediffmail.com

Abstract - With the quick development of Internet technology and information processing technology, information is commonly transmitted via the Internet. Important information in transmission is easily intercepted by an unknown person or hacker. In order to enhance information security, encryption has become an important research direction. A message encryption algorithm based on DNA (Deoxyribo Nucleic Acid) sequences is presented in this paper. The main purpose of this algorithm is to encrypt the message on the basis of the complementary rules of a DNA sequence.

Keywords - Data hiding; DNA sequences; complementary rules; secure transmission and reception.
INTRODUCTION

The security of a system is essential nowadays. With the growth of information technology, and with the emergence of new technologies, the number of threats a user has to deal with has grown exponentially.
With the increasing growth of transmission applications, security has become a crucial issue in communication. DNA cryptography is emerging as a new field in which DNA is used to carry the information. The fascinating feature of the structure of DNA is the complementary rule, and these rules are used for proposing message encryption methods.
Message encryption is the process of transmitting a message stealthily. In message encryption, the original message is transformed into an equivalent alternative by a definite encoding mechanism. This message is then sent to the receiver. An encoding scheme that incorporates the important chemical characteristics of biological DNA (Deoxyribonucleic Acid) sequences, i.e., the structure of purines and pyrimidines, can serve as an effective stealth transmission: the message would be so secure that it could not be easily cracked. In the proposed algorithm, a DNA sequence is first taken at random and complementary rules are framed, so that the secret message to be sent is encoded at the sender's side. At the receiver's side, the decryption process is carried out and the original message is extracted.
A DNA sequence is a sequence composed of four distinct letters: A, C, G and T. Each nucleotide contains a phosphate attached to a sugar molecule (deoxyribose) and one of four bases, adenine (A), cytosine (C), guanine (G) or thymine (T). It is the arrangement of the bases in a sequence, for instance ATTGCCAT, that determines the encoded gene. The natural sequence pattern, with complementary coding and the chemical classification of the nucleotides, can be used to shield the message.
Table I. DNA Based Coding


DNA Base | Code
A        | 00
C        | 01
G        | 10
T        | 11


Figure 1. Structure of Purines and Pyrimidines (purines: adenine, guanine; pyrimidines: cytosine, thymine, uracil)
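To make the coding of Table I and the idea of a complementary rule concrete, the following minimal Python sketch (not taken from any of the surveyed papers) converts an ASCII message into a nucleotide string using the mapping A=00, C=01, G=10, T=11 and then applies one complementary rule. The particular rule (AT)(TC)(CG)(GA), one of the six rules listed later in the survey, is used here only for illustration.

# Table I coding: A = 00, C = 01, G = 10, T = 11
BASE_OF = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_OF = {base: bits for bits, base in BASE_OF.items()}

# One complementary rule, written as in the survey: (AT)(TC)(CG)(GA) means A->T, T->C, C->G, G->A.
RULE = {"A": "T", "T": "C", "C": "G", "G": "A"}
INVERSE_RULE = {v: k for k, v in RULE.items()}

def text_to_dna(message):
    bits = "".join(format(ord(ch), "08b") for ch in message)            # ASCII -> 8-bit binary
    return "".join(BASE_OF[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_text(sequence):
    bits = "".join(BITS_OF[base] for base in sequence)
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

def complement(sequence, rule):
    return "".join(rule[base] for base in sequence)

if __name__ == "__main__":
    dna = text_to_dna("Hi")                     # 'Hi' -> 0100100001101001 -> CAGACGGC
    masked = complement(dna, RULE)              # sender applies the complementary rule
    recovered = complement(masked, INVERSE_RULE)
    print(dna, masked, dna_to_text(recovered))  # CAGACGGC GTATGAAG Hi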

LITERATURE SURVEY
Message encryption using DNA sequences is a fairly new technique, still evolving, that is being tried out for secure transmission and reception of hidden messages. The method is deemed to be so secure that it would be very difficult for any intruder to break the encrypted message and retrieve the actual message; only the intended receiver can decrypt and receive the original message.
The following are some of the prospective DNA-based message encryption and data hiding schemes reported recently.
K. Menaka [1] proposed a data hiding method in which the algorithm first randomly selects a DNA sequence. Each letter of the message to be encoded is converted into its ASCII equivalent and then into binary form. Each two digits of the converted binary sequence are converted to a DNA base as per Table I. Then, the message index position (the first position of each letter) in the faked DNA sequence is applied to each letter of the converted sequence. Each digit in the resultant sequence is replaced with its equivalent three-digit binary value, and the binary value is in turn replaced by its equivalent alphabet: for example, if the obtained binary value is 010 011 101, it is replaced by "C D F", where A has the value 000, B has 001, and so on. The resultant sequence of alphabets is transmitted to the receiver. At the receiver side the reverse process is done; the original receiver knows the complementary rules and the randomly selected DNA sequence with which the message was encoded.
Debnath Bhattacharyya [2] developed an algorithm for data encryption using DNA sequencing. In this algorithm, the DNA sequence is indexed and the message is transmitted to the receiver using those indices; no complementary rules are used.
Jin-Shiuh Taur et al. [3] proposed a method referred to as the Table Lookup Substitution Method (TLSM) that can double the capacity of message hiding. In the TLSM, the complementary rule is replaced with a rule table. The key idea of the TLSM is to extend the 1-bit complementary rule into a 2-bit rule table so that every conversion of a letter can represent 2 bits of the secret message.
In the method by Cheng Guo and Shiu [4], the hiding procedure substitutes another letter for an existing letter at a special location set by the algorithm. The embedding algorithm includes a conversion function f that converts a given letter into a selected letter defined by the complementary rule. For example, if a complementary rule is defined as (AC)(CG)(GT)(TA), then f(G) will be T and f(T) will be A. In addition, the substitution method converts a letter s into s (unchanged), f(s), or f(f(s)) when the secret message bit is 0, 1, or there is no data, respectively.
Mohammad Reza Abbasy et al. [5] proposed an information hiding method where data was efficiently encoded and decoded following the properties of DNA sequences; complementary pair rules of DNA were employed in their method.


Kritika Gupta and Shailendra Singh [6] proposed DNA-based cryptographic techniques for an encryption algorithm based on an OTP (one-time pad) that involves data encryption using traditional mathematical operations and/or DNA-based data manipulation techniques. However, once an encryption algorithm has been applied and the data is transmitted over the transmission medium, there is a chance that the data, although in cipher form, gets manipulated by an interceptor.
Snehal Javheri and Rahul Kulkarni [7] proposed an algorithm with two consecutive phases: primary ciphertext generation using a substitution method, followed by final ciphertext generation using DNA digital coding.
In the primary ciphertext generation phase, the encoding algorithm uses an OTP (one-time pad) key generation scheme, since nearly one key per piece of data is sufficient to provide considerable strength in the encoding technique. The proposed method uses a randomly generated symmetric key of 8 bits, generated by the intended receiver and provided to the sender. The sender therefore has only partial knowledge of the private key and generates the remaining part of the key to encipher the data.
Byte values are extracted from the input data or message. The further encoding process works on the unsigned byte values of the input data or text, referred to as plain text. These byte values are replaced by a combination of alphabets and special symbols using the substitution method, and the substituted values are then converted into binary. In order to add more security, additional bits are padded at both ends of the primary ciphertext; these additional bits are nothing but the file size information, which is provided to the receiver through the key. Thus the secret key and the information about the primer pairs are shared between sender and receiver through a secret key channel.
In the DNA digital coding phase, the final ciphertext is generated from the primary ciphertext using a DNA digital encryption technique. From a computational point of view, DNA molecules cannot be processed in the form of alphabets, so DNA sequence encoding is used in this method, through which the binary data is converted into DNA format and vice versa.
Guangzhao Cui, Limin Qin, Yanfeng Wang and Xuncai Zhang [8] proposed an encryption scheme using the technologies of DNA synthesis, PCR amplification and DNA digital coding, together with the theory of traditional cryptography. The intended PCR primer pairs are used as the key of this scheme; they are not designed independently by the sender or the receiver, but through the full cooperation of sender and receiver. This operation increases the security of the encryption scheme. Traditional encryption methods and DNA digital coding are used to preprocess the plaintext. Through this preprocessing, completely different ciphertexts can be obtained from identical plaintexts, which can effectively prevent attack from a possible word being used as PCR primers. The complexity of hard biological problems and cryptographic computing difficulties provide double security safeguards for the scheme, and the security analysis shows the encryption scheme has high confidential strength.
Ritu Gupta and Anchal Jain [9] proposed a symmetric-key encryption algorithm based on the DNA approach. The initial key sequence is enlarged to the required length using a proposed key expansion technique guided by a pseudo-random sequence. The advantage is that there is no need to send a long key over the channel. The variable key expansion in the encryption process, combined with DNA addition and complement, makes the technique sufficiently secure. A DNA sequence consists of four nucleic acid bases A (adenine), C (cytosine), G (guanine) and T (thymine), where A and T are complementary, and G and C are complementary. C, T, A and G are also used to denote 00, 01, 10, 11 (the corresponding decimal digits are 0, 1, 2, 3). By using this encoding technique, every 8-bit pixel value of a grayscale image is represented as a nucleotide string of length four; conversely, decoding the nucleotide string yields the binary sequence easily. Of the total 4! = 24 possible encodings, only 8 of them meet the complementary rule; for instance, the decimal digits 0123 (the corresponding binary number is 00011011) can be encoded into one of CTAG, CATG, GATC, GTAC, TCGA, TGCA, ACGT or AGCT. There are in total six legal complementary rules [3], which are as follows: (AT)(TC)(CG)(GA), (AT)(TG)(GC)(CA), (AC)(CT)(TG)(GA), (AC)(CG)(GT)(TA), (AG)(GT)(TC)(CA), (AG)(GC)(CT)(TA). Any one of them, for instance (AG)(GC)(CT)(TA), is applied in the proposed method.
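The claim that only 8 of the 4! = 24 possible base-to-code assignments meet the complementary rule can be checked mechanically. The short Python sketch below assumes the interpretation (an assumption on my part, not stated explicitly in the survey) that the Watson-Crick pairs (A with T, C with G) must receive bitwise-complementary 2-bit codes.

from itertools import permutations

CODES = ["00", "01", "10", "11"]

def bit_complement(code):
    return "".join("1" if c == "0" else "0" for c in code)

valid = []
for perm in permutations("ACGT"):        # e.g. ('C','T','A','G') means C=00, T=01, A=10, G=11
    code = dict(zip(perm, CODES))
    if code["A"] == bit_complement(code["T"]) and code["C"] == bit_complement(code["G"]):
        valid.append("".join(perm))

print(len(valid), sorted(valid))
# Expect 8 encodings, including CTAG, CATG, GATC, GTAC, TCGA, TGCA, ACGT and AGCT.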


The following table shows the work done by the related authors along with the respective years:

Author | Proposed work | Year
Guangzhao Cui, Limin Qin, Yanfeng Wang, Xuncai Zhang | Proposed an encryption scheme using the technologies of DNA synthesis, PCR amplification and DNA digital coding. | 2008
Jin-Shiuh Taur | Proposed the Table Lookup Substitution Method (TLSM), which can double the capacity of message hiding; the 1-bit complementary rule is extended into a 2-bit rule table so that every conversion of a letter represents 2 bits of the secret message. | 2010
Mohammad Reza Abbasy | Proposed an information hiding method where data is efficiently encoded and decoded following the properties of DNA sequences. | 2011
Cheng Guo, Shiu | Proposed a hiding procedure that substitutes another letter for an existing letter at a special location set by the algorithm; the embedding algorithm uses a conversion function defined by the complementary rule. | 2012
Debnath Bhattacharyya | Developed an algorithm for data encryption using DNA sequencing based on indexing; no complementary rules are used. | 2013
Kritika Gupta, Shailendra Singh | Proposed DNA-based cryptographic techniques for an OTP (one-time pad) encryption algorithm involving traditional mathematical operations and/or DNA-based data manipulation. | 2013
K. Menaka | Proposed a data hiding method where DNA-based complementary rules are used for hiding the data. | 2014
Snehal Javheri, Rahul Kulkarni | Proposed a two-phase algorithm: primary ciphertext generation using a substitution method, followed by final ciphertext generation using DNA digital coding. | 2014
Ritu Gupta, Anchal Jain | Proposed a symmetric-key encryption algorithm based on the DNA approach. | 2014


ANALYSIS OF PROBLEM
The message encryption algorithm has many steps that must be broken to obtain the original message. The random selection of the DNA sequence can be extended to many sequences. The complementary rules, which are formed based on the properties of DNA, could also be increased: since a DNA sequence has many biological properties, more complementary rules can be formed from those properties and used for message encryption.

CONCLUSION
The entire proposed algorithm has many steps that must be broken to obtain the original message, so any intruder who receives the intermediate message will never be able to retrieve the original message as intended by the sender. The random selection of the DNA sequence can be extended to many sequences, and the complementary rules formed from the properties of DNA can also be increased, since a DNA sequence has many biological properties from which more complementary rules can be formed. Message encryption using DNA sequences is a fairly new technique, still evolving, for the secure transmission and reception of hidden messages. The method is deemed to be so secure that it would be very difficult for any intruder to break the encrypted message and retrieve the actual message; only the intended receiver can decrypt and receive the original message.

REFERENCES
[1] K. Menaka, "Message Encryption Using DNA Sequence", 978-1-4799-2977-4, 2014 IEEE.
[2] Debnath Bhattacharyya, Samir Kumar Bandyopadhyay, "Hiding Secret Data in DNA Sequence", International Journal of Scientific & Engineering Research, Volume 4, Issue 2, February 2013, ISSN 2229-5518.
[3] H. J. Shiu, K. L. Ng, J. F. Fang, R. C. T. Lee and C. H. Huang, "Data hiding methods based upon DNA sequences", Information Sciences, vol. 180, no. 11, pp. 2196-2208, 2010.
[4] Cheng Guo, Chin-Chen Chang and Zhi-Hui Wang, "A New Data Hiding Scheme Based On DNA Sequence", International Journal of Innovative Computing, Information and Control, ICIC International, Volume 8, Number 1(A), January 2012.
[5] Mohammad Reza Abbasy, Azizah Abdul Manaf, and M. A. Shahidan, "Data Hiding Method Based on DNA Basic Characteristics", International Conference on Digital Enterprise and Information Systems, July 20-22, 2011, London, UK, pp. 53-62.
[6] Kritika Gupta, Shailendra Singh, "DNA Based Cryptographic Techniques", International Journal of Advanced Research in Computer Science and Software Engineering, Volume 3, Issue 3, March 2013, ISSN 2277-128X.
[7] Snehal Javheri, Rahul Kulkarni, "Secure Data Communication and Cryptography based on DNA based Message Encoding", International Journal of Computer Applications (0975-8887), Volume 98, No. 16, July 2014.
[8] Guangzhao Cui, Limin Qin, Yanfeng Wang, Xuncai Zhang, "An Encryption Scheme Using DNA Technology", 978-1-4244-2724-6/08, 2008 IEEE.
[9] Ritu Gupta, Anchal Jain, "A New Image Encryption Algorithm based on DNA Approach", International Journal of Computer Applications (0975-8887), Volume 85, No. 18, January 2014.


IMPROVED FUZZY TECHNOLOGY FOR EFFICIENT SEARCHING


Mr. R. P. Sabale
Department of Computer Engineering
G. H. Raisoni College of Engineering and Management, Ahmednagar
Abstract - Instant search is an information-retrieval paradigm in which a system finds answers to a query instantly while a user types in keywords character by character. Fuzzy search further improves the user search experience by finding relevant answers with keywords similar to the query keywords. A main computational challenge in this paradigm is the high-speed requirement: each query needs to be answered within milliseconds to achieve an instant response and a high query throughput. At the same time, we also need good ranking functions that consider the proximity of keywords to compute relevance scores.
In this paper, we study how to integrate proximity information into ranking in instant-fuzzy search while achieving efficient time and space complexities. A naive solution is computing all answers and then ranking them, but it cannot meet this high-speed requirement on large data sets when there are too many answers, so early-termination techniques have been studied to efficiently compute relevant answers. To overcome the space and time limitations of these solutions, we propose an approach that focuses on common phrases in the data and queries, assuming records with these phrases are ranked higher. We study how to index these phrases and develop an incremental-computation algorithm for efficiently segmenting a query into phrases and computing relevant answers.
I. INTRODUCTION
Instant Search: As an emerging information-access paradigm, it returns answers immediately based on the partial query a user has typed in. Many users prefer the experience of seeing the search results instantly and formulating their queries accordingly, instead of being left in the dark until they hit the search button.
Fuzzy Search: Users often make typographical mistakes in their search queries. Meanwhile, small keyboards on mobile devices, lack of caution, or limited knowledge about the data can also cause mistakes. In this case we cannot find relevant answers by finding records with keywords matching the query exactly. This problem can be solved by supporting fuzzy search, in which we find answers with keywords similar to the query keywords. Combining fuzzy search with instant search can provide an even better search experience, especially for mobile-phone users, who often have the "fat fingers" problem, i.e., each keystroke or tap is time consuming and error prone.
Finding Relevant Answers within a Time Limit: It is known that to achieve an instant feel for humans (i.e., users do not perceive a delay), from the time a user types in a character to the time the results are shown on the device, the total time should be within 100 milliseconds [2]. This time includes the network delay, the time on the search server, and the time of running code on the user's device (such as JavaScript in browsers); thus the amount of time the server can spend is even less. At the same time, compared to traditional search systems, instant search results in more queries on the server, since each keystroke can invoke a query; it therefore requires a higher speed of the search process to meet the requirement of a high query throughput. What makes the computation even more challenging is that the server also needs to retrieve high-quality answers to a query within the limited amount of time to meet the information need of the user.
Problem Statement: In this paper, we study the following problem: how do we integrate proximity information into ranking in instant-fuzzy search to compute relevant answers efficiently? The proximity of matching keywords in answers is an important metric to determine the relevance of the answers. Search queries typically contain correlated keywords, and answers that have these keywords together are more likely what the user is looking for [3].
Our Contributions: We study various solutions to this important problem and show the insights on the tradeoffs of space, time, and answer quality. One approach is to first find all the answers, compute the score of each answer based on a ranking function, sort them using the score, and return the top results. However, enumerating all these answers can be computationally expensive when there are too many of them. This case is more likely to happen compared to a traditional search system, since query keywords in instant search are treated as prefixes and can have many completions. In addition, fuzzy search makes the situation even more challenging

since there can be many keywords with a prefix similar to a query prefix. As a consequence, the number of answers in instant fuzzy
search is much larger than that in traditional search.
An efficient way to address the problem is to use early-termination techniques that allow the engine to find top answers without generating all the answers of the query [4]. The main idea is to traverse the inverted index of the data following a certain order, and stop the traversal once we are sure that the most relevant results are among those records we have visited. The traversal order of the inverted index is critical to being able to terminate the traversal sooner. However, using a proximity-aware ranking in early termination is challenging, because the document order in the inverted index is typically based on individual keywords, while proximity information is between different keywords and does not depend on the order of an inverted list.
There are studies on building an additional index for each term pair that appears close to each other in the data, or for phrases [5], [6],
[7]. However, building an index for the term pairs will consume a significant amount of space. For instance, the approach in [5]
reported an index of 1.3 TB for a collection of 25 million documents, and reduced the size to 343.5 GB by pruning the lists
horizontally. In addition, these studies focus on two-keyword queries only, and do not consider queries with more keywords.
Studies show that users often include entities such as people's names, companies, and locations in their queries [8]. These entities can contain multiple keywords, and the user wants these keywords to appear in the answers as they are, i.e., the keywords are adjacent and in the same order in the answers as in the query. Users sometimes enter keywords enclosed by quotation marks to express that they want those keywords to be treated as phrases [3]. Based on this observation, we propose a technique that focuses on the important case where we rank highly those answers containing the query keywords as they are, in addition to adapting existing solutions to instant-fuzzy search. To overcome the known limitations of existing solutions, we propose an approach that indexes common phrases in addition to indexing single terms. This method can not only avoid the space overhead of indexing all the term pairs or phrases, but also improve ranking significantly by efficiently finding relevant answers that contain these common phrases. To find relevant answers, we identify the indexed phrases in the query, then access their inverted lists before accessing single-keyword lists. If the query has different ways to be segmented into phrases, we consider all these segmentations and rank them. Each segmentation corresponds to a unique index-access strategy to execute the query. We execute the ranked segmentations one by one until we compute the most relevant answers or enough time is spent. We focus on a main challenge in this approach, which is how to do incremental computation to answer a query so that we do not need to compute the results from scratch for each keystroke.
A. Related Work
Auto-Completion: An auto-completion system suggests several possible queries the user may type in next. There have been many studies on predicting queries (e.g., [9], [10]). Many systems do prediction by treating a query with multiple keywords as a single prefix string. Therefore, if a related suggestion has the query keywords but not consecutively, then this suggestion cannot be found.
Instant Search: Many recent studies have been focused on instant search, also known as type-ahead search. The studies in [11], [12], [13] proposed indexing and query techniques to support instant search. The studies in [14], [15] presented trie-based techniques to tackle this problem. Li et al. [16] studied instant search on relational data modeled as a graph.
Fuzzy Search: The studies on fuzzy search can be classified into two categories, gram-based approaches and trie-based approaches. In the former approach, sub-strings of the data are used for fuzzy string matching [17], [18], [19], [20]. The second class of approaches indexes the keywords in a trie and relies on a traversal of the trie to find similar keywords [14], [15]. This approach is especially suitable for instant and fuzzy search [14], since each query is a prefix and a trie can support incremental computation efficiently.
Early Termination: Early-termination techniques have been studied extensively to support top-k queries efficiently [21], [22], [23],
[5], [6], [7]. Li et al. [4] adopted existing top-k algorithms to do instant-fuzzy search. Most of these studies reorganize an inverted
index to evaluate more relevant documents first. Persin et al. [23] proposed using inverted lists sorted by decreasing document
frequency. Zhang et al. [22] studied the effect of term-independent features in index reorganization.
Proximity Ranking: Recent studies show proximity is highly correlated with document relevancy, and proximity aware ranking
improves the precision of top results significantly [24], [25]. However, there are only a few studies that improve the query efficiency
of proximity-aware search by using early-termination techniques [26], [5], [6], [7]. Zhu et al. [26] exploited document structure to

build a multi-tiered index to terminate the search process without processing all the tiers. The techniques proposed in [5], [6] create an
additional inverted index for all term pairs, resulting in a large space. To reduce the index size, Zhu et al. [7] proposed to build a
compact phrase index for a subset of the phrases. However, both [6] and [7] studied the problem for two-keyword queries only.
II. PRELIMINARIES
Data: Let R = {r1, r2, ..., rn} be a set of records with text attributes, such as the tuples in a relational table or a collection of documents. Let D be the dictionary that includes all the distinct words of R. Table I shows an example data set of medical publication records. Each record has text attributes such as title and authors.

Query: A query q is a string that contains a list of keywords <w1, w2, ..., wl>, separated by space. In an instant-search system, a query is submitted for each keystroke of a user. When a user types in a string character by character, each query is constructed by appending one character at the end of the previous query. The last keyword in the query represents the word currently being typed, and is treated as a prefix, while the first l-1 keywords <w1, w2, ..., wl-1> are complete keywords. (Our techniques can be extended to the case where each keyword in the query is treated as a prefix.) For instance, when a user types in "brain tumor" character by character, the system receives the following queries one by one: q1 = <b>, q2 = <br>, ..., q10 = <brain, tumo>, q11 = <brain, tumor>.

Answers: A record r from the data set R is an answer to the query q if it satisfies the following conditions: (1) for 1 ≤ i ≤ l-1, it has a word similar to wi, and (2) it has a keyword with a prefix similar to wl. The meaning of "similar to" will be explained shortly. For instance, r1, r3, and r4 are answers to q = <heart, surge>, because all of them contain the keyword "heart". In addition, they have the words "surgery", "surgeons", and "surgery", respectively, each of which has a prefix similar to "surge". Record r6 is also an answer, since it has an author named "Hart", similar to the keyword "heart", and also contains "surgery", with the prefix "surge" matching the last keyword in the query.
The similarity between two keywords can be measured using various metrics such as edit distance. The edit distance between two strings is the minimum number of single-character operations (insertion, deletion, and substitution) needed to transform one string into the other. For example, the edit distance between the keywords "Kristina" and "Christina" is 2, because the former can be transformed into the latter by substituting the character "K" with "C" and inserting the character "h" after that. Let ed(wi, p) be the edit distance between a query keyword wi and a prefix p from a record, and let τ be a threshold. We say p is similar to wi if ed(wi, p) ≤ τ. Our techniques can be extended to other variants of the edit distance function, such as a function that allows a swap operation between two characters, a function that uses different costs for different edit operations, and a function that considers a normalized threshold based on the string lengths.
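For concreteness, the edit distance used here can be computed with the standard dynamic program over insertions, deletions, and substitutions. A short sketch with the threshold test ed(wi, p) ≤ τ added (the value τ = 2 below is only an example):

def edit_distance(s, t):
    # Classic dynamic program over single-character insertions, deletions and substitutions.
    prev = list(range(len(t) + 1))
    for i in range(1, len(s) + 1):
        cur = [i] + [0] * len(t)
        for j in range(1, len(t) + 1):
            cur[j] = min(prev[j] + 1,                           # deletion
                         cur[j - 1] + 1,                        # insertion
                         prev[j - 1] + (s[i - 1] != t[j - 1]))  # substitution
        prev = cur
    return prev[-1]

def similar(keyword, prefix, tau=2):
    # p is "similar to" w_i when ed(w_i, p) <= tau.
    return edit_distance(keyword, prefix) <= tau

print(edit_distance("Kristina", "Christina"))   # 2, as in the example
print(similar("heart", "hart", tau=1))          # True: one deletion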

Ranking: Each answer to a query is ranked based on its relevance to the query, which is defined using various pieces of information, such as the frequencies of query keywords in the record and the co-occurrence of some query keywords as a phrase in the record. Domain-specific features can also play an important role in ranking. For example, for a publication, its number of citations is a good indicator of its impact and can be used as a signal in ranking. In this paper, we mainly focus on the effect of phrase matching in ranking. For example, for the query q = <heart, surgery>, record r1 in Table I, containing the phrase "heart surgery", is more relevant than record r4, containing the keywords "heart" and "surgery" separately.

Basic Indexing: Following the techniques described in Ji et al. [14] that combine fuzzy and instant search, we use three indexes to answer queries efficiently: a trie, an inverted index, and a forward index. In particular, we build a trie for the terms in the dictionary D. Each path from the root to a leaf node in the trie corresponds to a unique term in D. Each leaf node stores an inverted list of its term. We also build a forward index, which includes a forward list that contains encoded integers of the terms for each record. We can use this index to verify whether a record contains a keyword matching a prefix condition.
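A minimal sketch of these three structures, simplified in two ways: the forward index stores term strings rather than encoded integers, and the example records are toy data rather than the actual data set. It is meant only to make the trie/inverted-list/forward-index layout concrete.

class TrieNode:
    def __init__(self):
        self.children = {}   # character -> TrieNode
        self.inverted = []   # record ids; non-empty only at nodes where a term ends

class Index:
    def __init__(self, records):
        self.root = TrieNode()
        self.forward = {}    # record id -> set of terms (simplified forward index)
        for rid, text in records.items():
            terms = text.lower().split()
            self.forward[rid] = set(terms)
            for term in terms:
                node = self.root
                for ch in term:
                    node = node.children.setdefault(ch, TrieNode())
                node.inverted.append(rid)

    def records_with_prefix(self, prefix):
        # Walk down to the prefix node, then collect the inverted lists in its subtree.
        node = self.root
        for ch in prefix.lower():
            node = node.children.get(ch)
            if node is None:
                return []
        stack, hits = [node], []
        while stack:
            n = stack.pop()
            hits.extend(n.inverted)
            stack.extend(n.children.values())
        return sorted(set(hits))

if __name__ == "__main__":
    idx = Index({1: "heart surgery unit", 2: "heart surgeons", 3: "cardiac surgery"})
    print(idx.records_with_prefix("surge"))   # [1, 2, 3]
    print(idx.forward[1])                     # {'heart', 'surgery', 'unit'}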

Top-k Query Answering: Given a positive integer k, we compute the k most relevant answers to a query. One way to compute these
results is to first find all the results matching the query conditions, then rank them based on their score. An alternative solution is to

utilize certain properties of the ranking function, and compute the k most relevant results using early termination techniques without
computing all the results.
III. BASIC ALGORITHMS FOR TOP-k QUERIES

A. Computing All Answers


A naive solution is to first compute all the answers matching the keywords as follows. For each query keyword, we find the
documents containing a similar keyword by computing the union of the inverted lists of these similar keywords. For the last query
keyword, we consider the union of the inverted lists for the completions of each prefix similar to it. We intersect these union lists to
find all the candidate answers. Then we compute the score of each answer using a ranking function, sort them based on the score, and
return the top-k answers.
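A sketch of this naive method on toy data mirroring the running example (records r1, r3, r4, r6 answering <heart, surge>); the inverted lists, the threshold τ = 1, and the simplified prefix handling (comparing the last keyword against a term prefix of the same length) are assumptions made for illustration.

# Toy inverted index: term -> set of record ids (in the real system these come from the trie leaves).
INVERTED = {"heart": {1, 3, 4}, "hart": {6}, "surgery": {1, 4, 6}, "surgeons": {3}, "unit": {1}}

def edit_distance(s, t):
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (cs != ct)))
        prev = cur
    return prev[-1]

def union_of_similar(keyword, tau=1, prefix=False):
    # Union of the inverted lists of all terms similar to the keyword.
    # For the last (prefix) keyword, compare against a term prefix of the same length (a simplification).
    hits = set()
    for term, recs in INVERTED.items():
        target = term[:len(keyword)] if prefix else term
        if edit_distance(keyword, target) <= tau:
            hits |= recs
    return hits

def all_answers(query):
    *complete, last = query
    candidate_sets = [union_of_similar(w) for w in complete] + [union_of_similar(last, prefix=True)]
    answers = set.intersection(*candidate_sets)
    # A full system would now score every answer with the ranking function and sort; here we just sort ids.
    return sorted(answers)

print(all_answers(["heart", "surge"]))   # [1, 3, 4, 6]: records r1, r3, r4 and r6, as in Section II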
TABLE I. EXAMPLE DATA OF MEDICAL PUBLICATIONS. THE TEXT IN BOLD REPRESENTS THE INDEXED PHRASES.

Record ID | Title | Authors
r1 | Royal Brompton Hospital challenges decision to close its heart surgery unit. | Clare Dyer
r2 | ... | ...
r3 | ... | ...
r4 | ... | ...
r5 | ... | ...
r6 | Comment on the update on blood conservation for cardiac surgery. | James Hart, ...

A main advantage of this approach is that it supports all kinds of ranking functions. An example ranking function is a linear weighted sum of a content-based relevancy score and a proximity score that considers the similarity of each matching keyword. For example, we can use a variant of the scoring model proposed by Buttcher et al. [27], which can be enhanced by considering similarity based on edit distance. This ranking function uses Okapi BM25F [28] as the content-based relevancy score, and computes the proximity score between each pair of adjacent query term occurrences as inversely proportional to the square of their distance. We can adapt this ranking function by multiplying each term-related computation with a weight based on the similarity between the matching term and its corresponding query keyword.
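A sketch of just the proximity component of such a ranking function; this is a simplification of the idea (each pair of adjacent occurrences of different query terms contributes a score inversely proportional to the square of their distance), not the exact model of Buttcher et al. [27], and the combination with the content score is omitted.

def proximity_score(doc_tokens, query_terms):
    # Positions of query-term occurrences in the document, in order.
    occurrences = [(pos, tok) for pos, tok in enumerate(doc_tokens) if tok in query_terms]
    score = 0.0
    for (p1, t1), (p2, t2) in zip(occurrences, occurrences[1:]):
        if t1 != t2:                       # only pairs of *different* query terms contribute
            score += 1.0 / (p2 - p1) ** 2  # inversely proportional to the squared distance
    return score

doc1 = "royal brompton hospital challenges decision to close its heart surgery unit".split()
doc2 = "comment on heart problems and the update on blood conservation for cardiac surgery".split()
q = {"heart", "surgery"}
print(proximity_score(doc1, q), proximity_score(doc2, q))   # the adjacent pair in doc1 scores much higher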
A main disadvantage of this approach is that its performance can be low if there are many results matching the query keywords, which
may take a lot of time to compute, rank, and sort. Thus it may not meet the high-performance requirement in an instant-search system.
B. Using Early Termination
To solve this problem, Li et al. [4] developed a technique that can find the most relevant answers without generating all the candidate
answers. In this approach, the inverted list of a keyword is ordered based on the relevancy of the keyword to the records on the list.
This order guarantees that more relevant records for a keyword are processed earlier. This technique maintains a heap for each
keyword w to partially compute the union of the inverted lists for ws similar keywords ordered by relevancy. By processing one
record at a time, it aggregates the relevancy score of each keyword with respect to the record using a monotonic ranking function. For
example, we can use a variant of Okapi BM25F as a monotonic ranking function, which is enhanced by considering a similarity based
on edit distance. This technique works for many top-k algorithms. For instance, we can use the well-known top-k query processing

algorithm called the Threshold Algorithm [21] to determine when to terminate the computation. In particular, we can traverse the inverted lists and terminate the traversal once we are guaranteed that the top-k answers are among those records we have visited. The way the lists are sorted and the monotonicity property of the ranking function allow us to do this early termination, which can significantly improve the search performance and allow us to meet the high-speed requirement in instant search. However, this approach does not consider proximity in ranking, due to the monotonicity requirement of the ranking function.
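A simplified sketch of this early-termination idea in the spirit of the Threshold Algorithm, under a few assumptions: each keyword's list is already sorted by a per-keyword score, the monotonic aggregate is a plain sum, and a per-record score can be looked up directly (in the real system such lookups go through the forward index).

import heapq

def top_k(lists, k):
    """lists: keyword -> list of (score, record_id) sorted by descending score.
    Returns the k records with the highest summed score, visiting as few list entries as possible."""
    # Random-access scores for the monotonic aggregate (a simple sum here).
    score_of = {kw: {rid: s for s, rid in entries} for kw, entries in lists.items()}
    cursors = {kw: 0 for kw in lists}
    best = {}                                    # record id -> fully aggregated score
    while True:
        advanced = False
        for kw, entries in lists.items():        # one round of sorted access on every list
            i = cursors[kw]
            if i >= len(entries):
                continue
            advanced = True
            _, rid = entries[i]
            cursors[kw] = i + 1
            if rid not in best:                  # compute the full score once per newly seen record
                best[rid] = sum(score_of[other].get(rid, 0.0) for other in lists)
        # Threshold: the best score any *unseen* record could still achieve, from the list frontiers.
        threshold = sum(entries[cursors[kw]][0] if cursors[kw] < len(entries) else 0.0
                        for kw, entries in lists.items())
        top = heapq.nlargest(k, best.items(), key=lambda x: x[1])
        if not advanced or (len(top) == k and top[-1][1] >= threshold):
            return top

lists = {"heart":   [(0.9, 1), (0.8, 4), (0.7, 3), (0.2, 6)],
         "surgery": [(0.9, 1), (0.6, 6), (0.5, 4)]}
print(top_k(lists, 2))    # records 1 and 4, found without scanning every entry of either list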
C. Using Term-Pair Index
In order to support term-proximity ranking in top-k query processing, [6] introduces an additional term-pair index, which contains all the term pairs within a window size w in a document along with their proximity information. For example, for w = 2, the term pair (t1, t2) is indexed if a document contains "t1 t2", "t1 tx t2", or "t1 tx ty t2". As the window size increases, the number of additional term pairs increases quadratically. The authors also propose techniques to reduce the index size while not affecting retrieval performance much. One of the proposed techniques is not creating a term-pair list for a pair if both terms are very rare. The intuition behind this strategy is that the search engine does not need too much time to process both terms even if there is no term-pair list, since the inverted lists of these terms are relatively short compared to those of other terms.
Given a query q = <t1, t2>, if the index contains the pairs (t1, t2) or (t2, t1), their inverted lists are processed, their relevancy scores are computed based on the linear combination of the content-based score and the proximity score, and a temporary top-k answer list is maintained. Then the top-k answer computation continues with the inverted lists of the single keywords t1 and t2. Since the answers computed in the first step have high proximity scores, the early-termination condition can be quickly satisfied in the second step.
We can adapt the approach in [6] to instant-fuzzy search, specifically to the approach described in Section III-B, as follows. First, we insert the term pairs based on the specified window size w into the index as phrases. Therefore, the trie structure contains the phrase "t1 t2" for the term pair (t1, t2). When computing the top-k results for a query q = <t1, t2>, we first find the phrases similar to "t1 t2" and "t2 t1" and retrieve their inverted lists. Then we continue with the normal top-k computation for the separate keywords t1 and t2. The main limitation of this approach is that it only supports two-keyword queries and does not work if the query has more than two keywords.
IV. PHRASE-BASED INDEXING AND LIFE-CYCLE OF A QUERY
To overcome the limitations of the basic approaches, we develop a technique based on phrase-based indexing.
A. Phrase-Based Indexing
Intuitively, a phrase is a sequence of keywords that has a high probability of appearing in the records and queries. We study how to utilize phrase matching to improve ranking in this top-k computation framework. We assume an answer having a matching phrase in the query has a higher score than an answer without such a matching phrase. To still be able to do early termination, we want to access the records containing phrases first. For instance, for the query q = <heart, surgery>, we want to access the records containing the phrase "heart surgery" before the records containing "heart" and "surgery" separately. Notice that the framework sorts the inverted list of a keyword based on the relevancy of its records to the keyword. If we order the inverted list of the keyword "surgery" based on the relevancy to the phrase "heart surgery", the best processing order for another phrase, say "plastic surgery", may be different.
Based on this analysis, we need to index phrases to be able to retrieve the records containing these phrases efficiently. However, the number of phrases up to a certain length in the data set can be much larger than the number of unique words [29]. Therefore, indexing all the possible phrases can require a large amount of space [5]. To reduce the space overhead we need to identify and index those phrases that are more likely to be searched. We consider for indexing a set of important phrases E that are likely to be searched, where each phrase appears in records of R. The set E can be determined in various ways, such as person names, points of interest, and popular n-grams in R. Examples include "Michael Jackson", "New York City", and "Hewlett Packard". Let W be the set of all distinct words in R. We will refer to the set W ∪ E as the dictionary D, and call each item t ∈ D a term. In Table I, the indexed phrases are shown in bold.
Figure 2 shows the index structures for the sample data in Table I. For instance, the phrase "heart surgery unit" is indexed in the trie in Figure 2(a), in addition to the keywords "heart", "surgery", and "unit". The leaf nodes corresponding to these terms are numbered as

5, 3, 11, and 12, respectively. The leaf node for the term "heart" points to its inverted list, which contains the records r1, r3, and r4. In addition, Figure 2(b) shows the forward index, where the keyword id 3 for the term "heart" is stored for these records.
B. Life Cycle of a Query
To deal with a large data set that cannot be indexed by a single machine, we assume the data is partitioned into multiple shards to ensure scalability. Each server builds the index structures on its own data shard and is responsible for finding the answers to a query in its shard. The Broker on the Web server receives a query for each keystroke of a user. The Broker is responsible for sending the requests to multiple search servers, retrieving and combining the results from them, and returning the answers back to the user.
Figure 3 shows the query flow in a server for one shard. When a search server receives a request, it first identifies all the phrases in the query that are in the dictionary D and intersects their inverted lists. For this purpose, we have a module called the Phrase Validator that identifies the phrases (called valid phrases) in the query q that are similar to a term in the dictionary D. For example, for the query q = <heart, surgery>, "heart" is a valid phrase for the data set in Table I, since the dictionary contains the similar terms "heart" and "hart". In addition, "surgery" and "heart surgery" are also valid phrases. To identify all the valid phrases in a query, the Phrase Validator uses the trie-based algorithm in [14], which can compute all the terms similar to a complete or prefix term efficiently. The Phrase Validator computes and returns the active nodes for all these terms, i.e.,

those trie nodes whose string, corresponding to the path from the root to the node, is similar to the query phrase.

Fig. 2. Index structures.

Fig. 3. Server architecture of instant-fuzzy search. (The diagram shows a query flowing through the Phrase Validator, which consults the Cache and produces the valid phrases; the Query Plan Builder, which produces a query plan; and the Index Searcher, which accesses the indices: the trie, the inverted index, and the forward index.)

If a query keyword appears in multiple valid phrases, the query can be segmented into phrases in different ways. After identifying the valid phrases, the Query Plan Builder generates a query plan Q, which contains all the possible valid segmentations in a specific order. The ranking of Q determines the order in which the segmentations will be executed. After Q is generated, the segmentations are passed to the Index Searcher one by one until the top-k answers are computed or all the segmentations in the plan are used. The Index Searcher uses the algorithm described in [4] to compute the answers to a segmentation. A result set is then created by combining the result sets of the segmentations of Q.
The rest of the paper is organized as follows. In Section V we study how to identify valid phrases in a query and present an algorithm to do the computation incrementally. In Section VI we explain how a query is segmented based on the computed valid phrases and how these segmentations are ranked to generate a query plan. We present our experimental results in Section VII and conclude in Section VIII.
V. COMPUTING VALID PHRASES IN A QUERY
In this section we study how to efficiently compute the valid phrases in an instant-search query, i.e., those phrases that match the terms
in the dictionary D extracted from the data set. We first give a basic approach that computes the valid phrases from scratch, then
develop an efficient algorithm for doing incremental computation using the valid phrases of previous queries.
A. Basic Approach
A query with l keywords can be segmented into m phrases in C(l-1, m-1) different ways, because there are l-1 places at which to choose the m-1 separators that produce m phrases. Therefore, the total number of possible segmentations, 2^(l-1), grows exponentially as the number of query keywords increases. Fortunately, the typical number of keywords in a search query is not large; for instance, in Web search it is between 2 and 4 [30]. Moreover, we do not need to consider all possible segmentations, since some of them are not valid. A segmentation can produce an answer to a query only if each phrase of the segmentation is a valid phrase, i.e., it is similar (possibly as a prefix) to a term in D. Thus we only need to consider the valid phrases and the segmentations that consist of these phrases.
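A small recursive sketch that enumerates only these valid segmentations; a plain set of phrases stands in for the trie-based Phrase Validator, and the phrase set below is toy data.

def valid_segmentations(keywords, valid_phrases):
    """Enumerate segmentations of the query in which every segment is a valid phrase."""
    if not keywords:
        return [[]]
    results = []
    for end in range(1, len(keywords) + 1):
        phrase = " ".join(keywords[:end])
        if phrase in valid_phrases:            # stand-in for the trie-based phrase validation
            for rest in valid_segmentations(keywords[end:], valid_phrases):
                results.append([phrase] + rest)
    return results

valid = {"heart", "surgery", "unit", "heart surgery", "heart surgery unit"}
for seg in valid_segmentations(["heart", "surgery", "unit"], valid):
    print(seg)
# ['heart', 'surgery', 'unit'], ['heart surgery', 'unit'], ['heart surgery unit']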
The trie also allows incremental validation for phrases with the same prefix. To exploit this property, we need to validate the phrases in a specific order. Specifically, for a query q = <w1, w2, ..., wl>, for each keyword wi, we traverse the trie to find the prefixes similar to a phrase starting with wi. To check all the phrases starting with wi, the keywords wi+1, wi+2, ..., wl are added incrementally during the traversal. The traversal is stopped either when all the keywords after wi have been added or when the obtained active-node set is empty; in the latter case, the phrases with more keywords will also have an empty active-node set.
B. Incremental Computation of Valid Phrases
We study how to incrementally compute the valid phrases of a query qj using the cached valid phrases of a previous query qi. The valid phrases of qi are cached to be used for later queries that start with the keywords of qi.
Figure 4 shows the active nodes of the valid phrases in the queries q1 = <heart, surge>, q2 = <heart, surgery>, and q3 = <heart, surgery, unit>. In the figure, q1 and q2 have the same active nodes n1 and n2 for the phrase "heart". Moreover, the phrase "surgery" in q2 has an active node n5, which is close to the active node n3 of the phrase "surge" in q1. Similarly, the phrase "heart surgery" in q2 has an active node n6, which is close to the active node n4 of the phrase "heart surge" in q1. Hence, we can use the active nodes n3 and n4 to compute n5 and n6 efficiently. The key observation in this example is that computation is needed only for the phrases containing the last query keyword.
If a query qj extends a query qi by appending additional characters to the last keyword wl of qi, then each valid phrase of qi that ends with a keyword other than wl is also a valid phrase of qj. The valid phrases of qi that end with the keyword wl have to be extended to become valid phrases of qj. The new active-node set can be computed by starting from the active-node set of the cached phrase and traversing the trie for the additional characters to determine whether the phrase is still valid.
Fig. 4. Active nodes for valid phrases. (The trie contains paths for "heart", "hart", "surge(ry)", "unit" and "heart surge(ry) unit", with active nodes n1-n8; the legend marks which nodes are active for q1 = <heart, surge>, q2 = <heart, surgery>, and q3 = <heart, surgery, unit>.)

Another case where we can use the cached results of the query qi is when the query qj has additional keywords after the last keyword
wl of qi. The queries q2 and q3 in Figure 4 are an example of this case. In this example, all the active nodes of q2 (i.e., n1, n2, n5, and n6)
are also active nodes for q3. In addition to these active nodes, q3 has the active nodes n7 and n8 for the phrases that contain the additional keyword "unit" (i.e., "unit" and "heart surgery unit"). The phrase "unit" is a new phrase, and its active node (n7) is computed from scratch. However, the phrase "heart surgery unit" has a phrase from q2 as a prefix, and its active node n8 can be
computed incrementally starting from n6. As seen in the example, if the query qj has additional keywords after the last keyword wl of
qi, then all of the valid phrases of qi are also valid in qj. Moreover, some of the valid phrases of qi that end at wl can be extended to
become valid phrases of qj. If a phrase starting with the mth keyword of qi, wm (m ≤ l), can be extended to a phrase containing the nth keyword of qj, wn (l < n), then the phrase wm ... wn can be computed by using the valid phrase wm ... wl of qi.
Based on these observations, we cache a vector of valid phrases Vi for a query qi with the following properties: (1) Vi has an element
for each keyword in qi, i.e., |Vi| = l; (2) The nth element in Vi is a set of starting points of the valid phrases that end with the keyword wn
and their corresponding active-node sets.
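A minimal Python sketch of this cached structure is shown below; the class and field names are illustrative assumptions, not the paper's code.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ValidPhraseEntry:
    start: int                    # 1-based keyword position the phrase starts at
    active_nodes: Dict[int, int]  # {trie node id: edit distance}, the active-node set

# V has one element per keyword (property 1); V[n-1] lists the start positions of
# the valid phrases ending at keyword n together with their active-node sets (property 2).
def empty_valid_phrase_vector(l: int) -> List[List[ValidPhraseEntry]]:
    return [[] for _ in range(l)]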
Figure 5 shows the vectors of valid phrases V1, V2, and V3 for the queries q1, q2, and q3, respectively.
We develop an algorithm for computing the valid phrases of a query incrementally using a previously cached vector of valid phrases.
The pseudo code is shown in Algorithm 1.

Fig. 5. Computing valid phrases incrementally using cached valid phrases of previous queries (the valid-phrase vectors V1, V2, and V3 for q1 = <heart, surge>, q2 = <heart, surgery>, and q3 = <heart, surgery, unit>; each element is either copied from the previous vector, computed incrementally from its active-node sets, or computed from scratch).

As an example, Figure 5 shows how a cached valid-phrase vector is used for incremental computation. Assuming V1 in the figure is stored
in the cache, vector V2 can be incrementally computed using V1 as follows. First, the first element of V1 is copied to V2, because q1 and q2 share the same first keyword (lines 4-5). Then, the second element of V2 is computed incrementally starting from the active-node sets S2,2 and S1,2 in the second element of V1 (lines 8-14). The incremental computation from V2 to V3 is an example of the case where there are additional keywords in the new query. In this case, we copy the first two elements of V2 to V3 since the queries share their first two keywords. We compute the third element of V3 based on the active-node sets of the second element of V2 (lines 15-21). In particular, we traverse the trie starting from nodes n5 and n6 to see if it contains a term prefix similar to "surgery unit" or "heart surgery unit", respectively. The traversal results in no active node for n5 and the active node n8 for n6. Thus we add the pair (1, S1,3 = {n8}) to the third element of V3, indicating that there is a valid phrase starting from the 1st keyword and ending at the 3rd keyword. We also add an element (3, S3,3 = {n7}) for the 3rd keyword "unit", since it is also a valid phrase with an active node n7 (lines 22-30).
VI. GENERATING EFFICIENT QUERY PLANS
As explained in Section IV, the Phrase Validator computes the valid phrases in a query using the techniques described in Section V,
and passes the valid-phrase vector to the Query Plan Builder. In this section, we study how the Query Plan Builder generates and ranks
valid segmentations.
A. Generating Valid Segmentations
After receiving a list of valid phrases, the Query Plan Builder computes the valid segmentations. The basic segmentation is the one where each keyword is treated as a phrase. Table II shows all possible segmentations that can be generated from the valid-phrase vector V3 in Figure 5.

Algorithm 1: ComputeValidPhrases(q, C)
Input: a query q = <w1, w2, ..., wl> where wi is a keyword; a cache module C
Output: a valid-phrase vector V
1 (qc, Vc) ← FindLongestCachedPrefix(q, C)
2 m ← number of keywords in qc


We develop a divide-and-conquer algorithm for generating all the segmentations from the valid-phrase vector V. Each phrase has a start position and an end position in the query. The start position is stored in V[end] along with its computed active-node set. If there is a segmentation for the query <w1, ..., wstart-1>, we can append the phrase [start, end] to it to obtain a segmentation for the query <w1, ..., wend>. Therefore, to compute all the segmentations for the first j keywords, we can compute all the segmentations for the first i - 1 keywords, where (i, Si,j) ∈ V[j], and append the phrase [i, j] to each of these segmentations to form new segmentations. This analysis helps us reduce the problem of generating segmentations for the query <w1, ..., wl> to solving the subproblems of generating segmentations for each query <w1, ..., wi-1>, where (i, Si,l) ∈ V[l]. Hence, the final segmentations can be computed by starting the computation from the last element of V. Algorithm 2 shows the recursive algorithm. Line 3 is the base case for the recursion, where the start position of the current phrase is the beginning of the query. We can convert this recursive algorithm into a top-down dynamic programming algorithm by memoizing all the computed results for each end position.

TABLE II. THREE SEGMENTATIONS FOR QUERY q = <heart, surgery, unit>
1. heart surgery unit
2. heart surgery | unit
3. heart | surgery | unit
Algorithm 2: GenerateSegmentations(q, V, end)
Input: a query with a list of keywords q = <w1, w2, ..., wl>; its valid-phrase vector V; a keyword position end (end ≤ l)
Output: a vector Pend of all valid segmentations of w1, w2, ..., wend
1 Pend ← ∅
2 foreach (start, Sstart,end) in V[end] do
3     if start == 1 then // Base Case
4         Pend ← Pend ∪ <wstart ... wend>
5     else
6         foreach seg in GenerateSegmentations(q, V, start - 1) do
7             seg ← seg | <wstart ... wend>
8             Pend ← Pend ∪ seg
9 return Pend
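The following Python sketch mirrors Algorithm 2 and memoizes the results per end position, as described above; the data layout and names are assumptions for illustration, not the paper's implementation.

from functools import lru_cache

def generate_segmentations(keywords, valid_phrase_vector):
    # valid_phrase_vector[end-1] lists (start, active_node_set) pairs for the valid
    # phrases ending at keyword position end (1-based, as in the paper).
    @lru_cache(maxsize=None)
    def segment(end):
        results = []
        for start, _active_nodes in valid_phrase_vector[end - 1]:
            phrase = tuple(keywords[start - 1:end])
            if start == 1:                      # base case: the phrase starts the query
                results.append((phrase,))
            else:                               # extend every segmentation of the prefix
                for seg in segment(start - 1):
                    results.append(seg + (phrase,))
        return tuple(results)

    return [list(seg) for seg in segment(len(keywords))]

# Toy input mirroring Table II (active-node sets elided as None):
V = [
    [(1, None)],              # phrases ending at "heart"
    [(1, None), (2, None)],   # "heart surgery", "surgery"
    [(1, None), (3, None)],   # "heart surgery unit", "unit"
]
for seg in generate_segmentations(["heart", "surgery", "unit"], V):
    print(" | ".join(" ".join(p) for p in seg))

Run on the toy input, the sketch prints exactly the three segmentations of Table II.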

B. Ranking Segmentations
Each generated segmentation corresponds to a way of accessing the indexes to compute its answers. The Query Plan Builder needs to
rank these segmentations to decide the final query plan, which is an order of segmentations to be executed. We can run these
segmentations one by one until we find enough answers (i.e., k results). Thus, the ranking needs to guarantee that the answers to a
high-rank segmentation are more relevant than the answers to a low-rank segmentation. There are different methods to rank a
segmentation. Our segmentation ranking relies on a segmentation comparator to decide the final order of the segmentations. This
comparator compares two segmentations at a time based on the following features and decides which segmentation has a higher
ranking: (1) The summation of the minimum edit distance between each valid phrase in the segmentation and its active nodes; (2) The
number of phrases in the segmentation. The comparator ranks the segmentation that has the smaller minimum edit distance summation
higher. If two segmentations have the same total minimum edit distance, then it ranks the segmentation with fewer segments higher.
As an example, for the query q = <hart, surgery>, consider the segmentation "hart | surgery" with two valid phrases. Each of them has an exact match in the dictionary D, so its summation of minimum edit distances is 0. Using this method, we would rank this segmentation higher than any segmentation with a larger total edit distance. If two segmentations have the same total minimum edit distance, then we rank the segmentation with fewer segments higher; when there are fewer phrases in a segmentation, the number of keywords in a phrase increases.
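A hedged sketch of such a comparator, expressed as a Python sort key under the assumption that each phrase object carries its active-node edit distances, could look like this:

def segmentation_sort_key(segmentation):
    # Feature 1: sum over phrases of the minimum edit distance among the phrase's
    # active nodes; Feature 2: the number of phrases. Smaller is better on both.
    total_min_edit = sum(min(phrase.active_nodes.values()) for phrase in segmentation)
    return (total_min_edit, len(segmentation))

# ranked_plan = sorted(candidate_segmentations, key=segmentation_sort_key)

Sorting by this key directly yields the final query plan: segmentations with smaller total edit distance come first, and ties are broken in favor of fewer phrases.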
VII. CONCLUSIONS
In this paper we studied how to improve ranking of an instant-fuzzy search system by considering proximity information when we
need to compute top-k answers. We studied how to adapt existing solutions to solve this problem, including computing all answers,
doing early termination, and indexing term pairs. We proposed a technique to index important phrases to avoid the large space
overhead of indexing all word grams. We presented an incremental-computation algorithm for finding the indexed phrases in a query
efficiently, and studied how to compute and rank the segmentations consisting of the indexed phrases.


REFERENCES
[1] I. Cetindil, J. Esmaelnezhad, C. Li, and D. Newman, "Analysis of instant search query logs," in WebDB, 2012, pp. 7-12.
[2] R. B. Miller, "Response time in man-computer conversational transactions," in Proceedings of the December 9-11, 1968, Fall Joint Computer Conference, Part I, ser. AFIPS '68 (Fall, Part I). New York, NY, USA: ACM, 1968, pp. 267-277. [Online]. Available: http://doi.acm.org/10.1145/1476589.1476628
[3] C. Silverstein, M. R. Henzinger, H. Marais, and M. Moricz, "Analysis of a very large web search engine query log," SIGIR Forum, vol. 33, no. 1, pp. 6-12, 1999.
[4] G. Li, J. Wang, C. Li, and J. Feng, "Supporting efficient top-k queries in type-ahead search," in SIGIR, 2012, pp. 355-364.
[5] R. Schenkel, A. Broschart, S. won Hwang, M. Theobald, and G. Weikum, "Efficient text proximity search," in SPIRE, 2007, pp. 287-299.
[6] H. Yan, S. Shi, F. Zhang, T. Suel, and J.-R. Wen, "Efficient term proximity search with term-pair indexes," in CIKM, 2010, pp. 1229-1238.
[7] M. Zhu, S. Shi, N. Yu, and J.-R. Wen, "Can phrase indexing help to process non-phrase queries?" in CIKM, 2008, pp. 679-688.
[8] A. Jain and M. Pennacchiotti, "Open entity extraction from web search query logs," in COLING, 2010, pp. 510-518.
[9] K. Grabski and T. Scheffer, "Sentence completion," in SIGIR, 2004, pp. 433-439.
[10] A. Nandi and H. V. Jagadish, "Effective phrase prediction," in VLDB, 2007, pp. 219-230.
[11] H. Bast and I. Weber, "Type less, find more: fast autocompletion search with a succinct index," in SIGIR, 2006, pp. 364-371.
[12] H. Bast, A. Chitea, F. M. Suchanek, and I. Weber, "Ester: efficient search on text, entities, and relations," in SIGIR, 2007, pp. 671-678.
[13] H. Bast and I. Weber, "The CompleteSearch engine: Interactive, efficient, and towards IR & DB integration," in CIDR, 2007, pp. 88-95.
[14] S. Ji, G. Li, C. Li, and J. Feng, "Efficient interactive fuzzy keyword search," in WWW, 2009, pp. 371-380.
[15] S. Chaudhuri and R. Kaushik, "Extending autocompletion to tolerate errors," in SIGMOD Conference, 2009, pp. 707-718.
[16] G. Li, S. Ji, C. Li, and J. Feng, "Efficient type-ahead search on relational data: a tastier approach," in SIGMOD Conference, 2009, pp. 695-706.
[17] M. Hadjieleftheriou and C. Li, "Efficient approximate search on string collections," PVLDB, vol. 2, no. 2, pp. 1660-1661, 2009.
[18] K. Chakrabarti, S. Chaudhuri, V. Ganti, and D. Xin, "An efficient filter for approximate membership checking," in SIGMOD Conference, 2008, pp. 805-818.
[19] S. Chaudhuri, V. Ganti, and R. Motwani, "Robust identification of fuzzy duplicates," in ICDE, 2005, pp. 865-876.
[20] A. Behm, S. Ji, C. Li, and J. Lu, "Space-constrained gram-based indexing for efficient approximate string search," in ICDE, 2009, pp. 604-615.
[21] R. Fagin, A. Lotem, and M. Naor, "Optimal aggregation algorithms for middleware," in PODS, 2001.
[22] F. Zhang, S. Shi, H. Yan, and J.-R. Wen, "Revisiting globally sorted indexes for efficient document retrieval," in WSDM, 2010, pp. 371-380.
[23] M. Persin, J. Zobel, and R. Sacks-Davis, "Filtered document retrieval with frequency-sorted indexes," JASIS, vol. 47, no. 10, pp. 749-764, 1996.
[24] R. Song, M. J. Taylor, J.-R. Wen, H.-W. Hon, and Y. Yu, "Viewing term proximity from a different perspective," in ECIR, 2008, pp. 346-357.
[25] T. Tao and C. Zhai, "An exploration of proximity measures in information retrieval," in SIGIR, 2007, pp. 295-302.
[26] M. Zhu, S. Shi, M. Li, and J.-R. Wen, "Effective top-k computation in retrieving structured documents with term-proximity support," in CIKM, 2007, pp. 771-780.
[27] S. Buttcher, C. L. A. Clarke, and B. Lushman, "Term proximity scoring for ad-hoc retrieval on very large text collections," in SIGIR, 2006, pp. 621-622.
[28] H. Zaragoza, N. Craswell, M. J. Taylor, S. Saria, and S. E. Robertson, "Microsoft Cambridge at TREC 13: Web and HARD tracks," in TREC, 2004.
[29] A. Franz and T. Brants, "All our n-gram are belong to you," http://googleresearch.blogspot.com/2006/08/all-our-n-gram-are-belong-to-you.html, Aug. 2006.
[30] A. Arampatzis and J. Kamps, "A study of query length," in SIGIR, 2008, pp. 811-812.
[31] Z. Bao, B. Kimelfeld, and Y. Li, "A graph approach to spelling correction in domain-centric search," in ACL, 2011.
[32] J. R. Herskovic, L. Y. Tanaka, W. R. Hersh, and E. V. Bernstam, "Research paper: A day in the life of PubMed: Analysis of a typical day's query log," JAMIA, vol. 14, no. 2, pp. 212-220, 2007.
[33] D. R. Morrison, "PATRICIA - practical algorithm to retrieve information coded in alphanumeric," J. ACM, vol. 15, no. 4, pp. 514-534, 1968.
[34] "Keyword and search engines statistics," http://www.keyworddiscovery.com/keyword-stats.html?date=2013-06-01, Jun. 2013.


Live Hand Gesture Recognition using an Android Device


Mr. Yogesh B. Dongare
Department of Computer Engineering.
G.H.Raisoni College of Engineering and Management, Ahmednagar.
Email- yogesh.dongare05@gmail.com
Abstract: In the field of image processing it is very interesting to recognize the human gesture for general life applications. Gesture recognition is a growing field of research among various human-computer interactions; hand gesture recognition is very popular for interaction between humans and machines. It is a nonverbal way of communication and this research area is full of innovative approaches. This paper aims at recognizing 40 basic hand gestures. The main features used are the centroid of the hand, the presence of the thumb, and the number of peaks in the hand gesture. That is, the algorithm is based on shape-based features, keeping in mind that the shape of the human hand is the same for all human beings except in some situations. The recognition approach used in this paper is an artificial neural network with the back propagation algorithm. This approach can be adapted to a real-time system very easily. In this paper an Android camera is used for image acquisition; after that, frames are sent to the server and edge detection of the video is done, which is followed by thinning to reduce noise, and tokens are created from the thinned image and then fetched for recognition. The paper briefly describes the schemes of capturing the image from the Android device, image detection, and processing the image to recognize the gestures, as well as a few results.
Keywords: Edge Detection, Sobel algorithm, Android, token detection, gesture recognition, neural network.
I. INTRODUCTION

Among the set of gestures intuitively performed by humans when communicating with each other, pointing gestures are especially
interesting for communication and is perhaps the most intuitive interface for selection. They open up the possibility of intuitively
indicating objects and locations, e.g., to make a robot change direction of its movement or to simply mark some object. This is
particularly useful in combination with speech recognition as pointing gestures can be used to specify parameters of location in verbal
statements. Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures
via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand.
Gesture recognition can be seen as a way for computers to begin to understand human body language, thus building a richer bridge
between machines and humans. It enables humans to communicate with the machine (HMI) and interact naturally without any
mechanical devices. It has always been considered a challenge to develop a natural interaction interface, where people interact with technology as they are used to interacting with the real world. A hands-free interface, based only on human gestures, where no devices are attached to the user, naturally immerses the user from the real world into the virtual environment.
An Android device brings the long-expected technology to interact with graphical interfaces to the masses. The Android device captures the user's movements without the need of a controller.
II. PROPOSED ALGORITHM


Fig. 1. Block Diagram of Hand Gesture Recognition Model

III. BACKGROUND
A. Hand Gesture Recognition

Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via
mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand.
Gesture recognition can be seen as a way for computers to begin to understand human body language, thus building a richer bridge
between machines and humans. Gesture recognition enables humans to communicate with the machine (HMI) and interact naturally
without any mechanical devices. Gesture recognition can be conducted with techniques from computer vision and image processing.
Gestures of the hand are read by an input sensing device such as an Android device. It reads the movements of the human body and communicates them to a computer that uses these gestures as an input. These gestures are then interpreted using algorithms based either on statistical analysis or on artificial intelligence techniques. The primary goal of gesture recognition research is to create a system which can identify specific human hand gestures and use them to convey information. By recognizing the hand symbols of a person, it can help in communication with deaf and dumb people and in taking prompt action at that time.
B. Edge Detection

Edge Detection [4] is the early processing stage in image processing and computer vision, aimed at detecting and characterizing
discontinuities in the image domain. It aims at identifying points in a digital image at which the image brightness changes sharply or,
more formally, has discontinuities. The points at which image brightness changes sharply are typically organized into a set of curved
line segments termed edges. The same problem of finding discontinuities in 1D signals is known as step detection and the problem of
finding signal discontinuities over time is known as change detection. Edge detection is a fundamental tool in image processing,
machine vision and computer vision, particularly in the areas of feature detection and feature extraction [1]. Some of the different
types of edge detection techniques are:
1. Sobel Edge Detector
2. Canny Edge Detector
3. Prewitt Edge Detector

C. Sobel Edge Detector

The Sobel operator is used in image processing to detect edges of an image. The operator calculates the gradient of the image intensity
at each point, giving the direction of the largest possible increase from light to dark and the rate of change in that direction. The result
therefore shows how abruptly or smoothly the image changes at that point, and therefore how likely it is that that part of the image represents an edge, as well as how that edge is likely to be oriented [7].
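As one possible illustration (not the paper's code), OpenCV's Sobel operator can be applied to a grayscale frame as follows; the file names are assumptions.

import cv2
import numpy as np

gray = cv2.imread("hand.jpg", cv2.IMREAD_GRAYSCALE)   # assumed input frame
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)        # horizontal gradient
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)        # vertical gradient
magnitude = np.sqrt(gx ** 2 + gy ** 2)                 # gradient strength per pixel
edges = np.uint8(255 * magnitude / magnitude.max())    # normalize to 0-255
cv2.imwrite("hand_edges.jpg", edges)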
D. Artificial Neural Network

An artificial neuron is a computational model inspired by natural neurons. These networks consist of inputs (like synapses), which are multiplied by weights (the strength of the respective signals), and then computed by a mathematical function which determines the activation of the neuron. Another function (which may be the identity) computes the output of the artificial neuron (sometimes in dependence on a certain threshold). Artificial Neural Networks (ANN) combine artificial neurons in order to process information. The higher the weight of an artificial neuron is, the stronger the input which is multiplied by it will be. Depending on the weights, the computation of the neuron will be different. We can adjust the weights of the ANN in order to obtain the desired output from the network. This process of adjusting the weights is called learning or training.
Since the function of ANNs is to process information, they are used mainly in fields related to it. There is a wide variety of ANNs that are used to model real neural networks and to study behavior and control in animals and machines, but there are also ANNs which are used for engineering purposes, such as pattern recognition, forecasting, and data compression [5].

E. Backpropagation Algorithm

The backpropagation algorithm [6] is used in layered feed-forward ANNs. This means that the artificial neurons are organized in layers and send their signals forward, and then the errors are propagated backwards. The network receives inputs through neurons in the input layer, and the output of the network is given by the neurons in the output layer. There may be one or more intermediate hidden layers. The backpropagation algorithm uses supervised learning, which means that we provide the algorithm with examples of the inputs and outputs we want the network to compute, and then the error (the difference between actual and expected results) is calculated. The idea of the backpropagation algorithm is to reduce this error until the ANN learns the training data. The training begins with random weights, and the goal is to adjust them so that the error will be minimal [5].
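For illustration only, a minimal NumPy sketch of training a one-hidden-layer feed-forward network with backpropagation (made-up sizes and random data, not the paper's Java network) is:

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((8, 4))                          # 8 training samples, 4 inputs each
Y = rng.integers(0, 2, (8, 2)).astype(float)    # 8 target vectors, 2 outputs each

W1 = rng.normal(scale=0.5, size=(4, 5))         # input  -> hidden weights
W2 = rng.normal(scale=0.5, size=(5, 2))         # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(1000):
    H = sigmoid(X @ W1)                         # forward pass, hidden layer
    O = sigmoid(H @ W2)                         # forward pass, output layer
    dO = (O - Y) * O * (1 - O)                  # output error, propagated backwards
    dH = (dO @ W2.T) * H * (1 - H)              # hidden-layer error
    W2 -= 0.5 * H.T @ dO                        # weight updates (learning rate 0.5)
    W1 -= 0.5 * X.T @ dH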
IV. SYSTEM DESIGN
A. IMAGE ACQUISITION

Image acquisition is the first step in any vision system; only after this process can you go forward with the image processing. In this application it is done by using the IP Webcam Android application. The application uses the camera present in the phone for continuous image capturing and a simultaneous display on the screen. The image captured by the application is streamed over its Wi-Fi connection (or WLAN without internet, as used here) for remote viewing. The program accesses the image by logging in to the device's IP, which is then shown in the GUI.
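A minimal sketch of this acquisition step, assuming the IP Webcam app exposes an MJPEG stream at a URL of the form shown (the address is hypothetical and should be replaced with the one displayed on the phone), could be:

import cv2

STREAM_URL = "http://192.168.0.101:8080/video"   # hypothetical address shown by the app

cap = cv2.VideoCapture(STREAM_URL)                # read the MJPEG stream over Wi-Fi
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("IP Webcam frame", frame)          # simultaneous display of each frame
    if cv2.waitKey(1) & 0xFF == ord('q'):         # press q to stop
        break
cap.release()
cv2.destroyAllWindows()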

Fig. 2. Original Image captured from Android device



B. IMAGE PRE-PROCESSING: EDGE DETECTION

In this program the edge detection technique used is the Sobel edge detector. The captured image is passed through a Sobel filter.

Fig. 3. Sobel Edge Filtered Image

C. THINNING

Thinning is a morphological operation that is used to remove selected foreground pixels from binary images, somewhat like erosion or
opening. It can be used for several applications, but is particularly useful for skeletonization. In this mode it is commonly used to tidy
up the output of edge detectors by reducing all lines to single pixel thickness. Thinning is normally only applied to binary images, and
produces another binary image as output. After the edge detection, thinning has to be performed. Thinning is applied to reduce the
width of an edge to a single line.
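One possible way to realize this step (an assumption, using scikit-image's skeletonize rather than the paper's own implementation) is:

import cv2
from skimage.morphology import skeletonize

edges = cv2.imread("hand_edges.jpg", cv2.IMREAD_GRAYSCALE)  # edge map from the previous step
binary = edges > 128                                        # boolean edge mask
thin = skeletonize(binary)                                  # lines reduced to one-pixel width
cv2.imwrite("hand_thinned.jpg", (thin * 255).astype("uint8"))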

Fig. 4. Image after thinning


D. HAND TOKEN

The idea here is to convert the image into a form usable by the neural network, so that the cosine and sine angles of the shape represent the criteria of a recognition pattern. Each square represents a point on the shape of the hand image from which a line to the next square is drawn.
Zooming into a part of figure 5 shows a right-angled triangle between two consecutive squares, as shown in figure 6. This triangle, and the collection of all triangles of a hand image, are the representation of the tokens of a hand from which we can start the neural network calculations.
The right-angled triangle in figure 5 represents a token of a single hand image. The angles A and B are the two necessary


Fig. 5. Generated token of the original image

Fig. 6. Zoomed image of image tokens & effective right angled triangle
parts which will be fed into the neural network layers. With the two angles we can exactly represent the direction of the hypotenuse from point P1 to P2, which represents the direction of a hand image.
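A small sketch of this token computation under the stated representation (function and variable names are illustrative, not the paper's code) is:

import math

def tokens_from_shape(points):
    # For each pair of consecutive sample points on the hand outline, store the
    # cosine and sine of the direction of the segment joining them (the two
    # angles of the right-angled triangle between the points).
    tokens = []
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        length = math.hypot(x2 - x1, y2 - y1)
        if length == 0:
            continue
        tokens.append(((x2 - x1) / length, (y2 - y1) / length))   # (cos, sin) pair
    return tokens

print(tokens_from_shape([(0, 0), (3, 4), (6, 4)]))   # [(0.6, 0.8), (1.0, 0.0)]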
E. TRAINING DATA
Another main part of this work is the integration of a feed-forward backpropagation neural network. As described earlier, the inputs for this neural network are the individual tokens of a hand image, and as a token normally consists of a cosine and a sine angle, the number of input-layer neurons for this network is the number of tokens multiplied by two. The implemented network has just one input, one hidden and one output layer to simplify and speed up the calculations in the Java implementation. For training purposes the database of images located on the disk is used. It contains 6 different types of predefined gestures. These gestures are shown in figure 7. These are basic hand gestures indicating the numbers zero to five. These gestures are first processed and then the tokens generated are passed to the network for training. This process of training the network from the set of images is done automatically when the application is initialized.


Fig. 7. Database of Gestures

V. RECOGNITION

Recognition is the final step of the application. To fill the input neurons of the trained network, the previously calculated tokens discussed in section D are used. The number of output neurons is normally specified by the number of different types of gestures; in this case it is fixed to 6. All other behavior of the network is specified by the normal mathematical principles of a backpropagation network as discussed in section E. The network gives a recognition percentage for each gesture, with the highest percentage for the most closely matching gesture and the lowest for the farthest match, and the closest match is considered the result.

VI. TESTING

Figure 8 shows the screenshot of the screen when the application is started. Figure 9 shows the screen during the process of gesture
recognition.

Fig. 8. Initialization of the pro

VII. RESULTS

To test the application, gestures are made by three different people. Some of the gestures are closed or have different orientation. The statistical summary of the results is as follows.
VIII. CONCLUSION

This system is useful for communicating with a deaf and dumb person. The system database has sign gestures of size 176x144 pixels so that it takes less time and memory space during pattern recognition. The recognition rate of all gestures is between 70-80%, which is an acceptable range. The overall accuracy of this system is approximately 77%.
In future, we can use a custom camera instead of the IP Webcam app, which will further enhance the success rate of the system. Other different types of gestures can also be made part of the database. Eliminating noise from the background would further improve the accuracy rate.

REFERENCES:
[1] Lindeberg, Tony (2001), "Edge detection," in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4.
[2] H. Farid and E. P. Simoncelli, "Differentiation of discrete multidimensional signals," IEEE Trans. Image Processing, vol. 13, no. 4, pp. 496-508, Apr 2004.
[3] D. Kroon, "Numerical Optimization of Kernel Based Image Derivatives," Short Paper, University Twente, 2009.
[4] Lindeberg, Tony (2001), "Edge detection," in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4.
[5] A. Bosch, A. Zisserman, and X. Munoz, "Scene classification using a hybrid generative/discriminative approach," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 4, pp. 712-727, 2008.
[6] Jagdish Lal Raheja, Umesh Kumar, "Human Facial Expression Detection Image Using Back Propagation Neural Network," International Journal of Computer Science and Information Technology (IJCSIT), vol. 2, no. 1, Feb 2010, pp. 116-112.
[7] J. Matthews, "An introduction to edge detection: The sobel edge detector," at www.generation5.org/content/2002/im01.asp, 2002.


Privacy Preserving in a Location Proof Updating System Using Android


B. A. Waghmode#1, S. Nandgave*2
# Student, Department of Computer Engineering, University of Pune, GHRCOEM, Ahmednagar, Maharashtra, India.
* Asst. Professor, Department of Computer Engineering, University of Pune, GHRCEM, Pune, Maharashtra, India.
bwaghmode15@gmail.com, sunita.nandgave@raisoni.net

Abstract: Nowadays mobile devices such as smartphones and PDAs have inbuilt GPS systems, which play an increasingly important role in location-based services. In location-based applications and services users are required to prove their locations at a particular time. There are lots of applications in the market that help to track someone's location. As location proof plays a critical role in enabling these applications, they are location sensitive. The common feature of all these applications is that they report the geographical location of a person carrying a GPS-enabled device. But sometimes users lie about their locations. Location Based Services (LBS) personalize the service they provide or grant access to resources according to the current location of users. They are used in a variety of contexts, such as real-time traffic monitoring or a discount tied to the visit of a particular shop. In most current schemes, the location of a user/device is determined by the device itself (e.g., through GPS) and forwarded to the LBS provider. By doing so, a user can cheat by having his device transmit a false location to gain access to unauthorized resources, thus raising the issue of verifying the position claimed by a particular user. To counter this threat, LBS should ideally require the requesting device to formally prove that it really is at the claimed location. This notion has been formalized through the concept of a location proof, which is a proof that someone was at a particular location at a specific moment in time.

Keywords: Location-based services, location proof, location privacy, pseudonym.


INTRODUCTION

Nowadays, more and more location-based applications and services require users to provide location proofs at a particular time. For example, Google Latitude and Longitude are two services that enable users to track their friends' locations in real time. These applications are location-sensitive since location proofs play a critical role in enabling them. There are many kinds of location-sensitive applications. One category is location-based access control. For example, a hospital may allow patient information access only when doctors or nurses can prove that they are in a particular room of the hospital [14]. Another class of location-sensitive applications requires users to provide past location proofs [20], such as auto insurance quotes, in which auto insurance companies offer discounts to drivers who can prove that they take safe routes during their daily commutes; sales officers roaming in the field, whose organization can track reports of their routes and clients so that they cannot cheat their organization; and location-based social networking, in which a user can ask for a location proof from the service requester and accepts the request only if the sender is able to present a valid location proof. The common theme across these location-sensitive applications is that they offer a reward or benefit to users located in a certain geographical location at a certain time. Thus, users have the incentive to cheat on their locations. In the general approach, location-sensitive applications require users to prove that they really are (or were) at the claimed locations.
Although most mobile users have devices capable of discovering their locations, some users may cheat on their locations, and there is a lack of a secure mechanism to provide their current or past locations to applications and services. One possible solution [11] is to build a trusted computing module on each mobile device to make sure trusted GPS data is generated and transmitted. For example, Lenders et al. [11] proposed such a solution which can be used to generate unforgeable geo tags for mobile content such as photos and video; however, it relies on the expensive trusted computing module on the mobile devices to generate proofs. Although cellular service providers have tracking services that can help verify the locations of mobile users in real time, the accuracy is not good enough and the location history cannot be verified. Recently, several systems have been designed to let end users prove their locations through Wi-Fi infrastructures. For example, Saroiu and Wolman [20] proposed a solution suitable for third-party attestation, but it relies on PKI and the wide deployment of Wi-Fi infrastructure.
In our approach, we propose A Privacy-Preserving LocAtion proof Updating System (APPLAUS), which does not rely on the wide deployment of network infrastructure or an expensive trusted computing module. In APPLAUS, Android mobile devices in range mutually generate location proofs, which are uploaded to an untrusted location proof server that can verify the trust level of each location proof. An authorized verifier can query and retrieve location proofs from the server. Moreover, our location proof
system guarantees user location privacy from every party. More specifically, we use statistically updated pseudonyms at each mobile
device to protect location privacy from each other and from the untrusted location proof server. We develop a user-centric location privacy model in which individual users evaluate their location privacy levels in real time and decide whether and when to accept a location proof request.

THE LOCATION PROOF UPDATING SYSTEM


In this section, we introduce the location proof updating architecture, the protocol, and how mobile nodes schedule their location proof
updating to achieve location privacy in APPLAUS.
A. System Architecture:

Prover Module: The prover is the node that needs to collect location proofs from its neighboring nodes. When a location proof is needed at time t, the prover will broadcast a location proof request to its neighboring nodes through Android. If no positive response is received, the prover will generate a dummy location proof and submit it to the location proof server.
Witness for Location: Once a neighboring node agrees to provide a location proof for the prover, this node becomes a witness of the prover. The witness node will generate a location proof and send it back to the prover.
Location Proof Server: As our goal is not only to monitor real-time locations, but also to retrieve historical location proof information when needed, a location proof server is necessary for storing the history records of the location proofs. It communicates directly with the prover nodes who submit their location proofs. As the source identities of the location proofs are stored as pseudonyms, the location proof server is untrusted in the sense that even if it is compromised and monitored by attackers, it is impossible for the attacker to reveal the real source of the location proof.
Certificate Authority:


As commonly used in many networks, we consider an online CA which is run by an independent trusted third party. Every mobile
node registers with the CA and pre-loads a set of public/private key pairs before entering the network. The CA is the only party who knows the mapping between the real identity and the pseudonyms (public keys), and works as a bridge between the verifier and the location proof server. It can retrieve location proofs from the server and forward them to the verifier.
Verifier Module: A third-party user or an application who is authorized to verify a prover's location within a specific time period. The verifier usually has a close relationship with the prover, e.g., friends or colleagues, so as to be trusted enough to gain authorization.
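For illustration only, a location proof record exchanged between these modules might be assembled as in the sketch below; all field names are assumptions rather than the paper's schema, and a real deployment would also sign the record with the witness's private key.

import json
import time
import uuid

def make_location_proof(prover_pseudonym, witness_pseudonym, latitude, longitude):
    # Assembled by the witness and returned to the prover, which uploads it to
    # the location proof server; only pseudonyms appear, never real identities.
    return json.dumps({
        "proof_id": str(uuid.uuid4()),
        "prover": prover_pseudonym,
        "witness": witness_pseudonym,
        "location": {"lat": latitude, "lon": longitude},
        "timestamp": int(time.time()),
        # a real system would also carry the witness's signature over these fields
    })

print(make_location_proof("pseud-7f3a", "pseud-2c91", 18.5204, 73.8567))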
Related Work
Recently, several systems have been proposed to provide end users the ability to prove that they were in a particular place at a particular time. The solution in [2] relies on the fact that nothing is faster than the speed of light in order to compute an upper bound of
a user's distance. Capkun and Hubaux [3] propose challenge-response schemes, which use multiple receivers to accurately estimate a wireless node's location using RF propagation characteristics. In [11], the authors describe a secure localization service that can be used to generate unforgeable geo tags for mobile content such as photos and video. However, dedicated measuring hardware or a high-cost
trusted computing module are required. Saroiu and Wolman [20] propose a solution suitable for third-party attestation, but it relies on
a PKI and the wide deployment of Wi-Fi infrastructure. Different from these solutions, APPLAUS uses a peer-to-peer approach and
does not require any changes to the existing infrastructure. Smokescreen [4] introduces a presence sharing mobile social service
between co-located users which relies on centralized, trusted brokers to coordinate anonymous communication between strangers.
SMILE [15], [16] allows users to establish missed connections and utilizes similar wireless techniques to prove whether a physical encounter occurred. However, this service does not reveal the actual location information to the service provider and thus can only provide location proofs between two users who have actually encountered each other. APPLAUS can provide location proofs to third parties by uploading real encounter locations to the untrusted server while maintaining location privacy. There are lots of existing works on location privacy in wireless networks. In [8], the authors propose to reduce the accuracy of location information along spatial and/or temporal dimensions. This basic concept has been improved by a series of works [7], [10]. All the above techniques cloak a node's location with its current neighbors via trusted central servers, which are vulnerable to DoS attacks or to being compromised. Different from them,
our approach does not require the location proof server to be trustworthy. Xu and Cai [23] propose a feeling-based model which
allows a user to express his privacy requirement. One important concern here is that the spatial and temporal correlation between
successive locations of mobile nodes must be carefully eliminated to prevent external parties from compromising their location
privacy. The techniques in [1], [6] achieve location privacy by changing pseudonyms in regions called mix zones. In this paper,
pseudonyms of each node are changed by the node itself periodically following a Poisson distribution, rather than being exchanged
between two untrusted nodes. Identifying a fundamental tradeoff between performance and privacy, Shao et al. [21], [24] propose a
notion of statistically strong source anonymity in wireless sensor networks for the first time, while Li and Ren [13] and Zhang et al.
[25] tried to provide source location privacy against traffic analysis attacks through dynamic routing or anonymous authentication.
Our scheme uses a similar source-location unobservability concept in which the real location proof message is scheduled through statistical algorithms. However, their focus is to generate identical distributions between different nodes to hide the real event source,
while our focus is to design distinct distributions between different pseudonyms to protect the real identity.


ACKNOWLEDGMENT
We would like to sincerely thank Prof. Mrs. Sunita Nandgave, our mentor (Asst. Professor, GHRCEM, Wagholi, Pune), for her
support and encouragement.

CONCLUSION
In APPLAUS, Android mobile devices mutually generate location proofs and upload them to the location proof server. This may be the first work to effectively address the joint problem of location proofs; it preserves source location privacy and is collusion resistant.

REFERENCES:
[1] A.R. Beresford and F. Stajano, Location Privacy in Pervasive Computing, IEEE Security and Privacy, 2003.
[2] S. Brands and D. Chaum, Distance-Bounding Protocols, Proc. Workshop Theory and Application of Cryptographic
Techniques on Advances in Cryptology (EUROCRYPT 93), 1994.
[3] S. Capkun and J.-P. Hubaux, Secure Positioning of Wireless Devices with Application to Sensor Networks, Proc. IEEE
INFOCOM, 2005.
[4] L.P. Cox, A. Dalton, and V. Marupadi, SmokeScreen: Flexible Privacy Controls for Presence-Sharing, Proc. ACM
MobiSys, 2007.
[5] N. Eagle and A. Pentland, CRAWDAD Data Set mit/reality(v.2005-07-01), http://crawdad.cs.dartmouth.edu/mit/reality,
July 2005.
[6] J. Freudiger, M.H. Manshaei, J.P. Hubaux, and D.C. Parkes, On Non-Cooperative Location Privacy: A Game-Theoretic
Analysis, Proc. 16th ACM Conf. Computer and Comm. Security (CCS), 2009.
[7] B. Gedik and L. Liu, A Customizable K-Anonymity Model for Protecting Location Privacy, Proc. IEEE Intl Conf.
Distributed Computing Systems (ICDCS), 2005.
[8] M. Gruteser and D. Grunwald, Anonymous Usage of Location- Based Services through Spatial and Temporal Cloaking,
Proc. ACM MobiSys, 2003.
[9] B. Hoh, M. Gruteser, R. Herring, J. Ban, D. Work, J.C. Herrera, A.M. Bayen, M. Annavaram, and Q. Jacobson, Virtual Trip
Lines for Distributed Privacy-Preserving Traffic Monitoring, Proc. ACM MobiSys, 2008.
[10] T. Jiang, H.J. Wang, and Y.-C. Hu, Location Privacy in Wireless Networks, Proc. ACM MobiSys, 2007.
[11] V. Lenders, E. Koukoumidis, P. Zhang, and M. Martonosi, Location-Based Trust for Mobile User-Generated Content:
Applications Challenges and Implementations, Proc. Ninth Workshop Mobile Computing Systems and Applications, 2008.
[12] M. Li, K. Sampigethaya, L. Huang, and R. Poovendran, Swing & Swap: User-Centric Approaches Towards Maximizing
Location Privacy, Proc. Fifth ACM Workshop Privacy in Electronic Soc., 2006.
[13] Y. Li and J. Ren, Source-Location Privacy Through Dynamic Routing in Wireless Sensor Networks, Proc. IEEE
INFOCOM, 2010.
[14] W. Luo and U. Hengartner, Proving Your Location Without Giving Up Your Privacy, Proc. ACM 11th Workshop Mobile
Computing Systems and Applications (HotMobile 10), 2010.
[15] J. Manweiler, R. Scudellari, Z. Cancio, and L.P. Cox, We Saw Each Other on the Subway: Secure Anonymous Proximity-Based Missed Connections, Proc. ACM 10th Workshop Mobile Computing Systems and Applications (HotMobile 09),
2009.
[16] J. Manweiler, R. Scudellari, and L.P. Cox, SMILE: Encounter- Based Trust for Mobile Social Services, Proc. ACM Conf.
Computer and Comm. Security (CCS), 2009.
[17] F.J. Massey Jr., The Kolmogorov-Smirnov Test for Goodness of Fit, J. Am. Statistical Assoc., vol. 46, no. 253, pp. 68-78,
1951.
[18] I. Rhee, M. Shin, K. Lee, and S. Chong, On the Levy-Walk Nature of Human Mobility, Proc. IEEE INFOCOM, 2007.
[19] J.L. Romeu, Kolmogorov-Simirnov: A Goodness of Fit Test for Small Samples, START: Selected Topics in Assurance
Related Technologies, 2003.
[20] S. Saroiu and A. Wolman, Enabling New Mobile Applications with Location Proofs, Proc. ACM 10th Workshop Mobile
Computing Systems and Applications (HotMobile 09), 2009.
[21] M. Shao, Y. Yang, S. Zhu, and G. Cao, Towards Statistically Strong Source Anonymity for Sensor Networks, Proc. IEEE
INFOCOM, 2008.
[22] A. Wald, Sequential Analysis. Dover, 2004.
[23] T. Xu and Y. Cai, Feeling-Based Location Privacy Protection for Location-Based Services, Proc. 16th ACM Conf.
Computer Comm. Security (CCS), 2009.
[24] Y. Zhang, W. Liu, and W. Lou, Anonymous Communications in Mobile Ad Hoc Networks, Proc. IEEE INFOCOM, 2005

Espionage on Search Optimization using Dynamic Query Form


Prajkta Dagade1, Mansi Bhonsle2
Computer science and Engineering G.H.R.C.E.M, Ahmednagar, India1
Computer science and Engineering G.H.R.C.E.M, Wagholi, Pune, India2
prajkta.dagade@gmail.com1
mansi.bhonsle@gmail.com2

Abstract: Traditional data mining technologies cannot work with huge, heterogeneous, unstructured data. Modern web databases as well as scientific databases maintain tremendous amounts of heterogeneous data. These real-world databases may contain hundreds or even thousands of relations and attributes. The query form is one of the most widely used interfaces for querying databases. Traditional query forms are designed and pre-defined by a developer or DBA in various information management systems. But extracting useful information with such traditional query forms from large datasets and data streams is not possible, and it is difficult to design a set of static query forms to satisfy numerous ad-hoc database queries on those complex databases. In this paper, we propose a Search Optimization using Dynamic Query Form system (SODQF). SODQF is a query interface which is able to dynamically generate query forms for users. Unlike traditional document retrieval, users in database retrieval are often willing to perform many rounds of actions before identifying the final results. The dynamic query form captures user interest during the user interaction and adapts the query form interactively. Each iteration consists of three types of user interactions: selection from an assortment of forms, query form renovation, and query execution. The query form is enriched repeatedly until the user is satisfied with the query results. In this paper we mainly focus on the dynamic generation of query forms and the ranking of query form components.

Keywords: Query Form, Query Renovation, Information Extraction, Dynamic Approach, User Interaction, Keyword Search.
INTRODUCTION

Traditional database systems require the user to construct the database query from language primitives [1]. Such systems are powerful and expressive but not easy to use, especially when the user is not familiar with the database schema. The continuous advancement of wide-area network technology has resulted in rapid growth in the number of data sources available online and the demand for information access over the internet from a diversity of clients. The query form is one of the most important user interfaces used for querying databases. With the fast development of web information and scientific databases, modern databases have become tremendous and complex. Many databases, such as Freebase and DBPedia, have thousands of structured web entities [2], [4]. Hence it is difficult to design a set of static query forms to satisfy different ad-hoc database queries on those large databases.
Many database management and development tools, such as Easy Query [5], Cold Fusion [3], SAP and Microsoft Access, provide several mechanisms to let users create customized queries on databases. This customization totally depends on the user's manual editing [6]. It is difficult for users who are not familiar with the database schema; in short, those thousands of data attributes confuse the user.

EXISTING APPROACHES
In the analysis of the query form, which is one of the most useful user interfaces for querying databases, several different approaches have been proposed for prompt response to the user. Many research works focus on database interfaces which assist users in querying the relational database without structured query language. Query-by-example and the query form are the two most widely used database querying interfaces. The query form has been utilized in most real-time business or scientific information systems. The main goal in this paper is to focus on how to generate query forms and to discover techniques that assist users who are unfamiliar with, and do not want to use, Structured Query Language in posing ad-hoc structured queries over relational databases. However, the creation of customized queries totally depends on the user's manual editing [6].
Dynamic faceted search is a type of search engine where relevant facets are presented to the users according to their navigation paths [7], [8]. Dynamic faceted search engines are similar to dynamic query forms if we only consider the selection components of a query. However, besides selections, a database query form has other important components, such as projection components. Projection components control the output of the query form and need to be considered. Moreover, the designs of selection and projection have inherent influences on each other.
Autocompletion for database queries: in [9], [10], novel user interfaces have been developed to assist the user in typing database queries based on the query workload, the data distribution and the database schema. Queries in their work are in the forms of SQL and keywords.
An efficient SQL-based RDF querying scheme: devising schemes for efficient and scalable querying of Resource Description Framework (RDF) data has been an active area of current research. However, most approaches define new languages for querying RDF data, which has the following shortcomings:
1) They incur inefficiency as data has to be transformed from SQL to the corresponding language data format, and
2) They are difficult to integrate with SQL queries used in database applications.
The paper [11] proposes an SQL-based scheme that avoids these problems. Specifically, it introduces a SQL table function RDF_MATCH to query RDF data. The results of the RDF_MATCH table function can be further processed by SQL's rich querying capabilities and seamlessly combined with queries on traditional relational data. Furthermore, the RDF_MATCH table function invocation is rewritten as a SQL query, thereby avoiding run-time table function procedural overheads [11].
An SQL-ish query language for RDF provides consistent, human-understandable access to repositories of semantic data, whether stored files or large databases, enabling application programmers to create semantic web applications quickly. The paper [13] describes the conceptual framework for the query language: this is closely tied to the RDF graph, providing a base level of query over RDF data. There are a number of other query languages for RDF available; one of the earliest was rdfDB [12], and this is the basis for the SquishQL syntax. It is a simple graph query language designed for use in the rdfDB database system. It differs from SquishQL and does not contain the constraints on variables used by SquishQL. It returns its results as a table. The paper describes a refined framework for querying RDF data and implements SquishQL in three systems: 1) Inkling, which stores RDF data in a relational database or in external XML files; 2) the second implementation, RDQL, is part of the Jena RDF toolkit and combines query with manipulation of the RDF graph at a fine-grained level through the Jena RDF API; and 3) the third, in RDFStore, is a close coupling of RDF and Perl data access styles [13].


SeeDB is a DBMS that practically automates the especially laborious aspects of the search for useful insights in the data returned by a query. In a nutshell, given an input query Q, the new DBMS optimizer will explore not only the space of physical plans for Q, but also the space of potentially interesting or useful visualizations, where each visualization is coupled with a suitable query execution plan. SeeDB provides analysts with visualizations highlighting interesting aspects of the query results. There are several concrete problems in architecting SeeDB, relating to areas ranging from multi-query optimization and approximation to multi-criteria optimization. As per this work, the general area of bringing visualization closer to the DBMS is a challenging, yet important direction for database research in the future [14].
The automated technique for forms-based interfaces seeks to maximize the ability of a forms-based interface to support queries a user may ask, while bounding both the number of forms and the complexity of any one form. This automated technique generates a reasonable set of forms, one that can express 60-90% of user queries without any input from the database administrator. The technique does not consider any actual query log at hand; it is based solely on the schema and data content of a database. It breaks the forms-interface design problem down into three challenges:
1) First, determine the schema fragment(s) most likely to be of interest to a querying user.
2) Second, partition the filtered collection of schema elements into groups (not necessarily distinct).
3) Third, convert each of these groups of schema elements into a form that a user can employ to express a desired query [6].
Algae is another early query language for RDF. It uses S-expression syntax to do graph matching. It is used to power the W3C's Annotea annotation system [15] and other software at the W3C. It is written in Perl and can be used with an SQL database. It returns a set of triples in support of each result.
RQL [16], [17] is a combined RDF store and query system. It also provides a schema-validating parser and has a syntax targeted at RDFS [18] applications. It can perform queries similar to SquishQL, with the added power of support for transitive closure on RDFS subclass and subproperty. This is also the query language used by Sesame [19].
Skyline queries can be used to address multi-criteria decision making. This can enhance the performance of Dynamic
Query Forms, a query interface capable of generating query forms dynamically for users. As the success of Internet search
engines makes abundantly clear, when faced with discovering documents of interest, database querying can be formalized by
combining keyword search and forms. At query time, a user with a question to be answered issues standard keyword search
queries, but instead of returning tuples, the system returns forms relevant to the question. The concept of multi-criteria decision
making can thus be incorporated, and for addressing multi-criteria decision making the concept of the skyline query has been used [20].
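The skyline of a dataset is the set of tuples not dominated by any other tuple, i.e., no other tuple is at least as good on every criterion and strictly better on at least one. The following is a minimal sketch of this dominance test in Python; the block-nested-loop style, the hotel attributes, and the assumption that smaller values are better are illustrative choices, not details taken from [20].

```python
from typing import List, Tuple

def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
    """True if tuple a dominates b (smaller is better on every criterion)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(points: List[Tuple[float, ...]]) -> List[Tuple[float, ...]]:
    """Naive block-nested-loop skyline: keep points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Example: hotels described by (price, distance_to_beach), both to be minimized.
hotels = [(120, 2.0), (80, 5.0), (95, 1.5), (80, 1.5), (200, 0.5)]
print(skyline(hotels))   # -> [(80, 1.5), (200, 0.5)]
```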
QueryScope is a prototype query visualization system: a novel query workload visualization and exploration system. It
uses compact visual semantics to capture the key elements of queries. Enhanced with common query pattern mining and similarity search,
it enables database consultants to look up queries captured in related data warehouse projects and examine opportunities for
performance tuning. The main goals of this system are:
1) to communicate the essence of a query (or a collection of queries) pictorially through a controlled visual semantics,
2) to provide a variety of visualization options so that a user can focus on the aspects of queries of relevance,
3) to visualize queries in the context of a physical schema,
4) to facilitate searches for similar queries,
5) to make the tuning process productive and repeatable.
QueryScope was used successfully in several engagements; a thorough assessment of its value would
require putting it in the hands of many practitioners in real customer projects to judge tuning knowledge accumulation and the
repeatability of the tuning advice. Ling Hu [21] intends to pursue this avenue in the future and release the tool for public trial [21].

One more system for query, analysis, and visualization of multidimensional relational databases is Polaris. Polaris is an
interface for the exploration of multidimensional databases that extends the Pivot Table interface to directly generate a rich, expressive set
of graphical displays. Polaris builds tables using an algebraic formalism involving the fields of the database. The use of tables to
organize multiple graphs on a display is a technique often used by statisticians in their analysis of data [22], [23], [24]. The Polaris
interface is simple and expressive because it is built upon a formalism for constructing graphs and building data transformations. Polaris
can be extended in future work: one area is exploring database performance issues; a related area is expanding Polaris
to expose the hierarchical structure of data cubes; another is to leverage the direct correspondence of graphical
marks in Polaris to tuples in the relational database in order to generate database tables from a selected set of graphical marks [25].
DIOM is a dynamic query processing framework with strategies for improving query responsiveness. The features that
distinguish dynamic query processing in DIOM from other approaches, such as Carnot, Garlic [26], and TSIMMIS [27], are:
1) it allows users to pose queries on the fly, without relying on a predefined view that integrates all available information sources,
2) it identifies the importance of the query routing step in building an efficient query scheduling framework for distributed open environments,
3) it is a three-tier approach, i.e., query routing, heuristic-based optimization, and cost-based planning, to eliminate the worst schedules as
early as possible.
In the future it could be possible to adapt and incorporate state-of-the-art research results on improving query responsiveness into
the DIOM system, such as the query scrambling approach for dynamic query plan modification [28] and online aggregation for
fast delivery of aggregate queries by continuous sampling.
Usher is a probabilistic approach that can be used to design intelligent data entry forms that promote high data quality.
Usher learns a probabilistic model over the questions of the form and then applies this model at every step of the data
entry process to improve data quality. This system focuses on:
1) a data-driven approach, 2) learning a model for data entry, i.e., a probabilistic model of the data represented as a Bayesian
network over form questions, 3) question ordering, 4) question re-asking, and 5) evaluation of the benefits of Usher and of the quality of
this model.
Usher leverages data-driven insights to automate multiple steps in the data entry pipeline. Before entry, form
fields are ordered to promote rapid information capture, driven by a greedy information principle. The evaluation demonstrates the data quality benefits:
question ordering allows better prediction accuracy, and the re-asking model identifies erroneous responses effectively. This
model makes assumptions both about how errors are distributed and about what errors look like. Based on this, future work would be to
learn a model of data entry errors and adapt the system to catch them [29].


OUR APPROACH
In this paper we propose search optimization using a Dynamic Query Form (DQF), which is capable of dynamically generating query forms for
users. Dynamic queries let users "fly through" the database: DQF captures user interests during user interactions and adapts the query
form repeatedly. Each iteration consists of three types of interaction:
1) Selection from an assortment of form components: DQF recommends a ranked list of query form components to the user, and the user selects the desired form components for the current query form.
2) Query Form Renovation: the current query form is renovated with the selected components.
3) Query Execution: the user fills out the current query form and submits a query; DQF executes the query and displays the desired results, and the user gives feedback about the query results.
We mainly focus on developing methods to capture user interest beyond click-through feedback. For instance, we can add a text box for keyword queries as input from
users, rank query form components, and generate query forms dynamically.
ACKNOWLEDGMENT
This Project is by far the most significant accomplishment in my PG and it would be impossible without people (especially
my family) who supported me and believed in me.
I am thankful to Prof. Mansi Bhonsle, ME Coordinator of department of Computer Engineering, Raisoni College, Pune for giving me
the opportunity to work under her and lending every support at every stage of this work. I truly appreciate and value her esteemed
guidance and encouragement from the beginning to the end of this work. I am indebted to her for having helped me shape the problem
and providing insights towards the solution. Her trust and great support inspired me in the most important moments of making right
decisions and I am very glad to work with her.

CONCLUSION
We discussed different form generation approaches; among them, we compare three basic approaches to generating query forms:
1) DQF: the dynamic query form system mainly focused on in this paper;
2) SQF: the static query form generation approach proposed in [30]. It also uses a query workload: queries in the workload are first
divided into clusters, and each cluster is converted into a query form;


3) CQF: the customized query form generation used by many existing database clients, such as Microsoft Access, EasyQuery, and
ActiveQueryBuilder. In this paper we focused on an innovative form-based approach, called Dynamic Query Form, for search
optimization. This form generation approach can indeed produce forms, of manageable number and complexity, that are capable of
posing a majority of user queries against a given database. We considered a number of issues that arise in implementing this approach,
such as designing and generating forms in a systematic fashion, handling keyword queries, filtering out forms that would produce
no results with respect to a user's query, and ranking and displaying the forms in a way that helps users find useful forms more
promptly.
The dynamic approach often leads to a higher success rate and simpler queries compared with the static approach, and offers a
dramatic change from existing methods for querying databases. As future work we can extend this approach to non-relational data
and incorporate natural language processing into the dynamic query form system. We also plan to develop multiple
methods to capture users' interest in queries besides click feedback. For instance, we can add a text box for the user to input
keyword queries; this would be helpful when the data the user wants is not reachable through the click-through forms. In particular,
developing automated techniques for generating better form descriptions, especially in the presence of grouping of forms, appears to
be a challenging and important problem.
REFERENCES:
[1] Jeffrey D. Ullman. Principles of Database Systems. Computer Science Press, 1980.
[2] DBPedia. http://DBPedia.org.
[3] ColdFusion. http://www.adobe.com/products/coldfusion.
[4] Freebase. http://www.freebase.com.
[5] EasyQuery. http://
[6] M. Jayapandian and H. V. Jagadish. Automated Creation of a Form-based Database Query Interface. VLDB 2008.
[7] C. Li, N. Yan, S. B. Roy, L. Lisham, and G. Das. Facetedpedia: Dynamic generation of query-dependent faceted interfaces for Wikipedia. In Proceedings of WWW, pages 651-660, Raleigh, North Carolina, USA, April 2010.
[8] D. Rafiei, K. Bharat, and A. Shukla. Diversifying web search results. In Proceedings of WWW, pages 781-790, Raleigh, North Carolina, USA, April 2010.
[9] N. Khoussainova, Y. Kwon, M. Balazinska, and D. Suciu. SnipSuggest: Context-aware autocompletion for SQL. PVLDB, 4(1):22-33, 2010.
[10] A. Nandi and H. V. Jagadish. Assisted querying using instant response interfaces. In Proceedings of ACM SIGMOD, pages 1156-1158, 2007.
[11] Eugene Inseok Chong. An Efficient SQL-based RDF Querying Scheme, 2005.

[12] R. V. Guha. rdfDB: An RDF Database. Web page: http://guha.com/rdfdb/
[13] Andy Seaborne, Alberto Reggiori, Libby Miller. Three Implementations of SquishQL, a Simple RDF Query Language, April 26, 2002.
[14] Aditya Parameswaran. SeeDB: Visualizing Database Queries Efficiently.
[15] J. Kahan, M. Koivunen, E. Prud'hommeaux, R. R. Swick. Annotea: An Open RDF Infrastructure for Shared Web Annotations. http://www10.org/cdrom/papers/488/
[16] Greg Karvounarakis. The RDF Query Language (RQL).
[17] G. Karvounarakis, V. Christophides, D. Plexousakis, S. Alexaki. Querying Community Web Portals. SIGMOD 2000. http://www.ics.forth.gr/proj/isst/RDF/RQL/rql.html
[18] Dan Brickley, R. V. Guha (editors). Resource Description Framework (RDF) Schema Specification 1.0, 27 March 2000 (W3C Candidate Recommendation).
[19] Sesame, http://sesame.administrator.nl/, part of the OntoKnowledge project, http://www.ontoknowledge.org/
[20] Dr. Anil Rajput. Multicriteria Data Retrieval in Databases using an Advanced Database Operator. March 2014.
[21] Ling Hu, Yuan-Chi Chang. QueryScope: Visualizing Queries for Repeatable Database Tuning.
[22] J. Bertin. Graphics and Graphic Information Processing. Berlin: Walter de Gruyter, 1980.
[23] W. S. Cleveland. The Elements of Graphing Data. Pacific Grove, Calif.: Wadsworth Advanced Books and Software, 1985.
[24] E. R. Tufte. The Visual Display of Quantitative Information. Cheshire, Conn.: Graphics Press, 1983.
[25] Chris Stolte, Diane Tang. Polaris: A System for Query, Analysis, and Visualization of Multidimensional Relational Databases.
[26] L. Haas, D. Kossmann, E. Wimmers, and J. Yang. Optimizing queries across diverse data sources. In The International Conference on Very Large Data Bases, 1997.
[27] Y. Papakonstantinou, S. Abiteboul, and H. Garcia-Molina. Object Fusion in Mediator Systems. In VLDB '96, Bombay, India, Sept. 1996.
[28] L. Amsaleg, M. J. Franklin, A. Tomasic, and T. Urhan. Scrambling query plans to cope with unexpected delays. In Proceedings of the International Conference on Parallel and Distributed Information Systems, Miami Beach, Florida, December 1996.
[29] Kuang Chen, Harr Chen, Neil Conway. USHER: Improving Data Quality with Dynamic Forms, June 2011.
[30] Sathu G Rajan, K. Sathya Seelan. Dynamic Query Recommendation for Interactive Database Exploration, 2014.


A Survey on Image Classification Algorithm Based on Per-pixel


S. Arunadevi (1), Dr. S. Daniel Madan Raja (2)
(1) PG Scholar, Department of IT, Bannari Amman Institute of Technology, Sathyamangalam
(2) Associate Professor, Department of IT, Bannari Amman Institute of Technology, Sathyamangalam
arunadevi.se13@bitsathy.ac.in, daniel@bitsathy.ac.in

Abstract: In this paper, we present a literature survey of the various approaches used for classifying scenes, based mainly
on the objects in the given image. In scene classification, classifying images is an intricate process: images must be classified,
organized and accessed in an easy, fast and efficient way to achieve high classification accuracy within little execution time.
The classification of images into semantic categories is an interesting and significant
problem. Many different approaches relating to object and scene classification have been proposed in the last few years.
Keywords- Image accuracy, Image classification, Supervised classification, Unsupervised classification
I.Introduction
Image classification is an important and challenging task in various application domains, including biomedical imaging,
biometry, video surveillance, vehicle navigation, industrial visual inspection, robot navigation, and remote sensing.
Classification is an information processing task in which images are categorized into several groups. Categorization of
scene allows us to efficiently and rapidly analyze surroundings. A scene is characterized as a place in which we can move.
Classifying scenes into semantic categories (such as outdoor, indoor, and sports) is not an easy task. The scene
classification problem has two critical components: representing scenes, and learning models for semantic categories using
these representations. When images include occlusion, poor quality, noise or background clutter it is very difficult to
recognize an object in an image, and the task becomes even more challenging when an image contains multiple objects.
The main objective of image classification is to identify the features occurring in an image. Supervised classification and
unsupervised classification are the two main image classification methods. In supervised classification, a trained database is
needed and human annotation is required. In unsupervised classification, human annotation is not required and it is
more computer-automated. For scene classification many algorithms have been proposed for classifying images into semantic
categories (e.g. games, sports, street, bedroom, mountain, or coast) [18]. Classification is one of the several primary
categories of machine learning problems [6]. The indoor-outdoor scene retrieval problem addresses how high-level scene
properties can be inferred from the classification of low-level image features [1]. An automated method has been proposed, based
on the boosting algorithm, to estimate image orientations [18]. The classification of indoor and outdoor images has also been performed
based on edge analysis [4]. Analysis of texture requires the identification of proper attributes or features that differentiate the
textures of the image [2][6]. For the classification of scene images into war scene and nature scene images, the major tasks
are the identification of a feature extraction method and of a suitable classifier. This paper presents a literature survey of the various
approaches used for classifying images based on per-pixel classification.
II. Various Classification Methods
There are several ways of grouping the existing scene classification algorithms. Grouping could be based on the analyst's
contribution to the classification method, on the parameters of the data used, on the pixel information used, on the
knowledge available from ancillary data, or on the image attributes used. Based on the analyst's role, scene
classification can be supervised or unsupervised. Based on the parameters of the data used, scenes can be
classified with parametric or non-parametric classification. Based on pixel information, scene classification can be per-pixel, sub-pixel, per-field, or contextual. Based on the availability of knowledge, images can be classified as

knowledge-free and knowledge-based classification. In this paper, scene classification algorithms are described based on
per-pixel classification techniques in the following subsections.
A. Per-pixel Classification
In per-pixel classification each pixel is assigned to a class by considering the spectral similarities with the different classes
[21]. Per-pixel classification can be parametric or non-parametric. In parametric classification it is assumed that the
probability distribution of each of the classes is known. Usually, parameters like mean vector and covariance matrix are
obtained from the training data. However, the assumption of normal probability distribution of each class is often violated
for complex landscapes. Moreover, insufficient training samples may lead to a singular covariance matrix.
The most commonly used parametric classifier is the maximum likelihood classifier (MLC). Unlike parametric
classification, non-parametric classification is neither based on any assumption nor uses statistical parameters. This
classifier assigns pixels to classes based on the pixel's position in a discretely positioned feature space [22]. Some of the most
commonly used non-parametric classifiers are nearest neighbor (NN), support vector machine (SVM), artificial neural
network (ANN) based classifiers, and decision tree-based classifiers.

1) Nearest Neighbor Classification:
Nearest neighbor based algorithms are simple but effective methods used in statistical classification. Categorizing
unlabeled samples is based on their distance from the samples in the training dataset. Let a set of n labeled training samples
be given as S = {X1, X2, ..., Xn}, where Xi ∈ R^d. According to the nearest neighbor classification rule, an unlabeled
sample t is assigned to the class of Xi ∈ S if Xi happens to be the nearest neighbor of t. Usually the Euclidean distance is used
as the measure of nearness. According to kNN classification, on the other hand, a set of k nearest neighbors is
computed for an unlabeled sample instead of a single nearest neighbor. Then, the test sample is assigned to the class that
occurs most frequently among the k nearest training samples. If the ranges of the data in each dimension vary
considerably, this can affect the accuracy of nearest neighbor based classification. Thus, both the training and testing
data need to be normalized [27].
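As a concrete illustration, the following is a minimal k-nearest-neighbor classifier sketch in Python using only the standard library; the toy feature vectors, the class names, and k = 3 are illustrative assumptions rather than details from the surveyed papers.

```python
from collections import Counter
import math

def knn_predict(train_x, train_y, query, k=3):
    """Assign `query` to the class occurring most often among its k nearest training samples."""
    # In practice both training and test features should first be normalized, as noted above,
    # so that no single dimension dominates the Euclidean distance.
    dists = sorted((math.dist(x, query), y) for x, y in zip(train_x, train_y))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D feature vectors for two scene classes.
train_x = [(0.10, 0.20), (0.20, 0.10), (0.90, 0.80), (0.80, 0.90), (0.85, 0.75)]
train_y = ["indoor", "indoor", "outdoor", "outdoor", "outdoor"]
print(knn_predict(train_x, train_y, (0.15, 0.15)))   # -> indoor
```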

2) Support Vector Machine Classification:


SVM is an efficient supervised binary classification technique. SVM classification methods have often been found to provide
higher accuracies than other methods, such as MLC and ANN-based classification. SVM classifiers always deliver
unique solutions, since the optimization problem is convex. Some of the significant contributions in SVM classification
include the cluster-assumption based active learning for classifying remote sensing images proposed by Patra et al. [30],
the fusion of texture and SIFT-based descriptors for remote sensing image classification proposed by Risojević et al. [31], and
image classification based on linear distance coding proposed by Wang et al. [32]. These algorithms are presented briefly
as follows:
Patra et al. [30] develop a reliable active-learning based classification for remote sensing images. Collecting labeled
samples is time consuming and costly, and redundant samples slow down the training process. Thus, the training set needs
to be kept as small as possible to avoid redundancy while, at the same time, the patterns with the largest amount of information
are included in the training set. The proposed active learning method is implemented in the learning phase of the
SVM classifier. The SVM classifier is first trained with a small number of labeled samples. Each unlabeled sample is then
given an output score based on how likely or unlikely it is to be a member of a class. These output scores are plotted as a
histogram, so the most ambiguous samples generate output scores located in the valley region of the histogram. A
threshold is chosen to determine which unlabeled samples should be considered. This technique is not strongly affected by
the initial training samples chosen and it is simple in terms of computational complexity. Thus, it has an important advantage

in remote sensing applications. Risojević et al. [31] propose a hierarchical fusion of local and global descriptors in order
to classify high resolution remote sensing images. They suggest the use of a Gabor filter bank at S scales and K orientations.
An enhanced Gabor texture descriptor (EGTD) is developed based on the cross correlation between the spatial-frequency
sub-bands of the Gabor image decomposition.
Wang et al. [32] develop a linear distance coding (LDC) based classification method. A Bag of Words (BoW) based
classifier uses a three-step method: extracting local features of an image, generating a codebook and then
quantizing/encoding the local features accordingly, and finally pooling all the codes together to generate a global image
representation. However, because of the quantization process, information loss is inevitable in such a feature
extraction-coding-pooling method. The Naive Bayes Nearest Neighbor (NBNN) method tackles this information loss by
avoiding the quantization/coding process. Instead, it uses an image-to-class distance, which it calculates from local
features. Since the spatial context of images needs to be exploited more effectively for better classifier performance,
Spatial Pyramid Matching (SPM) is often used with coding-pooling based methods. However, SPM strictly requires that the
involved images exhibit a similar spatial layout. The proposed method uses the advantages of both BoW and NBNN and at
the same time relieves the strict spatial layout requirement of SPM. In this method each local feature is transformed into a
distance vector, each element of which represents certain class-specific semantics. Since the image representation produced by
LDC is complementary to the one produced by the original coding-pooling method, their combination can improve the
performance of a classifier. Performance is evaluated using both Locality-constrained Linear Coding (LLC)
and Localized Soft-Assignment Coding (LSA) as the linear coding method, whereas max pooling is always employed. The original
coding-pooling based image representation, the LDC based image representation, and their concatenation are used for evaluation.
It is observed that the concatenated representation outperforms the other two in terms of classification accuracy.
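For readers who want to experiment with SVM-based pipelines such as those surveyed above, the following is a minimal sketch using scikit-learn on precomputed feature vectors; the synthetic features, the RBF kernel, and the train/test split are illustrative assumptions, not the specific configurations of [30]-[32].

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for per-image feature vectors (e.g., texture or SIFT-based descriptors).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 16)), rng.normal(2.0, 1.0, (50, 16))])
y = np.array([0] * 50 + [1] * 50)            # two scene classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_train)       # scale features so no dimension dominates
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(scaler.transform(X_train), y_train)

print("test accuracy:", clf.score(scaler.transform(X_test), y_test))
```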
3) Decision Tree-based Classification:
A supervised classifier which requires less complicated training compared to the ANN is based on a decision tree. A
decision tree breaks up a complex decision into multiple simpler decisions so that the final solution resembles the desired
solution. A decision tree is a hierarchical structure consisting of nodes and directed edges. Each node is an attribute of an
observation that needs to be classified, whereas each edge represents a value the attribute can take. The root node is the
attribute which best divides the training data, whereas each leaf node is assigned a class label. Hunt's algorithm is the
most commonly used method for building a decision tree. Hunt's algorithm recursively partitions the training data until all
the members of each partition belong to the same class label, or there are no more attributes remaining for partitioning
[37]. Selecting the best split (also known as attribute selection) is a challenging task while building a decision tree, and
consequently several measures have been proposed in the literature. The goodness of a split can be measured quantitatively by
several metrics, such as information gain, information gain ratio, and the Gini index. While using a large dataset, a decision
tree representation can be significantly complex and, hence, classification may suffer from substantial complexity. As a
result, a number of pruning methods are employed to reduce the size of the decision tree by removing sections of the tree
which are insignificant in classifying observations. Two of the significant contributions in decision tree based
classification are discussed as follows.
Pal et al. [38] propose the use of a univariate decision tree classifier with error-based pruning (EBP). They use four
different attribute-selection measure metrics to verify that the classification accuracy is not affected by the choice of
attribute-selection measure metric. The accuracy of the decision tree classifier is measured while using different pruning
methods, such as reduced error pruning (REP), pessimistic error pruning (PEP), error-based pruning (EBP), critical value
pruning (CVP), and cost complexity pruning (CCP). It is revealed that EBP outperforms the other pruning methods. They
also perform a comparative evaluation between ANN-based classification and the proposed decision tree-based
classification. Accuracy and processing time are recorded for both the ANN-based classifier and the decision tree based
classifier, using ETM+ and InSAR datasets. It shows that for both datasets, the decision tree based classifier performs

better than the other in terms of both classification accuracy and processing time. Thangaparvathi et al. [39] propose a
modification of the RainForest algorithm, which was developed to address the scalability issue when a large dataset is
used. The data structures used in this proposed method, the IAVC set and IAVC group, are improved versions of the AVC set
(attribute-value class) and AVC group used in the RainForest algorithm.
Several other decision tree-based classifiers have been proposed which use a variation of Hunt's algorithm as the
decision tree induction method: [40, 41] use classification based on the ID3 algorithm, [42] uses the C4.5 decision
tree classifier, and [43] utilizes a CART-based decision tree.
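The split-selection metrics mentioned above are easy to state in code. Below is a minimal sketch that scores a candidate binary split by information gain and by weighted Gini impurity; the toy label lists are illustrative, and the sketch is a generic reading of these metrics rather than the procedure of any particular work among [38]-[43].

```python
from collections import Counter
import math

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def split_scores(parent, left, right):
    """Information gain and weighted Gini of splitting `parent` into `left` and `right`."""
    n = len(parent)
    w_left, w_right = len(left) / n, len(right) / n
    info_gain = entropy(parent) - (w_left * entropy(left) + w_right * entropy(right))
    weighted_gini = w_left * gini(left) + w_right * gini(right)
    return info_gain, weighted_gini

parent = ["water"] * 5 + ["forest"] * 5
left, right = ["water"] * 4 + ["forest"], ["forest"] * 4 + ["water"]
print(split_scores(parent, left, right))   # higher gain / lower Gini means a better split
```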
4) Artificial Neural Network-based Classification:

ANN is a computational model inspired by the biological neural network. It could be considered as a weighted directed
graph in which the nodes are neurons and the weighted edges are connections among the neurons. Each artificial neuron
computes a weighted sum of its input signals and generates an output, based on certain activation functions, such as
piecewise linear, sigmoid, Gaussian, etc. It consists of one input layer, one output layer, and depending on the application
it may or may not have hidden layers. The number of nodes at the output layer is equal to the number of information
classes, whereas the number of nodes at the input is equal to the dimensionality of each pixel. Feed-forward ANN with the
back propagation learning algorithm is most commonly used in ANN literature. In the learning phase, the network must
learn the connection weights iteratively from a set of training samples. The network gives an output, corresponding to
each input. The generated output is compared to the desired output. The error between these two is used to modify the
weights of the ANN. The training procedure ends when the error becomes less than a predefined threshold. Then, all the
testing data are fed into the classifier to perform the classification.
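To make the training loop described above concrete, here is a minimal single-hidden-layer feed-forward network trained with backpropagation in NumPy; the layer sizes, learning rate, and synthetic data are illustrative assumptions, not the configurations used in [33]-[36], and bias terms are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                               # 200 samples, 8 input features
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)    # synthetic binary labels

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(scale=0.5, size=(8, 10))                    # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(10, 1))                    # hidden -> output weights
lr = 0.5

for epoch in range(500):
    h = sigmoid(X @ W1)                                     # hidden activations
    out = sigmoid(h @ W2)                                   # network output
    err = out - y                                           # error w.r.t. the desired output
    # Backpropagate gradients of the squared error through the sigmoid layers.
    grad_W2 = h.T @ (err * out * (1 - out)) / len(X)
    grad_h = (err * out * (1 - out)) @ W2.T
    grad_W1 = X.T @ (grad_h * h * (1 - h)) / len(X)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print("training accuracy:", np.mean((out > 0.5) == y))
```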
For very high dimensional data, the learning time of a neural network can be very long, and the resulting ANN can be
very complex [33]. Consequently, several ANN-based classification algorithms have been proposed in literature, aiming
to minimize the complexity. Both [34, 35] suggest the use of the AdaBoost algorithm, i.e., building a strong classifier as a linear
combination of several weak classifiers. Both of them use a back propagation learning algorithm. A two-layer ANN with
a single hidden layer is used in [34] as a weak classifier. It uses 50 nodes in the input layer and 25 nodes in the hidden
layer. The proposed algorithm works by using weak classifier in a number of iterations (t) and by maintaining a
distribution of weights for the training samples. Initially the training samples are assigned equal distribution. However, in
subsequent iterations weights of poorly predicted training samples are increased. Finally, the weak classifier finds a weak
hypothesis which is suitable for the distribution of the samples at that iteration. A confidence score for the weak
hypothesis is also calculated.
AVIRIS data is used in [34] to compare the performance of the proposed algorithm with MLC. It is evaluated that the
maximum likelihood-based classifier requires 4,554 parameters for learning, whereas the proposed algorithm requires
only 975 parameters for learning but still the proposed method outperforms the maximum likelihood-based classifier. The
INRIA human database is used in [35] for evaluation. Three different combinations of weak classifiers are tested by
varying the number of hidden nodes from 1 to 3. When more hidden nodes are used, the accuracy of the
proposed classifier is seen to be better. Comparing the proposed classifier with global linear SVM, global kernel SVM,
and cascade linear SVM based classifiers shows that the proposed algorithm performs better than the others.
In order to address the complexity of the ANN in the case of high dimensional data, feature reduction mechanisms have also
been investigated in the literature. Majhi et al. [36] propose a low complexity ANN for recognition of handwritten numerals.
The proposed method uses an image database of handwritten numerals. Three hundred ninety-six samples for each numeral are
used for training purposes. At first, binary images of the numerals are converted into gray scale images. Then gradient and
curvature values are computed for each image, and subsequently 2,592-dimensional gradient feature vectors and 2,592-
dimensional curvature feature vectors are generated. A principal component analysis (PCA) technique is used to compress
the data and generate gradient feature vectors and curvature feature vectors of dimensions 66 and 64, respectively. These

feature vectors are extended with trigonometric terms and fed to a low complexity single layer classifier. For each numeral,
100 samples are used for testing purposes. The classification accuracies obtained using gradient feature vectors and curvature
feature vectors are 98% and 94%, respectively. Also, it is evaluated that the performance of the proposed algorithm is
comparable to the modified quadratic discriminant function (MQDF) based classifier, but it offers lower complexity.
III. CONCLUSION
Scene classification plays a key role in the field of computer vision and is affected by many factors. Classification
algorithms can be per-pixel, sub-pixel, per-field, contextual, knowledge-based, or high-level. The success of a classification
method depends on several factors. Per-pixel classification methods are the most used in practice; however, they suffer
from the mixed pixel problem, particularly for medium and coarse spatial resolution data. This paper aims at providing a
guide for selecting an appropriate per-pixel classification method by giving brief knowledge about the
different classification methods.

REFERENCES:
[1] G. Yan, J. F. Mas, B. H. P. Maathuis, Z. Xiangmin, and P. M. Van Dijk, "Comparison of pixel-based and
object-oriented image classification approaches: a case study in a coal fire area, Wuda, Inner Mongolia, China,"
International Journal of Remote Sensing, vol. 27, pp. 4039-4055, 2006.
[2] B. R. Kloer, "Hybrid parametric/non parametric image classification," in Technical Papers, ACSM-ASPRS Annual
Convention, 1994, pp. 307-316.
[3] M. P. Sampat, A. C. Bovik, J. K. Aggarwal, and K. R. Castleman, "Supervised parametric and non-parametric
classification of chromosome images," Pattern Recognition, vol. 38, pp. 1209- 1223, 2005.
[4] S. Patra and L. Bruzzone, "A Fast Cluster-Assumption Based Active-Learning Technique for Classification of Remote
Sensing Images," Geoscience and Remote Sensing, IEEE Transactions on, vol.49, pp. 1617-1626, 2011.
[5] V. Risojevic and Z. Babic, "Fusion of Global and Local Descriptors for Remote Sensing Image Classification,"
Geoscience and Remote Sensing Letters, IEEE, vol. 10, pp. 836-840, 2013.
[6] W. Zilei, F. Jiashi, Y. Shuicheng, and X. Hongsheng, "Linear Distance Coding for Image Classification," Image
Processing, IEEE Transactions on, vol. 22, pp. 537-548, 2013.
[7] R. Ablin and C. H. Sulochana, "A Survey of Hyperspectral Image Classification in Remote Sensing."
[8] Q. Sami ul Haq, L. Tao, and S. Yang, "Neural network based adaboosting approach for hyperspectral data
classification," in Computer Science and Network Technology (ICCSNT), 2011 International Conference on, 2011,
pp. 241-245.
[9] Y. Ren and B. Wang, "Fast human detection via a cascade of neural network classifiers," in Wireless, Mobile and
Multimedia Networks (ICWMNN 2010), IET 3rd International Conference on, 2010, pp. 323-326.
[10] B. Majhi, J. Satpathy, and M. Rout, "Efficient recognition of Odiya numerals using low complexity neural classifier,"
in Energy, Automation, and Signal (ICEAS), 2011 International Conference on, 2011, pp. 1-4.
[11] D. Pop, C. Jichici, and V. Negru, "A combinative method for decision tree construction," in Symbolic and Numeric
Algorithms for Scientific Computing, 2005. SYNASC 2005. Seventh International Symposium on, 2005, p. 5 pp.
[12] M. Pal and P. M. Mather, "A comparison of decision tree and backpropagation neural network classifiers for land use
classification," in Geoscience and Remote Sensing Symposium, 2002. IGARSS'02. 2002 IEEE International, 2002,
pp. 503-505.

[13] B. Thangaparvathi, D. Anandhavalli, and S. Mercy Shalinie, "A high speed decision tree classifier algorithm for huge
dataset," in Recent Trends in Information Technology (ICRTIT), 2011 International Conference on, 2011, pp. 695700.
[14] T. Srinivasan, M. Sathish, V. G. Krishna, and V. Krishnamoorthy, "TC-ID3: A TESTCODE Based ID3 Classifier for
Protein Coding Region Identification," in Computational Intelligence for Modelling, Control and Automation, 2006
and International Conference on Intelligent Agents, Web Technologies and Internet Commerce, International
Conference on, 2006, pp. 110-110.
[15] F. Yang, H. Jin, and H. Qi, "Study on the application of data mining for customer groups based on the modified ID3
algorithm in the ecommerce," in Computer Science and Information Processing (CSIP), 2012 International
Conference on, 2012, pp. 615-619.
[16] A. Taherkhani,"Recognizing sorting algorithms with the C4.5 decision tree classifier," in Program Comprehension
(ICPC), 2010 IEEE 18th International Conference on, 2010, pp. 72-75.
[17] Z. Zili, Q. Qiming, G. Junping, D. Yuzhi, Y.Yunjun, W. Zhaoqiang, et al., "CART-Based Rare Habitat Information
Extraction For Landsat ETM+ Image," in Geoscience and Remote Sensing Symposium, 2008. IGARSS 2008. IEEE
International, 2008, pp. III-1071-III-1074.
[18] Andrew Payne and Sameer Singh.2005. Indoor vs outdoor scene classification in digital photographs, Pattern
Recognition, pp. 1533-1545.
[19] Arivazhagan S, Ganesan L. 2003. Texture Segmentation Using Wavelet Transform. Pattern Recognition Letters, pp. 3197-3203.
[20] Bosch A, Zisserman A. 2008. Scene classification using a hybrid generative/discriminative approach, IEEE Trans. on
Pattern Analysis and Machine Intelligence, vol. 30, no. 4, pp. 712-727.
[21] Chang T, Kuo C,1993.Texture Analysis and classification with tree structured wavelet transform, IEEE Transactions
on Image Processing, Vol. 2, No.4, P. 429-441.
[22] Lei Zhang, Mingjing Li, Hong-Jiang Zhang, 2002. Boosting Image Orientation Detection with Indoor vs. Outdoor
Classification, Proceedings of the Sixth IEEE Workshop on Applications of Computer Vision, December 03-04, pp. 95


Improvement in Radiation Parameters of Rectangular Microstrip Patch Antenna
Monika Kiroriwal and Sanyog Rawat
Jaipur, Rajasthan
m.kiroriwal@rediffmail.com

Abstract: This paper introduces a new geometry of a rectangular microstrip patch antenna that improves the performance of a
conventional microstrip patch antenna. This antenna is designed to operate at 5.38 GHz with an enhanced bandwidth of 11.15%. For the
desired result, a triangular notch is inserted into the patch antenna. The proposed geometry provides improvement in other radiation
parameters, such as gain, efficiency and impedance behavior, when compared with the conventional antenna.

Keywords Rectangular Microstrip Patch Antenna, Antenna Feed, Antenna Radiation Pattern.
I. INTRODUCTION
Antennas are widely used for communication purposes. An antenna is a conductor that can transmit and receive signals such as
microwave, radio or satellite signals. Antennas are used in many fields such as space technology, aircraft, mobile
communication, missile tracking, remote sensing and satellite broadcasting [1]. There are different types of antenna, e.g. monopole,
dipole, leaky-wave, aperture, reflector, microstrip antenna and many more; the type of antenna depends on the application. Owing to
developments in communication systems, these systems require low cost, light weight, low profile antennas that are
capable of giving high performance over a wide band of frequencies [2][3]. To fulfill these requirements the use of microstrip patch antennas
is increasing day by day. Microstrip patch antennas are the most widely used antennas in the microwave frequency range. A microstrip patch
antenna consists of a conducting patch on a ground plane separated by a dielectric substrate. The conducting patch is made of a conducting
material such as copper or gold. The shape of the patch can be square, rectangular, circular, elliptical, semicircular [4], hexagonal,
triangular or another common shape [5]. Length, width, input impedance, gain and radiation pattern are the main parameters used to characterize
a microstrip antenna. For a properly matched input impedance there are four types of feeding techniques: microstrip line feed, coaxial
feed, aperture coupled feed and proximity coupled feed. The main advantage of the coaxial feeding technique is that the location of the feed can be
changed to a desired location on the patch to match its input impedance [6]-[8]. In this paper the coaxial feed technique is used.
Various methods are used to analyze a microstrip patch antenna: the transmission line model, the cavity model and the full-wave
model. In this paper the full-wave method is used to analyze the proposed geometry because this model is accurate, versatile and can work
on single elements, stacked elements, differently shaped elements and coupling; the other two models are complex in nature. Here a new
geometry of rectangular microstrip patch antenna is proposed. Narrow bandwidth and low gain are the main limitations of a microstrip
patch antenna. These limitations can be overcome by some modification of the patch geometry. Some examples show the work of
researchers to overcome these limitations: bandwidth was improved up to 3.5% using an H-shaped patch [9], and in another
example 4.01% and 11% bandwidth was obtained using a pi-shaped slot loaded patch antenna [10]. In this paper the bandwidth is improved
up to 11.15%, which is better than these previously proposed results.
In this paper, a novel geometry is proposed and simulated results are compared with conventional patch results. The
geometry was simulated using the IE3D electromagnetic simulator [11]. This software is a full-wave, method-of-moments based
electromagnetic simulator solving the current distribution on 3D and multilayer structures of general shapes. The second section
comprises the antenna geometry; in the third section the simulated results are discussed, followed by the conclusion in the fourth
section.
II. ANTENNA GEOMETRY
Rectangular and circular microstrip patch antennas are the most widely used antennas in wireless communication. This paper introduces a
new geometry of a compact rectangular antenna. Here the conventional rectangular microstrip patch antenna is considered as the reference
antenna, and its results are compared with the results of the simulated new proposed patch antenna. The geometry of
the conventional rectangular MPA shown in Fig. 1(a), using FR4 as the dielectric with dielectric constant εr = 4.4 and substrate
thickness h = 1.59 mm, is simulated by applying the IE3D full-wave electromagnetic simulator. The patch has a length and width of

14 mm and 20 mm. A 50 Ω coaxial probe is used to feed the microstrip patch at the feed coordinates, and its location is kept fixed for both the
conventional and the proposed new geometry of the rectangular MPA.
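For readers who want to reproduce a starting point for such a patch, the following sketch applies the standard transmission-line-model design equations for a rectangular patch (width, effective permittivity, length extension and physical length); it is a generic textbook calculation under the stated substrate values, not the exact design procedure used by the authors.

```python
import math

def design_rect_patch(f_r_hz: float, eps_r: float, h_m: float):
    """Standard transmission-line-model estimates for a rectangular microstrip patch."""
    c = 3e8
    # Patch width for efficient radiation.
    W = c / (2 * f_r_hz) * math.sqrt(2 / (eps_r + 1))
    # Effective dielectric constant for that width.
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h_m / W) ** -0.5
    # Length extension due to fringing fields.
    dL = 0.412 * h_m * ((eps_eff + 0.3) * (W / h_m + 0.264)) / ((eps_eff - 0.258) * (W / h_m + 0.8))
    # Physical patch length.
    L = c / (2 * f_r_hz * math.sqrt(eps_eff)) - 2 * dL
    return W, L, eps_eff

# FR4 substrate as in the paper: eps_r = 4.4, h = 1.59 mm, target resonance near 5.4 GHz.
W, L, eps_eff = design_rect_patch(5.4e9, 4.4, 1.59e-3)
print(f"W = {W*1e3:.2f} mm, L = {L*1e3:.2f} mm, eps_eff = {eps_eff:.3f}")
```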

Fig.1(a). Rectangular microstrip patch antenna

Fig.1(b).Proposed geometry of rectangular microstrip patch antenna

Fig 2 Manufactured rectangular microstrip patch antenna

The geometry proposed to improve the radiation parameters of the probe-fed patch antenna is shown in Fig. 1(b). Fig. 2 shows the
manufactured rectangular patch antenna with coaxial feed. There is a cut at a 90° angle from the centre. An impedance bandwidth of about
11.15% can be obtained from the above geometry.
III. SIMULATED RESULTS
A) Results of the conventional rectangular MPA
1) Radiation Pattern: A plot through which it can be visualized where the antenna transmits or receives power. The microstrip antenna
radiates normal to its patch surface, so the elevation patterns for φ = 0° and φ = 90° are important for the measurement.


Fig.3. 2D Radiation Pattern for rectangular microstrip patch antenna

Fig.4. Simulated Return Loss for rectangular microstrip patch antenna

The simulated E-plane and H-plane (2D) patterns of the rectangular MPA are illustrated in Fig. 3. The radiation pattern is smooth and
uniform over the band of frequencies.
2) Return Loss and Bandwidth: Return loss is a measure of how much power is delivered from the source to the load and is measured by the
S11 parameter. Bandwidth is the range of frequencies over which the antenna can operate effectively; it can be calculated from the
points 10 dB down in the return loss curve. The return loss of the rectangular microstrip patch antenna, shown in Fig. 4, is -14.19 dB at the
resonant frequency of 4.55 GHz, and from the return loss curve the bandwidth obtained is 4.56%.
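The fractional bandwidth quoted above can be read off a simulated S11 curve automatically. Below is a small sketch that finds the band where S11 stays below -10 dB and reports the fractional bandwidth about the resonant frequency; the sample frequency/S11 arrays are made-up placeholders, not the authors' IE3D data.

```python
import numpy as np

def fractional_bandwidth(freq_ghz, s11_db, threshold_db=-10.0):
    """Fractional bandwidth (%) of the band where S11 is below the threshold."""
    below = np.where(s11_db <= threshold_db)[0]
    if below.size == 0:
        return 0.0
    f_low, f_high = freq_ghz[below[0]], freq_ghz[below[-1]]
    f_res = freq_ghz[np.argmin(s11_db)]              # resonance taken at the deepest S11 dip
    return 100.0 * (f_high - f_low) / f_res

# Placeholder sweep: a single resonance dip near 4.55 GHz reaching about -14 dB.
freq = np.linspace(4.0, 5.2, 601)
s11 = -14.0 * np.exp(-((freq - 4.55) / 0.08) ** 2)
print(f"fractional bandwidth: {fractional_bandwidth(freq, s11):.2f}%")
```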
3) Smith Chart: The Smith chart provides information about the polarization and the impedance match of the radiating patch. The Smith
chart for the conventional rectangular MPA is given in Fig. 5. It shows an input impedance of 73.60 Ω with a small reactive part of 1.86 Ω at the
resonant frequency of 4.55 GHz. This Smith chart shows that the antenna is linearly polarized.

Fig.5. Smith Chart of rectangular microstrip patch antenna

B) Results of the proposed new geometry of the rectangular MPA
1) Radiation Pattern: The 2D radiation pattern is given in Fig. 6. The radiation pattern of the proposed new geometry of the rectangular
microstrip patch antenna is also smooth and uniform over the frequency range.
2) Return Loss and Bandwidth: The return loss of the proposed rectangular microstrip patch antenna, shown in Fig. 7, is -26.04 dB at the
resonant frequency of 5.39 GHz, and from the return loss curve the bandwidth obtained is 11.15%. Fig. 7 shows that the return loss of the
proposed microstrip antenna improves to -26.04 dB from the -14.19 dB of the conventional antenna, and the bandwidth is wider compared to
the conventional geometry.

Fig. 6. 2D radiation pattern for the proposed geometry of the rectangular microstrip patch antenna
Fig. 7. Simulated return loss for the proposed geometry of the rectangular microstrip patch antenna

3) Smith Chart: The Smith chart for the proposed geometry of the rectangular MPA is given in Fig. 8. It shows that an input impedance of
53.11 Ω with a reactive part of 3.995 Ω is obtained for the proposed antenna and that the antenna is circularly polarized with some impurity.

Fig.8. Smith Chart of proposed geometry of rectangular Microstrip patch antenna

Table I shows the comparison of the simulated results for the conventional geometry and the new proposed geometry.

Table I: Comparison of simulated results

Sr. No.  Characteristics             Conventional Rectangular Patch   Proposed Rectangular Patch
1.       Return Loss (dB)            -14.19                           -26.04
2.       Gain (dBi)                  2.08                             3.44
3.       Bandwidth (%)               4.56%                            11.15%
4.       Antenna Efficiency (%)      35.24%                           44.44%
5.       Radiation Efficiency (%)    43.33%                           44.62%

IV. CONCLUSION
In this paper, the radiation performance of the proposed new rectangular microstrip patch antenna is compared with the conventional
rectangular patch antenna. Simulated results indicate that the new proposed antenna exhibits a bandwidth of up to 11.15%. There is also
an improvement in radiation characteristics such as gain and efficiency. The radiation pattern is also found to be stable over the entire
bandwidth.

REFERENCES:
[1] Constantine A. Balanis, Antenna Theory: Analysis and Design, Third Edition, Wiley.
[2] R. Garg, P. Bhartia, I. J. Bahl and A. Ittipiboon, Microstrip Antenna Design Handbook, Artech House, 2001.
[3] D. M. Pozar, Microstrip Antennas, Proc. IEEE, Vol. 80, pp. 79-91, 1992.
[4] Kin-Lu Wong, Compact and Broadband Microstrip Antennas, John Wiley & Sons, 2002.
[5] K. Kumar, Sukhdeep Kr., Investigation on Octagonal Microstrip Patch Antenna for Radar & Spacecraft Application, International Journal of Scientific & Engineering Research, Vol. 2, 2011.
[6] K. O. Odeyemi, D. O. Akande and E. O. Ogunti, Design of S-band Rectangular Microstrip Patch Antenna, European Journal of Scientific Research, Vol. 55, 2011.
[7] T. D. Prasad, K. V. S. Kumar, K. Muinuddin Chisti, Comparisons of Circular and Rectangular Microstrip Patch Antennas, International Journal of Communication Engineering (IJCEA), Vol. 02, 2011.
[8] Abolfazl Azari, A New Super Wideband Fractal Microstrip Antenna, IEEE Transactions on Antennas & Propagation, Vol. 59, No. 5, May 2011.
[9] Sudhir Bhaskar and Sachin Kr. Gupta, Bandwidth Improvement of Microstrip Patch Antenna using H-shaped Patch, International Journal of Engineering Research and Applications, Vol. 2, Feb 2012.
[10] Rajesh Kr. Tripathi and Rakesh Khanna, Pi Shape Slot Loaded Patch Antenna for Wi-Max Application, Special Issue of International Journal of Computer Applications (0975-8887) on Electronics, Information and Communication Engineering, ICEICE No. 6, Dec 2011.
[11] Zeland IE3D software.


An Implementation of LSB Steganography Using DWT Technique


G. Raj Kumar, M. Maruthi Prasada Reddy, T. Lalith Kumar
Electronics & Communication Engineering, JNTU A University
Electronics & Communication Engineering, SVU University
Electronics & Communication Engineering, SVU University
Kadapa, A.P., India
rajkumarbalu.2008@gmail.com, maruthiprasadareddy@gmail.com, lalith.tappeta_cdp2005@yahoo.co.in

Abstract: Steganography is the art or practice of concealing a message, image, or file within another message, image, or file. In
steganography, there is a technique in which the least significant bit is modified to hide the secret message, known as least
significant bit (LSB) steganography. Least significant bit matching images are still not well detected, especially at low embedding
rates. In this paper, we improve least significant bit steganalyzers by analyzing and manipulating the features of some
existing least significant bit matching steganalysis techniques. This paper explains the LSB embedding technique with lifting based
DWT schemes, using a MicroBlaze processor implemented in an FPGA using SystemC coding.
Keywords: DWT, FPGA, LSB, MicroBlaze, Steganography.

1. INTRODUCTION
The art or practice of concealing a message, image, or file within another message, image, or file is called steganography. The
word steganography combines steganos, meaning "covered, concealed, or protected", and graphein, meaning "writing". Generally,
the hidden messages will appear to be (or be part of) something else: images, articles or some other cover text.
Steganalysis develops theories, methods and techniques that can be used to detect hidden messages in multimedia documents. The
documents without any hidden messages are called cover documents and the documents with hidden messages are denoted stego
documents. In steganography, there is a technique in which the least significant bit is modified to hide the secret message; this
technique is known as least significant bit (LSB) steganography or LSB embedding.
A digital image is described using a 2-D matrix of the intensities at each grid point (i.e. pixel). Typically, gray images use 8 bits,
whereas coloured images utilize 24 bits to describe the colour model, such as the RGB model. A steganography system which uses an image
as the cover object is referred to as an image steganography system.
The shift from cryptography to steganography is due to the fact that concealing the existence of the message in a stego-image enables the
secret message to be embedded in a cover image. Steganography conceptually implies that the message to be transmitted is not visible to the
casual eye. Steganography has been used for thousands of years to transmit data without being intercepted by unwanted viewers. It is an art
of hiding information inside other information. The main objective of steganography is the protection of the
contents of the hidden information. Images are ideal for information hiding because of the large amount of redundant space created
in the storing of images. Secret messages are transferred through unknown cover carriers in such a manner that the very existence of
the embedded messages is undetectable. Carriers include images, audio, video, text or any other digitally represented code or
transmission. The hidden message may be plaintext or any other data represented as a bit stream.
2. THE LSB TECHNIQUE
We have implemented the LSB steganography algorithm on gray scale images to reduce the complexity of the system. It is the process
of embedding data within the domain of other data; this data can be text, image, audio, or video content, and the scope of the
current paper covers only codes (integer values). The embedded data is invisible to the human eye, i.e., it is hidden in such a way that it
cannot be retrieved without knowing the extraction algorithm. In this paper we evaluated the technique using gray scale images of size

64×64, in which each pixel value was represented with an 8-bit representation.
Example:
Take the number 300; its binary equivalent 100101100 is embedded into the least significant bits of the pixel values of the cover
image. If we overlay these 9 bits over the LSBs of 9 bytes of cover image pixel values, we get the following (the message bits occupy the least
significant bit of each byte):
10010101 00001100 11001000
10010111 00001110 11001011
10011111 00010000 11001010
After embedding the message into the cover image, the stego image is obtained; this stego image is then transformed with the DWT
transformation technique so that an attacker cannot find where the message was embedded. At the receiver end the inverse DWT
is applied, and after LSB decryption the original image and message are obtained.

3. PROPOSED METHODOLOGY


3.1. THE LSB ENCRYPTION AND DECRYPTION PROCESS
The LSB encryption process consists of two steps, namely a masking process and the generation of the stego image.
The masking process is done by replacing the LSB of all pixel values with 0. This is done by performing an AND operation with 1111 1110
(254), after which the LSB of every pixel is zero.
Next, the binary value of the information is placed in the LSBs of the pixel values. This is done by identifying the positions of the 1s and
placing them in the LSB of the respective pixel values.
For example, given an n-bit binary message (the binary value of the integer), an AND operation is performed between the n-bit binary
message and a mask with 1 in the nth position and 0 elsewhere. Then an OR operation is performed between the output of that AND operation
and the binary value of the nth pixel. This operation is repeated for positions (n-1), (n-2), (n-3), ... until the least significant bit is reached.
So for an 8-bit binary message, the operations start with the 8th bit and finish with the 1st bit of the message. Now the LSBs of the pixel
values carry the message, and the image is termed the stego image. The LSB decryption process consists of extracting the message from the LSBs of the
pixel values.
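A minimal sketch of this masking-and-embedding procedure is shown below for the integer 300 from the earlier example; the 9 sample cover bytes and the bit ordering (most significant message bit first) are illustrative assumptions consistent with the description above, not the authors' MicroBlaze/SystemC implementation.

```python
def embed_lsb(cover_pixels, value, n_bits):
    """Embed the n_bits of `value` into the LSBs of the cover pixels (MSB of the message first)."""
    stego = []
    for i, pixel in enumerate(cover_pixels[:n_bits]):
        bit = (value >> (n_bits - 1 - i)) & 1      # select the i-th message bit
        stego.append((pixel & 0b11111110) | bit)   # mask LSB to 0 (AND with 254), then OR in the bit
    return stego + list(cover_pixels[n_bits:])

def extract_lsb(stego_pixels, n_bits):
    """Recover the embedded integer from the LSBs of the first n_bits stego pixels."""
    value = 0
    for pixel in stego_pixels[:n_bits]:
        value = (value << 1) | (pixel & 1)
    return value

cover = [0b10010100, 0b00001101, 0b11001001,       # 9 illustrative cover bytes
         0b10010110, 0b00001110, 0b11001010,
         0b10011110, 0b00010001, 0b11001011]
stego = embed_lsb(cover, 300, 9)                   # 300 = 100101100 in binary
print([format(p, "08b") for p in stego])
print(extract_lsb(stego, 9))                       # -> 300
```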
4. DISCRETE WAVELET TRANSFORM
Lifting schemes, also known as integer-based wavelets, differ from classical wavelet transforms in that they can be calculated in place. Similar
to wavelet transformations, lifting schemes break a signal (the image) into its component parts: trends, which approximate the original
values, and details, which refer to the noise or high frequency data in the image.
A lifting scheme produces integers, and this allows the original space to be used to hold the results. A lifting operation requires two
steps: one to calculate the trends, i.e. the low frequency values, and another to calculate the details, i.e. the high frequency values. Trends give
the original signal values (low frequency components) and details give the noise values (high frequency components). In this
lifting scheme based discrete wavelet transformation we used this calculation method to convert the image into the
frequency domain. In our project we propose a two level transformation in lifting based DWT schemes to convert image pixel values
into the frequency domain.
The lifting schemes which we have implemented in our project are based on the Haar lifting scheme.
The lifting calculations of the high and the low frequency values for image pixels are, respectively:
high[i, j] = odd[i, j] - even[i, j]
low[i, j] = even[i, j] + high[i, j] / 2
where i, j are the rows and columns of the 2D pixel matrix of a stego image.
4.1. 2-D TRANSFORM HIERARCHY

The 1-D wavelet transform can be extended to a two-dimensional (2-D) wavelet transform using separable wavelet filters. With
separable filters the 2-D transform can be computed by applying a 1-D transform to all the rows of the input, and then repeating on
all of the columns.


Fig 4.1.1: Sub-band labeling scheme for a one-level 2-D wavelet transform
Fig 4.1.2: Pictorial representation of the sub-band labeling scheme for a one-level 2-D wavelet transform

4.2. LIFTING BASED DWT SCHEMES
It is composed of three basic operation stages (splitting, predicting and updating), with a merging step used for reconstruction:
Splitting: the signal is split into even and odd pixels.
Predicting: the odd samples are predicted from the even samples; the prediction error (odd - even) gives the detail coefficients, i.e. the high-frequency values.
Updating: the even samples are updated with half of the detail coefficients (even + high/2) to obtain the approximation, i.e. the low-frequency values.
Merging/Combining: the reverse process of the DWT merges LL, LH, HL and HH to reconstruct the original image. In this procedure the image pixel values are divided into even and odd samples; the high-frequency value is obtained as odd - even and the low-frequency value as even + high/2. The procedure is then repeated in two phases. The first phase is the column filter, i.e. performing the DWT calculations on columns to get the low- and high-frequency values; the second phase is the row filter, i.e. applying the DWT calculation on rows in order to obtain LL, LH, HL and HH.
The inverse DWT follows the same process in reverse to reconstruct the original image pixel values from the LL, LH, HL and HH values; it is the process of converting the frequency domain back to image pixel values.
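A behavioural sketch of this two-phase procedure (column filter followed by row filter, giving the LL, LH, HL and HH sub-bands) is shown below under the same Haar lifting rules; the sub-band labels and function names are our own illustration, not the FPGA implementation.

import numpy as np

def haar_lift_axis(a, axis):
    """One Haar lifting pass along the given axis: returns (low, high) halves."""
    a = np.asarray(a, dtype=np.int64)
    even = np.take(a, range(0, a.shape[axis], 2), axis=axis)
    odd = np.take(a, range(1, a.shape[axis], 2), axis=axis)
    high = odd - even                      # detail values
    low = even + high // 2                 # trend values
    return low, high

def haar_lift_2d(img):
    """Column filter then row filter, as described above, producing four sub-bands."""
    low_c, high_c = haar_lift_axis(img, axis=0)     # phase 1: filter along columns
    ll, lh = haar_lift_axis(low_c, axis=1)          # phase 2: filter rows of the low band
    hl, hh = haar_lift_axis(high_c, axis=1)         # and rows of the high band
    return ll, lh, hl, hh

img = np.arange(64).reshape(8, 8)                   # stand-in for an 8x8 block of stego-image pixels
ll, lh, hl, hh = haar_lift_2d(img)
print(ll.shape, lh.shape, hl.shape, hh.shape)       # each sub-band is 4x4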
5. RESULTS
The Xilinx Platform Studio (XPS) is the development environment (GUI) used for designing the hardware portion of the embedded processor system. Visual Basic is used to observe the input image, the compressed image and the decompressed image. HyperTerminal is used to see the input and output messages.

Fig 5.1: Input stego Image
Fig 5.2: Compressed Image using DWT
Fig 5.3: Decompressed Image
Fig 5.4: Input and Output message
6. CONCLUSION:
In this paper we have presented a new method of LSB steganography using the lifting DWT process. The process was implemented on a MicroBlaze processor developed in an FPGA.
Future work can extend this approach to RGB (colour) image processing and also to the video processing level.

Fault Tolerant Over Hardware Efficient FIR Filter


P.Jeevitha 1, B.Ganesamoorthy 2
P.G. Student, Adhiparasakthi Engineering College,jeevitha.pece@gmail.com1
Assistant Professor, Adhiparasakthi Engineering College, bganesamoorthy@gmail.com,96003212272

Abstract: In today's world there is a great need for the design of low-power, area-efficient, high-performance DSP systems. The FIR filter is a fundamental device in a broad range of wireless, video and image processing applications. To obtain reliable operation, these filters are protected using an error correction code. A pipelined FIR filter design reduces the critical path by interleaving pipelined latches along the datapath, at the cost of increasing the number of latches and hence the system latency. A parallel-processed FIR filter design increases the sample rate by replicating the hardware, so that multiple inputs are processed in parallel and multiple outputs are generated at the same time, with the disadvantage of increased area. To overcome this disadvantage while retaining the advantages of parallel processing, a hardware-efficient filter structure is proposed, and this filter structure is protected against errors by the application of an error correction code.

Keywords FIR filter, Error Correction Code, Parallel Processing.


INTRODUCTION

Digital filters play a vital role in analog and digital communication. The main purpose of using filters is to eliminate undesired signal components, thereby providing a better quality signal at the output. Digital filters have the characteristic of producing a more stable output than analog filters, which makes them preferable to analog filters. There are two main kinds of digital filters: 1. FIR (Finite Impulse Response) and 2. IIR (Infinite Impulse Response) filters. The FIR filter is preferred over the IIR filter because of its efficient hardware implementation with fewer precision errors and its stable response with linear phase [1], which also lends itself to parallel processing.
Pipelining as well as parallel processing techniques can reduce the power consumption by lowering the supply voltage when the sampling speed does not need to increase. In order to reduce the large hardware cost, a technique called the Iterated Short Convolution (ISC) algorithm was proposed [2]. This ISC-based technique is transposed to obtain a hardware-efficient FIR filter structure, and it is highly effective when the length of the FIR filter is large. The method is based on the mixed-radix algorithm and the fast convolution algorithm. The application of error correction codes is studied in [3]. Another beneficial method is to exchange multipliers for adders, because adders occupy less silicon area than multipliers [4]. That filter structure exploits symmetric filter coefficients, thereby reducing the number of multipliers in the sub-filter section at the expense of additional adders in the pre-processing and post-processing blocks.
The FFA-based FIR filter structure has additional pre-processing and post-processing blocks; these adders mainly use full adders arranged as ripple carry adders, which cause more timing delay because they take longer to produce a result [5]. To overcome this, the ripple carry adders are replaced with carry save adders, which has been shown to provide an efficient hardware structure and reduce the timing delay. A new efficient FIR filter implementation has been proposed to reduce the hardware cost. It makes two contributions: 1. the filter spectrum characteristics are exploited in order to select the fast filter structure, and 2. a novel block filter quantization algorithm is introduced [6]. This technique reduces the number of binary adders by up to 20%.
For further hardware efficiency, the DSP system can be designed with a multiplier-less implementation. Such a system effectively replaces all the multipliers and adders with look-up tables (LUTs) and shifter-accumulators, thereby saving more hardware space [7]. A significant improvement in area, power and delay can also be achieved by using truncated multipliers. In that technique the LSBs of the output are operated on with deletion, reduction, truncation,
rounding and final addition, so there is no requirement for error compensation circuits [8]. The multipliers in the filter design can also be replaced with shifters and adders; this enhances the performance of the system by eliminating unwanted additions and thus reducing the switching power dissipation [9]. The error correction technique in the design is used to provide a reliable signal at the output.
In this brief, we provide an error-corrected, hardware-efficient filter structure by modifying the filter convolution structure as compared with the traditional FIR filter convolution. The design keeps the pre-processing and post-processing blocks constant while minimizing the number of multipliers in the efficient filter design.
II.EXISTING FILTER STRUCTURE

The existing parallel filter structure is shown in Fig. 1. The filter structure is designed for four inputs and four coefficients. The four filter inputs are x(4k), x(4k+1), x(4k+2) and x(4k+3), the filter coefficients are h0, h1, h2 and h3, and the generated outputs are y(4k), y(4k+1), y(4k+2) and y(4k+3).

A.Original Module
The original module is represented in Fig. 1. In this module the applied input is convolved with the filter coefficients to generate the convolved output. The original module operates according to equation (1) given below, where L is the number of filter taps:

y[n] = \sum_{l=0}^{L-1} x[n-l] h[l]          (1)

Fig.1.Existing Filter Structure
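To make equation (1) and the four-input/four-output block operation concrete, a small behavioural sketch is given below; the variable names are ours and the code is only an illustration of the arithmetic, not of the VLSI structure.

def fir(x, h):
    """Direct-form FIR convolution of equation (1): y[n] = sum over l of x[n-l]*h[l]."""
    L = len(h)
    return [sum(h[l] * x[n - l] for l in range(L) if n - l >= 0) for n in range(len(x))]

def fir_block4(x, h):
    """4-parallel (block) processing: consume x(4k)..x(4k+3) and emit y(4k)..y(4k+3) per step."""
    y = fir(x, h)
    return [y[4 * k: 4 * k + 4] for k in range(len(x) // 4)]

h = [1, 2, 3, 4]                      # h0..h3
x = [1, 0, 2, 1, 3, 1, 0, 2]          # two blocks of four samples
print(fir_block4(x, h))               # each inner list holds one block's four parallel outputs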

B.Redundant Module
The redundant module is the module used for achieving the reliable operation over the original module. The redundant module is said to
be the parity module which is used to generate the parity bits. These parity bits are represented as y(3k),y(3k+1) and y(3k+2).The module
takes a block of k bits and generated the block of n bits and the parity is obtained as n-k bits. The Parity check bit equations are given in

(2).
p1 = d1 ⊕ d2 ⊕ d3
p2 = d1 ⊕ d2 ⊕ d4          (2)
p3 = d1 ⊕ d3 ⊕ d4
The redundant module is shown in the Fig.2

Fig.2 Redundant Module

C.Single Error Correction Code


The single error correction module is used to correct a single-bit error in the convolved output of the original module. By applying the parity over the original module, the erroneous bit in the convolved output is detected and corrected with the help of equation (3).
z1[n] = \sum_{l=0}^{L-1} (x1[n-l] + x2[n-l] + x3[n-l]) h[l]
z2[n] = \sum_{l=0}^{L-1} (x1[n-l] + x2[n-l] + x4[n-l]) h[l]          (3)
z3[n] = \sum_{l=0}^{L-1} (x1[n-l] + x3[n-l] + x4[n-l]) h[l]
The single error correction module is given in the below Fig.3.

Fig.3.Single error correction module
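A behavioural sketch of this protection scheme is given below, following equations (2) and (3): four data filters, three parity filters over sums of the inputs, and syndrome-based correction of a single faulty output. It is our own illustration of the idea (using software fault injection), not the paper's hardware design.

def fir(x, h):
    L = len(h)
    return [sum(h[l] * x[n - l] for l in range(L) if n - l >= 0) for n in range(len(x))]

def add(*seqs):
    return [sum(v) for v in zip(*seqs)]

def protected_filter(x1, x2, x3, x4, h, inject_fault_in=None):
    """Run four data filters plus three parity filters (eq. 3) and correct one faulty output."""
    y = [fir(x, h) for x in (x1, x2, x3, x4)]
    if inject_fault_in is not None:                     # simulate a soft error in one filter
        y[inject_fault_in] = [v + 5 for v in y[inject_fault_in]]
    z1 = fir(add(x1, x2, x3), h)                        # parity filters, eq. (3)
    z2 = fir(add(x1, x2, x4), h)
    z3 = fir(add(x1, x3, x4), h)
    # Syndrome: which parity checks fail (FIR is linear, so z1 == y1+y2+y3 when fault-free)
    s = (z1 != add(y[0], y[1], y[2]),
         z2 != add(y[0], y[1], y[3]),
         z3 != add(y[0], y[2], y[3]))
    faulty = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 2, (0, 1, 1): 3}.get(tuple(int(b) for b in s))
    if faulty == 0:
        y[0] = [zz - a - b for zz, a, b in zip(z1, y[1], y[2])]   # rebuild y1 from z1, y2, y3
    elif faulty == 1:
        y[1] = [zz - a - b for zz, a, b in zip(z1, y[0], y[2])]
    elif faulty == 2:
        y[2] = [zz - a - b for zz, a, b in zip(z1, y[0], y[1])]
    elif faulty == 3:
        y[3] = [zz - a - b for zz, a, b in zip(z2, y[0], y[1])]
    return y

h = [1, 2, 1]
xs = [[1, 2, 3, 4], [0, 1, 0, 1], [2, 2, 2, 2], [3, 0, 1, 0]]
clean = protected_filter(*xs, h=h)
corrected = protected_filter(*xs, h=h, inject_fault_in=2)
assert corrected == clean            # the single faulty output is detected and corrected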


III.PROPOSED FILTER STRUCTURE


The proposed filter structure differs from the existing parallel filter structure in the implementation of the adder structure. This efficient adder structure implementation in the parallel filter is shown in Fig. 4.

Fig.4 Proposed Hardware efficient filter


A. Original Module
The original module is represented in Fig. 4. In this module the applied input is convolved using the efficient adder structure.

B.Redundant Module
The redundant module is same as that of the existing systems redundant module used for achieving the reliable operation over the
main module. This module is used for generating the parity bits also represented as z1, z2 and z3.The parity is calculated as n-k bits.
Let us consider a simple example of Hamming code with k=4 and n=7,here the parity bits p1,p2 and p3 are computed based on the
data bits d1,d2,d3 and d4 as follows:

p1 = d1d2d3
p2 = d1d2d4
p3 = d1d3d4
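For illustration, a minimal Hamming(7,4) sketch of this parity generation and single-data-bit correction is given below; the code is our own and only makes the k = 4, n = 7 example concrete.

def hamming74_encode(d1, d2, d3, d4):
    """Append the three parity bits defined above to the four data bits."""
    p1 = d1 ^ d2 ^ d3
    p2 = d1 ^ d2 ^ d4
    p3 = d1 ^ d3 ^ d4
    return [d1, d2, d3, d4, p1, p2, p3]

def hamming74_correct(word):
    """Recompute the parities and flip the single data bit (if any) that explains the mismatch."""
    d1, d2, d3, d4, p1, p2, p3 = word
    s = (p1 ^ d1 ^ d2 ^ d3, p2 ^ d1 ^ d2 ^ d4, p3 ^ d1 ^ d3 ^ d4)
    position = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 2, (0, 1, 1): 3}.get(s)
    data = [d1, d2, d3, d4]
    if position is not None:          # otherwise the data bits are already correct
        data[position] ^= 1
    return data

codeword = hamming74_encode(1, 0, 1, 1)
codeword[2] ^= 1                      # inject a single-bit error in d3
assert hamming74_correct(codeword) == [1, 0, 1, 1]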
C.Single Error Correction Module
The single error correction module is used to correct the single bit error in the generated convoluted output at the proposed area
efficient adder structure. The single error correction module is shown in the Fig.5.

Fig.5 Single error correction circuit


IV.SIMULATION RESULTS
The Simulation results and the RTL schematic for the existing and proposed module are given below:

Fig.6. RTL schematic view proposed filter structure

Fig.7.Output window with parity

Fig.8 Output window with error correction circuit.

Table 1. Comparison table for Existing and Proposed Blocks

The proposed module is simulated using ModelSim XE simulator

V.CONCLUSION
In this paper, the filter is designed with an efficient hardware implementation structure to obtain reduced power at reduced hardware cost. The modified filter structure generates the same result as the existing module. The power consumption is reduced from 320 mW to 312 mW. The resource utilization is obtained by analyzing the slices, flip-flops, gate clocks used, and IOBs. Single-bit error correction is achieved by using the Hamming error correction code in the proposed system.

REFERENCES:
[1] K. K. Parhi, VLSI Digital Signal Processing Systems: Design and Implementation. New York: Wiley, 1999.
[2] Chao Cheng and Keshab K. Parhi, "Hardware Efficient Fast Parallel FIR Filter Structures Based on Iterated Short Convolution," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 51, no. 8, August 2004.
[3] Zhen Gao, Pedro Reviriego, Wen Pan, Zhan Xu, Ming Zhao, Jing Wang, and Juan Antonio Maestro, "Fault Tolerant Parallel Filters Based on Error Correction Codes," 1063-8210, 2014 IEEE.
[4] Yu-Chi Tsao and Ken Choi, "Area-Efficient Parallel FIR Digital Filter Structures for Symmetric Convolutions Based on Fast FIR Algorithm," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 20, no. 2, February 2012.
[5] G. Ramya Sudha and Ch. Manohar, "Reducing computation delay of Parallel FIR Digital Filter Structures for Symmetric Convolutions Based on Fast FIR Algorithm by using CSA," IOSR Journal of VLSI and Signal Processing (IOSR-JVSP), vol. 4, issue 1, ver. I, Jan. 2014.
[6] J. G. Chung and K. K. Parhi, "Frequency-spectrum-based low-area low-power parallel FIR filter design," EURASIP J. Appl. Signal Processing, vol. 2002, no. 9, pp. 444-453, 2002.
[7] Krishnapriya P. N. and Arathy Iyer, "Power and Area Efficient Implementation for Parallel FIR Filters Using FFAs and DA," International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, vol. 2, special issue 1, December 2013.
[8] R. Devarani and C. S. Manikanda Babu, "Design and Implementation of Truncated Multipliers for Precision Improvement and Its Application to a Filter Structure," International Journal of Modern Engineering Research (IJMER), vol. 2, issue 6, Nov.-Dec. 2012.
[9] Thodeti Madhukar, M. R. N. Tagore, and Giri Babu Kande, "Area-Efficient VLSI Implementation for Parallel Linear-Phase FIR Digital Filters of Odd Length Based on Fast FIR Algorithm," IOSR Journal of Electrical and Electronics Engineering (IOSR-JEEE), vol. 8, issue 2, Nov.-Dec.
[10] M. Nicolaidis, "Design for soft error mitigation," IEEE Trans. Device Mater. Rel., vol. 5, no. 3, pp. 405-418, Sep. 2005.
[11] B. Shim and N. Shanbhag, "Energy-efficient soft error-tolerant digital signal processing," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 14, no. 4, pp. 336-348, Apr. 2006.
[12] S. Pontarelli, G. C. Cardarilli, M. Re, and A. Salsano, "Totally fault tolerant RNS based FIR filters," in Proc. IEEE IOLTS, Jul. 2008, pp. 192-194.
[13] Y.-H. Huang, "High-efficiency soft-error-tolerant digital signal processing using fine-grain subword-detection processing," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 18, no. 2, pp. 291-304, Feb. 2010.

Implementation of Fruit Grading System by Image Processing and Data Classifier - A Review

Miss. Anuradha P. Gawande, Prof. S. S. Dhande

Department of Information Technology, Sipna College of Engineering & Technology, Amravati, India
Department of Computer Science and Engineering, Sipna College of Engineering & Technology, India
anuradhagawande2014@gmail.com, sheetaldhandedandge@gmail.com

Abstract: Sorting of fruits and vegetables is one of the most important processes in fruit production, yet this process is typically performed manually in most countries. In India, particularly in the Vidarbha region, oranges are produced on a large scale, so automated sorting and grading of oranges would be very helpful to the industry. Machine learning and computer vision techniques have been applied to evaluating food quality as well as to crop grading. Different learning methods are analyzed for the task of classifying infected/uninfected orange fruits from images of their external surface. Linear discriminant analysis is then used to transform the feature space after feature fusion for better separability, while three classifiers, naive Bayes, k-nearest neighbor and support vector machines, will be investigated.

Keywords: Fruit Quality, Orange fruit, color, texture, PCA, pattern classification, Linear Discriminant Analysis.
INTRODUCTION

The general aim is to fill a vital gap in the application of computer vision as a tool for the industrial inspection of fruits and vegetables. Computer vision techniques detect the quality of agricultural products, driven by the need to find an alternative to traditional manual inspection methods, to eliminate contact with the product, to increase reliability, and to introduce flexibility to inspection lines while increasing the productivity and competitiveness of agriculture industries [1][2].
Computer applications in agriculture and food industries have been applied in the areas of sorting and grading of fresh produce and detection of defects such as cracks, dark spots and bruises on fresh fruits and seeds. The recent technologies of image analysis and machine vision have not been fully explored in the development of automated machines for the agricultural and food industries. Automated sorting has undergone substantial growth in the food industries of developed and developing nations owing to the availability of infrastructure [4].
Citrus fruits occupy a vital position in India's fruit production; India ranks 64th in productivity of oranges. Oranges are assessed for essential attributes such as maturity, firmness, texture and size. Different fruits or vegetables, once shipped from one place to another, ought to be checked for quality control. The manual technique of handpicking the best fruit or vegetables among the stock is a time-consuming process. Oranges are among the most commonly grown fruit trees in the world, and in India the city most famous for growing oranges is Nagpur.
Quality inspection of food and agricultural produce is difficult and labour intensive. At the same time, with increased expectations for food products of high quality and safety standards, the need for accurate, fast and objective determination of these quality characteristics in food products continues to grow. However, in India these operations are usually manual, which is unreliable because human decisions in identifying quality factors such as appearance, flavour, nutrients, texture, etc., are not consistent, and are slow and subjective [3].
A number of challenges had to be overcome to enable the system to perform automatic recognition of the type of fruit or vegetable using images from the camera. Many types of vegetables, grains and fruits are subject to significant variation in colour and texture, depending on how ripe they are [20]. For example, bananas range from being uniformly green, to yellow, to patchy and brown. The fruit and vegetable market is becoming highly selective, requiring suppliers to distribute their goods according to high standards of quality and presentation. Recognizing different kinds of vegetables and fruits is a recurrent task in supermarkets, where the cashier must be able to identify not only the species of a particular fruit (i.e., banana, apple, pear) but also its variety, which may determine its price [5].

II. LITERATURE SURVEY


A lot of research has been done on fruit sorting and grading systems. VON BECKMANN and BULLEY (1978) state that simultaneous fruit sorting by size and colour would save time, reducing fruit handling.
For the greater number of fruits, colour is associated with physiological ripeness and can be used as a sorting criterion. ARIAS et al. (2000) report that the surface colour of tomato is a major factor in determining the ripeness of this fruit.
VAN DER HEIJDEN et al. (2000) and POLDER et al. (2000) also compared spectral images with standard RGB images for classifying tomatoes into different ripeness classes using individual pixels and obtained similar results.
Polder et al. (2002) used principal component analysis (PCA) in conjunction with spectral imaging to grade tomato fruits according to their ripeness level [9]. Images are taken from several sides to cover the whole fruit surface, and an unsupervised method is used for in-line calibration, which is a necessary requirement for real-time sorting of tomatoes on compound concentration using spectral images.

Fig1: RGB color images and concentration images of six tomatoes ranging from raw to overripe.
JAHNS et al. (2001) also report that colour, spots and bruises are easily recognized at the pixel level. HAHN (2002) reports the application of a multi-colour system to select tomatoes considered physiologically immature, claiming an accuracy of approximately 85%.
POLDER et al. (2003) report that they found good correlation between spectral images and the lycopene content of tomato, which is responsible for the fruit's red colour and varies according to the ripeness stage.
KADER (2002) reports that it was necessary to capture a certain number of images to obtain the fruit diameter, recommending the application of video images to inspect the fruit appearance [10].
An initial calibration to relate the values found to true compound concentrations is still needed, but changes during the sorting process, such as aging of light sources, drift of sensors or new batches of tomatoes of different origin or variety, can be recalibrated using the proposed method. The system was validated using the leave-one-out cross-validation technique, with different tomatoes in the calibration and validation phases [10]. For a sounder conclusion, a new experiment with tomatoes of different origin, or with changes in the acquisition system, needs to be done.
The proposed system could be implemented in a practical quality sorting system. A big advantage of this system compared to supervised systems is that less reference data are needed for calibration. This makes the system easier, faster and cheaper to use.
Lino et al. (2008) proposed a grading system for lemons and tomatoes using colour features for ripeness detection. In this system, during ripening of the tomato an increase of the red colour and a decrease of the green colour occurred, indicating chlorophyll degradation while lycopene started to be produced. Ripeness levels for tomatoes were estimated by measuring decrements in the luminance, blue and green channels as well as increments in the red channel.
Fernando et al. [11] (2010) built a system to diagnose six different types of surface defects in citrus fruits using a multivariate image analysis strategy. Images were unfolded and projected onto a reference eigenspace to arrive at a score matrix used to compute defective maps. A 94.2% accuracy was reported [7].
Haiguang et al. [12] (2012) classified two kinds of wheat diseases based on colour, shape and texture features used to train a back-propagation neural network. The resulting system achieved a classification accuracy of over 90%.
Cho et al. [13], in 2013, used hyperspectral fluorescence imaging for detecting cracking defects on cherry tomatoes.
Omid et al. [14], in 2013, used shape, texture and colour features to sort tomato fruits according to their circularity, size, maturity and defects. They achieved 84.4% accuracy for defect detection using a probabilistic neural network (PNN) classifier. Colour, texture and shape features have been evaluated for fruit defect detection systems, also in conjunction with PNNs [14].

CONCLUSION
In this paper, we have proposed a system for grading orange fruits according to external surface infections. Colour moments for each of the RGB and HSV channels are used for colour information, while GLCM statistics and wavelet texture features are used for texture. The features are fused and normalized using Z-score normalization to make the three feature sets consistent. PCA has been used to reduce the feature vector length to the most significant 28 features, while LDA has been used to reduce the total number of features to two discriminative features. The proposed system was applied to 177 samples of twelve different fruit disorders. Support Vector Machine, K-Nearest Neighbor and Naive Bayes classifiers are evaluated for grading sound fruits. The system succeeded in detecting the infected/uninfected fruits from four sides and achieved an accuracy ranging approximately from 85% to 94%.
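A hedged sketch of such a pipeline, using scikit-learn and a synthetic feature matrix standing in for the fused colour-moment, GLCM and wavelet features, could look as follows; the sample counts, class labels and feature dimensions are illustrative assumptions, not the paper's data.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Stand-in for the fused colour-moment + GLCM + wavelet feature vectors of the fruit samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(177, 60))                 # 177 samples, 60 fused features (synthetic)
y = rng.integers(0, 3, size=177)               # 3 illustrative classes (e.g. sound / two disorders)

for name, clf in [("SVM", SVC()), ("KNN", KNeighborsClassifier(3)), ("NB", GaussianNB())]:
    pipe = make_pipeline(
        StandardScaler(),                                 # Z-score normalization of the fused features
        PCA(n_components=28),                             # keep the 28 most significant components
        LinearDiscriminantAnalysis(n_components=2),       # project to 2 discriminative features
        clf,
    )
    print(name, cross_val_score(pipe, X, y, cv=5).mean())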

REFERENCES:
[1] Timmermans, A. J. M., "Computer Vision System for Online Sorting of Pot Plants Based on Learning Techniques," Acta Horticulturae, 421, pp. 91-98, 1998.
[2] Yam, K. L., and E. P. Spyridon, "A Simple Digital Imaging Method for Measuring and Analyzing Colour of Food Surfaces," Journal of Food Engineering, 61, pp. 137-142, 2003.
[3] Francis, F. J., "Colour Quality Evaluation of Horticultural Crops," HortScience, 15(1), pp. 14-15, 1980.
[4] Sapan Naik and Bankim Patel, "Usage of Image Processing and Machine Learning Techniques in Agriculture - Fruit Sorting," CSI Communications, October 2013.
[5] Jyoti A. Kodagali and S. Balaji, "Computer Vision and Image Analysis based Techniques for Automatic Characterization of Fruits - a Review," International Journal of Computer Applications (0975-8887), vol. 50, no. 6, July 2012.
[6] M. Turk and A. Pentland, "Eigenfaces for recognition," J. Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
[7] S. Mika, G. Rätsch, J. Weston, B. Schölkopf, and K. R. Müller, "Fisher discriminant analysis with kernels," in Neural Networks for Signal Processing IX: 1999 IEEE Signal Processing Society Workshop, pp. 41-48, 1999.
[8] E. Elhariri, N. El-Bendary, M. M. M. Fouad, J. Platoš, A. E. Hussein, and A. M. Hassanien, "Multi-class SVM Based Classification Approach for Tomato Ripeness," Innovations in Bio-inspired Computing and Applications, Advances in Intelligent Systems and Computing, vol. 237, pp. 175-186, 2014.
[9] G. Polder, G. W. van der Heijden, and I. T. Young, "Tomato sorting using independent component analysis on spectral images," Real-Time Imaging, vol. 9, no. 4, pp. 253-259, 2003.
[10] A. C. L. Lino, J. Sanches, and I. M. D. Fabbro, "Image processing techniques for lemons and tomatoes classification," Bragantia, vol. 67, no. 3, pp. 785-789, 2008.
[11] F. López-García, G. Andreu-García, J. Blasco, N. Aleixos, and J. M. Valiente, "Automatic detection of skin defects in citrus fruits using a multivariate image analysis," Computers and Electronics in Agriculture, vol. 71, pp. 189-197, 2010.
[12] H. Wang, G. Li, Z. Ma, and X. Li, "Image recognition of plant diseases based on backpropagation networks," in 5th International Congress on Image and Signal Processing (CISP 2012), pp. 894-900, 2012.
[13] B. K. Cho, M. S. Kim, I. S. Baek, D. Y. Kim, W. H. Lee, J. Kim, H. Bae, and Y. S. Kim, "Detection of cuticle defects on cherry tomatoes using hyperspectral fluorescence imagery," Postharvest Biology and Technology, vol. 76, pp. 40-49, 2013.
[14] O. O. Arjenaki, P. A. Moghaddam, and A. M. Motlagh, "Online tomato sorting based on shape, maturity, size, and surface defects using machine vision," Turkish Journal of Agriculture and Forestry, vol. 37, pp. 62-68, 2013.
[15] D. Gadkari, "Image quality analysis using GLCM," Master of Science in Modeling and Simulation thesis, College of Arts and Sciences, University of Central Florida, Orlando, Florida, 2004. http://etd.fcla.edu/CF/CFE0000273 (downloaded May 2014).
[16] F. Albregtsen, "Statistical texture measures computed from gray level co-occurrence matrices," Image Processing Laboratory, Department of Informatics, University of Oslo, pp. 1-14, 1995.
[17] A. Tharwat, A. M. Ghanem, and A. E. Hassanien, "Three different classifiers for facial age estimation based on K-nearest neighbor," in 9th International Computer Engineering Conference (ICENCO), pp. 55-60, 2013.
[18] A. Jain, K. Nandakumar, and A. Ross, "Score normalization in multimodal biometric systems," Pattern Recognition, vol. 38, no. 12, pp. 2270-2285, 2005.
[19] D. D. Lewis, "Naive (Bayes) at forty: The independence assumption in information retrieval," in Proceedings of the 10th European Conference on Machine Learning (ECML-98), pp. 4-15, 1998.
[20] R. Ghaffari, F. Zhang, D. Iliescu, E. Hines, M. S. Leeson, R. Napier, and J. Clarkson, "Early Detection of Diseases in Tomato Crops: An Electronic Nose and Intelligent Systems Approach," in The IEEE 2010 International Joint Conference on Neural Networks (IJCNN), pp. 1-6, 2010.
[21] M. Stricker and M. Orengo, "Similarity of color images," in SPIE Conference on Storage and Retrieval for Image and Video Databases III, vol. 2420, pp. 381-392, Feb. 1995.

DATA PRIVACY USING MDSRRC


Priyanka K. Dhongade1, Prof. Yogesh Nagargoje2
CSE Department
Everest Educational Society's Group of Institutions,
Dr. Seema Quadri Institute of Tech, Dr. B.A.M.U., Aurangabad, India
priyankadhongade18@gmail.com, yogeshvcet1@gmail.com

Abstract: The use of data mining techniques and their related applications has increased in recent years to extract important knowledge from large amounts of data. This has increased the disclosure risk to sensitive information when the data is released to outside parties. A database containing sensitive knowledge must be protected against unauthorized access, so it has become necessary to hide sensitive knowledge in the database. In this paper, we propose a heuristic-based algorithm named MDSRRC (Modified Decrease Support of R.H.S. Item of Rule Clusters) to hide sensitive association rules with multiple items in the consequent (R.H.S.) and antecedent (L.H.S.). This algorithm overcomes the limitations of the existing rule hiding algorithm DSRRC. The proposed algorithm selects the items and transactions based on certain criteria and modifies transactions to hide the sensitive information. Experimental results show that the proposed algorithm is highly efficient and maintains database quality.

KEYWORDS ASSOCIATION RULE, SENSITIVE PATTERN, PRIVACY PRESERVING DATA MINING (PPDM), SENSITIVITY
INTRODUCTION

Association rule mining is widely used in data mining to find relationships between itemsets. Many organizations disclose their information or database for mutual benefit, to find useful information for decision-making purposes and to improve their business schemes. However, this database may contain some private data that the organization does not want to disclose. The issue of privacy plays an important role when several organizations share their data for mutual benefit but none of them wants to disclose its private data. Therefore, before disclosing the database, sensitive patterns must be hidden, and to solve this issue PPDM (privacy preserving data mining) techniques are helpful to enhance the security of the database.
These approaches in general have the advantage of requiring a minimal amount of input (usually the database, the information to protect and a few other parameters), so little effort is required from the user in order to apply them. The selection of rules requires the data mining process to be executed first. For association rule hiding, two basic approaches have been proposed. The first approach hides one rule at a time. It first selects the transactions that contain the items in a given rule, and then tries to modify them transaction by transaction until the confidence or support of the rule falls below the minimum confidence or minimum support. The modification is done by either removing items from the transaction or inserting new items into the transactions. The second approach deals with groups of restricted patterns or association rules at a time. In our work we are concerned with hiding certain association rules which contain some sensitive information on the right-hand side or left-hand side of the rule, so that rules containing a confidential item cannot be revealed. Our approach is based on modifying the database in such a way that the confidence of the association rule is reduced by increasing or decreasing the support value of the R.H.S. or L.H.S. correspondingly. Once the confidence of the rule is reduced below a specified threshold, it is hidden, i.e. it will not be disclosed. The proposed algorithm is an improved version of DSRRC. DSRRC cannot hide association rules with multiple items in the antecedent (L.H.S.) and consequent (R.H.S.). To overcome this limitation, we propose an algorithm, MDSRRC, which uses the count of items in the consequent of the sensitive rules. It modifies the minimum number of transactions to hide the most sensitive rules and maintain data quality. [1][2][3]
II. LITERATURE REVIEW AND THEORETICAL BACKGROUND


Association rule hiding techniques can be classified into heuristic-based approaches, reconstruction-based approaches, border-based approaches, exact approaches, and cryptography-based approaches. The proposed algorithm uses the heuristic-based approach, which is widely used.
A. Heuristic Approaches for Hiding the Sensitive Rules. These approaches mainly use two techniques for hiding sensitive rules: data distortion, which permanently deletes some items from the database, and data blocking, which puts "?" instead of deleting items from the database.
Data distortion changes an item value to a new value in the database matrix. It alters 0 to 1 or 1 to 0 for selected items in selected transactions to decrease the confidence, by decreasing or increasing the support of items in sensitive rules. Heuristic algorithms cannot give an optimal solution because of side effects on non-sensitive rules. [4] presented a heuristic algorithm for hiding sensitive rules and also provided a proof of NP-hardness of the optimal solution of the sanitization problem. [2] proposed five different algorithms with five assumptions to hide sensitive rules in a database; among them, three are based on reducing the support of the itemset and two are based on reducing the confidence of the rule below the minimum threshold. [5] considers all side-effect parameters and, based on them, modifies the selected transactions to reduce the side effects on the sanitized database. [6] proposed two algorithms to automatically hide sensitive association rules without pre-mining and selection of hidden rules [7]. [3] proposed an algorithm using clustering to reduce the side effects on the sanitized database, but it can hide only rules with a single antecedent and a single consequent.
Data blocking, instead of inserting or deleting items from the database, replaces 1 and 0 with "?" in selected transactions, so after applying this technique an adversary will not know the original value behind "?". [8] and [9] proposed algorithms which use the data blocking technique to hide sensitive rules. [10] proposed a more efficient clustering-based algorithm than those presented in [8], [9]. We propose a heuristic-based approach which is more efficient than the other approaches presented above.

III. PROPOSED MODIFIED DECREASE SUPPORT OF R.H.S ITEM OF RULE CLUSTER ALGORITHM
In order to hide the sensitive rule like AB, we can decrease either confidence or support of the rule below the user specified
minimum threshold. To decrease the confidence of the rule, we can choose two methods like (1) increase the support of A (L.H.S. of
the sensitive rule) but not support of AB, or (2) decrease the support of AB by decreasing support of B (R.H.S of the sensitive
rule) because it decrease the confidence of the rule faster than simply decreasing the support of A B. Proposed algorithm hides rules
with multiple items in L.H.S and multiple items in R.H.S. So the rule is like xAyB where x, yI and A, BI. Here y is an item
selected by proposed algorithm to decrease the support of the R.H.S. and decrease the confidence of the rule below MCT. We replace
1 to 0 in some transaction to decrease the support of selected items.
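A toy example of how deleting the R.H.S. item from a supporting transaction lowers the confidence below the MCT is sketched below; the database, items and thresholds are invented for illustration only.

def support(itemset, db):
    return sum(itemset <= t for t in db) / len(db)

def confidence(lhs, rhs, db):
    return support(lhs | rhs, db) / support(lhs, db)

# Toy transaction database (each transaction is a set of items)
db = [{'a', 'b', 'c'}, {'a', 'b'}, {'a', 'b', 'c'}, {'b', 'c'}, {'a', 'c'}]
print(confidence({'a'}, {'b'}, db))      # 0.75: rule a -> b before sanitization

db[0].discard('b')                        # delete the R.H.S. item from one supporting transaction
print(confidence({'a'}, {'b'}, db))      # 0.50: below a 0.6 MCT, so the rule is hidden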
Some important definitions of terms used in the proposed algorithm are as follows:
1. Sensitivity of Item: the number of sensitive rules which contain this item.
2. Sensitivity of Transaction: the total of the sensitivities of all sensitive items which are present in that transaction.

A detailed description of sensitivity is given in [12]. The proposed algorithm starts by mining the association rules from the original database D using an association rule mining algorithm, e.g. the Apriori algorithm [11]. Then some rules are specified by the user as sensitive rules (SR) from the rules generated by the association rule mining algorithm. The algorithm counts the occurrences of each item in the R.H.S. of the sensitive rules and finds IS = {is0, is1, ..., isk}, k ≤ n, by arranging those items in decreasing order of their counts. After that, the sensitivity of each item is calculated, and then the sensitivity of each transaction is calculated. The transactions which support is0 are then sorted in descending order of their sensitivities.
The rule hiding process starts by selecting the first transaction from the sorted transactions (the one with the highest sensitivity) and deleting item is0 from that transaction. Then the support and confidence of all sensitive rules are updated, and if any rule has support below MST or confidence below MCT it is deleted from SR. Next, the sensitivity of each item, the sensitivity of each transaction and IS are updated. Again, the transaction with the highest sensitivity is selected and is0 is deleted from it. This process continues until all sensitive rules are hidden. As a result, the modified transactions are updated in the original database and a new database, called the sanitized database D', is generated, which preserves the privacy of the sensitive information and maintains database quality. The proposed algorithm MDSRRC, used to hide the sensitive rules from the database, is shown below. Given a database D, the MCT (minimum confidence threshold) and the MST (minimum support threshold), the algorithm generates the sanitized database D'. The sanitized database hides all sensitive rules and maintains data quality.

MDSRRC Algorithm
INPUT: Original database D, MCT (minimum confidence threshold), MST (minimum support threshold).
OUTPUT: Sanitized database D' with all sensitive rules hidden.
1. Apply the Apriori algorithm [3] on the given database D and generate all possible association rules R.
2. Select a set of rules SR ⊆ R as sensitive rules.
3. Calculate the sensitivity of each item j ∈ D.
4. Calculate the sensitivity of each transaction.
5. Count the occurrences of each item in the R.H.S. of the sensitive rules and find IS = {is0, is1, ..., isk}, k ≤ n, by arranging those items in descending order of their count. If two items have the same count, sort them in descending order of their actual support count.
6. Select the transactions which support is0, then sort them in descending order of their sensitivity. If two transactions have the same sensitivity, sort them in increasing order of their length.
7. While (SR is not empty)
8. {
9.    Start with the first transaction from the sorted transactions,
10.   Delete item is0 from that transaction.
11.   For each rule r ∈ SR
12.   {
13.     Update the support and confidence of the rule r.
14.     If (support of r < MST or confidence of r < MCT)
15.     {
16.       Delete rule r from SR.
17.       Update the sensitivity of each item.
18.       Update IS (this may change is0).
19.       Update the sensitivity of each transaction.
20.       Select the transactions which support is0,
21.       Sort them in descending order of their sensitivity.
22.     }
23.     Else
24.     {
25.       Take the next transaction from the sorted transactions, go to step 10.
26.     }
27.   }
28. }
29. End
MDSRRC selects the best items so that deleting those items hides the maximum number of rules from the database while maintaining data quality.
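A condensed software sketch of this hiding loop is given below; it is our own simplified rendering (it recomputes support, confidence and sensitivities from scratch on each pass rather than updating them incrementally), with invented data structures, and it assumes rule mining has already been performed.

from collections import Counter

def support(itemset, db):
    return sum(itemset <= t for t in db) / len(db)

def confidence(lhs, rhs, db):
    return support(lhs | rhs, db) / support(lhs, db)

def mdsrrc_hide(db, sensitive_rules, mst, mct):
    # Repeatedly delete the most critical R.H.S. item (is0) from the most sensitive
    # supporting transaction until every sensitive rule falls below MST or MCT.
    sr = list(sensitive_rules)
    while sr:
        item_sens = Counter(i for lhs, rhs in sr for i in lhs | rhs)   # sensitivity of each item
        rhs_count = Counter(i for _, rhs in sr for i in rhs)           # occurrences in R.H.S. of SR
        is0 = max(rhs_count, key=lambda i: (rhs_count[i], support(frozenset([i]), db)))
        candidates = [t for t in db if is0 in t]
        if not candidates:
            break
        # Most sensitive transaction first; shorter transactions win ties.
        victim = max(candidates, key=lambda t: (sum(item_sens[i] for i in t), -len(t)))
        victim.discard(is0)                                            # modify the transaction
        sr = [(lhs, rhs) for lhs, rhs in sr
              if support(lhs | rhs, db) >= mst and confidence(lhs, rhs, db) >= mct]
    return db

db = [{'a', 'b', 'c'}, {'a', 'b'}, {'a', 'b', 'c'}, {'b', 'c'}, {'a', 'b', 'c'}]
sanitized = mdsrrc_hide(db, [(frozenset({'a'}), frozenset({'b'}))], mst=0.3, mct=0.6)
print(confidence(frozenset({'a'}), frozenset({'b'}), sanitized))       # now below the 0.6 MCT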
IV. CONCLUSION
The proposed MDSRRC algorithm overcomes the limitations of DSRRC and provides an association rule hiding technique for privacy preserving data mining, hiding certain crucial information so that it cannot be discovered through association rule mining. MDSRRC hides sensitive association rules with fewer modifications to the database, maintaining data quality and reducing the side effects on the database. In future, the MDSRRC algorithm can be extended to increase its efficiency and further reduce the side effects by minimizing the modifications to the database.

References
[1] Nikunj H. Domadiya and Udai Pratap Rao, "Hiding Sensitive Association Rules to Maintain Privacy and Data Quality in Database," 3rd IEEE International Advance Computing Conference (IACC), 2013.
[2] V. S. Verykios, A. K. Elmagarmid, E. Bertino, Y. Saygin, and E. Dasseni, "Association rule hiding," IEEE Transactions on Knowledge and Data Engineering, vol. 16, pp. 434-447, 2004.
[3] C. N. Modi, U. P. Rao, and D. R. Patel, "Maintaining privacy and data quality in privacy preserving association rule mining," 2010 Second International Conference on Computing, Communication and Networking Technologies, pp. 1-6, Jul. 2010.
[4] M. Atallah, A. Elmagarmid, M. Ibrahim, E. Bertino, and V. Verykios, "Disclosure limitation of sensitive rules," in Proceedings of the 1999 Workshop on Knowledge and Data Engineering Exchange, ser. KDEX '99, Washington, DC, USA: IEEE Computer Society, 1999, pp. 45-52.
[5] Y.-H. Wu, C.-M. Chiang, and A. L. Chen, "Hiding sensitive association rules with limited side effects," IEEE Transactions on Knowledge and Data Engineering, vol. 19, pp. 29-42, 2007.
[6] S.-L. Wang, B. Parikh, and A. Jafari, "Hiding informative association rule sets," Expert Systems with Applications, vol. 33, no. 2, pp. 316-323, 2007.
[7] S.-L. Wang, D. Patel, A. Jafari, and T.-P. Hong, "Hiding collaborative recommendation association rules," Applied Intelligence, vol. 27, pp. 67-77, 2007.
[8] Y. Saygin, V. S. Verykios, and A. K. Elmagarmid, "Privacy preserving association rule mining," in RIDE, IEEE Computer Society, 2002, pp. 151-158.
[9] Y. Saygin, V. S. Verykios, and C. Clifton, "Using unknowns to prevent discovery of association rules," SIGMOD Rec., vol. 30, no. 4, pp. 45-54, Dec. 2001.
[10] C. N. Modi, U. P. Rao, and D. R. Patel, "An Efficient Solution for Privacy Preserving Association Rule Mining," (IJCNS) International Journal of Computer and Network Security, vol. 2, no. 5, pp. 79-85, 2010.
[11] J. Han, Data Mining: Concepts and Techniques. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 2005.
[12] S. Wu and H. Wang, "Research on the privacy preserving algorithm of association rule mining in centralized database," in Proceedings of the 2008 International Symposiums on Information Processing, ser. ISIP '08, Washington, DC, USA: IEEE Computer Society, 2008, pp.

A Survey on Rekeying Framework for Secure Multicast


HITESH PATIL, M.TECH STUDENT, AURANGABAD (MS), INDIA
hiteshppatil15@gmail.com, Mob. No. +919028173530

Abstract Group key management (GKM) in mobile communication is important to enable access control for a group of users. A
major issue in GKM is how to minimize the communication cost for group rekeying. To design the optimal GKM, researchers have
assumed that all group members have the same leaving probabilities and that the tree is balanced and complete to simplify analysis. In
the real mobile computing environment, however, these assumptions are impractical and may lead to a large gap between the
impractical analysis and the measurement in real-life situations, thus allowing for GKM schemes to incorporate only a specific
number of users.
In this paper, we propose a new GKM framework supporting more general cases that do not require these assumptions. Our
framework consists of two algorithms: one for initial construction of a basic key-tree and another for optimizing the key-tree after
membership changes. The first algorithm enables the framework to generate an optimal key-tree that reflects the characteristics of
users leaving probabilities, and the second algorithm allows continual maintenance of communication with less overhead in group
rekeying. Through simulations, we show that our GKM framework outperforms the previous one which is known to be the best
balanced and complete structure.
.

Keywords multicast, security, group key, group key management, logical key hierarchy, batch rekeying, group dynamics
1.INTRODUCTION.
Multicast is the delivery of a message or information to a group of destination computers simultaneously in a single transmission.
Such applications need a secure group key to communicate their data. This brings importance to key distribution techniques. For
group-oriented applications, multicast is an essential mechanism to achieve scalable information distribution. Multicast describes
communication where information is sent from one or more parties to a set of other parties. In this case, information is distributed
from one or more senders to a set of receivers, but not to all users of the group. The advantage of multicast is that, it enables the
desired applications to service many users without overloading a network and resources in the server.
Security is essential for data transmission through an insecure network. There are several schemes to address the unicast security
issues but they cannot be directly extended to a multicast environment. In general, multicasting is far more vulnerable [1], [2], [3] than
unicast because the transmission takes place over multiple network channels. In multicast group communication, all the authorized
members share a session key, which is changed dynamically to ensure forward and backward secrecy; this is referred to as "group rekeying".

2. COMPARISON OF GROUP KEY MANAGEMENT PROTOCOLS.


2.1 Centralized key management protocols: A single entity is employed for controlling the whole group; hence a group key
management protocol seeks to minimize storage requirements, computational power on both client and server sides, and bandwidth
utilization. Although the centralized approach has the problem of a single point of failure, some applications, like stock quotes, are still centralized. To overcome this problem, a mirror of the controller can be used. In centralized schemes, rekeying messages are delivered to all group members; this wastes bandwidth because rekeying messages reach members who do not need them as well as the intended receivers.

2.2 Decentralized key management protocols: The management of a large group is divided among subgroup managers, trying to
minimize the problem of concentrating the work in a single manager. These protocols need more trusted nodes and suffer from
encryptions and decryptions processes between subgroup managers. Some examples of decentralized protocols are: Scalable Multicast
Key Distribution using Core Based Tree (CBT) [15], Iolus [16], Dual-Encryption Protocol (DEP) [17] and Kronos [18]. Cheng and
Laih [26] modified Tseng's conference key agreement protocol based on bilinear pairing. In 2009, Huang et al. [27] proposed a non-interactive protocol based on the DL assumption to improve the efficiency of Tseng's protocol.
2.3 Distributed key management protocols: There is no explicit manager, and the members themselves do the key generation. All
members can perform access control and the generation of the key can be rather contributory, meaning that all members contribute
some information to generate the group key, or done by one of the members. The distributed protocols have a scalability problem in
case of key update, since they require performing large computations and they are characterized by large communication overheads.
Further, they need all group members to have powerful resources. Some examples of distributed key management protocols are:
Octopus Protocol [2], Distributed Logical Key Hierarchy [15] and Diffie-Hellman Logical Key Hierarchy [8]. In the following
subsection, an overview of the proposed protocol is given.
For secure multicast services, various tree-based group key management schemes have been introduced. Traditional tree-based approaches use conventional encryption algorithms and focus on reducing the number of rekeying messages transmitted by the key distribution center (group manager/controller). However, they do not consider the network bandwidth used for transmitting each rekeying message. To provide scalable rekeying, the key tree approach makes use of KEKs so that the rekeying cost increases logarithmically with the group size for a join or depart request. An individual key serves the same function as a KEK, except that it is shared only by the GC and an individual member [21]. To this end, the KDC aggregates multiple rekeying messages into one multicast flow, which is referred to as group oriented rekeying [7]. In group oriented rekeying, all rekeying messages are delivered to all members. This operation [12] is done by the GC using one erasure decoding of a certain MDS code, followed by one multicast to all the n members. In this approach, rekeying is done at every member join or leave: the new group key is multicast to the group members each time by the group controller, so the GC has to communicate with the group members on every membership change. The complexity of the rekeying operation grows because rekeying is done every time a member joins or leaves the group, which results in high computational complexity. When a member leaves the group, a rekeying operation is performed to compute the new group key, which increases the burden on the server, since it must recompute the group key and then multicast it to all the members of the group. Since the group is dynamic in nature, many rekeying operations take place.

3. A New Secure Multicast Key Distribution Protocol Using Combinatorial Boolean Approach .
The protocol proposed in this scheme is based on the Key Management using Boolean Function Minimization (KM-BFM) technique [11]. The KM-BFM protocol is considered an enhancement of LKH protocols. Instead of using one tree as in KM-BFM, the members are divided into a number of subgroup trees. The group manager holds n key pairs and each group member holds y keys. The proposed protocol achieves lower storage at both the group manager and the group members compared to the KM-BFM protocol. It has to be noted that the authentication problem is not addressed in that paper. The protocol also has a lower update message length in the case of a single member leave and a comparable update message length in the case of multiple leaves. Furthermore, the probability of conducting a successful collusion attack in the proposed protocol is lower than in the KM-BFM protocol.
4. A new probabilistic rekeying method for secure multicast groups .
The Probabilistic optimization of LKH (PLKH) scheme [19] optimized the rekey cost by organizing the LKH tree according to user rekey characteristics. This work concentrates on further reducing the rekey cost by organizing the LKH tree with respect to the compromise probabilities of members, using new join and leave operations.
The key identifier assignment requires more memory to store key identifiers. Though fewer total nodes are created than in the PLKH and LKH schemes, this scheme treats some nodes harshly in terms of the depth assigned. Finally, this scheme only ensures that the tree structure is binary; it neither tries to maintain a strict binary tree, as PLKH does, nor tries to balance all nodes at the same level, as LKH does.

5. Optimal Communication Complexity of Generic Multicast Key Distribution


This scheme establishes a tight lower bound on the communication complexity of secure multicast key distribution protocols in which rekey messages are built using symmetric-key encryption, pseudorandom generators and secret sharing schemes [22]. Updating the group key for each group membership change requires at least log2(n) - O(1) basic rekey messages. Key distribution in multicast is implemented using a central distribution authority, called the group center, responsible for establishing a shared key among all privileged group members and for rekeying the group every time a new member joins and/or an existing member leaves the group. The lower bound is shown by defining a sequence of adversarially chosen REPLACE operations (the simultaneous execution of a LEAVE and a JOIN): every protocol incurs an average communication cost of log2(n) for such a sequence, with every individual LEAVE performed at a cost of log2(n) multicast messages and every individual JOIN at a cost of log2(n) unicast messages.

6.Rekeying using MDS code on PFMH tree


The PFMH tree follows the PACK protocol, in which each group member contributes its share to the group key equally, and this share is never revealed to the others. PACK includes a set of rekeying protocols to update the group key upon group membership change events for security purposes. The PACK protocol can achieve the minimum rekeying time cost upon
membership change events. For any single-user Join event, the rekeying cost is O(1), and for any single user leave event,
the rekeying time cost is of O(log n). The communication and computation costs can still be reduced by adopting PFMH
tree and by introducing phantom nodes in the key tree.
In this scheme, each member will maintain and update the global key tree locally. Each group member knows all the
subgroup keys on its key path and knows the ID and the exact location of any other current group member in the key tree.
In PACK, when a new user joins the group, it will always be attached to the root of the join tree to achieve O(1) rekeying
cost in terms of computation per user, time, and communication. When a user leaves the current group, according to the
leaving member's location in the key tree, as well as whether this member has a phantom location in the key tree, different
procedures will be applied, and the basic idea is to update the group key in O(log n) rounds and simultaneously reduce the
communication and computation costs.
TABLE 1: SAMPLE TABLE COMPARISON OF KEY RECOVERY TIME

Fig: Computation cost

3. MY PROPOSED SYSTEM BASED ON SURVEY


To develop an Efficient Rekeying Framework for Secure Multicast with Diverse-Subscription-Period Mobile Users based on
optimal GKM with dynamic mobile subscribers. The proposed framework consists of cost-efficient key-tree generation and
management. We also provide a new mathematical analysis methodology for quantifying the performance of key-trees.
This framework considers the following:
1. We propose a new mathematical analysis methodology that can provide the precise average value of communication
overhead for group key updates under general conditions. The conditions include an arbitrary number of members,
non-equal leaving probabilities, and a non-balanced, non-complete tree structure, and can therefore support mobile situations.
Note that, unlike previous works, the average size of rekeying messages can be calculated even though the
subscription periods are diverse. Also, through our analysis, we find the conditions for the optimal tree structure that
minimizes communication overhead (an illustrative sketch of this idea follows this list).
2. We develop a two-step mechanism for optimal key-tree generation: one for initial key-tree generation followed by
key-tree maintenance after the group membership changes. The first algorithm can generate a key-tree that
corresponds to the optimal key-tree obtained by mathematical analysis.
3. For the second step of the mechanism in 2), we propose an optimal key-tree maintenance algorithm for use after the group
membership changes. The algorithm optimizes the key-tree by modifying the tree structure considering the diverse subscription periods of the mobile users.
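As an illustrative sketch of the idea behind points 1) and 2) above (an analogy of my own, not the framework's actual algorithm), the snippet below pairs members with unequal leaving probabilities into a binary key tree using a Huffman-style greedy merge; the returned probability-weighted leaf depth is a rough proxy for the average rekeying cost, and members that are likely to leave soon end up closer to the root.

import heapq

def expected_key_path_length(leave_probs):
    # Huffman-style greedy pairing of members into a binary key tree.
    # Returns the probability-weighted average leaf depth, a rough proxy for
    # the expected number of keys refreshed per member departure.
    heap = list(leave_probs)
    heapq.heapify(heap)
    expected_depth = 0.0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        # merging two subtrees pushes all their leaves one level deeper,
        # adding their combined probability mass to the weighted depth
        expected_depth += a + b
        heapq.heappush(heap, a + b)
    return expected_depth

# members with short subscription periods (high leave probability) end up
# near the root, so their departures refresh fewer keys on average
print(expected_key_path_length([0.4, 0.3, 0.2, 0.1]))   # 1.9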

4. Acknowledgment
I am using this opportunity to express my gratitude to everyone who supported me throughout the course of this project. I
am thankful for their aspiring guidance, invaluably constructive criticism and friendly advice during the project work. I am
sincerely grateful to them for sharing their truthful and illuminating views on a number of issues related to the project.


I express my warm thanks to Professor K. V. Bhosale for their support and guidance.

5. CONCLUSION
Key transfer protocols rely on a mutually trusted key generation center (KGC) to select session keys and transport them secretly to all
communicating entities. Most often, the KGC encrypts session keys under another secret key shared with each entity during
registration. We have optimized the dynamic multicast key distribution scheme with MDS codes using a PFMH tree. The computation
complexity of key distribution is greatly reduced by employing erasure decoding of MDS codes instead of more expensive encryption
and decryption computations.

REFERENCES:
[1] Peter S. Kruus and Joseph P. Macker, "Techniques and issues in multicast security", Proc. MILCOM '98, 1998.
[2] Paul Judge and Mostafa Ammar, "Security Issues and Solutions in Multicast Content Distribution: A Survey", IEEE Network,
February 2003, pp 30-36.
[3] M. Moyer, J. Rao and P. Rohatgi, "A Survey of Security Issues in Multicast Communications", IEEE Network Magazine, Vol. 13,
No.6, March 1999, pp. 12-23.
[4] M. Waldvogel, G. Caronni, D. Sun, N. Weiler and B. Plattner, "The VersaKey Framework: Versatile Group Key Management",
IEEE Journal on Selected Areas in Communications, 7(8), 1614-1631, August 1999.
[5] S. Mittra, "Iolus: A Framework for Scalable Secure Multicasting", Proc. of ACM SIGCOMM'97, 277-288, Sep. 1997.
[6] D. M. Wallner, E. J. Harder and R. C. Agee, "Key Management for Multicast: Issues and Architectures", Internet Draft (work in
progress), draft-wallner-key-arch-01.txt, Sep. 15, 1998.
[7] C. K. Wong, M. Gouda and S. S. Lam, "Secure Group Communications Using Key Graphs", Proc. ACM SIGCOMM'98, Sep. 1998.
[8] Y. Kim, A. Perrig, and G. Tsudik, "Tree-Based Group Key Agreement", ACM Trans. Information and System Security, vol. 7,
no. 1, pp. 60-96, Feb. 2004.
[9] S. Benson Edwin Raj, J. Jeffneil Lalith , "A Novel Approach for Computation-Efficient Rekeying for Multicast Key Distribution"
IJCSNS , VOL.9 No.3, March 2009.
[10] Lihao Xu, Cheng Huang, "Computation Efficient Multicast Key Distribution," IEEE Trans. Parallel And Distributed Systems,
Vol 19, No. 5, May 2008.
[11] Mohamed M. Nasreldin Rasslan, Yasser H. Dakroury, and Heba K. Aslan, "A New Secure Multicast Key Distribution Protocol Using Combinatorial Boolean Approach", International Journal of
Network Security, Vol. 8, No. 1, pp. 75-89, Jan. 2009.
[12] C. Wong, M. Gouda, and S. Lam, "Secure group communications using key graphs", Proceedings of ACM SIGCOMM, pp. 68-79, Vancouver, British Columbia, September 1998.
[13] D. McGrew, and A. Sherman, Key Establishment in Large Dynamic Groups Using One-Way Function Trees, Technical Report
No. 0755, TIS Labs at Network Associates, Inc., Glenwood, MD, May 1998.
[14] I. Chang, R. Engel, D. Kandlur, D. Pendarakis, and D. Saha, "Key management for secure internet multicast using Boolean function minimization techniques", Proceedings of IEEE
INFOCOM, vol. 2, pp. 689-698, New York, Mar. 1999.
[15] A. Ballardie, "Scalable Multicast Key Distribution", RFC 1949, 1996.
[16] S. Mittra, "Iolus: A framework for scalable secure multicasting", Proceedings of ACM SIGCOMM, vol. 27, no. 4, pp. 277-288, New York, Sep. 1997.
[17] L. Dondeti, S. Mukherjee and A. Samal, "Scalable secure one-to-many group communication using dual encryption", Computer Communications, vol. 23, no. 17, pp. 1681-1701, Nov. 1999.
[18] S. Setia, S. Zhu, and S. Jajodia, "Kronos: A scalable group re-keying approach for secure multicast", Proceedings of the IEEE
Symposium on Security and Privacy, pp. 215-228, Oakland, California, May 2000.

[19] Shankar Joshi and Alwyn R. Pais, "A new probabilistic rekeying method for secure multicast groups".
[20] Seungjae Shin, Junbeom Hur, Hanjin Lee, and Hyunsoo Yoon, "Bandwidth Efficient Key Distribution for Secure Multicast in Dynamic Wireless Mesh Networks", IEEE WCNC 2009 proceedings.
[21] Joe Prathap P. M. and V. Vasudevan, "Analysis of the various key management algorithms and new proposal in the secure multicast
communications", (IJCSIS) International Journal of Computer Science and Information Security, Vol. 2, No. 1, 2009.
[22] Daniele Micciancio and Saurabh Panjwani, Optimal Communication Complexity of Generic Multicast Key Distribution,
IEEE/ACM Transactions on Networking (2008).
[23] Daniele Micciancio and Saurabh Panjwani, Optimal Communication Complexity of Generic Multicast Key Distribution,
IEEE/ACM Transactions on Networking (2008).
[24] Lein Harn and Changlu Lin , Authenticated Group Key Transfer Protocol Based on Secret Sharing, IEEE transactions on
computers, vol. 59, no. 6, June 2010
[25] E. Bresson, O. Chevassut, and D. Pointcheval, Provably-Secure Authenticated Group Diffie-Hellman Key Exchange,
ACM Trans. Information and System Security, vol. 10, no. 3, pp. 255-264, Aug. 2007.
[26] J.C. Cheng and C.S. Laih, Conference Key Agreement Protocol with Non-Interactive Fault-Tolerance Over Broadcast
Network, Intl J. Information Security, vol. 8, no. 1, pp. 37-48, 2009.
[27] K.H. Huang, Y.F. Chung, H.H. Lee, F. Lai, and T.S. Chen, A Conference Key Agreement Protocol with Fault-Tolerant
Capability, Computer Standards and Interfaces, vol. 31, pp. 401-405, Jan. 2009


A Survey about Key Pre-distribution Scheme in Wireless Sensor Networks


Amol Abhiman Magar
Mtech-Comp. sci. and Technology.
Email: amolmagar@outlook.com
Contact no: 0855 195 0789

Abstract: Wireless sensor networks (WSNs) have a wide range of applications in military as well as in civilian services. As wireless
sensor networks continue to grow, so does the need for effective security mechanisms, because sensor networks may interact with
sensitive data and/or operate in hostile unattended environments, and nearly all aspects of wireless sensor network defenses
rely on solid encryption. Key pre-distribution in particular is a challenging task in sensor networks, because the neighbors of a node
are unknown before the sensors are deployed. An attacker can easily obtain a large number of keys by capturing a small fraction of
nodes, and hence can gain control of the network by deploying replicated nodes or packets preloaded with some compromised
keys. For secure communication, neighbors must possess a common secret key or there must exist a key-path among these nodes. In
this paper I discuss briefly various key pre-distribution schemes for homogeneous sensor networks and analyze the
merits and demerits of each of them. Among the various schemes, a suitable scheme can be chosen based on the requirements and the
resource availability of the sensors.

Keywords: Wireless Sensor Networks, Key Pre-Distribution, Resiliency, BIBD, Mobile Polynomial Pool, Grid Based Key, Hybrid
Key, Pair Wise Key Establishment.

Introduction
Wireless sensor network (WSN) has a wide range of applications in military as well as in civilian services. Sensor nodes are deployed
in a battlefield to detect enemy intrusion. They are used to measure various environmental variables such as temperature, heat, sound,
pressure, magnetic and seismic fields, etc. of a region. It has several applications in industry such as machine health monitoring, waste
water monitoring etc. As the sensor nodes are used in various applications, secure communication between the sensor nodes is needed
in order to keep the information secret. For secure communication between two sensor nodes a secret key is needed and cryptographic
key management is a challenging task in sensor networks. Sensor nodes are constrained in resources, such as they have low processing
power, limited memory capacity and limited battery life. Apart from this, the wireless nature of the network, the unknown topology of the network,
the higher risk of node capture and the lack of fixed infrastructure make key management more challenging in WSN. Use of any
cryptographic algorithm must take into account the resource availability at each node. They should be easy in computation and occupy
less storage space. Use of symmetric key cryptography means each node should maintain (N-1) keys for N number of nodes in the
network. For a large value of N, a substantial memory space is wasted in storing the keys, and hence is not memory efficient. Use of
public key cryptosystem needs a huge computational power and sensors have low processing power. Hence public key cryptosystem is
not an efficient key management technique in WSN. Key pre-distribution scheme is regarded as a promising key management in
sensor network. In key pre-distribution scheme, each sensor is assigned a set of keys from a pool of keys before deployment such that
after deployment, two nodes who are in the communication range of each other will share at least one key between them with higher
probability, so that a secure communication can be established between them.

TERMS AND DEFINITIONS


In this section I briefly discuss some of the related terminologies and definitions for the sake of completeness. A set system or design
[26] is a pair (X, A), where A is a set of subsets of X, called blocks. The elements of X are called varieties or elements. A Balanced
Incomplete Block Design BIBD(v, b, r, k, λ) is a design which satisfies the following conditions:
1) |X| = v, |A| = b.
2) Each subset in A contains exactly k elements.
3) Each variety in X occurs in r blocks.
4) Each pair of varieties in X is contained in exactly λ blocks in A.
When v = b, the BIBD is called a symmetric BIBD (SBIBD) and denoted by SB[v, k, λ].
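As a quick sanity check on these parameters, any BIBD must satisfy the two counting identities b*k = v*r and λ*(v-1) = r*(k-1); the small helper below (my own illustration) verifies them for a candidate parameter set, here the projective-plane design that reappears later in this survey.

def is_admissible_bibd(v, b, r, k, lam):
    # Necessary counting conditions for a BIBD(v, b, r, k, lambda):
    # counting variety-block incidences gives b*k = v*r, and counting pairs
    # through a fixed variety gives lam*(v-1) = r*(k-1).
    return b * k == v * r and lam * (v - 1) == r * (k - 1)

q = 4                          # illustrative prime power
v = b = q * q + q + 1          # 21 varieties (keys) and 21 blocks (nodes)
r = k = q + 1                  # each block holds q+1 varieties
print(is_admissible_bibd(v, b, r, k, 1))   # True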
An association scheme with m associate classes on the set X is a family of m symmetric anti-reflexive binary relations on X such that:
1) Any two distinct elements of X are i-th associates for exactly one value of i, where 1 ≤ i ≤ m.
2) Each element of X has n_i i-th associates, 1 ≤ i ≤ m.
3) For each i, 1 ≤ i ≤ m, if x and y are i-th associates, then there are p^i_{jl} elements of X which are both j-th associates of x and l-th
associates of y. The numbers v, n_i (1 ≤ i ≤ m) and p^i_{jl} (1 ≤ i, j, l ≤ m) are called the parameters of the association scheme.
A partially balanced incomplete block design with m associate classes, denoted by PBIBD(m), is a design on a v-set X, with b blocks
each of size k and with each element of X being repeated r times, such that if there is an association scheme with m classes defined on
X where two elements x and y are i-th (1 ≤ i ≤ m) associates, then they occur together in λ_i blocks. We denote such a design by
PB[k; λ_1, λ_2, ..., λ_m; v].
Let X be a set of varieties such that X = G_1 ∪ G_2 ∪ ... ∪ G_m, where |G_i| = n for 1 ≤ i ≤ m and G_i ∩ G_j = ∅ for i ≠ j.
The G_i's are called groups, and an association scheme defined on X is said to be group divisible if the varieties in the same group are
first associates and those in different groups are second associates.
A transversal design TD(k, λ; r), with k groups of size r and index λ, is a triple (X, G, A) where:
1) X is a set of kr elements (varieties).
2) G = (G_1, G_2, ..., G_k) is a family of k sets (each of size r) which form a partition of X.
3) A is a family of k-sets (or blocks) of varieties such that each k-set in A intersects each group G_i in precisely one variety, and any
pair of varieties which belong to different groups occurs together in precisely λ blocks in A.

KEY PRE-DISTRIBUTION SCHEMES


All the key pre-distribution schemes can be divided into three according to the way of choosing keys for each node from the key pool.
They are :
1) Probabilistic: Keys are drawn randomly and placed into the sensors.
2) Deterministic: Keys are drawn based on some definite pattern.
3) Hybrid: Makes use of both the above techniques.
To discuss the schemes in a better way, we have divided them into several parts; each part is discussed in the respective subsection below.

A. Basic schemes
First I'll discuss two basic schemes which, though not originally meant for WSN, have been used in the context of WSN. These two
schemes are Blom's scheme and Blundo et al.'s scheme.
Blom [1] proposed a key pre-distribution scheme that allows any two nodes of a group to find a pairwise key. The security parameter
of the scheme is c, i.e., as long as no more than c nodes are compromised, the network is perfectly secure. The scheme is constructed from one
public matrix and one secret symmetric matrix. Each node holds a share of these matrices such that any
two nodes can calculate a common key between them without knowing each other's secret matrix share. The problem with this
scheme is that if more than c nodes are compromised, the whole network is compromised.
In the scheme proposed by Blundo, Santis, Herzberg, Kutten, Vaccaro and Yung [2], they used a symmetric bivariate polynomial over
some finite field GF(q). A symmetric bivariate polynomial is a polynomial P(x, y) ∈ GF(q)[x, y] with the property that P(i, j) = P(j, i)
for all i, j ∈ GF(q). A node with ID U_i stores a share in P, which is the univariate polynomial f_i(y) = P(i, y). In order to communicate
with node U_j, it computes the common key K_ij = f_i(j) = f_j(i); this process enables any two nodes to share a common key. If P has
degree t, then each share consists of a degree-t univariate polynomial; each node must then store the t + 1 coefficients of this
polynomial. So each node requires space for storing t + 1 keys. If an adversary captures s nodes, where s ≤ t, then it cannot get any
information about keys established between uncompromised nodes. However, if it captures t + 1 or more nodes then all the keys of the
network can be captured.
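A minimal sketch of this construction, with a prime field and polynomial degree chosen purely for illustration: node i stores the univariate share f_i(y) = P(i, y), and two nodes obtain the same pairwise key by evaluating their shares at each other's IDs.

import random

Q = 2**31 - 1    # an illustrative prime modulus for GF(q)
T = 3            # illustrative security threshold t

def gen_symmetric_poly(t, q):
    # Random symmetric bivariate polynomial P(x, y) = sum a[i][j] * x^i * y^j
    # with a[i][j] = a[j][i], coefficients in GF(q).
    a = [[0] * (t + 1) for _ in range(t + 1)]
    for i in range(t + 1):
        for j in range(i, t + 1):
            a[i][j] = a[j][i] = random.randrange(q)
    return a

def share(a, node_id, q=Q):
    # Univariate share f_i(y) = P(node_id, y): t+1 coefficients of y.
    t = len(a) - 1
    return [sum(a[i][j] * pow(node_id, i, q) for i in range(t + 1)) % q
            for j in range(t + 1)]

def pairwise_key(my_share, other_id, q=Q):
    # Evaluate the stored share at the other node's ID.
    return sum(c * pow(other_id, j, q) for j, c in enumerate(my_share)) % q

P = gen_symmetric_poly(T, Q)
f_u, f_v = share(P, 17), share(P, 42)
assert pairwise_key(f_u, 42) == pairwise_key(f_v, 17)   # K_uv equals K_vu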

Eschenauer and Gligor first proposed a random key predistribution scheme [12] for WSN. They divided the key predistribution
mechanism into three steps: key pre-distribution, shared-key discovery and path-key establishment. In this approach, a key ring for a
node, containing some fixed number of keys, is chosen randomly without replacement from a key pool containing a large number of keys. Each
node is assigned a key ring. The key identifiers of a key ring and the corresponding sensor identifiers are stored in a trusted controller
node. Now a shared key may not exist between two nodes. In that case, if there exists a path of nodes sharing keys pairwise between
those two nodes, they may communicate via that path. They have also shown that for a network of 10000 nodes, a key ring containing
250 keys is enough for almost full connectivity. When sensor nodes are compromised, key revocation is needed. For this a controller
node broadcasts a revocation message containing the list of identifiers of keys which have been compromised and all the nodes after
getting the message remove the compromised keys from their key rings. The main advantages of this scheme are that it is
flexible, scalable, efficient and easy to implement. However, the main disadvantage is that it cannot be used in regions which are
prone to massive node capture attacks.
Chan, Perrig and Song [8] modified the Eschenauer and Gligor scheme. According to their q-composite scheme, two nodes must share at
least q keys to have a secure path between them. The path key is formed by hashing all the common keys. Though
resiliency was improved for a small number of captured nodes, it was affected drastically as the number of captured nodes
increased.
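To illustrate the connectivity that these random schemes rely on, the snippet below computes the probability that two key rings of size k, drawn without replacement from a pool of P keys, share at least one key; the pool and ring sizes are illustrative values of mine, not figures reproduced from the papers.

from math import comb

def p_share(pool_size: int, ring_size: int) -> float:
    # Probability that two rings of ring_size keys drawn from pool_size keys
    # intersect: 1 minus the probability that the second ring avoids the first.
    if 2 * ring_size > pool_size:
        return 1.0          # the two rings cannot be disjoint
    p_disjoint = comb(pool_size - ring_size, ring_size) / comb(pool_size, ring_size)
    return 1.0 - p_disjoint

print(round(p_share(10_000, 75), 3))    # about 0.43
print(round(p_share(10_000, 150), 3))   # about 0.90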

B. Random pair wise scheme


In the random pairwise scheme proposed by Chan, Perrig and Song [8], for a network of size N and a minimum
connection probability p between two nodes, each node stores k keys, where k = N * p. Key pre-distribution, shared-key
discovery and path-key establishment are done as in [12]. Revocation of compromised nodes is done by voting of all the nodes
in the network with a suitable threshold parameter. But the disadvantage of this scheme is that it is not scalable, and choosing the
threshold value for node revocation is very important as it can lead to other problems.
The pairwise key scheme of Liu and Ning [15] is based on the polynomial pool based key pre-distribution by Blundo et al [2]. They
have shown the calculation for the probability that two nodes share a common key. They have also shown the probability that a key is
compromised. Later it was extended in [16] where they modified the scheme into a hypercube based key pre-distribution. Zhu, Xu,
Setia and Jajodia [25] also proposed a random pairwise scheme based on probabilistic key sharing where two nodes can establish
shared keys without the help of an online KDC, knowing only each other's key ID. Communication overhead in this scheme is very
low. But if any node in the path is compromised then the key establishment process has to be restarted.

C. Grid-based key pre-distribution schemes


Chan and Perrig were the first to propose a grid-based key pre-distribution scheme, in which all the nodes of a network are placed in a
square grid; the scheme was named PIKE [7]. In that scheme, each node has a secret pairwise key with the nodes
which lie in the same row or the same column. So for a network of size N, each node has to store 2(√N - 1) keys. If two nodes
do not have any shared key, there will be exactly two intermediate nodes having a shared key with both of them. Here any node can
act as an intermediary. Hence, it reduces the battery drainage of the nodes near the base station, which have to serve as intermediaries
most of the time in other schemes. But the main disadvantage of this scheme is that it has a high communication overhead. Because
large number of key pairs will not have common key between them, path-key establishment will be very much time consuming.
In [20], Kalidindi et al. modified the PIKE scheme. They placed the nodes as well as the keys in a grid and divided the grid into
sub-grids. A node will have in its key chain all the keys which lie in its row or column and which are in its own or a neighboring
sub-grid. The number of keys to be stored in each node can be much less than in [7] if the number of sub-grids is larger; this increases the resiliency
but decreases the connectivity. The reverse happens if the number of sub-grids is smaller. Nodes belonging to the same sub-grid and in the
same row or same column share more keys, but they are not allowed to use all the common keys, because the capture of one node of a
row or column would reveal all the keys of that row and column.
Sadi, Kim and Park [21] proposed another grid-based random scheme based on bivariate polynomials. In this scheme, the nodes are first
arranged into an m × m square grid. After that, some 2m bivariate polynomials are generated and divided
into groups such that each row and each column is assigned one group of polynomials. A node then selects a certain
number of polynomials from its row polynomial group and its column polynomial group. If two nodes are in the same row or in the same
column, they use a challenge-response protocol to find out whether they share a common polynomial. If they have a shared polynomial,
they can set up a shared key. Otherwise they have to go for path-key establishment and must find two other
intermediate nodes such that a path can be established. In this case also the communication overhead is high.
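A rough sketch of the row/column pattern used by PIKE, under the simplifying assumption of a perfect square grid of side √N: two nodes either share a pairwise key directly (same row or column) or can reach each other through one of exactly two intermediary nodes.

def grid_pos(node_id: int, side: int):
    # Map a node ID to its (row, column) position in a side x side grid.
    return divmod(node_id, side)

def share_key(u: int, v: int, side: int) -> bool:
    # Two nodes hold a common pairwise key iff they share a row or a column.
    (ru, cu), (rv, cv) = grid_pos(u, side), grid_pos(v, side)
    return ru == rv or cu == cv

def intermediaries(u: int, v: int, side: int):
    # The two grid "corner" nodes share a key with both u and v and can relay.
    (ru, cu), (rv, cv) = grid_pos(u, side), grid_pos(v, side)
    return [ru * side + cv, rv * side + cu]

side = 4                             # 16 nodes, 2*(side-1) = 6 keys per node
print(share_key(0, 3, side))         # True (same row)
print(share_key(0, 5, side))         # False
print(intermediaries(0, 5, side))    # [1, 4]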


D. Group based key pre-distribution


Liu, Ning and Du observed that sensor nodes in the same group are usually close to each other and they proposed a group based key
pre-distribution scheme without using deployment knowledge [17], [16]. They divide the nodes of a network into groups and then
form cross groups taking exactly one sensor node from each group such that there will not be any common node between any two
cross groups. They presented two instantiations of pre-distribution. In the first one, hash function was used. Two nodes will share a
common key if they are in the same group or in the same cross group. If the number of nodes in the network is N and they are divided into
n groups each containing m nodes, then N = n × m and each node needs to store (m+n)/2 keys. In the second method, they used symmetric
bivariate polynomials and assigned a unique polynomial to each group and cross group. Every node will have a share of the polynomials
corresponding to its groups and cross groups. The advantages of this scheme are that it does not use deployment knowledge
and gives resiliency and connectivity similar to the deployment-knowledge-based schemes. The polynomial-based schemes can be
made scalable, and the framework can be used to improve any existing pre-distribution scheme. The disadvantage of this scheme is that
the probability of secure communication between cross-group neighbors is very low, and the scheme is not suitable for networks which
have a small group size.
To overcome the problems of Liu et al.'s scheme [16], Martin, Paterson and Stinson [19] proposed a group-based design using
resolvable transversal designs. To increase the cross-group connectivity, they proposed that each node is contained in m cross groups
rather than one, though some additional storage is required. They did not give any algorithm for the construction of such designs.

E. Key pre-distribution using combinatorial structures


One of the greatest advantages of the schemes which use combinatorial structures is that almost all of them have an efficient shared-key
discovery algorithm with which two nodes can easily find their common key. Camtepe and Yener were the first to use combinatorial
structures in key pre-distribution [4], [3]. They used projective planes and generalized quadrangles. A finite projective plane
PG(2, q) (where q is a prime power) is the same as the symmetric BIBD(q²+q+1, q²+q+1, q+1, q+1, 1). So q²+q+1 nodes
can be accommodated in the network, each node having q+1 keys. It ensures 100% connectivity, but the
resiliency was very poor. So they used generalized quadrangles GQ(s, t), where s and t are the two parameters of the GQ. Three designs
were used: GQ(q, q) was constructed from PG(4, q), GQ(q, q²) was constructed from PG(5, q), and GQ(q², q³) was constructed from
PG(4, q²). Camtepe and Yener mapped these GQs to key pre-distribution [4], [3] as follows:
v = number of keys = (s + 1)(st + 1), b = number of nodes = (t + 1)(st + 1), r = number of keys in each node = (s + 1), and k = number of key
chains that a key is in = (t + 1). For all three GQs these parameters are given in Table I, where q is any prime or prime
power.
TABLE I: VARIOUS GENERALIZED QUADRANGLES USED BY CAMTEPE AND YENER AND THEIR PARAMETERS

Design      | s  | t  | v = keys      | b = nodes      | r = keys/node | k = chains/key
GQ(q, q)    | q  | q  | (q+1)(q²+1)   | (q+1)(q²+1)    | q+1           | q+1
GQ(q, q²)   | q  | q² | (q+1)(q³+1)   | (q²+1)(q³+1)   | q+1           | q²+1
GQ(q², q³)  | q² | q³ | (q²+1)(q⁵+1)  | (q³+1)(q⁵+1)   | q²+1          | q³+1

The probability that two nodes share a common key in these GQs is t(s+1) / [(t+1)(st+1)]. Though the GQs do not give a 100%
connection probability, their resiliency is much better than that of projective planes.
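The parameters of Table I and this connection probability can be evaluated directly for a chosen prime power q; the helper below is my own illustration of that computation.

def gq_parameters(s: int, t: int):
    # Key pre-distribution parameters when a generalized quadrangle GQ(s, t)
    # is used as above: keys correspond to points and nodes to lines.
    v = (s + 1) * (s * t + 1)            # number of keys
    b = (t + 1) * (s * t + 1)            # number of nodes
    r = s + 1                            # keys stored per node
    k = t + 1                            # key chains each key belongs to
    p_connect = t * (s + 1) / ((t + 1) * (s * t + 1))
    return v, b, r, k, p_connect

q = 5
print(gq_parameters(q, q))         # GQ(q, q)
print(gq_parameters(q, q * q))     # GQ(q, q^2)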
Lee and Stinson [13] formalized the definitions of key pre-distribution schemes using set systems. They introduced the idea of common
intersection designs [23]. They used block graphs for sensors, and according to them every pair of nodes can be connected by a path of
at most 2 hops. They have shown that a (v, b, r, k)-1 design, or (v, b, r, k) configuration, has a regular block graph with the
vertex degrees maximized, so connectivity will be largest in this case; hence they used a (v, b, r, k) configuration. In a (v, b, r, k)
configuration having b-1 = k(r-1), all the nodes are connected to each other and it is the same as a projective plane. But for a large network,
the key chain in each node will be large. So they introduced μ-common intersection designs: if the key chains A_i and A_j of two nodes are disjoint,
then there are at least μ nodes whose key chains intersect both A_i and A_j, i.e., |{A_h ∈ A : A_i ∩ A_h ≠ ∅ and A_j ∩ A_h ≠ ∅}| ≥ μ. They
have also used transversal designs for key pre-distribution [13]. They have shown that for a prime number p and an integer k such that
2 ≤ k ≤ p, there exists a transversal design TD(k, p). In that design, p² nodes can be arranged with k keys in each node in
such a way that the (i, j)-th node holds the keys {(x, xi + j mod p) : 0 ≤ x ≤ k-1}, for 0 ≤ i ≤ p-1 and 0 ≤ j ≤ p-1. If two nodes want to
find common keys between them they just need to exchange their node identifiers, and the shared-key discovery algorithm complexity is O(1).
The communication overhead is O(log p) = O(log N), where N is the size of the network. They also gave an estimate of the probability
of sharing a common key between two nodes: p1 = k(r-1) / (b-1), where k is the number of keys per node, r is the number of nodes a key
is in, and b is the total number of nodes in the network. The estimated resiliency when s nodes are captured is
fail(s) = 1 - (1 - (r-2)/(b-2))^s. A multiple-space scheme has also been presented by Lee and Stinson in [14].
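The TD(k, p) construction above admits constant-time shared-key discovery: for nodes (i1, j1) and (i2, j2) with i1 ≠ i2, a common key exists exactly when x = (j2 - j1)(i1 - i2)^(-1) mod p falls in {0, ..., k-1}. A minimal sketch, assuming p prime:

def node_keys(i, j, k, p):
    # Keys held by node (i, j) in the Lee-Stinson TD(k, p) scheme.
    return {(x, (x * i + j) % p) for x in range(k)}

def shared_key(n1, n2, k, p):
    # O(1) shared-key discovery from the node identifiers alone (p prime).
    (i1, j1), (i2, j2) = n1, n2
    if i1 == i2:                 # same group: two distinct nodes share no key
        return None
    x = ((j2 - j1) * pow((i1 - i2) % p, -1, p)) % p
    return (x, (x * i1 + j1) % p) if x < k else None

p, k = 7, 4
a, b = (1, 2), (3, 0)
common = node_keys(*a, k, p) & node_keys(*b, k, p)
print(shared_key(a, b, k, p))                                # (1, 3)
assert shared_key(a, b, k, p) == (next(iter(common)) if common else None)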
Chakrabarti, Maitra and Roy [5], [6] proposed a hybrid key pre-distribution scheme by merging the blocks in combinatorial designs.
They considered Lee and Stinson construction and randomly selected some fixed number of blocks and merged them to form key
chains. Though their proposed scheme increased the number of keys per node, it improved the resiliency compared to Lee and Stinson's
scheme [13].
Simonova, Ling and Wang discuss a homogeneous scheme in [22]. According to them, each grid in the network will have a disjoint
key pool. Nodes from the same grid will communicate via this pool. There will be another key pool, called the deployment key pool, which is
constructed from the neighboring key pools. Nodes from two neighboring grids can communicate via keys of the deployment key pool.
Zhou, Ni and Ravishankar were the first to propose a key pre-distribution scheme in [24] where the sensors are mobile.
Eschenauer and Gligor [10] proposed a probabilistic key predistribution scheme to bootstrap the initial trust between the sensor nodes.
The main idea was to let each sensor node randomly pick a set of keys from a key pool (mobile polynomial pool) before deployment, so
that any two sensor nodes had a certain probability of sharing at least one common key. Chan et al. [9] further extended this idea and
developed two key predistribution schemes: the q-composite key predistribution scheme and the random pair wise keys scheme. The
q-composite key predistribution scheme also used a key pool, but required two sensor nodes to compute a pair wise key from at least q
predistributed keys that they shared. The random pair wise keys scheme randomly picked pairs of sensor nodes and assigned each pair
a unique random key. Both schemes improved the security over the basic probabilistic key predistribution scheme.
ACKNOWLEDGMENT
This paper could not have been written to its fullest without Prof. B. S. Sonawane, who is my guide, as well as one who challenged and
encouraged me throughout my time spent studying under him. He would have never accepted anything less than my best efforts, and
for that, I thank him.

CONCLUSION
I have seen that, among all the above schemes, most of the probabilistic schemes are scalable in nature while the deterministic schemes are
not. But the advantage of deterministic schemes is that they are simpler in terms of computation and also better in terms of resiliency
and connectivity because of their certainty. Schemes using combinatorial structures are good in terms of resiliency. The basic schemes
of Blom or Blundo et al. have a good trade-off between storage and security. A large number of key management schemes exist
because they have been researched by various researchers, and they all have some advantages as well as some disadvantages. We should therefore
implement only a scheme that satisfies both the requirements and the available resources. Security should be a bigger priority in
military services than in civilian applications of wireless sensor networks. Moreover, there are lots of opportunities in this area so that
the constrained resources of wireless sensor networks can be effectively utilized.

REFERENCES:
[1] Rolf Blom. An optimal class of symmetric key generation systems. In EUROCRYPT, pages 335-338, 1984.
[2] Carlo Blundo, Alfredo De Santis, Amir Herzberg, Shay Kutten, Ugo Vaccaro, and Moti Yung. Perfectly-secure key distribution for
dynamic conferences. In CRYPTO, pages 471-486, 1992.
[3] Seyit A. Camtepe and Bulent Yener. Combinatorial design of key distribution mechanisms for wireless sensor networks.
IEEE/ACM Trans. Netw., 15(2):346-358, 2007.
[4] Seyit Ahmet Camtepe and Bulent Yener. Combinatorial design of key distribution mechanisms for wireless sensor networks. In
ESORICS, pages 293-308, 2004.


[5] Dibyendu Chakrabarti, Subhamoy Maitra, and Bimal K. Roy. A key pre-distribution scheme for wireless sensor networks:
Merging blocks in combinatorial design. In ISC, pages 89-103, 2005.
[6] Dibyendu Chakrabarti, Subhamoy Maitra, and Bimal K. Roy. A key pre-distribution scheme for wireless sensor networks: merging
blocks in combinatorial design. Int. J. Inf. Sec., 5(2):105-114, 2006.
[7] Haowen Chan and Adrian Perrig. PIKE: peer intermediaries for key establishment in sensor networks. In INFOCOM, pages 524-535, 2005.
[8] Haowen Chan, Adrian Perrig, and Dawn Song. Random key predistribution schemes for sensor networks. In SP '03: Proceedings
of the 2003 IEEE Symposium on Security and Privacy, page 197, Washington, DC, USA, 2003. IEEE Computer Society.
[9] H. Chan, A. Perrig, and D. Song. Random key pre-distribution schemes for sensor networks. Proc. IEEE Symp. Research in
Security and Privacy, 2003.
[10] Wenliang Du, Jing Deng, Yunghsiang S. Han, and Pramod K. Varshney. A key predistribution scheme for sensor networks using
deployment knowledge. IEEE Trans. Dependable Sec. Comput., 3(1):62-77, 2006.
[11] Wenliang Du, Jing Deng, Yunghsiang S. Han, Pramod K. Varshney, Jonathan Katz, and Aram Khalili. A pairwise key
predistribution scheme for wireless sensor networks. ACM Trans. Inf. Syst. Secur., 8(2):228-258, 2005.
[12] Laurent Eschenauer and Virgil D. Gligor. A key-management scheme for distributed sensor networks. In CCS '02: Proceedings
of the 9th ACM conference on Computer and communications security, pages 41-47, New York, NY, USA, 2002. ACM.

[13] Jooyoung Lee and D. R. Stinson. A combinatorial approach to key predistribution for distributed sensor networks. Wireless
Communications and Networking Conference, 2:1200-1205, 13-17 March 2005.
[14] Jooyoung Lee and Douglas R. Stinson. On the construction of practical key predistribution schemes for distributed sensor
networks using combinatorial designs. ACM Trans. Inf. Syst. Secur., 11(2), 2008.
[15] Donggang Liu and Peng Ning. Establishing pairwise keys in distributed sensor networks. In ACM Conference on Computer and
Communications Security, pages 52-61, 2003.
[16] Donggang Liu, Peng Ning, and Wenliang Du. Group-based key predistribution in wireless sensor networks. In Workshop on
Wireless Security, pages 11-20, 2005.
[17] Donggang Liu, Peng Ning, and Wenliang Du. Group-based key predistribution for wireless sensor networks. TOSN, 4(2), 2008.
[18] Donggang Liu, Peng Ning, and Rongfang Li. Establishing pairwise keys in distributed sensor networks. ACM Trans. Inf. Syst.
Secur., 8(1):41-77, 2005.
[19] Keith M. Martin, Maura B. Paterson, and Douglas R. Stinson. Key predistribution for homogeneous wireless sensor networks
with group deployment of nodes, 2008.
[20] R. Kannan, S. S. Iyengar, R. Kalidindi, and A. Durresi. Sub-grid based key vector assignment: A key pre-distribution scheme for
distributed sensor networks. Journal of Pervasive Computing and Communications, 2(1):35-43, 2006.
[21] Mohammed Golam Sadi, Dong Seong Kim, and Jong Sou Park. GBR: Grid based random key predistribution for wireless sensor
network. In ICPADS (2), pages 310-315, 2005.
[22] Katerina Simonova, Alan C. H. Ling, and Xiaoyang Sean Wang. Location-aware key predistribution scheme for wide area
wireless sensor networks. In SASN, pages 157-168, 2006.

[23] Jooyoung Lee and D. R. Stinson. Common intersection designs. Journal of Combinatorial Designs, 14:251-269, 2006.
[24] Li Zhou, Jinfeng Ni, and Chinya V. Ravishankar. Supporting secure communication and data collection in mobile sensor
networks. In INFOCOM, 2006.
[25] Sencun Zhu, Shouhuai Xu, Sanjeev Setia, and Sushil Jajodia. Establishing pairwise keys for secure communication in ad hoc
networks: A probabilistic approach. In ICNP, pages 326-335, 2003.
[26] Douglas R. Stinson. Combinatorial Designs: Construction and Analysis. Springer-Verlag, 2004.


Power Aware Routing for Path Selection with Minimum Traffic in Mobile
Adhoc Network
Sathya. E1, Rajpriya. G2
ME-Embedded System Technologies, Assistant Professor
Dept of EEE, Nandha Engineering College, Erode-52
Tamil Nadu, India
sath017@gmail.com1, grajpriya89@gmail.com2
ABSTRACT: A mobile Ad-hoc network (MANET) is a collection of wireless nodes that forms a network without central
administration. The nodes in such a network serve as routers as well as hosts. The nodes can forward packets on behalf of other
nodes and run user applications. These devices are operated on batteries which provide limited working capacity to the mobile nodes.
Power failure and the energy consumption of the nodes are critical factors in the operation of a mobile ad hoc network. The
performance of a node can be hampered by power failure, which affects its ability to forward packets and hence affects
the overall network lifetime. So an important objective is to consider Energy Aware design of network protocols for the Ad hoc network
environment. Different approaches can be applied to achieve this target, using different energy-related metrics to
determine an energy efficient routing path. A more efficient algorithm is proposed here, which tries to maximize the lifetime of the network by
minimizing the power consumption during route establishment from source to destination. The proposed algorithm is incorporated
into the route discovery phase of AODV, and by simulation using NS-2 it is observed that the proposed algorithm is better than AODV
in terms of packet delivery ratio and network lifetime.

Keywords MANET; AODV; PAR; DSR; Online Max-Min; LEAR; Packet Delivery Fraction
I. INTRODUCTION
Mobile ad-hoc network is a group of wireless mobile nodes that forms a provisional network without any centralized administration.
In a MANET the communication between the mobile nodes is done via multi-hop paths. Due to the limited transmission range of wireless
network interfaces, it may be necessary for one node to enlist other nodes in forwarding a packet to its destination. Each mobile
node operates as a host as well as a router forwarding packets for other mobile nodes in the network which may not be within the
direct transmission range of each other. Each node participates in route discovery and in an ad-hoc routing protocol which allows it to
determine multi-hop paths through the network to any other node. This idea of mobile ad-hoc network is also called infrastructure less
networking, since the mobile nodes in the network dynamically establish routing among themselves to form their own network. Nodes
operate in shared wireless medium. Network topology changes unpredictably and very dynamically. Radio link reliability is an issue.
Connection breaks are very frequent. Furthermore, parameters like density of nodes, number of nodes and mobility of these hosts may
vary in different applications. Since there is no stationary infrastructure, some factors become critical for communication. The main
critical factor in a mobile ad hoc network is power consumption, which causes delay during data packet delivery and decreases the
network lifetime. There are many protocols for overcoming this
problem. Routing in MANET is challenging due to node mobility, limitations for transmission bandwidth, battery power, and CPU
time. In a MANET, nodes cooperate in routing the packets to the destination. Each node in the network communicates only with those nodes
that are located within its communication range, and the destination may be multiple hops away from the source. Death of a few
or even a single node due to energy exhaustion can cause a breakdown in communication of the entire network. While computing the accumulated
energy, the status of each node is checked; if a node is estimated to fall below the required level after transmitting, that path is discarded.
Also, as the type and size of the data are known, the battery status of every node can be estimated after transmitting the required data; care is
taken while selecting the route such that no node gets exhausted completely after the data transmission and thereby
becomes dead. In such a case an alternate route is selected. The estimation of battery status can be done from the details sent by
the node when it sends the route request packet. The route request packet header carries the following information: Source_id,
Destination_id, Type of Data to be transferred, Total Battery Status, Total Traffic Level and Node_id. The total traffic level is calculated from
the packets buffered in the interface queue of the node. This problem is addressed using the Power Aware Routing (PAR) protocol, which
increases the network lifetime and reduces the delay.

II EXISTING SYSTEM
The MANET environment is typically characterized by energy-constrained nodes, variable-capacity, bandwidth-constrained
wireless links and dynamic topology, leading to frequent and unpredictable connectivity changes. Since those mobile devices are
battery operated and extending the battery lifetime has become an important objective, researchers and practitioners have recently
started to consider power-aware design of network protocols for the Ad hoc networking environment. As each mobile node in a
MANET performs the routing function for establishing communication among different nodes the death of even a few of the nodes
due to energy exhaustion might cause disruption of service in the entire network. In critical environments such as military or rescue
operations, where ad hoc networks will be typically used, conserving of battery power will be vital in order to make the network
operational for long durations. Recharging or replacing batteries will often not be possible. This makes the study in energy-aware
routing critical. The challenge in ad hoc networks is that even if a host does not communicate on its own, it still frequently forwards
data and routing packets for others, which drains its battery. Switching off a non-communicating node to conserve battery power may
not be always a good idea, as it may partition the network. In a conventional routing algorithm, which is unaware of the energy budget,
connections between two nodes are established through the shortest-path routes. This algorithm may however result in
a quick depletion of the battery energy of the nodes along the most heavily used routes in the network. The main focus of this research
is to design a power-aware routing protocol that balances the traffic load inside the network so as to increase the battery lifetime of the
nodes and hence the overall useful life of the ad hoc network.
Different approaches can be applied to achieve the target [2]. Transmission power control and load distribution are two approaches
which minimize the active communication energy, and a sleep/power-down mode is used to minimize energy during inactivity. The
primary objective is to minimize the energy consumption of each individual node. The load distribution method tries to balance the energy
requirement among the nodes and increases the network lifetime. This can be done by avoiding over-utilized nodes while selecting a
routing path. In the transmission power control approach, a stronger transmission power is used to increase the transmission range and
reduce the hop count to the destination; if a weaker transmission power is selected, it makes the topology sparse, which may partition
the network and produce a high end-to-end delay due to a larger hop count. To determine an energy efficient routing path, different
energy-related metrics have been used like: Energy consumed/packet, Time to network partition, Variance in node power levels,
Cost/packet, and Maximum node cost. Transmission power control approaches are discussed in Flow Augmentation Routing
(FAR) [3], where the network is considered as a static network and the aim is to find the optimal routing path for a given source-destination
pair that minimizes the sum of link costs along the path. Online Max-Min (OMM) [4] achieves the same but the data generation rate is
not known in advance. Power-aware Localized Routing (PLR) assumes that a source node has all location-related information of its
neighbors and the destination. Minimum Energy Routing (MER) [5] addresses issues like obtaining accurate power information, the
associated overheads and the maintenance of the minimum energy routes in the presence of mobility, and implements the transmission power
control mechanism in DSR and the IEEE 802.11 MAC protocol. Some proposals consider the load distribution approach. The
Localized Energy Aware Routing (LEAR) protocol [6] is based on DSR but modifies the route discovery procedure for balanced
energy consumption. In LEAR, a node decides whether to forward the route-request message or not depending on its residual
battery power. The Conditional Max-Min Battery Capacity Routing (CMMBCR) protocol uses the concept of a threshold to exploit the
lifetime of each node and to use the battery fairly. The existing system increases the lifetime of the network and reduces the power expenditure
during route establishment using a secure cryptographic method. Only a secure node having the required energy level can participate
in the route discovery phase and data transmission. This algorithm can transfer both real-time and non-real-time traffic by providing an energy
efficient and less congested path between a source and a destination.

III PROPOSED SYSTEM


AODV routing protocol is a reactive routing protocol; therefore, routes are determined only when required. Hello messages
may be used to detect and monitor links to neighbors. If Hello messages are used, each active node periodically broadcasts a Hello
message that all its neighbors receive. Because nodes periodically send Hello messages, if a node fails to receive several Hello
messages from a neighbor, a link break is detected. When a source has data to transmit to an unknown destination, it broadcasts a
Route Request (RREQ) for that destination. At each intermediate node, when a RREQ is received a route to the source is created. If
the receiving node has not received this RREQ before, is not the destination and does not have a current route to the destination, it
rebroadcasts the RREQ. If the receiving node is the destination or has a current route to the destination, it generates a Route Reply
(RREP). The RREP is unicast in a hop-by-hop fashion to the source. The control messages are route request, route reply and Hello
messages. Dynamic Source Routing (DSR) also belongs to the class of reactive protocols and allows nodes to dynamically discover a
route across multiple network hops to any destination. Source routing means that each packet in its header carries the complete
ordered list of nodes through which the packet must pass. Multipath routing appears to be a promising technique for ad hoc routing
protocols. Providing multiple routes is beneficial in network communications, particularly in MANETs, where routes become obsolete
frequently because of mobility and poor wireless link quality. The source and intermediate nodes can use these routes as primary and
backup routes. Alternatively, traffic can be distributed among multiple routes to enhance transmission reliability, provide load
balancing, and secure data transmission. The multipath routing effectively reduces the frequency of route discovery therefore the
latency for discovering another route is reduced when currently used route is broken. Multiple paths can be useful in improving the
effective bandwidth of communication, responding to congestion and heavy traffic, and increasing delivery reliability.

In PAR a feasible path which satisfies the bandwidth constraint is searched for. In contrast to the flooding-based algorithms,
PAR searches only a small number of paths, which limits the routing overheads. In order to maximize the chance of finding a feasible path,
the information is collectively utilized to make a hop-by-hop selection. This protocol not only considers the QoS requirement but
also considers the optimality of the routing path in terms of energy efficiency. If a specific QoS request is not made by a user,
then high-energy paths are chosen by PAR in order to improve the overall network lifetime. In the case of PAR a simple energy
consumption model has been used to calculate the energy values at different times. This model is already discussed in existing system.
The nodes involved in the communication are continually in motion and also deplete their energy in transmission and reception of
each bit. An out of range node or an energy depleted node may cause a link failure. A link failure may trigger an end to end
reconstruction of the route through fresh route discovery process or a local repair that determine an alternate path to circumvent the
failed link. Global reconstruction is costly and prohibitive when frequent link failures occur. PAR uses the local repair for route
maintenance. Most of the routing protocols depend on IEEE 802.11 with acknowledgement to confirm packet delivery. When a node
does not receive any acknowledgement in a limited period of time, the link is considered as broken; and route maintenance starts.
Whenever a link failure takes place either due to energy depletion or mobility, PAR invokes a route maintenance phase.

POWER AWARE ROUTING ALGORITHM


The general algorithm for power aware routing is shown below
If (T_O_L == NRT)
    Let N be the number of different values of R received, where R >= 1
    If (N == 0)
        Send a negative acknowledgement to the source that the path cannot be established.
    Else-if (N == 1)
        Acknowledge the source with this path.
    Else-if (N > 1)
        Select the path with min {T_T_L}; acknowledge the source with the selected path.
Else-if (T_O_L == RT)
    Let N be the number of different values of R received, where R >= 2
    If (N == 0)
        Send a negative acknowledgement to the source that the path cannot be established.
    Else-if (N == 1)
        Acknowledge the source with this path.
    Else-if (N > 1)
        Select the path with min {T_T_L}; acknowledge the source with the selected path.
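A direct Python rendering of the pseudocode above, keeping its notation (T_O_L, R, T_T_L); the shape of the route replies and the example values are assumptions of this sketch rather than part of the original listing.

def select_path(replies, t_o_l):
    # replies: list of (R, T_T_L, path) tuples gathered during route discovery.
    # Non-real-time (NRT) traffic accepts paths with R >= 1, real-time (RT)
    # traffic requires R >= 2; among candidates the minimum T_T_L path wins.
    threshold = 1 if t_o_l == "NRT" else 2
    candidates = [rep for rep in replies if rep[0] >= threshold]
    if not candidates:                       # N == 0: no feasible path
        return None                          # negative acknowledgement
    return min(candidates, key=lambda rep: rep[1])[2]

replies = [(1, 14, ["S", "A", "D"]), (2, 9, ["S", "B", "C", "D"])]
print(select_path(replies, "NRT"))   # ['S', 'B', 'C', 'D']
print(select_path(replies, "RT"))    # ['S', 'B', 'C', 'D']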
Block Diagrams For Modules
The below Figure 1 explains the network formation with route discovery. Each time the route is discovered it will be updated in the
routing table.

Fig 1: Block diagram for network formation (network formation -> route discovery -> updating the route table)



Implementing Power Aware Routing Algorithm


Figure 2 below explains the selection of the path with the minimum traffic level. To select the path, the number N of received R values is
checked; the value of R should be at least 1.
Fig 2: Block diagram for path selection with minimum traffic (checking the type of data -> selecting the N value based on R -> selecting the path with minimum traffic level)
Route Discovery Algorithm
Figure 3 below describes how the path with maximum energy is elected. The node creates a route request packet with three
special fields and attaches its energy metrics to it. This packet is forwarded to the neighboring nodes. The maximum energy of each path is
calculated and received by the destination, and the destination selects the path
with maximum energy.

List of Modules

The implementation of the system, which is composed of following modules:


Network model
Parameters of route search
Power aware routing (PAR) Algorithm
Route Discovery Algorithm
Energy Based Path Selection
Maintenance and Performance evaluation

Fig 3: Block diagram for path selection with maximum energy (create a route request packet -> attach the battery status -> forward the route request -> maximum energy of the path is calculated -> received by destination -> path with maximum energy is selected)

Network Model
We set up networks of 20 and 50 nodes in an area of 1000 m * 1000 m. In the different scenarios, from small networks to large networks,
the value of the packet delivery ratio has been observed by varying pause times from 0 to 500 and changing the speed from 1 meter
per second to 25 meters per second.
Parameters on each node: each node has 3 variables:
1. Node_ID: Used for node identification. Each node is identified by unique ID.
2. Battery Status (B_S): Total energy at node.
3. Traffic Level (T_L):Number of packets stored in the interface queue of the node.

Route Search Mechanism


During the route discovery phase, a route request (RREQ) packet is broadcast by the source to all its neighbor
nodes to get information about the destination. The RREQ packet header includes source_id, destination_id, T_O_L (type of data to be
transferred), T_B_S (Total Battery Status), T_T_L (Total Traffic Level), and Node_IDs.
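A small sketch of the RREQ header described above, together with one plausible way an intermediate node could fold in its own battery status and traffic level before re-broadcasting; the field names follow the paper, while the accumulation rule and helper function are assumptions of this sketch.

from dataclasses import dataclass, field
from typing import List

@dataclass
class RREQ:
    source_id: int
    destination_id: int
    t_o_l: str                       # type of data to transfer: "RT" or "NRT"
    t_b_s: float = 0.0               # total battery status accumulated on the path
    t_t_l: int = 0                   # total traffic level accumulated on the path
    node_ids: List[int] = field(default_factory=list)

def forward(rreq: RREQ, node_id: int, battery: float, queued_packets: int) -> RREQ:
    # An intermediate node appends itself before re-broadcasting the RREQ.
    rreq.node_ids.append(node_id)
    rreq.t_b_s += battery            # accumulate battery status (assumed additive)
    rreq.t_t_l += queued_packets     # traffic level = packets in the interface queue
    return rreq

req = RREQ(source_id=1, destination_id=9, t_o_l="NRT")
forward(req, node_id=4, battery=0.8, queued_packets=5)
print(req)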

Maintenance and Performance Evaluation


The performance of the proposed system is analyzed through the following
Network Lifetime: Network life time is defined as the time taken for 50 % of the nodes in a network to die. The effect of
pause time and speed of nodes on network lifetime is evaluated.
Packet Delivery Fraction (PDF): Packets may be lost due to sudden link failures, or during route maintenance phase. PDF is the
fraction of successfully received packets, which survive while finding their destination. This performance measure determines the
efficiency of the protocol to predict a link breakage and also the efficacy of the local repair process to find an alternate path. The
completeness and correctness of the routing protocol is also determined

Fig 4 Network Formation

Route Reconstruction in Q-PAR: Whenever a link failure takes place either due to energy depletion or due to mobility, route
reconstruction or route repair phase is invoked in the routing protocols. Average end-to-end delay is the delay experienced by the
successfully delivered packets in reaching their destinations.
IV SIMULATION RESULT
The performance evaluation graph of the data packet delivery ratio for Power Aware Routing (PAR) is estimated. It is
also compared with another energy aware routing protocol, the Localized Energy Aware Routing protocol (LEAR). Figures 4
and 5 show the output of the proposed system. Figure 4 shows the output of the network model; the parameters are set to their
defaults in each node. Figure 5 shows the output of data transmission. The data packet is transmitted from the source to the
destination based on certain criteria. From the available paths, the path with nodes of minimum traffic and with high battery status is
selected as the optimal path. The output is shown below. Graph 1 below shows the data packet delivery ratio of a node while
transmitting data from source to destination, based on the distance between the nodes.

Fig 5 Data delivering through optimal path

Fig 6 Graph 1 Data Packet Delivery Ratio of LEAR vs PAR


Here the protocol from the existing system is compared with the proposed system; that is, LEAR is compared with Power Aware Routing (PAR).
Table 1 Data Packet Delivery Ratio for LEAR


SNO | STABILITY WEIGHT | PERCENTAGE
1.  |                  | 99.0000
2.  | 5.0000           | 98.3000
3.  | 10.0000          | 98.2000
4.  | 15.0000          | 93.8000
5.  | 20.0000          | 92.0000
Table 1 above shows how the data packet delivery ratio of a node varies when using LEAR.
V CONCLUSION
Energy efficiency is one of the main problems in a mobile ad hoc network, especially when designing a routing protocol. The
existing work aims at discovering an efficient power aware routing scheme in MANETs and analyzing the derived algorithm with the
help of NS-2. This scheme is one of its kind in ad-hoc networks, as it can provide different routes for different types of data transfer
and ultimately increases the network lifetime. However, delivery latency is increased by using the SPAR of the existing system; hence we
proposed the energy-stable PAR routing technique that determines bandwidth-constrained paths that are most likely to last for the
session in ad-hoc networks that have a paucity of energy. The protocol considers only energy stability for local reconstruction of the
routes, to avoid packet loss and costly global reconstruction. The protocol is able to enhance the network lifetime by performing local
repair when link failures occur due to energy depletion of nodes, and significantly improves the overall efficiency of packet delivery.
FUTURE ENHANCEMENT
However, a priori estimation of the bandwidth and admission control to ensure bandwidth availability between wireless links are
required to ensure the performance of the protocol. This prior knowledge may be an overhead, and in future this can be avoided.
ACKNOWLEDGEMENT
First of all we sincerely thank the almighty who is most beneficent and merciful for giving us knowledge and courage to complete the
Research work successfully. We also express our gratitude to all the teaching and non-teaching staff of the college especially to our
department for their encouragement and help done during our work. Finally, we appreciate the patience and solid support of our
parents and enthusiastic friends for their encouragement and moral support for this effort.
REFERENCES
[1] Ajina A , G.R.Sakthidharan , Kanchan M. Miskin Study of Energy Efficient Power Aware Routing Algorithm and their
Applications This paper appears in: 2010 Second International Conference on Machine Learning and Computing
[2] AL-Gabri Malek, Chunlin Li, Li Layuan, WangBo New Energy Model:Prolonging the Lifetime of Ad-hoc On-Demand
Distance Vector Routing protocols (AODV) 2010 International Conference
[3] Alokes Chattopadhyay, Markose Thomas, Arobinda Gupta An Energy Aware Routing Protocol for Mobile Ad-Hoc Networks
15th International Conference on Advanced Computing and Communications 0-7695-3059-1/07 $25.00 2007 IEEE DOI
10.1109/ADCOM.2007.70
[4] Forman G., Zahorjan J., The Challenges of Mobile Computing, IEEE Computer 1994; 27(4):38-47.
[5] I. Stojmenovic and S. Datta, Power Aware Routing Algorithms with Guaranteed Delivery in Wireless Networks, 2001.
[6] J. Broch, D.A. Maltz, D.B. Johnson, Y.C. Hu, and J. Jetcheva, A Performance Comparison of Multi Hop Wireless Ad Hoc
Network Routing Protocols,Proc. Conf. Mobile Computing, MOBICOM, pp. 85-97, 1998.
[7] Morteza Maleki, Karthik Dantu, and Massou Pedram Power-aware Source Routing Protocol for Mobile Ad Hoc Networks
This research was sponsored in part by DARPA PAC/C program under contract no. DAAB07-00-C-L516. ISLPED02, August 12-14,
2002, Monterey, California, USA.
[8] Perkins C., Ad Hoc Networking Addison-Wesley: 2001; 1-28.
[9] Pinyi Ren, Jia Feng and Ping Hu Energy Saving Ad-hoc On-Demand Distance Vector Routing for Mobile Ad-hoc Networks
This full text paper was peer reviewed at the direction of IEEE Communications Society subject matter experts for publication in the
IEEE ICC 2009 proceedings 978-1-4244-3435-0/09/$25.00 2009 IEEE
[10] S. Basagni, I. Chlamtac, V.R. Syrotiuk, and B.A. Woodward, A Distance Routing Effect Algorithm for Mobility (DREAM),
Proc. Conf. Mobile Computing, MOBICOM, pp. 76-84, 1998.
[11] W.R. Heinzelman, A. Chandrakasan, and H. Balakrishnan, Energy-Efficient Routing Protocols for Wireless Microsensor
Networks, Proc. Hawaii Int'l Conf. System Sciences, Jan. 2000
[12] Y.B. Ko and N.H. Vaidya, Location-Aided Routing (LAR) in Mobile Ad Hoc Networks, Proc. Conf. Mobile Computing,
MOBICOM, pp.66-75


Emoticon-based unsupervised sentiment classifier for polarity analysis in tweets
Geeta.G.Dayalani1
CSE Department,
Everest Educational Society's Group of Institutions,
Dr.Seema Quadri Institute of Tech,Dr. B.A.M.U,
Aurangabad, Maharashtra, India.
geetadayalani@gmail.com
Prof.B.K.Patil2
Everest Educational Society's Group of Institutions,
Dr.Seema Quadri Institute of Tech,Dr. B.A.M.U,
Aurangabad, Maharashtra, India.
Cseroyal7@gmail.com
________________________________________________________________________________________________________________

Abstract: Today, microblogging has become a very popular communication tool among millions of Internet users. A vast number of
users share their opinions on different aspects of life every day, which makes microblogging web-sites rich sources of data for sentiment
analysis. Sentiment analysis is the process of detecting the contextual polarity of text. In our paper, we focus on one of the famous
microblogging platforms, Twitter, for performing sentiment analysis on it. We propose a simple and completely automatic approach
for analyzing the sentiment of users in Twitter. Firstly, we build a Twitter corpus (entirety) by grouping tweets into positive and
negative polarity classes through a completely automatic procedure that uses only the emoticons in the tweets. Then, we build a sentiment
classifier where an actual stream (creek) of tweets is processed and its content is classified as positive, negative or neutral. The classification is
made without the use of any pre-defined dictionary or polarity thesaurus; the thesaurus is automatically inferred from the stream of
tweets. We observe that our simple system captures polarity perceptions matching reasonably well with the classification done by
human judges.

Keywords Sentiment classifier, Twitter, Emoticon, tweets, creek, polarity, entirety, corpus
I.INTRODUCTION
Micro blogging websites have evolved to become a source of varied kind of information. Millions of messages are appearing daily in
popular web-sites that provide services for micro blogging such as Twitter, Facebook. Authors of those messages write about their
life, share opinions on variety of topics and discuss current issues. Because of a free format of messages and an easy accessibility of
micro blogging platforms, Internet users tend to shift from traditional communication tools (such as traditional blogs or mailing lists)
to micro blogging services. As more and more users post about products and services they use, or express their political and religious
views, micro blogging web-sites become valuable sources of people's opinions and sentiments. Such data can be efficiently used for
marketing or social studies.
We use a dataset formed of collected messages from Twitter. Twitter contains a very large number of very short messages created by
the users of this micro blogging platform. The contents of the messages vary from personal thoughts to public statements. As the
audience of micro blogging platforms and services grows every day, data from these sources can be used in opinion mining and
sentiment analysis tasks.
In our paper, we study how micro blogging can be used for sentiment analysis purposes. We show how to use Twitter as a corpus for
sentiment analysis and opinion mining. We use micro blogging, and more particularly Twitter, for the following reasons:
Microblogging platforms are used by different people to express their opinions about different topics; thus they are a valuable source of
people's opinions.
Twitter contains an enormous number of text posts and it grows every day. The collected corpus or the entirety can be arbitrarily
large.
Twitter's audience varies from regular users to the most dynamic celebrities, company representatives, politicians, and even country
presidents. Therefore, it is possible to collect text posts of users from different social and interest groups.
Twitter's audience is represented by users from many countries. Although users from the U.S. prevail, it is possible to collect data
in different languages.
We collected a corpus of 300000 text posts from Twitter evenly split automatically between three sets of texts:
1. Texts containing positive emotions, such as happiness, amusement or joy


2. Texts containing negative emotions, such as sadness, anger or disappointment
3. Objective texts that only state a fact or do not express any emotions
We perform linguistic analysis of our corpus and we show how to build a sentiment classifier that uses the collected corpus as training
data. Twitter is one of the most popular social networking websites and has been growing at a very fast pace. The number of active
users exceeds 500 million and the number of tweets posted per day exceeds 500 million (as of May 2012). Through the Twitter
applications, users share opinions about personalities, politicians, products, companies, events, etc. Twitter users generate about 200
million tweets (short messages of up to 140 characters) per day. Textual information in tweets can be divided into two main categories:
facts and opinions. Facts are objective news about entities and events, while opinions reflect people's sentiments.
In this paper, we propose an unsupervised approach for classifying the sentiment of tweets. The approach is based on two main
phases: an automatic generation of a training dataset and a polarity classification phase. The proposed methodology does not use any
external resources such as thesaurus dictionaries or other manually tagged datasets but it uses only the sentiment expressed by
emoticons in the training dataset as a source of information. The word "emoticon" is a combination of the words "emotion" and "icon".
Emoticons are used online to convey intonation or voice inflection, bodily gestures and the emotion behind statements that might otherwise
be misinterpreted. Emoticons are graphic representations of users' moods, obtained by proper combinations of characters. When web
users use an emoticon, they are effectively marking up their own text with an emotional state. Considering as "positive" and
"negative" the tweets containing respectively positive and negative emoticons, the proposed approach is based on the assumption that
a word appearing frequently in positive and rarely in negative tweets should have a high positive polarity score and, analogously, that a
word appearing frequently in negative rather than in positive tweets should have a low polarity score. Therefore, according to this
assumption the method performs a classification of tweets by analyzing their composing words. We use this corpus to create a polarity
dictionary to recognize positive, neutral and negative posts without using any standard classifier, language translators or manually
tagged texts. Since the approach is unsupervised, it is adaptable to other languages without any manual intervention.
The rest of the paper is organized as follows:
In Section 2, we discuss prior works in the literature on opinion mining and sentiment analysis and their application to blogging and
microblogging, most particularly Twitter. In Section 3, we describe the proposed system. Finally, in Section 4 we conclude and
specify the possible future directions for this study.

II.LITERATURE SURVEY
With the growth of blogs and social networks, sentiment classification has become a field of interest for many researchers. An overview of
the existing work was presented in (Pang and Lee, 2008). In their survey, the authors describe existing techniques and approaches for
sentiment-oriented or opinion-oriented information retrieval. However, not much research in opinion mining has considered blogs,
and even less has addressed microblogging.
Sentiment analysis has been a burning topic for quite a few years [6]. Recently, it is used as an effective tool to understand the
opinions of the public and also in various social media applications [7].Similar to conventional sentiment analysis on product and
movie reviews, most existing methods in social media can fall into supervised learning methods [8, 9] and unsupervised learning
methods [10,11]. Unsupervised learning becomes more and more prominent in real-world social media applications. The reason
behind this is the lack of label information and the large-scale data produced by social media services. Generally, the approaches
proposed in literature rely on the use of polarity and sentiment lexicons containing lists of words with semantic or the linguistic
orientations. The most representative way to perform the unsupervised sentiment analysis is the lexicon-based method. The methods
rely on a pre-defined sentiment lexicon to determine the general sentiment polarity of a given document. The existing methods can be
generally divided into three categories. The first is to employ a group of human annotators to manually label a set of words to build
the sentiment lexicon. The second is dictionary-based methods [12, 13], which employ a dictionary, e.g., WordNet, to learn the
sentiment orientation of a word from its semantically related words that are mined from the dictionary. As an example, [16] propose
a system to create an emoticon-based dictionary and judge the emotion of a news article based on emotion words, idioms and modifiers.
The dictionary is created by using WordNet, which is a standard English thesaurus, and a set of articles that are manually tagged with
emotions, phrases and idioms. They calculate an emotion score by subtracting the number of negative emotion words such as sad,
anger, etc. from the number of positive emotion words like happy, excited, etc. On the other end, [17] [18]
used WordNet [19] for measuring semantic orientations of adjectives. Furthermore, [20] presented WordNetAffect while [21]
presented SentiWordNet as an additional sentiment resource. The third method to classify the sentiments is called corpus-based
methods [14,15], which infer sentiment orientation of the words from a given corpus by exploring the relation between the words and
some observed seed sentiment words/information, and then build a domain-dependent sentiment thesaurus. Other approaches exploit
annotated data sets, where the annotation is made mostly manually. [3] introduce a corpus of manually annotated tweets with seven
emotions: anger, disgust, fear, joy, love, sadness and surprise. They use the annotated corpus to train a classifier that automatically
discovers the emotions in tweets. Pak and Paroubek [2] collect a corpus for sentiment analysis and opinion mining purposes using
the Twitter API. They query Twitter for two types of emoticons: happy emoticons and sad emoticons. The two types of collected corpora
are then used to train a multinomial Naive Bayes classifier to recognize positive and negative sentiments. They query the accounts of 44
newspapers to collect a training set of objective texts. [1] acquire 11,875 manually annotated Twitter data from a commercial source
and use Google Translator to convert it into English. Each tweet is labeled by a human annotator as positive, negative, neutral or junk.
For obtaining the prior polarity of words, they use Dictionary of Affect in Language (DAL). [22] Combine semantic analysis with a
syntactic parser at statement level to capture opposite sentiments in the same expressions. They include a manually defined sentiment
lexicon and use a Markov-model based tagger for recognizing parts of speech to identify sentiments related to the subject. [23] combine
emoticons, negation word position, and domain-specific words. These approaches are restricted to specific domains and the process is
very time-consuming and subjective, reducing its real-time applicability, especially considering big data. The work presented in this
paper is quite similar but it follows a diverse approach: the tweets are collected in streams and therefore represent a true mock-up of
actual tweets in terms of the language use and content. Different from traditional lexicon-based methods, we perform unsupervised
sentiment analysis from a novel perspective.

III.PROPOSED WORK
A) Tweets polarity Evaluation
Twitter is a popular microblogging service where users create status messages called "tweets". These tweets sometimes express
opinions about different topics. Twitter messages are also used as data source for classifying sentiment. The sentiment can be analyzed
by various preprocessing and the overall sentiment for the sentence can be analyzed. In the proposed method, one of the simplest
approaches is used, which utilizes the presence of emoticons in tweets. Emoticons are facial expressions that are pictorially represented
using punctuation marks and letters. Emoticons express the mood of users and are used for the polarity calculation of tweets. Our
assumption is that the presence of an emoticon in a tweet intimates the sentiment that the microblogger wants to express.
Due to the limitation of the length of a tweet, which is 140 characters per tweet, the emotion expressed by a particular emoticon may
generally suggest the sentiment or the subjectivity of the whole text in the tweet. From this consideration, we demonstrate an
automatic procedure for the creation of a lexical resource. This procedure does not require any kind of predefined dictionaries, but it
only requires the processing of tweets containing emoticons. The advantage is that in this manner we are able to map and enrich
common expressions with newly created words, slangs, grammatical errors. Our proposed system grabs the polarity differences and
classifies tweet messages as positive, negative or neutral in accordance with their emotional content.
B) Sentiment Analysis Procedure
Generally, Sentiment Analysis is done in two stages:
i) Training Stage(Figure 1)
ii)Formulation Stage(Figure 2).
The Figure below shows the phases of each stage in detail.
In the Training Stage (Figure 1), a training dataset is generated automatically without using any predefined dictionaries, external lexicons,
external thesauri or any other manually tagged documents. Tweet messages containing emoticons are retrieved by using the Twitter
APIs (Data Gathering phase) and then grouped into two sets, a positive set and a negative set, containing positive and negative
emoticons respectively (Positive/Negative Emoticon Class Formulation phase). The tweets belonging to each of the sets are
then selected according to a specific language (Language Identification phase) and then pre-processed (Data
Preprocessing phase). A polarity score is assigned to each word of these tweets (Word Polarity Evaluation phase).
We assign a high or a low polarity score depending on how frequently a word appears in each set:
a high polarity score is assigned to a word if it frequently appears in the positive set and, at the same time, rarely in the negative set;
a low polarity score is assigned to a word if it frequently appears in the negative set and, at the same time, rarely in the positive set.
In this way, a thesaurus is created.
The created dictionary of words can then be used in the next stage that is the Formulation Stage (Figure 2) for the detection of
emotional content in a generic twitter stream. We assign a polarity score to each tweet as the average of the sum of the polarity scores
of its words (Tweet Polarity Evaluation phase). If the polarity of a tweet exceeds a given positive threshold value we classify it as
positive; if its score is below a negative threshold value we classify it as negative. If neither condition holds, then it is
considered neutral.

Figure 1: Training Stage (phases: Data Gathering, Language Identification, Emoticon Pattern Identification, Formulation of class with Positive/Negative emoticons, Data Preprocessing, Word Polarity Evaluation)

Figure 2: Formulation Stage (phases: Data Gathering, Language Identification, Data Preprocessing, Tweet Polarity Evaluation using the Word Polarity Thesaurus; outputs: Positive Tweets, Negative Tweets, Neutral Tweets)


The following section describes the above steps in detail.


i) Dictionary of polarized words
To obtain our dictionary of polarized words, we focus on emoticons in tweets to select messages and associate a sentiment mood
within a message. Due to the limitation of the length of a tweet which is 140 characters per tweet, the emotion expressed by a
particular emoticon may generally suggest the sentiment or the subjectivity of the whole text in the tweet. This is true except for a very
small number of cases: it does not hold for ironic or sarcastic tweets, which are difficult to classify even for a human expert. Thus,
we state that if a word occurs more frequently in one class, it is more strongly related to the corresponding emotion. We exploit a set of
emoticons classified as being positive or negative. Table I and Table II show samples of positive and negative emoticons. As a
consequence, we split the tweet messages into two classes:
Positive class: tweets containing positive emoticons;
Negative class: tweets containing negative emoticons.
TABLE I: Examples of positive emoticon icons with their meanings.
Smiley: :-) =) :) 8) :] =] => 8-) :-> :-] :) :)
Kiss: =* :-* :*
Heart: <3
Wink: ;-) ;) ;-] ;] ;-> ;> %-}
Feel Cool: B) B-) B| 8|
Tongue sticking out: :P =P
Laughing: :-D :D =D :-P =3 xD
Happy Face: :3 :> :) :-3 => :-> :-V =v :-1
Surprised: O.o o.O

TABLE II: Examples of negative emoticon icons with their meanings.
Cry: :( :,( :-( :,-( :( :-(
Angry, Sadness: :-( :( :[ :-< :-[ =( :-@ :-& :-t :-z :<) }-(
Speechless: :o :O :-o :-O
Troubled, Annoyed: :-\ :/ :-/ :\
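The class-formulation step described above can be sketched in a few lines of code; this is only an illustration (the emoticon lists below are small subsets of Tables I and II, and the function name and discard rule for mixed tweets are assumptions).

POSITIVE_EMOTICONS = [":-)", "=)", ":)", ":D", "=D", "<3", ";-)", ":P"]
NEGATIVE_EMOTICONS = [":(", ":-(", ":,(", ":[", ":o", ":O", ":/", ":-/"]

def split_by_emoticon(tweets):
    # Label raw tweets as positive or negative using only the emoticons they contain.
    positive, negative = [], []
    for text in tweets:
        has_pos = any(e in text for e in POSITIVE_EMOTICONS)
        has_neg = any(e in text for e in NEGATIVE_EMOTICONS)
        if has_pos and not has_neg:
            positive.append(text)
        elif has_neg and not has_pos:
            negative.append(text)
        # tweets with both kinds of emoticon, or none, are left out of the training set
    return positive, negative

pos_set, neg_set = split_by_emoticon(["great day :)", "so sad :(", "ok"])
print(len(pos_set), len(neg_set))   # 1 1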

ii) Word Polarity Evaluation


For each word wa, we count the number of occurrences in the positive and negative emoticon sets, and we propose the following formula
to calculate its polarity score:
Polarity(wa) = ( oc+(wa) - oc-(wa) ) / ( oc+(wa) + oc-(wa) )
where,
oc+(wa) = word occurrences in the positive class.
oc-(wa) = word occurrences in the negative class.
The polarity value of a word lies between -1 and 1, where -1 means strongly negative, 0 means neutral, and 1 means strongly
positive.
In our system, a word has a high polarity score if it frequently appears in the positive class and rarely, at the same time, in the negative
class. On the contrary, a word that appears more often in the negative class rather than in the positive one, has a low polarity score.
The list of all the polarized words automatically extracted from the training corpus collectively forms the opinion words of the
dictionary to be used in the classification step of a new tweet. Since tweets from Twitter usually contain noisy text, i.e. text that does
not comply with the standard rules of orthography, syntax and semantics, we filter out infrequent words with fewer than k
characters, where k is an integer experimentally fixed to k = 3.
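A minimal sketch of this word-polarity step, implementing the formula above on the two emoticon-labelled sets, is shown below; the tokenization, the regular expression and the function names are simplifying assumptions.

from collections import Counter
import re

K = 3

def tokenize(text):
    # Keep only lower-case word-like tokens longer than K characters.
    return [w for w in re.findall(r"[a-z']+", text.lower()) if len(w) > K]

def build_polarity_thesaurus(positive_tweets, negative_tweets):
    oc_pos = Counter(w for t in positive_tweets for w in tokenize(t))
    oc_neg = Counter(w for t in negative_tweets for w in tokenize(t))
    thesaurus = {}
    for word in set(oc_pos) | set(oc_neg):
        p, n = oc_pos[word], oc_neg[word]
        thesaurus[word] = (p - n) / (p + n)   # value in [-1, 1]
    return thesaurus

lexicon = build_polarity_thesaurus(["love this movie :)"], ["hate this movie :("])
print(lexicon)   # e.g. {'love': 1.0, 'hate': -1.0, 'this': 0.0, 'movie': 0.0}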


iii) Tweet Polarity Evaluation


The polarity score of a tweet is given by the average of the sum of the polarity scores of its words. We consider all the words as
sentiment indicators: verbs and nouns in addition to adjectives. Consider a tweet message M which can be regarded as a collection
of n words w1, w2, ..., wn; we define its polarity score as the mean of the polarity scores of all the terms having more than k characters:
Polarity(M) = ( sum over a = 1..T of polarity(wa) ) / T
where,
wa is an opinion word.
polarity(wa) is the polarity score of the term wa, calculated by using the affective lexicon constructed in the previous step from
the positive and negative tweets in the training corpora.
T is the number of words in the tweet M.
Sentiment classification of message M is obtained exploiting polarity(M) as follows:
If polarity(M) ≥ polarityAvg + δ then the text is considered to have a positive polarity;
If polarity(M) < polarityAvg + δ and polarity(M) > polarityAvg − δ then the text is considered neutral;
If polarity(M) ≤ polarityAvg − δ then the text is considered to have a negative polarity.
where
polarityAvg is the mean polarity of all the words in the lexicon.
δ is the threshold value experimentally calculated. It defines the polarity range of the neutral tweets.
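Continuing the previous sketch (it reuses tokenize() and the lexicon built there), the tweet-polarity evaluation and thresholding described above could look as follows; the value of delta and the helper names are assumptions for illustration.

def tweet_polarity(text, thesaurus):
    # Mean polarity of the tweet's words that appear in the thesaurus.
    words = [w for w in tokenize(text) if w in thesaurus]
    if not words:
        return 0.0
    return sum(thesaurus[w] for w in words) / len(words)

def classify(text, thesaurus, polarity_avg, delta):
    p = tweet_polarity(text, thesaurus)
    if p >= polarity_avg + delta:
        return "positive"
    if p <= polarity_avg - delta:
        return "negative"
    return "neutral"

polarity_avg = sum(lexicon.values()) / len(lexicon)   # mean polarity of the lexicon
print(classify("I love this", lexicon, polarity_avg, delta=0.2))   # positive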

IV. CONCLUSION AND FUTURE WORK


Today, microblogging has become one of the major types of communication. Recent research has recognized
microblogging as online word-of-mouth branding. The huge amount of data contained in microblogging web-sites such as Twitter
makes them an attractive source of data for opinion mining and sentiment analysis.
In our research, we presented a paper in which we have discussed a method that automatically analyzes the sentiment of users in
Twitter. The sentiment analysis of Twitter data is significantly different from other sentiment classification on structured text. We
have shown that it is not necessary to use external dictionaries or any other lexicons of polarized words to catch the sentiment
polarity in daily tweets. The method that we used reduces human intervention. By using this proposed method, we are able to
automatically generate a training dataset by referring to the sentiment present in tweets containing emoticons. It is able to map all
common expressions with new words, slangs and errors. Then, we have built a sentiment classifier where an actual stream of tweets is
processed and its content is classified as positive, negative or neutral. The classification is made without the use of any pre-defined
dictionary or polarity thesaurus. The thesaurus is automatically inferred from the streaming of tweets. Our system can be applied to
any kind of language.
As future work, we plan to explore the phenomenon of irony or indirect sarcasm in the text. Sarcasm is the part of the text
which appears to express one opinion but actually represents a totally different opinion. We will also strive to build a model to identify this sarcasm
automatically.
REFERENCES:
[1] A. Agarwal, B. Xie, I. Vovsha, O. Rambow, and R. Passonneau,Sentiment analysis of twitter data, in Proceedings of the
Workshop on Languages in Social Media, ser. LSM 11. Stroudsburg, PA, USA: Association for Computational Linguistics, 2011, pp.
3038.
[2]A. Pak and P. Paroubek, Twitter as a corpus for sentiment analysis and opinion mining, in Proceedings of the Seventh
International Conference on Language Resources and Evaluation (LREC10), N. C. C. Chair), K. Choukri, B. Maegaard, J. Mariani,
J. Odijk, S. Piperidis, M. Rosner, and D. Tapias, Eds. Valletta, Malta: European Language Resources Association (ELRA), may 2010.
[3] K. Roberts, M. A. Roach, J. Johnson, J. Guthrie, and S. M. Harabagiu, Empatweet: Annotating and detecting emotions on
twitter. in LREC, N. Calzolari, K. Choukri, T. Declerck, M. U. Dogan, B. Maegaard, J. Mariani, J. Odijk, and S. Piperidis, Eds.
European Language Resources Association (ELRA), 2012, pp. 3806-3813


[4]Diego Terrana, Agnese Augello, Giovanni Pilato, Automatic Unsupervised Polarity Detection on a Twitter Data Stream in 2014
IEEE International Conference on Semantic Computing
[5] B. Pang and L. Lee, Opinion Mining and Sentiment Analysis,Foundations and Trends in Information Retrieval, vol. 2, nos.
1/2,pp. 1-135, 2008.
[6]B. Liu. Sentiment analysis and subjectivity. Handbook of Natural Language Processing, 2010.
[7]S. Prentice and E. Huffman. Social medias new role in emergency management. Idaho National Laboratory, 2008.
[8]A. Go, R. Bhayani, and L. Huang. Twitter sentiment classification using distant supervision. Technical Report, Stanford, pages
1-12, 2009.
[9]X. Hu, L. Tang, J. Tang, and H. Liu. Exploiting social relations for sentiment analysis in microblogging in Proceedings of WSDM,
2013.
[10]J. Bollen, H. Mao, and X. Zeng. Twitter mood predicts the stock market. Journal of Computational Science, 2011.
[11]E. Kim, S. Gilbert, M. Edwards, and E. Graeff. Detecting sadness in 140 characters: Sentiment analysis of mourning michael
jackson on twitter. 2009.
[12]A. Andreevskaia and S. Bergler. Mining wordnet for fuzzy sentiment: Sentiment tag extraction from wordnet glosses. In
Proceedings of EACL, 2006.
[13]W. Peng and D. H. Park. Generate adjective sentiment dictionary for social media sentiment analysis using constrained
nonnegative matrix factorization. In ICWSM, 2011.
[14]Y. Lu, M. Castellanos, U. Dayal, and C. Zhai. Automatic construction of a context-aware sentiment lexicon: an optimization
approach. In Proceedings of WWW, pages 347-356, 2011.
[15]P. Turney. Thumbs up or thumbs down?: semantic orientation applied to unsupervised classification of reviews. In Proceedings of
ACL, pages 417-424, 2002.
[16]D. B. Bracewell, J. Minato, F. Ren, and S. Kuroiwa, Determining the emotion of news articles. in ICIC (2), ser. Lecture Notes
in Computer Science, D.-S. Huang, K. Li, and G. W. Irwin, Eds., vol. 4114. Springer, 2006, pp. 918923.
[17]V. Hatzivassiloglou and K. R. McKeown, Predicting the semantic orientation of adjectives, 1997, pp. 174181.
[18]J. Kamps, M. Marx, R. J. Mokken, and M. D. Rijke, Using wordnet to measure semantic orientation of adjectives, in National
Institute for, 2004, pp. 11151118.
[19] C. Fellbaum, ed., WordNet: An Electronic Lexical Database. The MIT Press, 1998.
[20] R. Valitutti, Wordnet-affect: an affective extension of wordnet, in Proceedings of the 4th International Conference on
Language Resources and Evaluation, 2004, pp. 10831086.
[21] A. Esuli and F. Sebastiani, Sentiwordnet: A publicly available lexical resource for opinion mining, in Proceedings of the 5th
Conference on Language Resources and Evaluation (LREC06), 2006, pp. 417-422.
[22] T. Nasukawa and J. Yi, Sentiment analysis: capturing favorability using natural language processing. in K-CAP, J. H. Gennari,
B. W. Porter,and Y. Gil, Eds. ACM, 2003, pp. 7077.
[23] K. Zhang, Y. Cheng, Y. Xie, D. Honbo, A. Agrawal, D. Palsetia, K. Lee, W. keng Liao, and A. N. Choudhary, Ses: Sentiment
elicitation system for social media data, in ICDM Workshops, 2011, pp. 129136.
[24] Reynier Ortega, Yoan Gutierrez,Andres Montoyo, SSA-UO: Unsupervised Twitter Sentiment Analysis
[25]Xia Hu, Jiliang Tang, Huiji Gao, and Huan Liu, Unsupervised Sentiment Analysis with Emotional Signals

[26] Hassan Saif, Miriam Fernandez, Yulan He and Harith Alani, Evaluation Datasets for Twitter Sentiment Analysis: A survey
and a new dataset, the STS-Gold
[27] Namrata Godbole, Manjunath Srinivasaiah, Steven Skiena, Large-Scale Sentiment Analysis for News and Blogs, in
ICWSM 2007, Boulder, Colorado, USA
[28]Dmitry Davidov, Oren Tsur, and Ari Rappoport. 2010. Enhanced sentiment learning using Twitter hashtags and smileys. In
Proceedings of the 23rd International Conference on Computational Linguistics: Posters.
[29] Jiang, Long, Mo Yu, Ming Zhou, Xiaohua Liu, and Tiejun Zhao. Targetdependent twitter sentiment classification. in Proceedings
of the 49th Annual Meeting of the Association for Computational Linguistics (ACL- 2011). 2011


RFID Authentication Protocol for Security and Privacy Maintenance in Cloud Based Employee Management System
Archana Thange
Post Graduate Student, DKGOI's COE, Swami Chincholi, Maharashtra, India
archanathange7575@gmail.com,
9620099711
Amrit Priyadarshi
Assistant Professor, DKGOI's COE, Swami Chincholi, Maharashtra, India

Abstract Cloud based RFID authentication is becoming a large area of interest among researchers and engineers these days.
However, there is not much attention given to the RFID authentication concern. Most of the work done on this issue till now has its
focus on RFID functionality without considering security and privacy. Classical RFID authentication schemes do not meet the security
and privacy requirements of cloud based RFID. Main features available in traditional backend-server-based RFID authentication are
secure backend channels and a purely trustworthy database, which are not available when moved to cloud based scenarios. Here the concept is implemented on an employee
management system. Many organizations nowadays want a high-tech employee management system with which the salary,
attendance and location of a particular employee within the organization can be easily found out. An organization invests a large
amount of money in developing and maintaining its own software and systems. As compared to the traditional model, the newly adopted
cloud based IT model is more cost effective, but it has the problems of an insecure database and disclosure of the privacy of tags and
readers. So, an RFID authentication protocol is developed and a VPN agency is suggested to build secure backend channels. It
secures the database using a hashing technique and preserves the privacy of tags and readers by providing backend communication
over secure channels. The proposed scheme has advantages like deployment cost saving, pervasiveness of authentication, database
security and mobile reader holders' privacy.

Keywords Cloud computing, RFID, AL, CA, Authentication.


1. INTRODUCTION
The objective of the paper is to discuss the issues of Radio Frequency Identification (RFID), the concepts of an employee
management system and a cloud based employee management system, and to give the proposed cloud based RFID authentication protocol. RFID
is a technology which has gained increased attention of researchers and practitioners. It enables acquisition of data about an object
without the need of a direct line of sight between transponders and readers [4]. To make an RFID system more secure, authentication is one
solution. It also maintains the security and privacy of the system. Tag identification without authentication raises a security problem.
Attackers may sometimes tap, change and resend messages from the tag as if they had the tagged object.
An employee management system using RFID becomes efficient on a cloud system because it has several advantages: (1) The verifier
is enabled to authenticate the tagged objects using any reader over the Internet. (2) Pay-on-demand resource distribution is effective for
small and medium scale organizations. (3) The cloud is more robust due to resource sufficiency. However, this cloud-based RFID is
insufficient in two aspects: (1) Most current works focus on functionalities without giving importance to security and privacy. (2)
No study shows that classical RFID schemes meet the security and privacy requirements of cloud based systems. This system can be
effectively used for calculating the attendance and salary of a particular employee for a particular month and for keeping track of him/her.
When classical RFID schemes are moved to the cloud, they have the problems of an insecure database and of the privacy of tags and readers
being revealed, which is not acceptable, because the cloud is publicly available to anyone and the clients cannot fully trust the cloud
service providers. So, to recover from these drawbacks, a cloud-based RFID authentication protocol should be developed. It also secures
the database using a hashing technique like SHA-1 and maintains the privacy of tags and readers [3]. Successful implementation of this
system brings advantages like deployment cost saving and pervasiveness of authentication. Tag verification complexity reduces to O(1).
It preserves the privacy of tag and reader holders. It makes the database more secure.


2. BACKGROUND
2.1 Traditional RFID
There is extensive work on RFID authentication schemes, which are backend-server based and server-less [1] [2]. The backend-server
based RFID is shown in Figure 1. It is composed of tags, readers and a backend server. Readers identify and verify the tags by
querying the backend server. However, the drawback is the limited mobility of readers.

Figure 1: the backend-server-based RFID architecture


The server-less RFID scheme is shown in Figure 2. It is composed of tags, readers and CA (Certification Agency).

Figure 2: Server-less RFID architecture


There are two phases in the server-less scheme: initialization and authentication [5]. In the initialization phase, the mobile reader accesses the CA (Certification
Agency) and downloads the AL (Access List) through a secure connection. The mobile reader is generally
a portable device such as a notebook computer or smart phone. It might be stolen with the AL stored in it, and the AL might then wrongly be used to
imitate the tags. Credentials for tag authentication are therefore derived with the help of the tag's key and the RID, which
makes the AL exclusively usable by that reader. But as a result, it is not possible to create a valid request for a tag without the RID. In the
authentication phase, the reader challenges a tag with its RID, waiting for the tag's response H(RID, Kt). The reader then searches its AL and finds the matched value to
verify Kt. It then identifies the TID. The server-less RFID architecture provides readers with scalability. However, there are
drawbacks in the server-less authentication protocol: (1) It transmits the RID in plaintext. (2) Searching through the AL has complexity
O(N), where N denotes the number of tags. (3) The computational processes of searching and verifying are executed entirely by a single
personal portable device, which reduces the performance significantly.
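The AL lookup described above can be sketched as follows; this is only an illustration of the idea, not the cited protocol's exact message format, and SHA-1 is assumed as the hash H. The linear scan mirrors the O(N) search drawback noted in the text.

import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha1(b"||".join(parts)).digest()

def build_access_list(rid: bytes, tags: dict) -> list:
    # tags maps TID -> tag key Kt; the AL is derived for one specific reader (RID).
    return [(H(rid, kt), tid) for tid, kt in tags.items()]

def authenticate(response: bytes, access_list: list):
    # Linear search through the AL; returns the matching TID or None.
    for digest, tid in access_list:
        if digest == response:
            return tid
    return None

rid = b"reader-01"
al = build_access_list(rid, {b"tag-42": b"secret-key-42"})
print(authenticate(H(rid, b"secret-key-42"), al))   # b'tag-42'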

2.2 Requirements of cloud based RFID


Cloud based RFID has many advantages, but it has the challenges of security and privacy. Existing authentication protocols are
inapplicable to a cloud based employee management system because they lack two primary capabilities. Firstly, instead of protecting
only front-end communication, protection of backend communication is important in cloud based authentication schemes,
because in cloud based schemes mobile readers often access the public cloud using open wireless connections. There are two solutions
for this issue: (1) establish VPN connections between readers and the cloud in the network layer; (2) design, in the application layer, an RFID
authentication protocol protecting backend security [1]. Secondly, these schemes are required to protect the privacy of tags/readers
from the untrustworthy cloud. Therefore, readers/tags should keep their stored data confidential against the cloud.

3. PROPOSED SOLUTION
3.1 VPN AGENCY
The existing cloud based RFID system has four participants: the tag owner, the verifier, the VPN agency and the cloud provider [1] [6]. The
VPN agency provides VPN routing between the readers and the cloud. The cloud service of RFID authentication is given by the cloud provider to the
verifier and the tag owner. VPN routing makes communication between the reader and the cloud as secure as in a private intranet; attackers are thus
unable to intercept, block and resend the TCP/IP packets in the network layer. On the other hand, network-layer anonymity of readers
accessing the cloud is achieved.

3.2 Encrypted Hash Table

Figure 3: Proposed Cloud-based RFID authentication scheme


An EHT is proposed to protect the clients' data confidentiality and access anonymity from being revealed to the cloud provider. It is
illustrated in Figure 4. The index, which is a hash digest H(RID||TID||SID), uniquely denotes the session with SID from the reader
with RID to the tag with TID. The record indexed by H(RID||TID||SID) is E(RID||TID||SID||Data). It is a cipher text produced according to the
reader-defined encryption algorithm with a reader-managed key. The RID field is used to check the integrity of the cipher text
after decryption by the reader. Mutual authentication between the reader and the tag is done using the TID. The SID field is the session identifier,
which is a term in a reader-defined sequence. The Data field stores any application data such as the location of the tag and the access time.
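One EHT entry as described above could be built as in the sketch below. SHA-1 stands in for the hash H, and Fernet (from the third-party cryptography package) stands in for the reader-defined encryption algorithm; both choices, and the example values, are assumptions, not the paper's.

import hashlib
from cryptography.fernet import Fernet

def index(rid: bytes, tid: bytes, sid: bytes) -> str:
    # Index of the record: H(RID||TID||SID)
    return hashlib.sha1(b"||".join([rid, tid, sid])).hexdigest()

def record(fernet: Fernet, rid: bytes, tid: bytes, sid: bytes, data: bytes) -> bytes:
    # Record: E(RID||TID||SID||Data) under the reader-managed key
    return fernet.encrypt(b"||".join([rid, tid, sid, data]))

reader_key = Fernet.generate_key()          # reader-managed key
f = Fernet(reader_key)
eht = {index(b"R1", b"T7", b"3"): record(f, b"R1", b"T7", b"3", b"gate-2 09:15")}

# The reader can later fetch by index, decrypt, and check the RID field for integrity.
plaintext = f.decrypt(next(iter(eht.values())))
print(plaintext.split(b"||")[0] == b"R1")   # True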
3.3 Proposed Cloud-based RFID authentication Protocol
The proposed cloud based RFID authentication protocol is illustrated in Figure 4 [1]. For simplicity, RID is replaced by R, TID by
T and SID by S. The encrypted record E(RID||TID||SID||Data) in the EHT is now simplified to E(R||T||S). Other notations
used in this section are as shown in Table 1. S+1 is the next incremental term after the S-th term in the sequence defined by the reader.
The 1st step is for the reader to obtain T and S. The tag generates H(R||T||S) as an authentication request and sends the generated
authentication request to the reader. H(R||T||S) acts as an index to the cipher text E(R||T||S).

Figure 4: Proposed cloud based RFID authentication Protocol


The reader does the following operations: (1) it reads the cipher text from the cloud EHT and decrypts it into a usable form; (2) it then
verifies R for integrity and obtains T and S. Authentication of the tag takes place in the 2nd step. The reader challenges the tag with a
generated random number Nr. The tag calculates a hash based on the challenge Nr received from the reader: it uses H(R||T||Nr) to
generate the nonce Nt and sends Nt as its challenge to the reader. Synchronization of S between the encrypted hash table (EHT)
and the tag takes place in the 3rd step. The reader checks the integrity after reading the next record, indexed by H(R||T||(S+1)), from the
EHT. The tag has been desynchronized if a valid record is present in the EHT. The reader continues the same action until it finds the last
valid record, assuming its SID is M. In the 4th step the cloud EHT gets updated. In the EHT, the cipher text E(R||T||M) with index
H(R||T||M) is added by the reader, where M = M+1. A message H(R||T||M) + H(E(R||T||M)) is sent back to the reader from the cloud
to confirm that the updating is successful. The 5th step is to send a comprehensive response to be verified by the tag. In response to the tag's
random challenge Nt, the reader calculates H(R||T||Nt). For simple encryption, the response is XORed with M. The encrypted M is sent to the tag
with the index H(R||T||M) and gets verified by the tag. The 6th step does authentication of the reader and also repairs the desynchronization.
To obtain M, the tag calculates H(R||T||Nt). The calculated value gets XORed with the received value H(R||T||Nt) ⊕ M. It then calculates and
verifies H(R||T||M). If this succeeds, it means M has not been modified. By updating S = M on the tag, synchronization is achieved.
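A compact sketch of the hash and XOR computations in steps 1, 2, 5 and 6 above is given below. SHA-1 and a simple byte-wise XOR are stand-ins; message framing, the EHT lookup and the cloud round trips are omitted, and all concrete values are assumptions for illustration.

import hashlib, os

def H(*parts: bytes) -> bytes:
    return hashlib.sha1(b"||".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

R, T, S = b"R1", b"T7", b"3"

# Step 1: the tag sends H(R||T||S); the reader uses it as the EHT index.
request = H(R, T, S)

# Step 2: the reader challenges with a random Nr; the tag answers with Nt = H(R||T||Nr).
Nr = os.urandom(20)
Nt = H(R, T, Nr)

# Step 5: after updating the EHT with the new SID M, the reader hides M behind H(R||T||Nt).
M = b"4"                                   # next term in the reader-defined sequence
masked_M = xor(H(R, T, Nt), M.ljust(20, b"\x00"))

# Step 6: the tag recomputes H(R||T||Nt), recovers M, verifies H(R||T||M) and sets S = M.
recovered = xor(H(R, T, Nt), masked_M).rstrip(b"\x00")
print(recovered == M and H(R, T, recovered) == H(R, T, M))   # True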

4. IMPLEMENTATION
4.1 Mathematical Model
In the proposed cloud based RFID authentication scheme, readers anonymously access the cloud through wireless or wired
connections. An encrypted hash table stores the clients' secrets in encrypted form so that the secrets are not easily revealed to the
cloud [3]. The first RFID authentication protocol is proposed which preserves the privacy of readers and tags.
Defining System:
Consider an organization with N employees.
Identifying input:
Let S = {Db, U, T, A, R, Cp}, where
Db = system database with two tables:
- Employee
- RFID track
U is the set of users such that u1, u2, u3, ..., un ∈ U
T is the set of users' tracks such that t1, t2, t3, ..., tn ∈ T
A is the set of users' attendance records such that a1, a2, a3, ..., an ∈ A
R is the set of users' RFID tags such that r1, r2, ..., rn ∈ R

Venn diagram of the System:

Figure5: Set diagram of employee management system

Functionalities:
The functions of the system are as follows:
(1) U = newEmployee (u_id, u_name, u_add, RFID)
Where,
RFID = SHA1 (RFID tag)
SHA-1 algorithm:
Input: message of at most 2^64 - 1 bits, processed in 512-bit blocks
Output: 160-bit digest
Steps:
Pad the message with a single one bit followed by zeros until the final block has 448 bits, and append the size of the original
message as an unsigned 64-bit integer.
Initialize the 5 hash words (h0, h1, h2, h3, h4) to the specific constants defined in the SHA-1 standard.
Hash (for each 512-bit block):
Allocate an 80-word array for the message schedule.
The first 16 words are obtained by splitting the block into 32-bit words.
The rest of the words are generated by the following rule:
Word[i] = (Word[i-3] XOR Word[i-8] XOR Word[i-14] XOR Word[i-16]) rotated left by 1 bit.

Loop 80 times doing the following:

Calculate SHAfunction() and the constant K (these are based on the current round number).
temp = a(rotated left 5) + SHAfunction() + e + K + Word[i]
e = d
d = c
c = b(rotated left 30)
b = a
a = temp
Add a, b, c, d and e to the hash words h0, h1, h2, h3, h4.
Output the concatenation (h0, h1, h2, h3, h4), which is the message digest.

(2) t = newTrackInfo (RFID, location, date, time)


Keeps track of a particular employee at different times.
(3) a = getAttendenceInfo (RFID, Date)
Calculates the attendance of an employee for a particular period.
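A minimal sketch of the three functions above is given below, with the MySQL tables replaced by in-memory dictionaries and hashlib.sha1 standing in for the SHA1() step; the function and field names follow the mathematical model, and all other details are assumptions.

import hashlib
from datetime import date

employee = {}    # RFID digest -> employee record   ("Employee" table)
rfid_track = []  # list of track events              ("RFID track" table)

def sha1_hex(tag: str) -> str:
    return hashlib.sha1(tag.encode()).hexdigest()

def new_employee(u_id, u_name, u_add, rfid_tag):
    digest = sha1_hex(rfid_tag)                 # RFID = SHA1(RFID tag)
    employee[digest] = {"id": u_id, "name": u_name, "address": u_add}
    return digest

def new_track_info(rfid_digest, location, day, time):
    rfid_track.append({"rfid": rfid_digest, "location": location,
                       "date": day, "time": time})

def get_attendance_info(rfid_digest, day):
    # Number of track entries for this employee on the given date (a simple attendance proxy).
    return sum(1 for t in rfid_track if t["rfid"] == rfid_digest and t["date"] == day)

tag = new_employee(1, "A. Thange", "Pune", "RFID-0001")
new_track_info(tag, "Gate 2", date(2014, 10, 1), "09:15")
print(get_attendance_info(tag, date(2014, 10, 1)))   # 1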
4.2 Block Diagram of the System
The block diagram of the employee management system using cloud based RFID authentication is illustrated in Figure 6. The system contains the
following parts:

i. RFID Reader: It is used to read the tags over radio channels and passes them to the verifier for authentication.
ii. Webcam: The webcam is used to take a snapshot when the reader tries to read a tag.

Figure 6: Employee management system using Cloud based RFID authentication

iii. Cloud Service Provider: Cloud service providers are those who provide the private/public cloud. Different terminals can be connected to the cloud
through the Internet. Deploying the employee database, the tracking database and the admin web service module on the public
cloud enables users to access the database and web service from anywhere, at any time, using the Internet.
iv. Admin: Admin may be an individual machine which has administrative functionality. Admin is connected to the cloud service provider
via Internet connectivity. Admin's web service is deployed on the public cloud, which the client can access.
v. Client: Client may be an individual machine or a thin client. Client is connected to the cloud service provider using Internet
connectivity.

4.3 Hardware and Software used:


Hardware Configuration
i. Processor: Pentium IV
ii. Speed: 1.1 GHz
iii. RAM: 256 MB (min)
iv. Hard Disk: 20 GB
v. Keyboard: Standard Windows keyboard
vi. Monitor: SVGA
Software Configuration
i. Operating System: Windows XP/7/8
ii. Programming Language: Java
iii. Database: MySQL

iv. Tools: NetBeans
v. Cloud Provider: Jelastic

5. EVALUATION RESULT
In this system of employee management, a new cloud based RFID authentication protocol is developed which provides security
and confidentiality. It helps in calculating the attendance and salary of an employee and also keeps track of him/her. It provides other
advantages like deployment cost savings, pervasiveness of service and scalability.
Also, a comparison of the tag verification complexity in the traditional server-based model, the server-less model and the cloud-based RFID model
is shown in Figure 7. Both server-based and server-less RFID authentication protocols depend on a search through the database or the AL
to find the matching TID. This makes the computational complexity of verifying tags O(n), where n is the number of tags, which means these two
protocols are not well scalable. In the proposed protocol the index H(R||T||S) is generated by the tag and sent to the reader. The reader then reads the
matched record from the EHT instead of searching through all TIDs. Hence, the complexity of the proposed scheme is only O(1). That means
it is better scalable than most of the other protocols.

Figure 7: Comparison in the complexity of protocols

Figure 8: Scalability v/s tag verification complexity graph


Other points of comparison are listed in the following table.


Table 1: Comparisons

RFID authentication schemes   Backend-server based   Server-less   Proposed cloud-based
Tag operations                PRNG/CRC               PRNG/HASH     PRNG/HASH
Pay on demand deployment      No                     No            Yes
Offline authentication        No                     Yes           No
Pervasive authentication      No                     No            Yes
Mutual authentication         Yes                    No            Yes
Verification complexity       O(n)                   O(n)          O(1)
Tag owner's privacy           Revealed               Preserved     Preserved
Reader holder's privacy       Revealed               Partial       Preserved
Database encryption           Not at all             Undefined     Entire

6. CONCLUSION
The employee database is structured as an EHT. It prevents private user data from leaking to a malicious cloud provider. It gives the
first RFID authentication protocol which preserves tags' and readers' privacy against the database keeper. According to comparisons
with two classical schemes, the proposed scheme has the following advantages: (1) the resource deployment is pay-on-demand; (2) the cloud-based service is pervasive and customized; (3) the scheme is more scalable, with complexity O(1) to verify a tag; (4) the
proposed scheme preserves mobile reader holders' privacy.

REFERENCES:
[1] Wei Xie, et al., Cloud-based RFID Authentication, in IEEE International Conference on RFID 2013, pp. 168-175.
[2] I. Syamsuddin, et al., A survey of RFID authentication protocols based on Hash-chain method, in 3rd International Conference
on Convergence and Hybrid Information Technology, ICCIT 2008, November 11, 2008-November 13, 2008, Busan, Korea, Republic
of ,2008,pp.559-564.
[3] J. Guo, et al., The PHOTON family of lightweight hash functions, in 31st Annual International Cryptology Conference,
CRYPTO 2011, August 14,2011-August 18,2011, Santa Barbara, CA, United States, 2011, pp.222-239.
[4] A.Chattopadhyay ,et al., Web based RFID asset management solution established on cloud services, in 2011 2nd IEEE RFID
technologies and Applications Conference, collocated with the 2011 IEEE IMWS on Millimetre Wave Integration Technologies,
,September 15,2011-September 16, 2011, Sitges, Spain,2011,pp.292-299.
[5] T.-C. Yeh, et al., "Securing RFID systems conforming to EPC class 1generation 2 standard," Expert Systems with Applications,
vol. 37, pp.7678-7683, 2010.
[6] B.AndalSupriya, et al., RFID based cloud supply chain management, in International Journal of Scientific & Engineering
Research, vol.4, issue 5, May 2013, pp.2157-2159


Optimal Low Switching Frequency Pulsewidth Modulation of Fifteen-Level Hybrid Inverter


Vasanth V1, Prabu M2
1 PG scholar, Department of EEE, Nandha Engineering College, Erode, India
2 Associate professor, Department of EEE, Nandha Engineering College, Erode, India.


Corresponding author: vasanthpaul23.v@gmail.com, prabunec2013@gmail.com

Abstract: The objective of this paper is to operate a fifteen-level hybrid inverter of an induction motor drive at an average device
switching frequency limited to the rated fundamental frequency by using the synchronous optimal pulsewidth modulation (SOP) technique.
To reduce the number of separate dc sources, a three-level transistor-clamped inverter is used as a cell in the fifteen-level hybrid
inverter. Using the SOP technique, optimal fifteen-level waveforms are obtained by offline optimization assuming steady-state operation
of the induction machine. The switching angles for each semiconductor switch are then obtained from the optimal fifteen-level waveforms
based on the criteria of minimizing the switching frequency as well as the unbalance in dc-link capacitor voltages. Simulation results
obtained from the 1.5-kW induction motor drive show THD < 5% for the stator currents. The results indicate that the SOP technique reduces
the switching frequency of operation without compromising on THD.

Keywords Transistor clamped, H-bridge, medium-voltage ac drives, synchronous optimal pulse width (SOP) modulation.
I INTRODUCTION

Multilevel inverters are now a well-established and standard solution for medium and high-voltage, high power applications and power-quality demanding solutions. The advantages of multilevel inverters over two-level inverters are higher voltage operating capability
with medium voltage semiconductor devices, improved output voltages with less harmonic distortion, lower common-mode voltages,
less dv/dt stress, near sinusoidal input currents, smaller input and output filters, increased efficiency due to the possibility of low
switching operation, reduced electromagnetic interference problems and possible fault-tolerant operation [1]-[9]. In addition, the
torque ripple also reduces as the number of levels increases in the case of multilevel inverter fed ac drives. In high-power applications, the
switching losses contribute a major portion of the total device losses, and thus low switching frequency operation is necessary to achieve
higher efficiency. However, minimizing the switching frequency increases the harmonic distortion. Therefore, the challenge is to minimize
the harmonic distortion while reducing the switching frequency. Presently, the most popular topologies are the diode-clamped or neutral-point-clamped (NPC), the capacitor-clamped or flying capacitor (FC) and the cascaded H-Bridge (CHB) [3]. The CHB topologies are
preferred for higher-level inverters due to the requirement of the least number of components and ease of control compared to other topologies.
In addition, modularized circuit layout and packaging is also possible with the CHB topology because each level has a similar structure [10].
However, one major drawback of this topology is the requirement of multiple dc sources, which is not feasible in many
applications. One of the methods for reducing the number of dc sources is to replace the H-Bridge cell with NPC or FC inverters [11].
However, an important issue with the topologies having NPC or FC inverters is the voltage unbalance of dc-link capacitors, which further
adds to the harmonic distortion of the output voltage waveforms. An auxiliary capacitor-based balancing approach has been proposed to
equalize the dc-link capacitor voltages for the NPC five-level inverter [12]. Several low switching frequency modulation techniques have
been proposed for high-power applications. A new modulation method for a modular multilevel inverter operating at fundamental
switching frequency while successfully eliminating the fifth harmonic was proposed in [13]. A novel switching sequence design for the
space-vector modulation (SVM) of high-power multilevel inverters, optimized for the improvement of the harmonic spectrum and the
minimization of the device switching frequency, was proposed in [14]. A new control method for a seven-level cascaded inverter operating
at fundamental switching frequency was proposed by Zhong Du et al. [15]. A model predictive current control algorithm has been
demonstrated for a CHB nine-level inverter with switching frequency between 425 and 500 Hz [16]. Also, an adaptive duty-cycle
modulation algorithm that reduces the switching frequency by using the slope of the voltage reference to adapt the modulation period
was proposed by Kouro et al. [17]. By using this method, the switching frequency of operation has been maintained between 285 and
785 Hz for a fifteen-level asymmetric CHB inverter. The selective harmonic elimination (SHE) method is one of the low-switching-frequency
control techniques, which eliminates (n - 1) lower order harmonic components, n being the number of switching angles.
A generalized SHE technique in the Fourier domain for two-level single phase and three phase inverters has been proposed by Patel et al. [18], [19].
Programmed PWM techniques for minimizing the harmonics have been reported [20], [21]. A new method for SHE based on
six-step symmetry [22] and the use of Walsh functions to obtain Fourier spectral equations have been reported [23], [24]. A new solution
to convert the transcendental equations into polynomial equations for SHE has been proposed [25]. The general problem formulation
and selected solutions for both unipolar and bipolar switching patterns to eliminate the fifth and seventh harmonics are
presented by Wells et al. [26]. A novel method to achieve fast transient response and efficient harmonic (disturbance) filtering has been
achieved by using signal processing methods [27]. A minimization method to derive multiple sets of solutions for the bipolar SHE
PWM method for both single-phase and three-phase inverters has been presented in [28]. A real-time method using a modified
triangle carrier has been proposed instead of the conventional offline solution of the switching angles [29]. In this method, an initial guess is not
required, and the switching frequency is not restricted to an integer multiple of the fundamental frequency [30]. SHE has also been extended to multilevel inverters.
A unified approach to solving the harmonic elimination equations in a multilevel inverter to obtain the switching angles in
the lower range of modulation indices has been reported [31]. A Bee algorithm for SHE has been reported by Kavousi et al. for
the cascaded multilevel inverter [32]. In steady-state operating conditions, the SOP method for controlling five-level inverters [33], [34] and
dual three-level inverters [35], [36] with a maximum switching frequency of 200 Hz has been demonstrated. In the case of high
performance drives, which are subjected to frequent transient conditions, using traditional closed-loop control techniques with the SOP
technique interferes with the optimal switching patterns and hence a real-time optimization is required. Thus, initially an optimal stator
current trajectory tracking method was proposed [37], [38]. Then, the trajectory of an optimal stator flux vector, which is independent
of machine parameters or load conditions, was suggested as the tracking agent [39]-[42]. Nonetheless, the application of SOP has not
been reported for multilevel inverters with more than five levels. Therefore, the objective of the present study is to demonstrate the SOP
technique for operating the fifteen-level hybrid inverter of an induction motor drive at an average device switching frequency limited to the rated
fundamental frequency (50 Hz) in open loop (v/f) control mode. It should be pointed out that the proposed SOP technique can be used for
any fifteen-level inverter topology by modifying the method of assigning switching angles for each power semiconductor switch based
on the optimal fifteen-level waveforms.
II. HYBRID MULTILEVEL INVERTER TOPOLOGY
The hybrid multilevel inverter is a series connection of a cascaded half-bridge cell and an H-bridge inverter. In recent years, multilevel inverters have received attention and are preferred for high-power and high-voltage applications [5]. The use of multilevel inverters is becoming popular for high-power applications, especially in distributed generation, where a number of batteries, fuel cells, solar cells, and micro-turbines can be connected through a multilevel inverter to feed a load or the ac grid without voltage balancing problems. Another major advantage of hybrid multilevel inverters is that their switching frequency is lower than that of a traditional two-level inverter, which leads to reduced switching losses. The topologies for high-power multilevel inverters are classified into three types, as shown in Fig. 1: the transistor-clamped inverter, the flying capacitor inverter, and the H-bridge inverter.
Fig 1: Multilevel Inverter Topologies


Among these inverters, the cascaded inverter has the advantages that the dc-link voltage is balanced and the circuit layout is flexible, and, compared with the diode-clamped and flying capacitor inverters, it requires the least number of components to achieve the same number of voltage levels. The cascaded multilevel inverter architecture also has the ability to tolerate a fault for several cycles; if the fault type and location can be detected and identified, the switching patterns and the modulation index of the other active cells can be adjusted to maintain operation under a balanced load condition.
Multilevel PWM and harmonic elimination are techniques that can be used in cascaded multilevel inverters in order to achieve voltage waveforms with low total harmonic distortion (THD), minimum switching losses, and low filtering requirements. Using a multilevel layout, an effective high switching frequency can be achieved in the output voltage waveform with each of the H-bridge modules having a relatively low switching frequency, as shown in Fig. 2. This approach facilitates increased converter efficiency. Reducing the filtering requirements helps to reduce the cost and improve the reliability and dynamic performance of the whole system.

Fig 2: Low-loss, low-switching-frequency multilevel inverter waveforms.

III. SYNCHRONOUS OPTIMAL PULSEWIDTH MODULATION (SOP)


SOP generates optimal switching pulse patterns for the semiconductor devices in a multilevel inverter [43]. Synchronized PWM is used in low-switching-frequency applications, where the carrier signal at frequency fs and the sinusoidal control signal at frequency f1 are synchronized with each other, i.e., fs/f1 is an integer, in order to eliminate subharmonic frequencies, which are undesirable in many applications. Synchronized PWM results in a lower number of switching instants per fundamental period, and even a small variation in these switching angle values has a considerable influence on the harmonic distortion of the output voltage [44]. Optimization methods are suggested to predetermine the switching angles offline to reduce the harmonic distortion [45]. The precalculated switching angles are stored and retrieved during real-time operation.
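As an illustration of how a synchronized pattern can be reproduced from stored angles, the following minimal Python sketch builds a quarter-wave-symmetric multilevel waveform from a hypothetical angle lookup table; the table and the angle values are placeholders for illustration, not optimized results from this work.

```python
import numpy as np

F1 = 50.0                                   # rated fundamental frequency (Hz)
# Hypothetical lookup table: modulation index -> quarter-period switching angles (rad)
ANGLE_TABLE = {0.8: np.deg2rad([12.0, 27.0, 43.0, 61.0, 82.0])}

def staircase_level(theta, angles):
    """Output level of a quarter-wave-symmetric staircase at electrical angle theta."""
    theta = np.mod(theta, 2 * np.pi)
    if theta <= np.pi / 2:
        q, sign = theta, 1
    elif theta <= np.pi:
        q, sign = np.pi - theta, 1
    elif theta <= 3 * np.pi / 2:
        q, sign = theta - np.pi, -1
    else:
        q, sign = 2 * np.pi - theta, -1
    return sign * int(np.sum(angles < q))    # number of steps already switched in

angles = ANGLE_TABLE[0.8]                    # retrieved at run time for the operating point
t = np.linspace(0, 1 / F1, 2000, endpoint=False)
levels = [staircase_level(2 * np.pi * F1 * ti, angles) for ti in t]
print("levels used over one fundamental period:", sorted(set(levels)))
```

Because the pattern is generated from angles defined over one fundamental period, its switching instants are automatically synchronized to f1, which is the property the section describes.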

Figure 3.1 Signal Flow Graph of Fifteen level Inverter

3.1 Optimization Method


The goal of the optimization is to generate optimal switching patterns for each steady-state operating point (m, N) in order to minimize the distortion factor DF. The flowchart of the optimization algorithm is shown in Fig. 5. The constraints of the optimization are as follows [33]:
1) a sufficient gap (10 μs) between consecutive switching angles, to allow for the minimum ON and OFF times of the power semiconductor devices;
2) in order to maintain the current modulation index value, it is mandatory to satisfy relation (9);
3) continuity of the switching angles for a given pulse number over its associated modulation index range, in order to avoid transients in the machine currents.
In the beginning, all the possible structures for N = 4 to 13 are obtained in the form of switching transitions s(i). For each pulse number N, the modulation index range is determined, and then for each operating point (m, N) the MATLAB function random is used to generate the initial values of the switching angles for further optimization while satisfying relation (9).
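A minimal sketch of this offline optimization step is given below. It assumes equal dc steps and a standard staircase Fourier expression, uses a generic fundamental-amplitude equality as a stand-in for relation (9) (which is not reproduced in this text), and uses a simple weighted harmonic sum as a stand-in for the paper's DF definition.

```python
import numpy as np
from scipy.optimize import minimize

N = 7                  # pulse number: switching angles per quarter fundamental
M_TARGET = 0.85        # target fundamental, per unit of the maximum staircase value
F1 = 50.0
MIN_GAP = 2 * np.pi * F1 * 10e-6   # 10 us minimum angle spacing, as an electrical angle

def harmonic(k, alpha):
    """k-th odd-harmonic amplitude of an equal-step staircase (per unit of one step)."""
    return (4.0 / (k * np.pi)) * np.sum(np.cos(k * np.asarray(alpha)))

def distortion_factor(alpha):
    """Weighted low-order harmonic content used as the cost (a stand-in for DF)."""
    ks = [k for k in range(5, 26, 2) if k % 3]          # non-triplen odd harmonics
    return np.sqrt(sum((harmonic(k, alpha) / k) ** 2 for k in ks))

cons = [{"type": "eq",
         "fun": lambda a: np.sum(np.cos(a)) / N - M_TARGET}]        # relation (9) stand-in
cons += [{"type": "ineq", "fun": lambda a, i=i: a[i + 1] - a[i] - MIN_GAP}
         for i in range(N - 1)]                                     # minimum gap constraint

alpha0 = np.sort(np.random.uniform(0.05, np.pi / 2 - 0.05, N))      # random initial guess
res = minimize(distortion_factor, alpha0, method="SLSQP",
               bounds=[(0.01, np.pi / 2 - 0.01)] * N, constraints=cons)
print("angles (deg):", np.round(np.degrees(res.x), 2), " DF:", round(res.fun, 4))
```

In the actual procedure this solve would be repeated over the whole (m, N) grid and the resulting angle sets stored for run-time retrieval, as described above.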
We performed computer-aided simulations to verify the feasibility of the proposed multilevel inverter. The simulations are implemented using MATLAB and consider a purely resistive load. The conventional-method simulation output has been obtained by using carriers. In the proposed method, the output has been obtained using MATLAB/Simulink.

Figure 3.2 Model Simulation Results

IV. SIMULATION RESULTS
The simulation of the 15-level multilevel inverter uses four variable dc sources to minimize the harmonics. The four dc source values are V1 = 81.5 V, V2 = 81.5 V, V3 = 81.5 V, and V4 = 163 V. This simulation achieves a speedup of 500x, and the execution time is in the range of 20 milliseconds. The output voltage of the 15-level multilevel inverter is shown in Fig. 5. This approach achieves a THD of 4.81% by using the genetic algorithm, as shown in Fig. 6.
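The following sketch illustrates the kind of FFT-based THD check described above, using the stated source values; the switching pattern here is a simple nearest-level quantization rather than the optimized pattern of this work, so it will not reproduce the reported 4.81%.

```python
from itertools import product
import numpy as np

F1 = 50.0
SOURCES = [81.5, 81.5, 81.5, 163.0]
# Output levels reachable when each H-bridge cell contributes +Vi, 0 or -Vi.
levels = np.array(sorted({sum(c * v for c, v in zip(coeffs, SOURCES))
                          for coeffs in product((-1, 0, 1), repeat=len(SOURCES))}))

t = np.linspace(0, 1 / F1, 4096, endpoint=False)
reference = levels.max() * np.sin(2 * np.pi * F1 * t)
# Nearest-level quantization of the reference gives the staircase output voltage.
staircase = levels[np.abs(levels[None, :] - reference[:, None]).argmin(axis=1)]

spectrum = np.abs(np.fft.rfft(staircase)) / len(staircase) * 2
fundamental = spectrum[1]                 # bin 1 is 50 Hz, since exactly one period is sampled
thd = np.sqrt(np.sum(spectrum[2:51] ** 2)) / fundamental
print(f"estimated THD up to the 50th harmonic: {100 * thd:.2f} %")
```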

Fig 5: Output of 15 Levels multilevel Inverter

Fig 6: FFT Analysis of 15 Levels Multilevel Inverter

V. CONCLUSION
The cascaded NPC H-bridge topology is selected for implementing the fifteen-level inverter due to the limitations of the NPC and FC topologies. Low-switching-frequency operation of multilevel inverters is essential to reduce the switching losses in medium-voltage high-power applications. The proposed SOP technique permits the multilevel inverter to operate at an average device switching frequency limited to the rated fundamental frequency without compromising on harmonic distortion. Optimal multilevel waveforms are produced using the SOP technique, and then the switching instants for each semiconductor device are determined based on the criteria of reducing the device switching frequency as well as ensuring minimal unbalance in the dc-link capacitor voltages. Experimental results for four different operating points demonstrate the effectiveness of the proposed modulation in limiting the average device switching frequency to the rated fundamental frequency without compromising on THD, as well as resulting in low ripple in the dc-link voltages. Compared to other low-switching-frequency control algorithms for the nine-level inverter, such as model predictive control (fs = 425 to 500 Hz) [16] and the adaptive duty-cycle modulation algorithm (fs = 285 to 785 Hz) [17], the switching frequency of operation has been reduced by more than five times without compromising on the THD of the current waveforms.
REFERENCES:
[1] Amarendra Edpuganti and Akshay K. Rathore, Optimal Low Switching Frequency Pulsewidth Modulation of Nine-Level Cascade Inverter, IEEE Transactions on Power Electronics, vol. 30, no. 1, January 2015.
[2] H. Abu-Rub, J. Holtz, J. Rodriguez, and G. Baoming, Medium-voltage multilevel converters -state of the art,challenges, and
requirements in industrial applications, IEEE Trans. Ind. Electron., vol. 57, no. 8, pp. 2581 2596, Aug. 2010.
[3] S. Kouro, M. Malinowski, K. Gopakumar, J. Pou, L. Franquelo, B. Wu, J. Rodriguez, and M. Perez, J. Leon, Recent advances
and industrial applications of multilevel converters, IEEE Trans. Ind. Electron., vol. 57, no. 8, pp. 25532580, Aug. 2010.
[4] M.Malinowski, K. Gopakumar, J. Rodriguez, and M. Perez, A survey on cascaded multilevel inverters, IEEE Trans. Ind.
Electron., vol. 57, no. 7, pp. 21972206, Jul. 2010.
[5] J. Rodriguez, S. Bernet, P. Steimer, and I. Lizama, A survey on neutral point- clamped inverters, IEEE Trans. Ind. Electron.,
vol. 57, no. 7, pp. 22192230, Jul. 2010.
[6] J. Rodriguez, L. Franquelo, S. Kouro, J. Leon, R. Portillo, M. Prats, and M. Perez, Multilevel converters: An enabling technology
for high-power applications, Proc. IEEE, vol. 97, no. 11, pp. 17861817, Nov. 2009.
[7] L. Franquelo, J. Rodriguez, J. Leon, S. Kouro, R. Portillo, and M. Prats, The age of multilevel converters arrives, IEEE Ind.
Electron. Mag., vol. 2, no. 2, pp. 2839, Jun. 2008.
[8] J. Rodriguez, S. Bernet, B. Wu, J. Pontt, and S. Kouro, Multilevel voltage-source-converter topologies for industrial medium-voltage drives, IEEE Trans. Ind. Electron., vol. 54, no. 6, pp. 2930–2945, Dec. 2007.
[9] L. Tolbert, F. Z. Peng, and T. Habetler, Multilevel converters for large electric drives, IEEE Trans. Ind. Appl., vol. 35, no. 1, pp.
3644, 1999.
[10] J.-S. Lai and F. Z. Peng, Multilevel converters-a new breed of power converters, IEEE Trans. Ind. Appl., vol. 32, no. 3, pp.
509517, May/Jun. 1996.
[11] S. Fazel, S. Bernet, D. Krug, and K. Jalili, Design and comparison of 4-kV neutral-point-clamped, flying-capacitor, and series-connected H-bridge multilevel converters, IEEE Trans. Ind. Appl., vol. 43, no. 4, pp. 1032–1040, Jul./Aug. 2007.
[12] W. Hill and C. Harbourt, Performance of medium voltage multi-level inverters, in Proc. IEEE 34th Annu. Meet. Ind. Appl.
Conf.,, vol. 2, 1999, pp. 11861192.
[13] Z. Shu, X. He, Z. Wang, D. Qiu, and Y. Jing, Voltage balancing approaches for diode-clamped multilevel converters using auxiliary capacitor-based circuits, IEEE Trans. Power Electron., vol. 28, no. 5, pp. 2111–2124, May 2013.
[14] K. Ilves, A. Antonopoulos, S. Norrga, and H.-P. Nee, A new modulation method for the modular multilevel converter allowing fundamental switching frequency, IEEE Trans. Power Electron., vol. 27, no. 8, pp. 3482–3494, Aug. 2012.
[15] Z. Cheng and B. Wu, A novel switching sequence design for five-level npc/h-bridge inverters with improved output voltage
spectrum and minimized device switching frequency, IEEE Trans. Power Electron., vol. 22, no. 6, pp. 21382145, Nov. 2007.
[16] Z. Du, L. Tolbert, B. Ozpineci, and J. Chiasson, Fundamental frequency switching strategies of a seven-level hybrid cascaded H-bridge multilevel inverter, IEEE Trans. Power Electron., vol. 24, no. 1, pp. 25–33, Jan. 2009.
[17] P. Cortes, A. Wilson, S. Kouro, J. Rodriguez, and H. Abu-Rub, Model predictive control of multilevel cascaded h-bridge
inverters, IEEE Trans. Ind. Electron., vol. 57, no. 8, pp. 26912699, Aug. 2010.
[18] S. Kouro, J. Rebolledo, and J. Rodriguez, Reduced switching-frequency modulation algorithm for high-power multilevel
inverters, IEEE Trans. Ind. Electron., vol. 54, no. 5, pp. 28942901, Oct. 2007.
[19] H. S. Patel and R. Hoft, Generalized techniques of harmonic elimination and voltage control in thyristor inverters: Part i
harmonic elimination, IEEE Trans. Ind. Appl., vol. IA-9, no. 3, pp. 310317, May 1973.
[20] H. S. Patel and R. Hoft, Generalized techniques of harmonic elimination and voltage control in thyristor inverters: Part ii
voltage control techniques, IEEE Trans. Ind. Appl., vol. IA-10, no. 5, pp. 666673, Sep.1974.
[21] I. Pitel, S. N. Talukdar, and P. Wood, Characterization of programmed waveform pulse width modulation, IEEE Trans. Ind.
Appl., vol. IA-16,no. 5, pp. 707715, Sep. 1980.
[22] P. Enjeti, P. Ziogas, and J. Lindsay, Programmed pwm techniques to eliminate harmonics: a critical evaluation, IEEE Trans.
Ind. Appl., vol.26, no. 2, pp. 302316, Mar./Apr. 1990.
[23] A. Maheshwari and K. D. T. Ngo, Synthesis of six-step pulse width modulated waveforms with selective harmonic elimination,
IEEE Trans. Power Electron., vol. 8, no. 4, pp. 554561, Oct. 1993.
[24] F. Swift and A. Kamberis, A new walsh domain technique of harmonic elimination and voltage control in pulse-width
modulated inverters, IEEE Trans. Power Electron., vol. 8, no. 2, pp. 170185, Apr. 1993.
[25] T.-J. Liang, R. OConnell, and R. Hoft, Inverter harmonic reduction using walsh function harmonic elimination method, IEEE
Trans. Power Electron., vol. 12, no. 6, pp. 971982, Nov. 1997.
[26] J. Chiasson, L. Tolbert, K. McKenzie, and Z. Du, A complete solution to the harmonic elimination problem, IEEE Trans.
Power Electron., vol. 19, no. 2, pp. 491499, Mar. 2004.
[27] J.Wells, B. Nee, P. Chapman, and P. Krein, Selective harmonic control: a general problem formulation and selected solutions,
IEEE Trans. Power Electron., vol. 20, no. 6, pp. 13371345, Nov. 2005.
[28] V. Blasko, A novel method for selective harmonic elimination in power electronic equipment, IEEE Trans. Power Electron.,
vol. 22, no. 1, pp.223228, Jan. 2007.
[29] V. Agelidis, A. Balouktsis, I. Balouktsis, and C. Cossar, Multiple sets of solutions for harmonic elimination pwm bipolar
waveforms: analysis and experimental verification, IEEE Trans. Power Electron., vol. 21, no. 2, pp. 415421, Mar. 2006.
[30] J. Wells, X. Geng, P. Chapman, P. Krein, and B. Nee, Modulation-based harmonic elimination, IEEE Trans. Power Electron.,
vol. 22, no. 1, pp.336340, Jan. 2007.
[31] G. Poddar and M. Sahu, Natural harmonic elimination of square-wave inverter for medium-voltage application, IEEE
Trans. Power Electron.,vol. 24, no. 5, pp. 11821188, May 2009.
[32] J. Chiasson, L. Tolbert, K. McKenzie, and Z. Du, A unified approach to solving the harmonic elimination equations in
multilevel converters, IEEE Trans. Power Electron., vol. 19, no. 2, pp. 478490, Mar. 2004.
[33] A. Kavousi, B. Vahidi, R. Salehi, M. Bakhshizadeh, N. Farokhnia, and S.Fathi, Application of the bee algorithm for selective
harmonic elimination strategy in multilevel inverters, IEEE Trans. Power Electron., vol. 27, no.4, pp. 16891696, Apr. 2012.
[34] A. Rathore, J. Holtz, and T. Boller, Generalized optimal pulse width modulation of multilevel inverters for low switching
frequency control of medium voltage high power industrial ac drives, IEEE Trans. Ind. Electron., vol. 60, no. 10, pp. 4215
4224, Oct. 2013.
[35] A. Rathore, J. Holtz, and T. Boller, Synchronous optimal pulse width modulation for low-switching-frequency control of
medium-voltage multilevel inverters, IEEE Trans. Ind. Electron., vol. 57, no. 7, pp. 23742381, Jul. 2010.
[36] T. Boller, J. Holtz, and A. Rathore, Optimal pulse width modulation of a dual three-level inverter system operated from a single
dc link, IEEE Trans. Ind. Appl., vol. 48, no. 5, pp. 16101615, Sep./Oct. 2012.
[37] J. Holtz and N. Oikonomou, Optimal control of a dual three-level inverter system for medium-voltage drives, IEEE Trans. Ind.
Appl., vol. 46, no.3, pp. 10341041, May/Jun. 2010.
[38] J. Holtz and B. Beyer, Fast current trajectory tracking control based on synchronous optimal pulse width modulation, IEEE
Trans. Ind. Appl., vol.31, no. 5, pp. 11101120, Sep./Oct. 1995.
[39] J. Holtz and B. Beyer, The trajectory tracking approach-a new method for minimum distortion pwm in dynamic high-power
drives, IEEE Trans. Ind. Appl., vol. 30, no. 4, pp. 10481057, Jul./Aug. 1994.
[40] J. Holtz and N. Oikonomou, Estimation of the fundamental current in low-switching-frequency high dynamic medium-voltage
drives, IEEE Trans. Ind. Appl., vol. 44, no. 5, pp. 15971605, Sep./Oct. 2008.
[41] J. Holtz and N. Oikonomou, Fast dynamic control of medium voltage drives operating at very low switching frequency - an
overview, IEEE Trans. Ind. Electron., vol. 55, no. 3, pp. 10051013, Mar. 2008.
[42] N. Oikonomou and J. Holtz, Closed-loop control of medium-voltage drives operated with synchronous optimal pulse width
modulation, IEEE Trans. Ind. Appl., vol. 44, no. 1, pp. 115123, Jan./Feb. 2008.
[43] J. Holtz and N.Oikonomou, Synchronous optimal pulse width modulation and stator flux trajectory control for medium-voltage
drives, IEEE Trans. Ind. Appl., vol. 43, no. 2, pp. 600608, Mar./Apr. 2007.
[44] J. Holtz and B. Beyer, Optimal synchronous pulse width modulation with a trajectory-tracking scheme for high-dynamic
performance, IEEE Trans. Ind. Appl., vol. 29, no. 6, pp. 10981105, Nov./Dec. 1993.
[45] J. Holtz, Pulse width modulation for electronic power conversion, Proc.IEEE, vol. 82, no. 8, pp. 11941214, Aug. 1994.
[46] G. S. Buja, Optimum output waveforms in PWM inverters, IEEE Trans Ind. Appl., vol. IA-16, no. 6, pp. 830836, Nov. 1980.
[47] T. Boller, J. Holtz, and A. Rathore, Neutral point potential balancing using synchronous optimal pulse width modulation of
multilevel in medium voltage high power ac drives, IEEE Trans. Ind. Appl., vol. PP, no. 99, pp.11, 2013.
[48] J. Holtz and N. Oikonomou, Neutral point potential balancing algorithm at low modulation index for three-level inverter
medium-voltage drives, IEEE Trans. Ind. Appl., vol. 43, no. 3, pp. 761768, May/Jun. 2007.
[49] H. du T. Mouton, Natural balancing of three-level neutral-point-clamped PWM inverters, IEEE Trans. Ind. Electron., vol. 49,
no. 5, pp. 10171025, Oct. 2002

Modified Bidirectional Quasi Z-Source Inverter Design with Neuro Fuzzy Control Technique
Barna Prince.M1, Jamuna.P2
PG Scholar1, Associate Professor2
barnaprince@gmail.com
Department of EEE,
Nandha Engineering College, India
Abstract: This paper proposes a new controller design and realization of a high-power bidirectional quasi-Z-source inverter (BQ-ZSI). A bidirectional active switch in the quasi-Z-source network improves the performance of the inverter under small inductance and low power factor. To maintain a constant output, a neuro-fuzzy control technique is used in the closed loop, and the overall efficiency of the inverter is also increased. The quasi-Z-source inverter (qZSI) with battery operation can balance the stochastic fluctuations of photovoltaic (PV) power injected into the grid/load, but its existing topology has a power limitation due to the wide range of discontinuous conduction mode during battery discharge.
Index Terms: Bidirectional quasi-Z-source inverter (BQ-ZSI), neuro-fuzzy, electric vehicle (EV) applications, feed-forward compensation, reverse power flow, small-signal model.
I. Introduction
The evolution of Electric Vehicles (EV) creates a global push and provides a better replacement for fuel-based vehicles. The vehicles are charged by batteries, and the power flow during starting and braking operations can be handled by the bidirectional quasi-Z-source inverter. The Z-source inverter (ZSI) reduces the total switching device power (SDP) by 15% compared with the dc-dc converter plus VSI topology, which reduces the total cost and further improves the efficiency of the traction drive system. However, the input current of the ZSI is not continuous, which will shorten the lifetime of the battery pack and degrade the vehicle performance. By rearranging the components in the Z-source network, a new topology called the quasi-Z-source inverter (QZSI) is proposed. The QZSI realizes continuous input current while retaining all the merits of the ZSI, which makes it a good candidate for EV applications. However, the traditional QZSI only allows unidirectional power flow from the dc to the ac side. The traction drive system requires reverse power flow to realize regenerative braking of the EV. To achieve the bidirectional power flow capability, the same approach is utilized and the diode in the quasi-Z-source network (QZSN) is replaced by an active switch. A similar approach is also utilized in the bidirectional ZSI. However, much of the previous operation mode analysis was based on the topology of the ZSI and mainly focused on the power flow from the dc to the ac side. To better understand the circuit, this paper first gives a detailed circuit analysis of the bidirectional quasi-Z-source inverter (BQ-ZSI) during the regeneration mode, i.e., when the power flows from the ac to the dc side. The analysis proves that, with the active switch, the inductor currents in the QZSN can be reversed and the energy from the ac side can be delivered to the dc source. The analysis also shows that, unlike in the ZSI, part of the dc-link ripple current will be absorbed by the two capacitors in the QZSN and will not go through the dc source, which provides a better operating condition for the battery pack in the EV. Furthermore, with the additional switch, the discontinuous conduction mode (DCM) can be avoided and the BQ-ZSI can have a better performance with small inductance or under low power factor conditions, such as when the electric motor is operated with a light load. Based on the circuit analysis, the small-signal model can be obtained, and the control algorithm of the BQ-ZSI in EV applications can be developed.

FIG 1.BIDIRECTIONAL QUASI Z-SOURCE INVERTER


A. Control of S7
During the regeneration mode, the switching pattern of S7 is complementary to the shoot-through pattern of the three-phase bridge. When the three-phase bridge is in the shoot-through state, S7 is open. The body diode is reverse blocked and the voltage boost function can be realized. When the three-phase bridge is in the non-shoot-through state, S7 is closed. The reverse current goes through S7 and feeds the energy back to the dc source. For safety purposes, a suitable dead time needs to be inserted between the control signals of the shoot-through state and S7. Otherwise, the two capacitors in the QZSN may be short-circuited through S7, which will cause damage to the devices.
B. Current Modes Analysis
Without losing generality, assume L1 = L2, so that the currents in L1 and L2 are always the same. However, the voltages on C1 and C2 are not the same. When driving an electric motor, the instantaneous current flowing through the dc link during the non-shoot-through state can be expressed as iPN = S1·ia + S3·ib + S5·ic = IPN + ĩPN (1), where ia, ib, and ic are the instantaneous ac-side three-phase currents, IPN is the dc component, and ĩPN is the ac component of iPN. S1, S3, and S5 are the switching functions: when Sx = 1, switch Sx is closed, and when Sx = 0, switch Sx is open (x = 1, 3, or 5). From (1), it can be noted that the value of iPN changes with time. Utilizing the principle of superposition, iPN can be written as the sum of IPN, which is related to the active power of the ac side, and ĩPN, which is related to the switching action of the three-phase inverter and the reactive power of the ac side. The average value of ĩPN over one fundamental period is zero. According to the topology shown in Fig. 1, during the non-shoot-through state S7 is closed, so ĩPN can circulate through the two capacitors C1 and C2, switch S7, and the dc link PN. Depending on the impedance of the dc source, part of ĩPN will be absorbed by the capacitors and will not flow through the inductors and the dc source, which improves the operating condition of the battery pack in the EV. This is different from the ZSI and the traditional QZSI, but similar to the traditional VSI, where a dc-link capacitor absorbs the current ripple from the ac side. IPN goes through the QZSN; this part of the current is directly related to the energy transfer between the dc side and the ac side.
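The decomposition in equation (1) can be illustrated numerically as follows; sinusoidal phase currents and a crude square-wave stand-in for the switching functions are assumed, purely to show how iPN splits into IPN and ĩPN.

```python
import numpy as np

F1 = 50.0
I_PEAK = 100.0                       # assumed phase-current amplitude (A)
PF_ANGLE = np.deg2rad(30)            # assumed load power-factor angle

t = np.linspace(0, 1 / F1, 6000, endpoint=False)
theta = 2 * np.pi * F1 * t
ia = I_PEAK * np.cos(theta - PF_ANGLE)
ib = I_PEAK * np.cos(theta - PF_ANGLE - 2 * np.pi / 3)
ic = I_PEAK * np.cos(theta - PF_ANGLE + 2 * np.pi / 3)

# Illustrative switching functions: upper switch of each leg closed while its
# reference voltage is positive (a crude stand-in for the actual PWM pattern).
S1 = (np.cos(theta) > 0).astype(float)
S3 = (np.cos(theta - 2 * np.pi / 3) > 0).astype(float)
S5 = (np.cos(theta + 2 * np.pi / 3) > 0).astype(float)

i_pn = S1 * ia + S3 * ib + S5 * ic          # equation (1)
I_PN = i_pn.mean()                          # dc component, linked to the active power
i_pn_ripple = i_pn - I_PN                   # ac component, absorbed largely by C1 and C2
print(f"IPN = {I_PN:.1f} A, ripple mean = {i_pn_ripple.mean():.2e} A (zero by construction)")
```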
C. AC Side Controller Design
The ac-side controller is utilized to control the ac motor. Since the dc-link voltage is stabilized by the dc-side controller, existing motor control algorithms, such as FOC or V/Hz control, can be directly implemented and are not described in detail in this paper. However, to achieve good system-level control, the dynamics of the ac side should be designed to be much faster than those of the dc side to avoid oscillation. Since the shoot-through state is always restricted to within the zero states, the control parameter at the dc side imposes a constraint on the ac side. With a higher input voltage, to achieve the same dc-link voltage, the required shoot-through duty ratio will be smaller. Therefore, there will be less possibility that the dc-side shoot-through duty ratio conflicts with the ac-side controller, so the controller usually performs better at the higher end of the input voltage range. The complete system-level control algorithm is shown in Fig. 4. Without losing generality, a current regulator under a synchronous frame is implemented in the ac-side controller.
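The effect of the input voltage on the required shoot-through duty ratio can be illustrated with the commonly cited qZSI boost relation V_link = V_in/(1 − 2·D_sh); the numerical values and the simple-boost-style zero-state limit below are assumptions used for illustration, not parameters of this design.

```python
def required_shoot_through(v_in, v_link):
    """Shoot-through duty ratio needed for a target peak dc-link voltage (qZSI boost relation)."""
    return 0.5 * (1.0 - v_in / v_link)

V_LINK = 400.0                       # assumed target peak dc-link voltage (V)
M = 0.8                              # assumed ac-side modulation index
for v_in in (200.0, 250.0, 300.0, 350.0):
    d_sh = required_shoot_through(v_in, V_LINK)
    fits = d_sh <= 1.0 - M           # simple-boost-style limit: shoot-through must fit in zero states
    print(f"Vin={v_in:5.0f} V -> D_sh={d_sh:.3f}   fits within zero states: {fits}")
```

As the printout shows, raising the input voltage shrinks the required shoot-through duty ratio, leaving more of the zero-state interval for the ac-side modulator, which is why the controller behaves better at the higher end of the input voltage range.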

Fig.2 System model and control strategy


D. Pulse Width Modulation Technique
The modulation technique adopted for the quasi-Z-source inverter is different from that of the conventional VSI because of the additional zero state called the shoot-through state. Modifications are to be made in the traditional PWM technique so as to include the shoot-through states. This can be achieved with the help of an additional constant line called the shoot-through line, whose magnitude determines the three modulation strategies, namely simple boost, maximum boost, and maximum constant boost. The maximum constant boost control method is used in this project.
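The sketch below illustrates the constant shoot-through line idea: shoot-through pulses are produced whenever the triangular carrier passes outside the constant lines. The line value here is set at the reference peak, which corresponds to simple boost; maximum constant boost, which is used in this project, differs only in how the line values are chosen.

```python
import numpy as np

F1, FS, M = 50.0, 5000.0, 0.8
t = np.linspace(0, 1 / F1, 20000, endpoint=False)
carrier = 4 * np.abs((t * FS) % 1.0 - 0.5) - 1            # triangular carrier in [-1, 1]
upper_line, lower_line = M, -M                             # constant shoot-through lines

# Shoot-through whenever the carrier is above the upper line or below the lower one.
shoot_through = (carrier > upper_line) | (carrier < lower_line)
duty = shoot_through.mean()
print(f"average shoot-through duty ratio: {duty:.3f} (ideal 1 - M = {1 - M:.3f})")
```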
E. Implementation of the Neuro-Fuzzy Logic Controller in the Bidirectional Quasi-Z-Source Inverter
The neuro-fuzzy logic controller takes two inputs, processes the information, and produces an output. The inputs to the neuro-fuzzy controller are the error in voltage and the change of error in voltage, and the output is the current. The capacitor voltage is compared with the reference voltage, and the error and change in error are given as inputs to the neuro-fuzzy logic controller. Before the details of the fuzzy controller are dealt with, the range of possible values for the input and output variables is determined. These (in the language of fuzzy set theory) are the membership functions, which are used to map the real-world measurement values to fuzzy values, so that the operations can be applied to them. Values of the input variables (error voltage and change in error voltage) are normalized to the range (1 to 100). The decision which the neuro-fuzzy controller makes is derived from the rules stored in the database. These are stored as a set of if-then rules that are intuitive and easy to understand, since they are nothing but common English statements. The rules used in this project are derived from common sense, data taken from typical home use, and experimentation in a controlled environment.
II. Steps Involved In Calculating The Crisp Output
There are five steps in implementing the Fuzzy Logic. They are,
Defining inputs and outputs.
Fuzzification of input.
Fuzzification of output.
Create Fuzzy rule base.
Defuzzification of output.
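A minimal sketch of these steps is given below, with two normalized inputs, three triangular labels per variable, and a small illustrative rule table; the actual membership functions and rule base of this project are not reproduced here.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership value of x for a triangle with corners a <= b <= c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Three labels per normalized variable: Negative, Zero, Positive (illustrative).
LABELS = {"N": (-1.5, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 1.5)}
# Rule table: (error label, change-of-error label) -> output label centre (illustrative).
RULES = {("N", "N"): -1.0, ("N", "Z"): -0.5, ("N", "P"): 0.0,
         ("Z", "N"): -0.5, ("Z", "Z"): 0.0,  ("Z", "P"): 0.5,
         ("P", "N"): 0.0,  ("P", "Z"): 0.5,  ("P", "P"): 1.0}

def fuzzy_controller(error, d_error):
    """Return a crisp control output from normalized error and change of error."""
    num = den = 0.0
    for (le, lde), out_centre in RULES.items():
        # Rule firing strength: min of the two antecedent memberships.
        w = min(tri(error, *LABELS[le]), tri(d_error, *LABELS[lde]))
        num += w * out_centre          # weighted-average (centroid-style) defuzzification
        den += w
    return num / den if den else 0.0

print(fuzzy_controller(0.4, -0.2))     # example crisp output for one operating point
```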
III. Simulation and Experimental Results
Simulations in MATLAB/Simulink were next performed for the four voltage-type Z-source inverters compared in this section. Most experiments and simulation studies applied to power systems show that conventional controllers have large overshoots and long settling times. Also, the time needed to optimize the control parameters, especially of PI controllers, is very long, and the parameters are not calculated exactly. In addition, it is known that conventional controllers generally do not work well for non-linear, higher-order and time-delayed linear systems, and particularly for complex and vague systems that have no precise mathematical models. Neuro-fuzzy logic, in contrast, is appropriate for such rapid applications, and it has therefore been applied to industrial systems as a controller. Human experts prepare linguistic descriptions as neuro-fuzzy rules. By determining the controller parameters with these rules, a PI controller generates the control signal, by which the neuro-fuzzy gain-scheduling proportional and integral (FGPI) controller is formed.
Output Voltage of Quasi Z Source Inverter using PI Controller

Fig 3. Output Voltage using PI Controller


A. Output Voltage of Quasi Z Source Inverter using Neuro Fuzzy Controller
Fig 4.Output Voltage using Fuzzy Controller


B. Comparison of Capacitor Voltage (PI Vs Neuro Fuzzy Controller)

Fig. 5 Capacitor Voltage using Neuro Fuzzy controller


C. Comparison of speed (PI Vs NeuroFuzzy Controller)

Fig. 6 Comparison of Speed

CONCLUSION
In this paper, two important aspects are covered during the development of the BQ-ZSI for EV applications: operation principle analysis and controller design. Better tuning of the neuro-fuzzy logic controller can be done with analysis after real-time implementation, and a performance comparison can be made with other controllers such as a PID controller. The neuro-fuzzy rules and the number of labels may be changed and their effect on performance can be observed. In this project, a triangular membership function is used for simplicity in programming; the effect of choosing other membership functions can also be studied. However, with this neuro-fuzzy control technique, the time response is slightly high.

REFERENCES:
[1] Feng Guo, Lixing Fu, Chien-Hui Lin, Cong Li, Woongchul Choi, and Jin Wang, Development of an 85-kW Bidirectional Quasi-Z-Source Inverter With DC-Link Feed-Forward Compensation for Electric Vehicle Applications, IEEE Trans. Power Electron., vol. 28, no. 12, pp. 5477-5488, Dec. 2013.
[2] Baoming Ge, Haitham Abu-Rub, Fang Zheng Peng, Qin Lei, Anbal T. de Almeida, Fernando J. T. E. Ferreira, Dongsen Sun, and
Yushan Liu, An Energy-Stored Quasi-Z-Source Inverter for Application to Photovoltaic Power System, IEEE Trans. Ind. Electron.,
vol. 60, no. 10, pp. 4468-4481,Oct. 2013
[3] Ding Li, Poh Chiang Loh, Miao Zhu, Feng Gao, Member, and Frede Blaabjerg, Enhanced-Boost Z-Source Inverters With
Alternate-Cascaded Switched- and Tapped-Inductor Cells, IEEE Trans. Ind. Electron., vol. 60, no. 9, pp. 3567-3578, Sep. 2013
[4] Feng Guo, , Lixing Fu, Chien-Hui Lin,Cong Li, Woongchul Choi, and Jin Wang , Development of an 85-kW Bidirectional
Quasi-Z-Source Inverter With DC-Link Feed-Forward Compensation for Electric Vehicle Applications , IEEE Trans.Power
Electron.,, vol. 28, no. 12,pp. 5477-5488, Dec. 2013
[5] Francis Boafo Effah, Patrick Wheeler, Jon Clare, and Alan Watson, Space-Vector-Modulated Three-Level Inverters With a
Single Z-Source Network, IEEE Trans.Power Electron., vol. 28, no. 6,pp. 2806-2815, Jun. 2013
[6] H. Abu-Rub, A. Iqbal, S. Moin Ahmed, F. Z. Peng, Y. Li, and G. Baoming, Quasi-Z-source inverter-based photovoltaic
generation system with maximum power tracking control using ANFIS, IEEE Trans.Sustainable Energy, vol. 4, no. 1, pp. 1120,
Jan. 2013.
[7] Indrek Roasto, Dmitri Vinnikov, Janis Zakis, and Oleksandr Husev, New Shoot-Through Control Methods for qZSI-Based DC/DC Converters, IEEE Trans. Ind. Inform., vol. 9, no. 2, pp. 640-647, May 2013.
[8] Jianfeng Liu, Shuai Jiang, Dong Cao and Fang Zheng Peng, A Digital Current Control of Quasi-Z-Source Inverter with Battery,
IEEE Trans. Ind. Inform., vol. 9, no. 2, pp. 928-937, May. 2013
[9] O. Ellabban, J. Van Mierlo, and P. Lataire, A DSP-Based dual-loop peak DC-link voltage control strategy of the Z-source
inverter, IEEE Trans.Power Electron., vol. 27, no. 9, pp. 40884097, Sep. 2012.
[10] Seyed Mohammad Dehghan, Mustafa Mohamadian and Ali Yazdian, Hybrid Electric Vehicle Based on Bidirectional Z-Source
Nine-Switch Inverter IEEE Trans.Vehicular Tech, vol. 59, no. 6, pp. 2641-2653,Jul. 2010
[11] Yuan Li, Shuai Jiang, Jorge G. Cintron-Rivera and Fang Zheng Peng, Modeling and Control of Quasi-Z-Source Inverter for
Distributed Generation Applications , IEEE Trans. Ind. Electron., vol. 60, no. 4,pp. 1532-1541 Apr. 2013
[12] S. Rajakaruna and B. Zhang, Design and control of a bidirectional Zsource inverter, in Proc. Power Eng. Conf.,Sep.2009, pp. 16

Big Data using Hadoop


Dinesh D. Jagtap1
CSE Department,
Everest Educational Society's Group of Institutions
Aurangabad,Maharashtra, India
dinesh.jagtapd@gmail.com
Prof.B.K.Patil2
CSE Department,
Everest Educational Society's Group of Institutions,
Aurangabad, Maharashtra,India.
cseroyal7@gmail.com

Abstract: Big Data is data that becomes large enough that it cannot be processed using conventional methods. The term Big Data concerns the huge-volume, complex, and rapidly growing data sets with multiple, independent sources. Due to the fast development of networking, data storage, and data collection capacity, the concept of Big Data is now rapidly expanding in all science and engineering domains, including the biological, physical, and biomedical sciences. Social networking sites, mobile phones, banking and stock exchange sectors, sensors, and science contribute to the production of petabytes of data daily. That is why Big Data analysis now drives almost every aspect of modern life, such as mobile services, retail, financial services, manufacturing, and the life sciences. We have all heard a lot about big data, but "big" is actually a red herring. Telecommunications companies, oil companies, and other data-intensive industries have had vast datasets for a long time. And as storage capacity continues to expand, today's "big" is certainly tomorrow's "medium" and next week's "small". The most meaningful definition of big data is data whose size itself becomes part of the problem.

Keywords: Big Data, data mining, heterogeneity, autonomous sources, complex and evolving associations
I. INTRODUCTION
In the last few years, there has been a tremendous increase in the amount of data that is available, whether we are talking about tweet streams, web server logs, records of online transactions, government data, or some other data source. The problem is not only finding data, it is figuring out what to do with the available data. And it is not just companies using their own data, or the data contributed by users of those companies. Data mining allows users to examine the data from many different dimensions or angles, sort it, and summarize the associations identified. Strictly, data mining is the process of finding correlations or patterns among dozens of fields in big relational databases. Another fundamental characteristic of Big Data is that the large volume of data is represented by heterogeneous and diverse dimensionalities. This is because different information collectors prefer their own schemata for recording the data, and the nature of the application also results in diverse representations of the data. When the size of the data increases, so do the complexity and the relationships underneath the data. Hadoop is an open-source software project that enables the processing of large data sets distributed across clusters of commodity servers. We are discussing data problems ranging from gigabytes to petabytes in size. At a particular point, conventional techniques for working with data run out of steam.
Information platforms are somewhat like traditional data warehouses, but different. They expose rich APIs and are designed for exploring and understanding the data rather than for traditional analysis and reporting. They accept all data formats, including the most messy, and their schemas can evolve.

Figure. 1. The blind men and the giant elephant: the localized (limited) view of each blind man leads to a biased conclusion.

II.REQUIREMENT
Most of the organizations that have built data platforms have establish it necessary to go further than the relational database model.
Conventional relational database systems stop being valuable at this balance. Managing shading and replication across a mass of
database servers is difficult and slow. The need to define a schema in advance conflict with reality of numerous, formless data sources,
in which you may not know whats important until after youve analyzed the data. Relational databases are premeditated for
uniformity, to support complex transactions that can easily be rolled back if any one of a composite set of operations fails.
To store vast datasets efficiently, weve seen a new type of databases appear. These are normally called NoSQL databases or NonRelational databases, while neither term is very practical. Many of these databases are the logical offspring of Googles Big Table and
Amazons Dynamo, and are intended to be distributed across many nodes, to provide ultimate uniformity but not absolute
consistency, and to have very flexible schema. Whereas there are two dozen or so products available (about all of them open source), a
few leaders have recognized themselves:

III. OBJECTIVES
Data is only useful if you can do something with it, and massive datasets introduce computational issues. Google popularized the MapReduce approach, which is essentially a divide-and-conquer strategy for distributing a tremendously large problem across a very large computing cluster. In the map stage, a programming task is divided into a number of equal subtasks, which are then distributed across many processors; the intermediate results are then combined by a single reduce task. In principle, MapReduce seems like an obvious solution to Google's main problem, creating large searches. It is easy to distribute a search among a number of processors, and then merge the results into a single set of answers. What is less obvious is that MapReduce has proven to be broadly applicable to many large-data problems, ranging from searching to machine learning. Architecturally, the reason you are able to deal with lots of data is that Hadoop spreads it out, and the reason you are able to ask complicated computational questions is that you have all of these processors working in parallel, harnessed together.

IV. THEME
Using data effectively requires something different from traditional statistics, where actuaries in business suits perform arcane but fairly well-defined kinds of analysis. What differentiates data science from statistics is that data science is a holistic approach. We are increasingly finding data in the wild, and data scientists are involved with gathering data, massaging it into a tractable form, making it tell its story, and presenting that story to others. To meet the challenge of processing such large data sets, Google created MapReduce. Google's work and Yahoo's creation of the Hadoop MapReduce implementation have spawned an ecosystem of big data processing tools.
A. Literature Survey
Data is everywhere: your administration, your web server, your business partners, and even your body; we are finding that almost everything can be instrumented. At O'Reilly, we normally merge publishing industry data from Nielsen BookScan with our own sales
data, openly available Amazon data, and even job data to see what is happening in the publishing industry. Sites like Infochimps and Factual give access to numerous large datasets, including weather data and MySpace activity.
Storage and MapReduce: Big data is data that becomes huge enough that it cannot be processed using conventional methods. Social networks, mobile phones, the banking sector, and government agencies contribute to petabytes of data created daily.
[25] To face the challenge of processing such huge data sets, Google invented MapReduce. Google's work and Yahoo's creation of the Hadoop MapReduce implementation have spawned an ecosystem of big data processing tools.
[26] As MapReduce has grown in popularity, a stack for big data systems has emerged, comprising layers of Storage, MapReduce and Query (SMAQ).
[27] SMAQ systems are normally open source, distributed, and run on commodity hardware.

Figure.2. SMAQ systems

[28] Created at Google in response to the difficulty of creating web search indexes, the MapReduce framework is the driving force behind most of today's big data processing.
[29] The key innovation of MapReduce is the ability to take a query over a data set, divide it, and run it in parallel over many nodes.

Figure. 3. Map Reduce Technique

[30] Loading the data: this operation is more properly called Extract, Transform, Load (ETL) in data warehousing language. Data must be extracted from its source and prepared to make it ready for further processing.
[31] MapReduce: this phase retrieves data from storage, processes it, and transfers the results back to the storage.
[32] Extracting the result: once processing is completed, for the result to be useful to humans, it must be retrieved from the storage and presented.
[33] Many SMAQ systems have characteristics designed to simplify the operation of each of these stages.
[34] Storage: MapReduce requires storage from which to retrieve data and in which to store the results of the computation. The data expected by MapReduce is not the relational data generally used by conventional database systems. Instead, data is consumed in chunks, which are then divided among nodes and fed to the map phase as key-value pairs. This data does not need a schema, and may be unstructured.
Figure.4 HDFS Architecture


[35] Hadoop is the leading open-source MapReduce implementation; it emerged at Yahoo in 2006, and its creator is Doug Cutting. Hadoop and HDFS utilize a master-slave architecture, and data is replicated across nodes according to a configurable replication factor. HDFS is written in Java, with an HDFS cluster consisting of a primary NameNode, a master server that manages the file system namespace and also regulates access to data by clients. An optional secondary NameNode for failover purposes may also be configured. HDFS has many goals; here are some of the most prominent:
Fault tolerance, achieved by detecting faults and applying rapid and automatic recovery.
Data access via MapReduce streaming.
Processing logic placed close to the data, instead of moving the data close to the processing logic.
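The key-value style of processing described above can be sketched with a Hadoop-Streaming-style mapper and reducer; with Hadoop these would be two scripts passed to the streaming jar and run against data stored in HDFS, while here a local sort stands in for the shuffle so the sketch is runnable on its own.

```python
import sys
from itertools import groupby

def mapper(lines):
    """Emit one 'word<TAB>1' pair per word, as a Hadoop Streaming mapper would on stdin."""
    for line in lines:
        for word in line.strip().lower().split():
            yield f"{word}\t1"

def reducer(sorted_pairs):
    """Sum the counts for each key; pairs arrive grouped by key after the shuffle."""
    for word, group in groupby(sorted_pairs, key=lambda kv: kv.split("\t", 1)[0]):
        total = sum(int(kv.split("\t", 1)[1]) for kv in group)
        yield f"{word}\t{total}"

if __name__ == "__main__":
    lines = sys.stdin if not sys.stdin.isatty() else [
        "hdfs stores data in large blocks",
        "processing logic moves close to the data",
    ]
    shuffled = sorted(mapper(lines))            # local stand-in for the shuffle/sort phase
    for out in reducer(shuffled):
        print(out)
```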

V. BIG DATA CHARACTERISTICS: HACE THEOREM

HACE Theorem: Big Data starts with large-volume, heterogeneous, autonomous sources with distributed and decentralized control, and seeks to explore complex and evolving relationships among data. These characteristics make it an extreme challenge to extract meaningful facts from Big Data. In a naive sense, we can imagine that a number of blind men are trying to size up a giant elephant (see Fig. 1), which will be the Big Data in this context. The goal of each blind man is to construct a picture of the elephant on the basis of the partial information he collects during the process. Because each person's view is restricted to his local region, the blind men will each conclude independently that the elephant feels like a rope, a hose, or a wall, depending on the region each of them is limited to. To make this scenario more complex, let us imagine that the elephant is growing rapidly and its pose changes constantly, and that each blind man may have his own information sources that tell him biased knowledge about the elephant (e.g., one blind man may share his feeling about the present pose of the elephant with another blind man, where the knowledge that is shared is inherently biased). Describing the Big Data in this scenario is equivalent to aggregating heterogeneous information from a number of sources (blind men) to help draw a best possible picture of the elephant in a real-time fashion.[3]
5.1 Huge Data with Heterogeneous and Diverse Dimensionality
One of the fundamental characteristics of Big Data is that the enormous volume of data is represented by heterogeneous and diverse dimensionalities. This is because different information collectors prefer their own schemata or protocols for recording the data, and different applications and their nature also result in diverse data representations. As a simple example, each human being in a biomedical world can be represented using simple demographic information such as gender, age, family disease history, and so on. For X-ray and CT scan examinations of each patient, images or videos provide visual information used by doctors to carry out detailed examinations, which is why images and videos are useful entities. In such situations, the heterogeneous features refer to the different types of representation of the same individuals, and the diverse features refer to the variety of the features involved to represent each single observation. Imagine that different organizations or health practitioners may each have their own schemata to represent each individual; if we want to enable data aggregation by combining data from all sources, then data heterogeneity and diverse dimensionality become major challenges.[3]

5.2 Autonomous Sources with Distributed and Decentralized Control


Autonomous data sources with distributed and decentralized controls are a main characteristic of Big Data applications. Being autonomous, each data source can generate and collect information without involving or relying on any centralized control. This is similar to the World Wide Web (WWW) setting, where each web server is independent and provides a certain amount of information without necessarily depending on other servers. The enormous volumes of the data can also
make an application more vulnerable to malfunctions if the whole system has to depend on a single centralized control unit. Today's well-known companies, such as Google, Facebook, and Walmart, maintain a large number of server farms situated all over the world to ensure nonstop services and quick responses for local markets. More particularly, local government regulations also impact the wholesale management process and result in reorganized data representations and data warehouses for local markets.[3]

5.3 Complex and Evolving Relationships


As the amount of Big Data increases, so do the complexity and the relationships underneath the data. In the early stages of data-centralized information systems, the main aim was to find the best feature values to represent each observation. This is similar to using a number of data fields, such as gender, age, income, and education background, to describe each individual. This type of sample-feature representation inherently treats each individual as an independent entity without considering their social relations, which are one of the most important factors of human society. In the real world, our friend circles may be formed based on common hobbies, or people may be connected by biological relations. Such social connections are also very popular in the cyberworld. For example, major social networking applications, such as Facebook or Twitter, are mostly characterized by social functions such as friend connections and followers (in Twitter). In the sample-feature representation, individuals are regarded as alike if they share similar feature values, whereas in the sample-feature-relationship representation, two persons can be linked together even though they might share nothing in common in the feature domains at all. In a dynamic world, the features used to represent the individuals and the social ties used to represent our connections may also evolve with respect to temporal, spatial, and other factors. Such complications are becoming part of the reality for Big Data applications, where the key is to take the complex data relationships, along with their evolving changes, into consideration in order to discover valuable patterns from Big Data collections. [3]

VI. CONCLUSION
Driven by real-world applications and key industrial stakeholders, and initialized by national funding agencies, managing and mining Big Data has been shown to be a challenging yet very compelling task. While the term Big Data literally concerns data volumes, our HACE theorem suggests the key characteristics of Big Data: first, it is large, with heterogeneous and diverse data sources; second, it comes from autonomous data sources with distributed and decentralized control; and third, it is complex and evolving in data and knowledge associations. Such combined characteristics suggest that Big Data requires a "big mind" to consolidate data for maximum value. To describe the concept of Big Data, we have reviewed several challenges at the data, model, and system levels. To support Big Data mining, high-performance powerful computing platforms are required, which impose systematic designs to unleash the full power of the Big Data. At the data level, the autonomous information sources and the variety of the data collection environments often result in data with complicated conditions, such as missing values. In certain situations, privacy concerns, noise, and errors can be introduced into the data, creating tainted data copies. Developing a safe and sound information sharing protocol is a major challenge. At the model level, the key challenge is to generate global models by combining locally discovered patterns to form a unifying view. This requires carefully designed algorithms to analyze model correlations between distributed sites and to combine decisions from several sources to gain the best model out of the Big Data. At the system level, the essential challenge is that a Big Data mining framework needs to consider complex relationships between models, samples, and data sources, along with their evolving changes with time and other possible factors. A system needs to be carefully designed so that unstructured data can be linked through their complex associations to form useful patterns, and the growth of data volumes and item relationships should help form legitimate patterns to predict trends and views. We regard Big Data as an emerging trend, and the necessity for Big Data mining is arising in all science and engineering fields. With the use of Big Data technologies, we will hopefully be able to provide the most relevant and most accurate social sensing feedback to better understand our society in real time. We can further stimulate the participation of the public audiences in the data production circle for societal and economical events. The era of Big Data has arrived.
REFERENCES:
[1] Apache HBase http://hbase.apache.org
[2] Apache Accumulo http://accumulo.apache.org
[3] Xindong Wu, Xingquan Zhu, Gong-Qing Wu, and Wei Ding, Data Mining with Big Data, IEEE Transactions on Knowledge and Data Engineering, vol. 26, no. 1, January 2014.
[4] J. Kepner and S. Ahalt, MatlabMPI, Journal of Parallel and Distributed Computing, vol. 64, issue 8, August 2004.
[5] B. Hindman, A. Konwinski, M. Zaharia, A. Ghodsi, A. D. Joseph, R. Katz, S. Shenker, and I. Stoica, "Mesos: A Platform for Fine-Grained
[6] N. Bliss, R. Bond, H. Kim, A. Reuther, and J. Kepner, Interactive grid computing at Lincoln Laboratory, Lincoln Laboratory
Journal, vol. 16,no. 1, 2006.
[7] J. Kepner et al., Dynamic distributed dimensional data model (D4M) database and computation system, 37th IEEE
International1989.
[8] A. Jacobs, The Pathologies of Big Data, Comm. ACM, vol. 52, no. 8, pp. 36-44, 2009.
[9] A. Jacobs, The Pathologies of Big Data, Comm. ACM, vol. 52, no. 8, pp. 36-44,2009.
[10] R. Ahmed and G. Karypis, Algorithms for Mining the Evolution of Conserved Relational States in Dynamic Networks,
Knowledge and Information Systems, vol. 33, no. 3, pp. 603-630, Dec. 2012.
[11] M.H. Alam, J.W. Ha, and S.K. Lee, Novel Approaches to Crawling Important Pages Early, Knowledge and Information
Systems, vol. 33, no. 3, pp 707-734, Dec. 2012.
[12] S. Aral and D. Walker, Identifying Influential and Susceptible Members of Social Networks, Science, vol. 337, pp. 337-341,
2012

WINLAR
(SOLAR AND WIND ENERGY SYSTEM)
Naragani Chiranjeevi, Gadde Megha Syam.
IV B. Tech, Department of E.E.E, LAKIREDDY BALI REDDY College of Engineering,
Naraganichiranjeevi2@gmail.com

Abstract: There is an urgent need to expand the share of non-conventional sources in total power generation, as we face considerable
power scarcity due to the lack of surplus natural resources. Our project WINLAR is a prototype which helps in harnessing
wind and solar energy within a single model. Low installation cost, a reduction in the charge per unit, development of the surrounding
area, employment, and direct connection of the wind power plant to the conventional grid are some of the advantages
of our prototype; locations with strong wind currents will be particularly well suited to its productive operation.
Keywords: WINLAR, Solar power, Wind power, Hybrid power generation, WITRICITY, Solar vanes, Principle of induction,
Special axial machines, Grid interfacing.
INTRODUCTION

Now-a-days we see many different types of power generation methods such as thermal, nuclear, wind, biogas and solar.
Among all of these, thermal power is the largest source of power in India; it produces about 75% of the total power, i.e. around
89,000 MW from 102 thermal power stations. Nuclear power produces around 4,560 MW from 9 centres, and in wind power India
stands as the 5th largest producer in the world, supplying around 9,853 MW from 24 wind-farm sites. The solar energy sector,
however, is still lagging in India when compared to other countries, except in Gujarat. Other plants such as hydro, diesel and gas
produce 5,000 to 9,000 MW of power every year. Even though India produces this much power every year, it is not sufficient to
meet the demand of the people. In addition, the contribution of thermal power is set to reduce because of the depletion of coal in
nature. Thus there is an immediate need to focus on renewable sources to secure a better future for the coming generations. The
WINLAR concept is a renewable system which can harness both wind and solar energy at the same time to serve this purpose.

1. WINLAR:
This is a purely combinational idea which has not been implemented so far. The name WINLAR itself suggests a
combination of WINd and soLAR energy in a single system.


1. (A) WINLAR
This is quite a different form of hybrid system. Here we use both solar modules and a wind system to form a single unit, called a multi-unit, which will be described briefly later. Normally the hybrid systems seen in practice are simply combinations of separate solar and wind
installations.

1.1 Aim of WINLAR:

The main aim of WINLAR is to have a multi-unit, unlike the conventional arrangement shown above. This system has a single set-up for
capturing both wind and solar energy at the same time, so it can be regarded as an advanced version of the hybrid system.

1.2 WINLAR Description:

Here the vanes of the system are replaced by solar plates, which means the solar modules need to be designed in the form of
vanes of the required length. In this respect the system design changes from that of a normal hybrid system to WINLAR.
The main components in the design of the WINLAR system are:
1. Solar panels
2. Solar vanes
3. Gear box
4. Generator
5. Transformer
6. Inverter
7. Tower
8. Controller

Finally, we need an overview of the conversion techniques we want to introduce.

1.2.1 Solar panels:

Sunlight is the most energetic form of radiant light and heat from the sun, and it is harnessed using a range of ever-evolving technologies
such as solar heating, solar photovoltaics, solar thermal electricity, solar architecture and artificial photosynthesis.
Here we use a mono-crystalline, active solar panel which consists of photovoltaic cells.


1.2.2 Solar vanes:

1.2.2 (A) Ordinary wind vane

The main function of the vanes is to convert the kinetic energy of the wind into electrical energy. Here the vanes are replaced with a specially
designed solar module, or with a group of solar plates placed on light-weight vanes. We consider the vane structure to be
triangular, i.e. pyramidal, as shown.

1.2.2 (B) Formatted pyramidal wind vane

The length of these solar-plated vanes is assumed to be about 7 to 10 m, giving a rotor diameter of 16 to 22 m. The solar
plates are mounted on vanes made of light fibre. The pyramidal vane shown above is used as a normal vane with
a hub which is fixed to the rotor shaft. This WINLAR has three pyramidal vanes fixed to the hub.

1.2.2 (C) Cross-sectional view of the assumed structure of the wind vane

By using these types of vanes, the WINLAR system gains a few advantages:
- It reduces the angle of rotation with respect to the wind direction.
- The same cross-sectional area is presented to the wind.
- It helps to attain maximum lift and minimum drag, which increases the efficiency.
- It maximizes the harnessing of solar energy, since as the vane rotates the light can always be incident on it at 90 degrees, or close to it.

One more idea for enhancing the solar vane is to use an already developed solar-panel technology, the V3 Solar Spin Cell.

1.2.3 V3 Solar Spin Cells:

A V3 Solar panel is a type of solar cell which can generate more than 20 times the electricity of a normal flat-plate solar
panel of the same area. The panel is a combination of concentrating lenses, dynamic spin and advanced electronics.
These V3 panels are normally conical in shape, as shown in the figure.


1.2.3 (A) V3 Solar Cell

Construction:
The V3 Spin Cell features two cones: one made up of hundreds of triangular PV cells, and a static, hermetically sealed
outer lens concentrator comprising a series of interlocking rings and a number of tubular lenses spaced equally around the
outer surface. According to V3 Solar, the Spin Cell's cone has been set at an angle of 56 percent to enable capture of the
sun's light at more angles than flat PV panels.

Working:
The first layer carries the lenses, which focus the light onto the inner layer made of thousands of small solar cells. Because the
inner cone rotates, the concentrated light falls continuously on moving cells, which avoids overheating.

1.2.4. Inverter:
The inverter is used to convert DC to AC and is connected to a transformer for stepping up the voltage.

Specifications:

Size: 600 x 300 x 700 mm
Weight: 50-68 kg
Capacity: 5 to 15 kW

1.2.4. (A) Inverter

2. Conversion Techniques:

As we are using a solar-plated module, there is a question of how to transfer the current generated by the rotating solar-plated
vanes to the stationary side. For this we have considered two methods:
I. WITRICITY
II. Induction machine concept

2.1 WITRICITY:
WITRICITY refers to the transmission of power through the air. In this concept we use two modules, designated as a
transmitter and a receiver. In our implementation we use WITRICITY over a distance of less than 200 cm, so it can
serve the purpose.


2.1. (A) WITRICITY Transmission Type

2.2 Induction machine method:

2.2 (A) Soldered spoke

In this method we use a coil which is fixed as shown in the figure. The wires of the solar-plated vanes coming out of the hub
are soldered to iron spokes, which are good conductors of electricity. These spokes have balls arranged at their tips which rest
on the coil.


2.2. (B) Induction machine concept


Of the two methods described above, the WITRICITY method is preferable, because the induction concept brings several
problems, such as higher maintenance: the ball bearings need to be replaced from time to time.

3. Working of WINLAR:

3. (A) Block Diagram of WINLAR



The main function of this WINLAR prototype is to collect the solar energy and the wind energy, and also the heat energy.
The wind vanes collect the kinetic energy while, at the same time, the solar-plated vanes absorb the light energy. These vanes
can absorb the maximum light energy because of their triangular construction, so that the light is incident on them at a
favourable angle (around 135 degrees) in most cases.
At the transformer, the electrical energy from wind and solar is combined, stepped up and transmitted to the bottom
of the tower, and from there to loads or to the grid.
3.1 Using over the Grid:
Here we consider a DC motor as a backup. This DC motor is connected in parallel with electromagnets which can produce a
high magnetic field, which in turn produces a large generated voltage; using the DC motor helps us attain continuous
reliability of power generation.
3.2 Direct transmission of the wind energy and solar energy:
In this method we transfer the complete solar power through a DC-to-AC converter and then add this output to the power
developed from the wind energy conversion. The total output is summed, given to a transformer to step up, and then sent over
high or ultra-high voltage transmission lines to the grid.

4. Enhancements we think:

1) Applying the concept of phototropism to the tracking system would increase the efficiency.
2) Introducing the special axial machines concept to the wind energy conversion, driven by the harnessed solar energy,
would allow the wind power generators to connect to the conventional grid.
3) Dynamic selection of the energy source with the help of power electronic devices and controllers, so that no opportunity
to increase efficiency is left unused.

5. Advantages of WINLAR prototype:

- Efficient usage of wind energy and solar energy at the same time in a single prototype, with continuous reliability.
- Reliable for supplying current to a village.
- Well suited to installation at sea coasts and in villages for effective use.
- Increase in the efficiency of the non-conventional method of power generation.
- Higher output from a similar or equal area, which in turn reduces the per-unit rate.
- The power factor can be improved.
- Reduces the burden on the conventional methods of generation.
- Though installation may cost more initially, good outputs and good profits can be attained in the future.
- Low maintenance cost.

6. Solar power and Cost Calculations:

Consider the vane length to be 131 ft, i.e. 40 m, with a width that varies along the vane. Divide the vane into three equal parts and
calculate the area of each; the total area is area1 + area2 + area3 = 384 m² (taking the total as 384 m² to allow for some unevenness).
We use solar modules rated at 320 W, 40 V and 8 A, of size 1.66 x 1.51 x 0.04 m; considering length and width, the area taken for
each module is 2.192 m². The number of panels required to cover one face of a vane is 384 / 2.192, i.e. about 175
modules; allowing for the taper of the vane, assume around 150 modules.

6 (A) Area Divisions

For the three faces of each vane and the three blades, the number of modules required is 150 x 3 x 3 = 1350 panels.
Taking the practical efficiency of the solar panels to be around 44.4%, the total power generated by the panels at this
efficiency is taken as about 125 kW = 0.125 MW.
The cost of each solar module of the above specification is about $214; we require 1350 panels, so the total cost is about 1 crore rupees.
The government gives a subsidy of 30% to 50%, so we obtain it at roughly half of this cost.
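To make the arithmetic above easy to check, the short Python sketch below reproduces the panel-count, power and cost estimate using the figures quoted in this section (module rating, module area, panels per face, unit price and subsidy are all taken as the stated assumptions). Note that multiplying the rated module power by the quoted 44.4% efficiency gives roughly 190 kW, somewhat above the 125 kW stated above, so the quoted power figure presumably includes additional derating.

```python
# Rough WINLAR panel-count, power and cost estimate (sketch only;
# every figure below is an assumption quoted in the text above).

MODULE_POWER_W = 320        # rated power per module (W)
MODULES_PER_FACE = 150      # reduced from ~175 to allow for the vane taper
FACES_PER_VANE = 3
VANES = 3
EFFICIENCY = 0.444          # practical panel efficiency assumed in the text
PRICE_PER_MODULE_USD = 214
SUBSIDY = 0.5               # 30-50% government subsidy; take the upper bound

modules_needed = MODULES_PER_FACE * FACES_PER_VANE * VANES          # 1350 modules
rated_power_kw = modules_needed * MODULE_POWER_W / 1000.0           # 432 kW rated
practical_power_kw = rated_power_kw * EFFICIENCY                    # ~192 kW at 44.4%
gross_cost_usd = modules_needed * PRICE_PER_MODULE_USD              # ~289,000 USD
net_cost_usd = gross_cost_usd * (1 - SUBSIDY)                       # after subsidy

print(f"modules: {modules_needed}")
print(f"practical power: {practical_power_kw:.0f} kW")
print(f"cost before/after subsidy: {gross_cost_usd:,.0f} / {net_cost_usd:,.0f} USD")
```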

7. Conclusion:
Optimal usage of renewable energy sources is an effective way of reducing power scarcity and of improving the utilization of
natural resources. We have therefore tried to make the best of the two natural resources, wind and solar, by harnessing them in a
unified model which could gain prominence in the near future within the power system architecture; we should always remember
that a minor change can result in an enormous gain in output power, especially in power systems (for example, a surge diverter
protects a transmission line from a huge transient lightning stroke, and a ground wire protects it from indirect lightning strokes).
WINLAR utilizes both solar and wind harnessing systems, which reduces the capital investment and brings an appreciable
reduction in the charge per unit when compared with individual systems. Extensive research on WINLAR would also result in a
more practical and efficient renewable energy harnessing system, with plenty of scope for crossing the limitations that a renewable
system has. It may also reduce our dependency on conventional power production, which would relieve the pressure on coal and
water systems, so that we can conserve our natural resources for future generations.


Design And Analysis of Pressure Disc Type Filter


Mr. Rutuparna R. Deoghare, Prof. Shailesh Dhomne, Prof. Atul S. Shriwaskar
Department of Mechanical Engineering, DBACER, Nagpur, India
rutuparnadeoghare@gmail.com

Abstract: The filters presently used in the beverage-making industry are very bulky and give a low outlet discharge; hence
they are not very efficient. There is therefore a need to design a compact, automated unit that produces completely clear liquor and
has a large outlet discharge. This concept highlights the design of a new filter which fulfils the filtration requirements of the
beverage-making industry. To make filtration more feasible, a unit is to be designed in which multiple discs comprising blades are
mounted on a shaft. The discs are arranged along the shaft, and the number of discs decides the capacity of the
filter. A special arrangement of two cake-discharge blades (scraper removers), suspended from a frame mounted on the tank,
serves to deflect and guide the cake to the discharge tube. On large-diameter filters, the blades are of the swing type that floats to
maintain the cake-to-disc clearance and so allow for the wobble of the turning discs.

Keywords - Outlet discharge, Filter, Filtration, Scraper remover, Multiple disc, Discharge tube.
Introduction
Disc filters are used in heavy-duty applications such as the filtration of beverages and the dewatering of aluminium hydrate, pyrite
flotation concentrates, copper concentrate and other beneficiation products. The filter consists of several rotary discs, each made up
of sectors which are clamped together to form the disc. Compared with other vacuum filters, the floor space required for the
disc filter is minimal and the cost per m² of filtration area is the lowest.
During operation the clear liquor collects in the hollow tank and a cake (pulp) is formed on the surface of the discs. The disc then
moves on to the drying zone, where the liquid is drawn off to a central barrel and from there passes through a regulator to the vacuum receiver.
The regulator, with its bridge setting, controls the timing so that once a sector leaves the drying zone it moves over a separating
bridge and a snap or low-pressure blow is applied to discharge the cake. A scraper remover on each side of a disc removes the pulp.
Scraper removers are positioned between adjacent discs and are wide enough to avoid being clogged by the falling cake.
A paddle-type agitator located at the bottom of the tank keeps the slurry in suspension, since in most metallurgical applications
the slurry contains solids of high specific gravity which are fast settling and abrasive.
1.1 Paper Bag: Paper bags are available in two forms. The standard bags are made of a two-ply material: the inner lining arrests
particles while air passes through the outer cellulose layer. The paper bag collects the bulk of the
particles with a filtration efficiency of 99.7% at 3 microns; particles smaller than 3 microns pass through the paper bag towards
the next filter. For dust-free disposal, the container can be lined with a disposable polyliner. The electrostatic paper bag offers finer
filtration: it has an electrostatically charged inner lining of melt-blown polypropylene. The inner lining
attracts even the finest particles, enabling the bag to be used for materials such as toner. This paper bag collects particles
with an efficiency of 97.8% at 1.5 microns.
1.2 Main Filter: Nilfisk and CFM filters are massive by design to provide maximum surface area for filtration. The extra-large
filtering surface, combined with the vacuum's powerful suction, maintains a steady airflow, prolonging filter life and ensuring optimum
vacuum performance.
1.3 Cartridge Filter: These filters are available in large, continuous-duty CFM vacuums; the cartridge filter retains 99.7% of particles
down to 0.3 microns. It is best for the collection of ultra-fine dusts: this "non-stick" filter collects the dust on its surface,
eliminating clogging. Dust is easily cleaned from this filter medium, which is available for dry collection only. The filter also
features a Teflon coating for sticky dusts.

2. Literature Review

The following journal papers helped in understanding the topic clearly and in framing a strategy for solving the given problem.
The following terminology is referred from these papers.

To produce good-quality clarified juice, the enzyme liquefaction treatment carried out before membrane filtration has the advantage
of not only lowering the juice viscosity but also of reducing the SS content. Insoluble solids can then be re-concentrated by
microfiltration until the concentration is the same as the original juice. The clarified juice can thus be extracted without lowering the
retentate's economic value. This methodology, if it were applied in a plant producing pulpy juice, would not generate waste or
byproducts and would diversify the range of products being offered. The costs of producing clarified juice would also be highly
competitive, compared with other established processes and would have a higher production yield. For tropical fruit juice industries, it
represents a real alternative method to diversify production and increase market share. This methodology also allows fully continuous
processing that can be easily integrated into the normal processing line and can also be automated. Indeed, in the trials, permeate flows
seem to remain almost constant, not showing the classical decrease observed during microfiltration done in concentration mode.[1]
Flavored coffee filters can be impregnated with an essential oil and placed inside a conventional coffee maker, with the coffee
then added to the filter. The filter permits the brewing water to pass through the coffee and the filter without obstruction while
imparting the desired flavor of the essential oil to the brewing water. [3]
Hollow-fiber ultrafiltration was successfully applied to obtain a clear, amber-colored pear juice. The flux reached a maximum at an
average pressure of 157 kPa with an average feed stream velocity of 0.15 m/s at a temperature of 50 degrees; a high flux was
obtained at high temperature, within the temperature limitation of the membrane. [4]
A method of filtering beverages and other liquids. To avoid the considerable ecological problems encountered with the filter aids of known
procedures, which must be thrown away, the filtering-active structure of the inventive filter aids is maintained so that they may be
reused as often as required. A mixture of filter aids of varying morphological and physical components is used, and it comprises a
minimum of two components, namely one component of specifically heavy, chemically stable metal and/or metal oxide and/or carbon
particles of fibrous and/or granular structure, and a further component, for building up the filter cake and increasing its volume, of
synthetic and/or cellulose fibers having a fiber length of 1 to 5000 µm and a fiber thickness of 0.5 to 100 µm. To increase the filtering
efficiency of the filter cake of the aforementioned components, a further component may be added that comprises fibrillated or fanned-out
synthetic and/or cellulose fibers, preferably having a fiber length of 500 to 5000 µm and a fiber thickness of 0.5 to 20 µm. The
components are intensively mixed to form a homogeneous mixture and are dosed into the liquid that is to be filtered. [7]
Kiwifruit is a nutritionally rich fruit with a high ascorbic acid content (193 mg/100 g), but the extraction of its juice is difficult due to
the slimy pulp. To overcome this problem a combination of enzymes (including amylase and a mash enzyme) was used, thus facilitating the
extraction of juice. Though kiwifruit juice extraction is difficult, the juice can be extracted by pressing through a hydraulic press. The yield is
substantially increased by treating the pulp with enzyme combinations before pressing, and the extracted juice can be clarified, reducing total
phenols (which may be responsible for astringency), by passing the juice through a filter press. [5]
The production of guava juice fortified with soluble dietary fiber, in the form of pectin extracted from guava cake (peel, pulp, seeds), was
conducted. The waste guava cake from a juice processing plant was used for pectin extraction using the sodium hexametaphosphate method,
followed by pectin precipitation using the acidified ethanol method. A yield of 30.50 ± 0.34% crude pectin was achieved. [6]

3. Existing Filtration System

A compact band-type filter (Figure 1) works mainly with a filter-paper bundle which cannot be reused. It gives a very low
outlet discharge, i.e. up to 13,500 litres/day. The space required for the complete filtration system is large, i.e. 8 x 8 x 7 m. Due to the low
outlet discharge the production rate is low. The pulp generated after the filtration process needs to be removed manually, which is time
consuming.

Figure 1. Compact Band Type Filter

4. Objectives

- Design of an automated filter system which gives a high outlet discharge.
- Design of a scraper remover for removing pulp.
- To improve the production rate up to 100 LPM.
- To reduce the floor space area.


5. Proposed Design

Figure 2. Disc Type Filter

Figure 3. Cross Sectional View


The filter consists of the following subassemblies:
- Discs and sectors, which may be made of injection-molded polypropylene, metal, or special redwood/sheet metal.
- A center barrel, supported by the main bearings, consisting of piped or trapezoidal filtrate passages. The sectors are
attached to the barrel through "o"-ring sealed connections, in a number equal to the number of disc sectors.
- An agitator with paddles that are positioned between the discs and far enough away not to interfere with the forming cake.
- A tank which, on its discharge side, has separate slurry compartments for the discs and discharge chutes for the blown-off
cake. When the solids are of an abrasive nature it is advisable to line the bottom portion of the tank that cradles the agitator with
rubber.
- Two cake-discharge blades on both sides of each disc, suspended from a frame mounted on the tank, which serve to deflect
and guide the cake to the discharge chutes. On large-diameter filters the blades are of the swing type that floats to maintain the
cake-to-disc clearance and so allow for the wobble of the turning discs.
- An overflow trough that spans the entire tank length and ensures full submergence of the sectors in the cake-formation
zone, since an exposed sector in the 6 o'clock position will cause immediate loss of vacuum.

6. Material Selection

6.1 Shaft and Blade

SS-310 (Stainless steel)
CHEMICAL COMPOSITION
Carbon      - 0.25% max
Chromium    - 24-26%
Iron        - Balanced
Manganese   - 2% max
Nickel      - 19-22%
Phosphorus  - 0.45% max
Silicon     - 1.5% max
Sulphur     - 0.3% max

6.2 Filter cloth

Polypropylene (E = 1.5-2 GPa and Sut = 28-36 MPa)
[1] Light in weight.
[2] High strength.
[3] High resistance to most acids and alkalis.
[4] Heat resistance up to about 190 °F.
7. Design Calculation

M = mass of cylinder (m1) + mass of pulp (m2) + mass of fluid in cylinder (m3) + mass of discs (m4).
Ri = inner radius of cylinder.
(i) Mass of cylinder: m1 = 980 kg.
(The previous cylinder volume was 400 litres. For a volume > 1000 litres, by trial and error: Ri = 0.4 m and L = 2.480 m,
giving a cylinder volume V = π·Ri²·L = 1.25 m³ = 1250 litres.)
(ii) Mass of pulp (m2):
The operating fluid (a mixture of fruit and fruit pulp) carries about 200 g of pulp per litre, so the mass of pulp on the cylinder is
m2 = 1250 × 0.2 kg = 250 kg.
(iii) Mass of fluid in the cylinder:
m3 = volume × density = 1.250 × 1150 = 1437.5 kg.
(iv) Mass of discs:
m4 = 1.9 × 12 × 12 = 273.6 kg.
Hence,
M = m1 + m2 + m3 + m4 = 980 + 250 + 1437.5 + 273.6
M = 2941.1 kg
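The following minimal Python sketch simply re-evaluates the mass and volume bookkeeping above, using the paper's own figures, so the totals can be verified at a glance.

```python
import math

# Sketch of the mass and volume bookkeeping above (values are the paper's assumptions).
Ri, L = 0.4, 2.480                 # inner radius (m) and length (m) found by trial and error
volume_m3 = math.pi * Ri**2 * L    # ~1.25 m^3 -> about 1250 litres

m1 = 980.0                          # cylinder mass (kg)
m2 = 1250 * 0.2                     # pulp: 200 g per litre over 1250 litres -> 250 kg
m3 = 1.250 * 1150                   # fluid: volume (m^3) x density (kg/m^3) -> 1437.5 kg
m4 = 1.9 * 12 * 12                  # discs -> 273.6 kg

M = m1 + m2 + m3 + m4               # total mass, ~2941.1 kg
print(round(volume_m3, 3), M)
```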


7.1 Loading conditions on the central hollow tank:
ID: 800 mm
Length: 2480 mm
Mass inside the cylinder: M = density × volume
Density of liquid used: 1150 kg/m³
M = 1150 × 1.25 = 1437.5 kg
Since the tank is filled only partially (50%); before the level rises above 50%, the transfer pump is activated.
Therefore the mass to be considered is 1437.5 / 2 = 718.7 kg.
(This mass is the load inside the tank.)
Force = mass × g = 718.7 kg × 9.81 m/s² = 7050.9 N

Area of the cylindrical surface:
Area (A) = π × Do × length of cylinder
A = 3.14 × 0.84 × 2.480 = 6.54 m²

Pressure = F / A = 7050.9 / 6.54 = 1078.12 N/m² (Pa).
The internal pressure also produces stresses due to the liquid inside the tank (circumferential stresses).
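As a quick check of the loading estimate above, the sketch below recomputes the liquid mass, the resulting force and the pressure over the lateral surface from the stated figures (50% fill, 1150 kg/m³ liquid, 0.84 m outer diameter, 2.480 m length).

```python
import math

# Sketch of the tank loading estimate above (paper's figures; g = 9.81 m/s^2).
liquid_density = 1150.0            # kg/m^3
tank_volume_m3 = 1.25
fill_fraction = 0.5                # transfer pump keeps the level at or below 50%
Do, length = 0.84, 2.480           # outer diameter and length of the tank (m)

mass_kg = liquid_density * tank_volume_m3 * fill_fraction    # ~718.75 kg
force_n = mass_kg * 9.81                                      # ~7051 N
area_m2 = math.pi * Do * length                               # ~6.54 m^2 (lateral surface)
pressure_pa = force_n / area_m2                                # ~1.08 kPa

print(round(force_n, 1), round(area_m2, 2), round(pressure_pa, 1))
```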
7.2 Rotating effect on the hollow tank

The diagrams show the dependence of the torque τ on the angle θ. Maximum torque occurs when the component of F at right
angles to r is maximum, i.e. when θ = 90°.
The central figure shows the tangential component of F, which is simply F sin θ.
The governing equation is
τ = r F sin θ.
It can be read in two different ways, as shown in the diagrams:
τ = r (F sin θ) or τ = F (r sin θ).
We can think of it as r times the tangential component of F (left sketch and equation) or as F times the shortest distance (r sin θ)
between the axis and the line along which F acts (right sketch and equation).
Total weight: M = m1 + m2 + m3 + m4 = 980 + 250 + 1437.5 + 273.6 = 2941.1 kg
Total load of the rotating components: 2941.1 × 9.81 = 28852 N
r = 879 mm = 0.879 m (value taken from the blade geometry)
Initial torque (θ = 0): T = 0
Torque (T) when θ = 45°:
Ft = F sin θ = 28852 × sin 45° = 20401.4 N
Torque (T) = Ft × r = 17932.83 N·m
Torque (T) when θ = 90° (π/2):
Ft = F sin θ = 28852 × sin 90° = 28852 N
T = Ft × r = 25360 N·m
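The torque values quoted above follow directly from τ = r·F·sin θ; the sketch below evaluates that expression for θ = 0°, 45° and 90° with the stated load and radius.

```python
import math

# Sketch of the torque-vs-angle values quoted above (tau = r * F * sin(theta)).
F = 28852.0          # total load of the rotating components (N)
r = 0.879            # effective radius from the blade geometry (m)

for theta_deg in (0, 45, 90):
    torque = r * F * math.sin(math.radians(theta_deg))
    print(f"theta = {theta_deg:>2} deg  ->  T = {torque:8.1f} N*m")
# Expected: 0, ~17933 and ~25361 N*m, matching the figures above.
```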
7.3 Stress Calculations:
For the cylinder shaft material SS 310:
Yield tensile strength (σyt) = 292 MPa; yield shear strength (τys) = 146 MPa.
Selecting a factor of safety of 4 (1.25 to 4 for ductile material):
τ permissible = 146 / 4 = 36.5 MPa.
τ = T·r / J
Tmax = τmax · J / R
where
Tmax = maximum twisting moment (N·m)
τmax = maximum shear stress (MPa, psi)
R = radius of shaft (m)
J = polar moment of inertia (m⁴)
The polar moment of inertia of a circular hollow shaft can be expressed as
J = π (Do⁴ - Di⁴) / 32
where Di = shaft inside diameter (m).
Tmax = (π / 16) · τmax · Do³ · (1 - K⁴), where K = Di/Do < 1.
This gives τmax = 0.00122 MPa.
Since τmax < τ permissible, the design is safe.
7.4 Torsional deflection of the shaft
The angular deflection of a torsion shaft can be expressed as
θ = L·T / (J·G)
where
θ = angular shaft deflection (radians)
L = length of shaft (m)
G = modulus of rigidity (MPa)
The angular deflection of a hollow torsion shaft can therefore be expressed as
θ = 32·L·T / (G·π·(Do⁴ - Di⁴))
With G = 79 GPa = 79000 MPa (modulus of rigidity of steel), we get an angular deflection
θ = 0.003 × 10⁻³ radians < 0.015 × 10⁻³ radians.
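The hollow-shaft torsion formulas used in sections 7.3 and 7.4 are collected in the small Python helper below. The diameters passed to the example call are assumptions taken from the tank dimensions (0.84 m / 0.80 m), and the torque is the 25,360 N·m value from section 7.2, so the printed numbers are only indicative and depend on the actual shaft dimensions and unit conventions; they are not intended to reproduce the exact figures quoted above.

```python
import math

# Hollow-shaft torsion helpers (sketch only; the example inputs below are
# assumptions taken from the tank dimensions, not values confirmed by the paper).

def polar_moment(do: float, di: float) -> float:
    """J = pi * (Do^4 - Di^4) / 32 for a hollow circular shaft (m^4)."""
    return math.pi * (do**4 - di**4) / 32.0

def max_shear_stress(torque: float, do: float, di: float) -> float:
    """tau_max = T * (Do/2) / J  (Pa, with torque in N*m and diameters in m)."""
    return torque * (do / 2.0) / polar_moment(do, di)

def angular_deflection(torque: float, length: float, do: float, di: float, g: float) -> float:
    """theta = T * L / (J * G) in radians."""
    return torque * length / (polar_moment(do, di) * g)

T = 25360.0                      # maximum torque from section 7.2 (N*m)
Do, Di, L = 0.84, 0.80, 2.480    # assumed outer/inner diameter and length (m)
G = 79e9                         # modulus of rigidity of steel (Pa)

print(max_shear_stress(T, Do, Di) / 1e6, "MPa")
print(angular_deflection(T, L, Do, Di, G), "rad")
```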


8. Modeling of Pressure Disc Filter

Figure 4. (a) Cut section of cylinder (hollow shaft)

Figure 4. (b) Cylinder (hollow shaft)

Figure 5 (a) Blade Fitting

Figure 5 (b) Blade Fitting



Figure 6.Scraper (Cake) Remover

Figure 7. Glass Panel

Figure 7. (a) Assembly

Figure.7.(b) Assembly


9. Analysis

Figure.8 Meshing

Figure. 9 Loading condition

Figure.10 Deformation


Figure.11 Stress

10. Conclusion

It has been seen that the disc-type concept for filtration is feasible and can be widely used, mainly because its filtration medium surface
is larger than that of the existing system, in which the medium is oriented horizontally. The special arrangement of cake-discharge blades
(scraper removers) on both sides of each disc guides the cake to the discharge chutes, reduces human effort, and helps to
improve the production and efficiency of the system.

REFERENCES:
[1] F. Vaillant, A. Millan, M. Dornier, M. Decloux, "Strategy for economical optimization of the clarification of pulpy fruit juices using cross flow microfiltration," Journal of Food Engineering, vol. 48, 2001.
[2] David C. Kilpatrick, "Purification and Some Properties of a Lectin from the Fruit Juice of the Tomato (Lycopersicon esculentum)," Biochem. J., vol. 185, pp. 269-272, 1980.
[3] P. P. Vaidyanathan, "Multirate Digital Filters, Filter Banks, Polyphase Networks, and Applications," IEEE.
[4] D. E. Kirk, M. W. Montgomery, M. G. Kortekaas, "Clarification of Pear Juice by Hollow Fiber Ultrafiltration," Journal of Food Science, vol. 48.
[5] Devina Vaidya, Manoj Vaidya, Surabhi Sharma and Ghanshayam, "Enzymatic Treatment for Juice Extraction and Preparation and Preliminary Evaluation of Kiwifruit Wine," Natural Product Radiance, vol. 8(4), pp. 380-385, 2009.
[6] Thongsombat, W., Sirichote, A. and Chanthachum, S., "The Production of Guava Juice Fortified with Dietary Fiber," Songklanakarin J. Sci. Technol., vol. 29 (Suppl. 1), pp. 187-196, March 2007.
[7] Ben Aim R., Shanoun A., Visvanathan C., and Vigneswaran S., "New filtration media and their use in water treatment," Proceedings, World Filtration Congress, Nagoya, pp. 273-276, 1993.


Security in Cloud Computing


Mr. Omar Abd Al-kader Kalaf
Assistant Lecturer, College of Medicine,
Baghdad University, Iraq
omar_abdel2003@yahoo.com

Abstract: Cloud computing is a major buzzword in the industry. It is a paradigm in which resources can be leveraged on a per-use
basis, thereby reducing the cost and complexity for service providers. Cloud computing promises to cut operational and capital
costs and, more importantly, to let IT departments focus on strategic projects instead of keeping datacenters running; everything is
delivered over the Internet. There are several consequences of this arrangement. For instance, the cloud provider takes on
responsibilities that used to belong to the customer: the provider takes care of the servers and carries out software updates, and
under the contract the user pays only for the service actually consumed. Confidentiality, Integrity, Availability, Authenticity, and
Privacy are essential concerns for both cloud providers and consumers alike. Infrastructure as a Service (IaaS) serves as the
foundation layer for the other delivery models, and a lack of security in this layer will strongly affect the other delivery models,
i.e., PaaS and SaaS, that are built upon the IaaS layer. This paper presents a study of the security of IaaS components and
determines vulnerabilities and countermeasures.

Keywords: Cloud computing, cloud computing security, Service Level Agreement (SLA), Software as a Service (SaaS).


INTRODUCTION:

Clouds are massive pools of easily usable and accessible virtualized resources; these resources may be dynamically
reconfigured to adjust to a variable load (scale), permitting optimum resource utilization. It is a pay-per-use model in which the
infrastructure provider, by means of customized Service Level Agreements (SLAs), offers guarantees, typically exploiting a pool of
resources. Organizations and individuals can benefit from mass computing and storage centers provided by large corporations
with stable and robust cloud architectures. Cloud computing incorporates virtualization, on-demand deployment, Internet delivery of
services, and open-source software. From one perspective, cloud computing is nothing new, because it uses approaches,
concepts, and best practices that have already been established. From another perspective, everything is new, because cloud
computing changes how we invent, develop, deploy, scale, update, maintain, and pay for applications and the
infrastructure on which they run. Cloud computing is a technology that uses the Internet and central remote servers to maintain data and
applications. It allows consumers and businesses to use applications without installation and to access their personal files
from any computer with Internet access. This technology allows much more efficient computing by centralizing storage, memory,
processing and bandwidth. [1]
The new concept of cloud computing offers dynamically scalable resources provisioned as a service over the Internet and thus
promises plenty of economic benefits to be distributed among its adopters. Depending on the type of resources provided by the cloud,
distinct layers can be defined (see Figure 1). The bottom-most layer provides basic infrastructure components such as CPUs, memory, and
storage, and is commonly denoted as Infrastructure-as-a-Service (IaaS). Amazon's Elastic Compute Cloud (EC2) is a
prominent example of an IaaS offering. On top of IaaS, more platform-oriented services allow the use of hosting
environments tailored to a specific need. Google App Engine is an example of a web platform as a service
(PaaS) that enables deploying and dynamically scaling Python and Java based web applications. Finally, the top-most layer
provides its users with ready-to-use applications, also referred to as Software-as-a-Service (SaaS). To access these cloud services, two main technologies can currently be identified: web services are
commonly used to provide access to IaaS services, and web browsers are used to access SaaS applications. In PaaS environments
both approaches can be found. [1]
Cloud computing is a term used to describe both a platform and a type of application. A cloud computing platform
dynamically provisions, configures, reconfigures, and deprovisions servers as needed. Servers in the cloud can be physical
machines or virtual machines. Advanced clouds typically include other computing resources such as storage area networks (SANs),
network equipment, firewalls and other security devices. [2]
Cloud computing also describes applications that are extended to be accessible through the Internet. These cloud
applications use large data centers and powerful servers that host web applications and web services. Anyone with a suitable
Internet connection and a standard browser can access a cloud application. [2]
Throughout this guidance we make extensive recommendations on reducing your risk when adopting cloud
computing, but not all the recommendations are necessary, or even realistic, for all cloud deployments. As we
compiled information from the various working groups during the editorial process, we quickly realized
there simply was not enough space to provide fully nuanced recommendations for all possible risk scenarios. Just as a critical
application might be too important to move to a public cloud provider, there might be little or no reason to apply extensive
security controls to low-value data migrating to cloud-based storage. [11]

LITERATURE REVIEW:
[1] In this paper, the authors presented a selection of issues of cloud computing security. They investigated ongoing issues with the
application of XML Signature and the Web Services security frameworks (attacking the cloud computing system itself),
discussed the importance and capabilities of browser security in the cloud computing context (SaaS), raised concerns about
cloud service integrity and binding issues (PaaS), and sketched the threat of flooding attacks on cloud systems (IaaS). As they
showed, the threats to cloud computing security are numerous, and each of them requires an in-depth analysis of its
potential impact and relevance to real-world cloud computing scenarios. As can be derived from their observations, a first good
starting point for improving cloud computing security consists in strengthening the security capabilities of both web browsers
and web service frameworks, ideally integrating the latter into the first. Thus, as part of their ongoing work, they continue to
harden the foundations of cloud computing security laid by the underlying tools, specifications, and protocols
employed in the cloud computing scenario.
[2] In today's global competitive market, companies must innovate and get the most from their resources to succeed. This requires
enabling their employees, business partners, and users with the platforms and collaboration tools that promote innovation. Cloud
computing infrastructures are next-generation platforms that can provide tremendous value to companies of any size. They can
help companies achieve more efficient use of their IT hardware and software investments and provide a means to accelerate the
adoption of innovations. Cloud computing increases profitability by improving resource utilization; costs are driven down by
delivering appropriate resources only for the time those resources are needed. Cloud computing has enabled teams and
organizations to streamline lengthy procurement processes. It enables innovation by alleviating the need of innovators to find
resources to develop, test, and make their innovations available to the user community; innovators are free to focus on the
innovation rather than the logistics of finding and managing the resources that enable it. Combining cloud computing with the
IBM Innovation Factory provides an end-to-end collaboration environment that could transform organizations into innovation
power houses.
[3] To support the quality-of-service guarantee from the service provider side, complex web services have to be contracted
through service level agreements. The state of the art in web services and web service composition provides a number of models
for describing quality of service for web services and their compositions, languages for specifying service level agreements in the
web service context, and techniques for service level agreement negotiation and monitoring. However, there is no framework
for service level agreement composition and composition monitoring, and the existing design methodologies for web services do
not address the issue of secure workflow development. The cited research proposal aims to develop concepts and mechanisms
for service level agreement composition and composition monitoring. A methodology that allows a business process designer to
derive the skeleton of the concrete secure business processes from the early requirements analysis would be beneficial.
[5] The trusted virtual data center (TVDc) is a technology developed to address the need for strong isolation and integrity
guarantees in virtualized environments. In this paper, the authors extend previous work on the TVDc by implementing controlled
access to networked storage based on security labels and by implementing management prototypes that demonstrate the
enforcement of isolation constraints and integrity checking. In addition, they extend the management paradigm for the TVDc with
a hierarchical administration model based on trusted virtual domains and describe the challenges for future research.
[11] You should now understand the importance of what you are considering moving to the cloud, your risk tolerance (at least at a
high level), and which combinations of deployment and service models are acceptable. You will also have a rough idea of the
potential exposure points for sensitive information and operations; together these should give you sufficient context to evaluate
any other security controls in this guidance. For low-value assets you do not need the same level of security controls and can skip
many of the recommendations, such as on-site inspections, discoverability, and complex encryption schemes. A high-value
regulated asset might entail audit and data-retention requirements; for another high-value asset not subject to regulatory
restrictions, you might focus more on technical security controls. Because of limited space, as well as the depth and breadth of
material to cover, the document contains extensive lists of security recommendations. Not all cloud deployments need every
possible security and risk control; spending a little time up front evaluating your risk tolerance and potential exposures will
provide the context you need to pick and choose the best options for your organization and deployment.


SERVICES IN CLOUD COMPUTING:


A. Infrastructure-as-a-Service: Infrastructure-as-a-Service is a provision model in which an organization outsources the equipment used
to support operations, including storage, hardware, servers and networking components. The service provider owns the equipment
and is responsible for housing, running and maintaining it; the customer typically pays on a per-use basis.
Characteristics and components of IaaS include:
1. Utility computing service and billing model.
2. Dynamic scaling.
3. Desktop virtualization.
4. Policy-based services.
5. Internet connectivity.
6. Automation of administrative tasks.
A service like Amazon Web Services offers the customer on-demand virtual server instances with unique IP addresses and blocks
of storage; customers can start, stop and configure their virtual servers and storage using the provider's application program
interface (API). In the enterprise, cloud computing allows a company to pay for only as much capacity as is needed and to bring
more online as soon as required. Because this pay-for-what-you-use model resembles the way electricity, fuel and water are
consumed, it is sometimes referred to as utility computing. Infrastructure-as-a-Service is also sometimes referred to as hardware
as a service (HaaS).
B. Platform-as-a-Service: Platform-as-a-Service (PaaS) is a way to rent hardware, operating systems, storage and network capacity
over the Internet. The service delivery model allows the customer to rent virtualized servers and the associated services used to run
existing applications or to develop and test new ones. PaaS is an outgrowth of Software as a Service (SaaS), a software distribution
model in which hosted software applications are made available to customers over the Internet. PaaS has several advantages for
developers: operating system features can be changed and upgraded frequently; geographically distributed development teams can
work together on software development projects; and services can be obtained from diverse sources that cross international borders.
Initial and ongoing costs can be reduced by using the infrastructure services of a single vendor rather than maintaining multiple
hardware facilities that often perform duplicate functions or suffer from incompatibility problems, and overall expenses can also be
reduced by unifying programming development efforts. On the downside, PaaS carries some risk of "lock-in" if the offerings require
proprietary service interfaces or development languages. Another possible pitfall is that the flexibility of an offering may not meet
the needs of users whose requirements develop rapidly.

FIG 1: SERVICES IN CLOUD COMPUTING


C. Software-as-a-Service: Software as a Service, sometimes referred to as "software on demand," is software that is deployed over
the Internet and/or deployed to run behind a firewall on a local area network or personal computer. With SaaS, a provider licenses
an application to customers as a service on demand, through a subscription, in a "pay-as-you-go" model, or at no charge. This
approach to application delivery is part of the utility computing model, in which all of the technology is in the "cloud," accessed
over the Internet as a service. SaaS was initially widely deployed for sales force automation and customer relationship management
(CRM). Now it is common for many business functions, including computerized billing, invoicing, human resource management,
financials, content management, collaboration, document management and service desk management.

SECURITY ISSUES IN CLOUD COMPUTING:

Over the past few years, cloud computing has become a promising business concept and one of the fastest growing segments of the
IT industry. Hit by the recession, companies are increasingly realizing that simply by tapping into the cloud they can gain fast
access to best-of-breed business applications or drastically boost their infrastructure resources, all at negligible cost. But as
individuals and companies place more and more information in the cloud, concerns are beginning to grow about just how safe an
environment it is.
A. Security: Is your data more secure on the cloud provider's servers or on your local hard drive? Some argue that customer data is
more secure when managed internally, while others argue that cloud providers have a strong incentive to maintain trust and
therefore a high level of security. However, in the cloud your data will be distributed over many different computers regardless of
where your base store is located. Industrious hackers can invade virtually any server, and statistics show that a third of breaches
result from stolen or lost laptops and other devices and from employees accidentally exposing data on the Internet, with nearly
16 percent due to insider theft.
B. Privacy: Unlike the traditional computing model, cloud computing makes use of virtual computing technology; users' personal
data may be scattered across various virtual data centers rather than staying in one physical location, and may even cross national
borders. At this point, data privacy protection faces the controversy of differing legal systems. On the other hand, users may leak
hidden information when accessing cloud computing services, and attackers can analyze critical tasks depending on the computing
jobs submitted by the users.
C. Reliability: Servers in the cloud experience the same problems and slowdowns as servers in your own data center; the difference
is that users become highly dependent on the cloud service provider (CSP), and even cloud servers experience downtime. There
are big differences between CSP service models; once you select a specific CSP, you may get locked in, which brings a potential
business risk.
D. Legal Issues: Regardless of efforts to bring the legal situation into line, as of 2009, suppliers such as Amazon Web Services
cater to major markets by deploying restricted infrastructure and letting users choose availability zones. On the other hand, worries
persist about security and confidentiality, from the individual level all the way through to the legislative level.
E. Open Standards: Open standards are critical to the growth of cloud computing. Most cloud providers expose APIs which are
typically well documented but also unique to their implementation and thus not interoperable. Some vendors have adopted others'
APIs, and there are a number of open standards under development, including the OGF's Open Cloud Computing Interface. The
Open Cloud Consortium (OCC) is working to develop consensus on early cloud computing standards and practices.
F. Compliance: Several regulations related to data and its storage require regular reporting and audit trails; cloud providers must
enable their customers to comply appropriately with these regulations. Strong compliance and security management for cloud
computing, with a top-down view of all IT resources in a cloud-based location, can deliver insight on policy enforcement. In
addition to the requirements to which their customers are subject, the data centers maintained by cloud providers may also be
subject to compliance requirements.
G. Freedom: Cloud computing does not allow users physical possession of the storage of their data; data storage is left to and
controlled by the cloud provider. Clients consider it fairly fundamental to retain their own copies of data in a form that preserves
their freedom of choice and protects them against issues beyond their control, while still realizing the tremendous benefits that
cloud computing can afford.
H. Long-term Viability: Be sure that the data you put into the cloud will never become invalid, even if your cloud computing
provider goes broke or gets acquired and swallowed up by a larger company. Ask prospective providers how you would get your
data back, and whether it would be in a format that you could import into a replacement application.


MODELS OF CLOUD COMPUTING:


A. Public Cloud: A public cloud is based on the standard cloud computing model, in which a service provider makes resources,
such as applications and storage, available to the general public over the Internet. Public cloud services are offered free or on a
pay-per-use model. The main benefits of using a public cloud service are:
1. Easy and inexpensive set-up, because the hardware, application and bandwidth costs are covered by the provider.
2. Scalability to meet requirements.
3. No wasted resources, because you pay only for what you use.
The term "public cloud" arose to distinguish the standard model from the private cloud, which is a proprietary network or data
center that uses cloud computing technologies, such as virtualization, and is managed by the organization itself. A third model, the
hybrid cloud, is maintained by both internal and external providers. Examples of public clouds are Amazon Elastic Compute Cloud
(EC2), IBM's Blue Cloud, Sun Cloud, the Windows Azure Services Platform and Google App Engine.

Fig. 2 Cloud Computing Models


B. Private Cloud: A private cloud (also called an internal or corporate cloud) is a marketing term for a proprietary computing
architecture that provides hosted services to a limited number of users behind a firewall. Advances in virtualization and distributed
computing have allowed corporate network and datacenter administrators to effectively become service providers that meet the
needs of their "customers" within the corporation. The term is used by vendors to appeal to an organization that needs or wants
more control over its data than it can get by using a third-party hosted service such as Amazon's Elastic Compute Cloud (EC2) or
Simple Storage Service (S3).
C. Hybrid Cloud: A hybrid cloud is a cloud computing environment in which an organization provides and manages some
resources in-house and has others provided externally. For example, an organization might use a public cloud service such as
Amazon Simple Storage Service (Amazon S3) for archived data while continuing to maintain in-house storage for operational
customer data. Ideally, the hybrid approach allows a business to take advantage of the scalability and cost-effectiveness of a public
cloud computing environment without exposing mission-critical applications and data to third-party vulnerabilities.
D. Community Cloud: Where many organizations have similar requirements, a community cloud can be established so that some
of the benefits of public cloud computing are obtained while the infrastructure costs are shared among the participants. Because
there are more tenants than in a private cloud but fewer users than in a public cloud, this option is more expensive, but it offers a
higher level of privacy, security and/or policy compliance. An example of a community cloud is Google's "Gov Cloud".
Although the term cloud computing is widely used, it is important to note that not all cloud models are the same.


Fig. 3 Cloud Computing Models


As such, it is critical that organizations do not apply a broad-brush, one-size-fits-all approach to security across all models. Cloud
models can be segmented into Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS).
When an organization is considering cloud security it should consider both the differences and the similarities between these three
segments of cloud models.

COMPONENT OF IAAS:
The IaaS delivery model consists of several components that have been developed over the past years; nevertheless, employing those
components together in a shared and outsourced environment carries multiple challenges. Security and privacy are the most significant
challenges that may impede cloud computing adoption. Breaching the security of any component impacts the security of the other
components and, consequently, the security of the entire system will collapse. In this section we study the security issue of each component
and discuss the proposed solutions and recommendations.
A. Service Level Agreement (SLA): Managing cloud computing raises a set of complexities, and SLAs are used to resolve them by
guaranteeing acceptable levels of QoS. The SLA life cycle involves SLA contract definition, SLA negotiation, SLA monitoring and
SLA enforcement. The contract definition and negotiation stage is important for determining the benefits and responsibilities of each
party; any misunderstanding here will affect the security of the system and expose the customer to vulnerabilities. On the other hand,
monitoring and enforcing the SLA is critical for building trust between provider and customer in a dynamic environment such as the
cloud. Frameworks such as the Web Service Level Agreement (WSLA) framework, developed for SLA monitoring and enforcement
in service-oriented architectures (SOA), monitor the agreed QoS features. The use of WSLA for managing SLAs in cloud computing
environments has been proposed, delegating the SLA monitoring and enforcement work to a trusted third party in order to solve the
trust problem between provider and customer. At present, cloud computing customers have to rely on the providers' standardized
SLAs, or on SLA monitoring and enforcement delegated to third-party mediators.
B. Utility Computing : Utility computing is not a new concept; it played an essential role in grid computing deployment. It packages resources (for example, computation, bandwidth, storage, etc.) as metered services and delivers them to the client. The power of this model lies in two main points: first, it reduces the total cost of ownership, since the client pays only for the resources actually used (pay-as-you-go); second, it supports scalable systems, so the owner of a fast-growing system does not need to worry about a rapid rise in demand for the service or about reaching peak capacity. Clearly, two of the main features of cloud computing (scalability and pay-as-you-go) derive from utility computing. The first challenge is that a high-level cloud provider such as Amazon must offer its services as metered services in terms that allow even composed services, which may be resold by second-tier providers, to be metered; with several layers of utility, systems become more complex, and higher levels require more management effort than the providers below them. An example of such a system is Amazon DevPay, where a second-level provider uses AWS services and meters and bills its users according to user-defined values. The second challenge is that utility computing systems may be attractive targets for attackers whose aim is to access services without payment, or to drive a specific company's bill to an intolerable level. The provider is mainly responsible for keeping the system healthy and well-functioning, but the client's practices also affect the system.
C. Cloud Software : There are several open-source implementations of cloud software, such as Eucalyptus and Nimbus; cloud software joins together the components of the cloud. Whether cloud software is open source or commercial closed source, we cannot ensure that it is free of bugs and vulnerabilities. Furthermore, cloud service providers expose most management tasks, such as access control, to remote users through APIs (REST, SOAP, or XML/JSON over HTTP). For example, customers consume the services offered by the provider either through the Web interface or by implementing their own applications using widely supported toolkits, such as those for Amazon EC2. In both cases, the user relies on Web services protocols, in particular the SOAP protocol. Many SOAP-based security solutions have been researched, developed, and implemented. WS-Security, a standard extension of SOAP, defines security for Web services: it specifies a SOAP header for WS-Security extensions and determines how existing security standards such as XML Signature and XML Encryption apply to SOAP messages (XML Signature is used for authentication and integrity protection). As a result, well-known attacks on these protocols, when applied to Web services, also affect cloud services. Finally, an extreme scenario revealing the possibility of breaking browser security among clouds has been described, followed by proposals to increase the security of current browsers. In fact, these attacks concern Web services in general, but since cloud computing technology is consumed through Web services, Web services security strongly affects cloud services security.
D. Platform Virtualization : Virtualization is a fundamental technology for cloud computing services. It hides the physical computing platform (for example, CPUs, memory, network, and storage) behind an abstraction layer, facilitating the aggregation of standalone systems' virtual computing resources on a single hardware platform. It reduces management complexity and simplifies the scalability of computing resources. Therefore, virtualization provides multi-tenancy and scalability, two significant cloud computing characteristics. The hypervisor is responsible for isolation, so that VMs cannot directly access the virtual disks, memory, or applications of other tenants on the same host. In a shared IaaS environment, maintaining strong isolation requires precise configuration; cloud service providers must make a substantial effort to secure communications, monitoring, modification, migration and mobility, and to minimize the risks resulting from DoS. In this section, we discuss the virtualization risks and vulnerabilities that particularly affect the IaaS delivery model, together with recently proposed solutions for guaranteeing security, privacy and data integrity.

SECURITY MODEL FOR IAAS:


As a result of this research, we also discuss a Security Model for IaaS (SMI) as a guide for assessing and enhancing security in each layer of the IaaS delivery model, as shown in Fig. 4. The SMI model consists of three sides: the IaaS components, the security model, and the restriction level. The front side of the cubic model shows the components of IaaS, which were discussed thoroughly in the previous sections. The security model side includes three vertical entities, where each entity covers all the IaaS components. The first entity is the Secure Configuration Policy (SCP), which guarantees a secure configuration for each layer in IaaS (hardware, software, or SLA configurations); misconfiguration incidents can jeopardize the security of the entire system. The second is the Secure Resources Management Policy (SRMP), which controls the management roles and privileges. The last entity is Security Policy Monitoring and Auditing (SPMA), which is significant for tracking the system life cycle. The restriction policy side specifies the level of restriction for the security model entities; restriction ranges from loose to tight depending on the provider, the client, and the service requirements. Nevertheless, we hope the SMI model will be a good starting point for the standardization of IaaS layers. The model indicates the relation between IaaS components and security requirements, and eases security improvement in individual layers to achieve a totally secure IaaS system.
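To make the three sides of the SMI cube concrete, the short Python sketch below shows one possible way to represent it programmatically. The component names, the policy entities (SCP, SRMP, SPMA) and the loose-to-tight restriction range follow the description above; the data-structure layout, function names and the example call are illustrative assumptions, not part of the proposed model.

# Illustrative sketch of the SMI cube: IaaS components x security-policy
# entities x restriction level. Names follow the text; layout is assumed.

IAAS_COMPONENTS = ["SLA", "Utility Computing", "Cloud Software", "Platform Virtualization"]
POLICY_ENTITIES = ["SCP", "SRMP", "SPMA"]   # configuration, resource management, monitoring/auditing
RESTRICTION_LEVELS = ["loose", "tight"]     # range chosen by provider, client and service requirements

def build_smi_cube(default_level="loose"):
    """Return a dict mapping (component, entity) -> restriction level."""
    return {(c, e): default_level for c in IAAS_COMPONENTS for e in POLICY_ENTITIES}

def tighten(cube, component, entity, level):
    """Change the restriction level of one cell of the cube."""
    if level not in RESTRICTION_LEVELS:
        raise ValueError("unknown restriction level: %s" % level)
    cube[(component, entity)] = level

if __name__ == "__main__":
    cube = build_smi_cube()
    # Example: demand a tight configuration policy for Platform Virtualization.
    tighten(cube, "Platform Virtualization", "SCP", "tight")
    for key, level in sorted(cube.items()):
        print(key, "->", level)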

Figure 4: Security Model in IAAS



ACKNOWLEDGMENT
I want to thank my family. I am very grateful to my College of Medicine, Baghdad University, Iraq, for supporting me during my M.Sc. in computer science at Bharati Vidyapeeth Deemed University, Yashwantrao Mohite College (Arts, Commerce, Science), Pune, and to the other faculty and associates of the computer science department who directly or indirectly helped me with this work. This work is intended to reach a sound scientific level and to raise the level of education at the University of Baghdad in Iraq.

CONCLUSION
In this paper we have discussed the different layers of the Infrastructure as a Service delivery model, including how a public key infrastructure (PKI) can be provided at each layer. An SLA covers only the services provided: if a shortfall in those services is found, a discount is agreed, but in practice the discount does not compensate customers for the deficit. We have also discussed the security holes associated with IaaS implementations and, for each IaaS component, the security issues together with the recently proposed solutions for each security concern.



Standardization of Process Parameters for Neem Oil & Determination of Properties for Using as a Fuel

Suman Singh1*, P.K. Omre1, Kirtiraj Gaikwad2
1 Dept. of Post Harvest Process and Food Engineering, G.B. Pant University of Agriculture & Technology, Pantnagar 263145, India
2 Food Packaging Laboratory, Department of Packaging, Yonsei University, Wonju, Gangwon-do 220-710, South Korea
3 Power Engineering, G.B. Pant University of Agriculture & Technology, Pantnagar 263145, India
* Simanki.singh27@gmail.com

ABSTRACT: Studies on the exploration of alternative fuels obtained from renewable sources of energy to supplement conventional fossil fuels are being carried out throughout the world. Edible and non-edible oils are being tried either to supplement or to replace diesel as fuel in CI engines. India is a net importer of edible oils and, therefore, emphasis is being laid on exploring the possibility of using non-edible oils or their esters in diesel engines, alone or blended with diesel.
A study was therefore undertaken to standardize the esterification process parameters of raw Neem oil, to compare the characteristic fuel properties of different blends of Neem oil ester with ethanol, and to use the blends as fuel in a constant-speed CI engine. Raw Neem oil was esterified with methyl alcohol to obtain Neem methyl ester having the lowest possible kinematic viscosity. The process parameters such as alcohol, catalyst concentration and reaction temperature were standardized to obtain higher recovery of esters. The characteristic fuel properties such as kinematic viscosity, relative density, gross heat of combustion, cloud and pour point, flash and fire point, carbon and ash content and total acidity of diesel, raw Neem oil, its methyl and ethyl esters and blends of the methyl ester with ethanol were compared.
The recovery of Neem methyl ester of lowest kinematic viscosity (6.65 cS) with 84 percent recovery was possible at the following standardized parametric conditions for the main transesterification process: molar ratio 6:1, type of catalyst KOH, concentration of catalyst 2%, reaction temperature 65°C, reaction time 60 min, and settling time 24 h. The relative density of the Neem oil used in the experiment was 6.07 percent higher than that of diesel, whereas the Neem methyl ester of 6.65 cS viscosity had a relative density 2.57 percent higher than diesel. The cloud and pour point of the diesel used in the experiment were 4.2°C and 1.5°C respectively, and raw Neem oil had a cloud and pour point of 19°C and 3°C respectively. The cloud point was observed as 4, 18, 20, 21, 22 and 23°C and the pour point as 6.0, 7.0, 8.0, 9.0, 10.0 and 5.0°C for NME90E10, NME80E20, NME70E30, NME60E40, NME50E50 and NME100 respectively. The flash and fire point of the diesel used in the experiment were observed as 54.3 and 59.4°C respectively, whereas for Neem oil these values were 152 and 159°C respectively. The blends of Neem methyl ester and ethanol were found to have lower flash and fire points than diesel.
Key words- Neem Oil, Biofuels, Diesel, Flash Point, Methyl Ester
INTRODUCTION
The tremendous increase in the number of automobiles in recent years has resulted in greater demand for petroleum products. Crude oil reserves are estimated to last only a few decades; therefore, efforts are under way to find alternatives to diesel (Ganguli, S. (2002) [1]). Different kinds of vegetable oils and biodiesel have been tested in diesel engines for their ability to reduce greenhouse gas emissions. Their help in reducing a country's reliance on crude oil imports, their support for agriculture by providing a new market for domestic crops, their effective lubricating property that eliminates the need for any lubricant additive, and their wide acceptance by vehicle manufacturers can be listed as the most important advantages of biodiesel fuel. More than 350 oil-bearing crops have been identified, among which only Jatropha, Pongamia, sunflower, soyabean, cottonseed, rapeseed, palm oil and peanut oil are considered potential alternative fuels for diesel engines. The present study aims to investigate the use of neem oil ester blended with ethanol as an alternative fuel for compression ignition engines. Ethanol has some detergent properties that reduce the build-up of carbon deposits on injectors, so that engines run smoothly and fuel injection systems remain clean for better performance. Use of vegetable oil and ethanol blends gave satisfactory results (Goering et al., 1983), but esters of vegetable oil blended with ethanol have never been used before as fuel in CI engines. The concept of employing alcohol, especially ethanol, with esters of vegetable oils as an engine fuel is totally new and revolutionary; since no diesel is used, there is 100 per cent replacement of diesel. In view of the above, a study was carried out with the objectives of standardization of methyl esterification process parameters for neem oil by a single-stage
Acid-Base catalyst process & Determination of characteristic fuel properties of different blends of methyl ester of neem oil with
ethanol.
MATERIAL & METHOD
The methodology used for standardization of the esterification process parameters for Neem oil, preparation of fuel blends and determination of characteristic fuel properties is described below. The experiments were conducted in the Bio Energy Technology Laboratory, Department of Farm Machinery and Power Engineering.
Selection of Fuel Constituents
The experiments were carried out using high speed diesel as reference fuel and methyl esters of neem oil and their blends with ethanol
in various proportions used as the engine fuel.
Neem oil
Neem oil used in the present investigation was procured from the local market of Pantnagar.
Reference fuel
High speed diesel marketed by Indian Oil Corporation in accordance with IS: 1460-1974 was taken as reference fuel for comparison.
Ethanol
Anhydrous ethanol was used as one of the constituents of the blended fuel for the experiment. The experiment was conducted using anhydrous ethanol made by Changsu Yangyuan Chemical, China, procured from the local market. Ethanol, chemically named ethyl alcohol (CH3CH2OH), is a colourless liquid with a sweet alcohol odour.
Neem methyl ester
Neem methyl ester was used as the other constituent of the blended fuel; it offers better self-ignition characteristics and compatibility with the fuel injection systems of existing CI engines. The sample which gave the highest yield and had its viscosity in the permissible range was used for the main transesterification process. The main transesterification process was carried out with a methanol-to-oil molar ratio of 6:1 and 2% KOH as an alkaline catalyst. The reaction was carried out at 65°C for an hour, which gave a methyl ester yield of 85% (v/v).
Esterification Process: Standardization of methyl esterification process parameters and preparation of Neem methyl ester
Fukuda, H. (2001) [2] defines the esterification process as the chemical reaction of triglycerides, such as those of a vegetable oil, with an alcohol in the presence of an alkaline or acidic catalyst to produce glycerol and fatty acid esters. Barnwal (2005) [3] notes that in this process the ester is produced when vegetable oil combines with a simple alcohol in the presence of a catalyst. The fatty acids of the vegetable oil exchange places with the (OH) groups of the alcohol, producing glycerol and methyl, ethyl or butyl fatty acid esters depending on the type of alcohol used. The four distinct stages in the preparation of an ester are:
1. Heating the oil to a desired temperature.
2. Stirring and heating the alcohol-oil mixture with an alkaline or acidic catalyst.
3. Separation of glycerol and washing of the ester with water.
4. Evaporating traces of water from the recovered ester.
The following parameters affect the level of ester recovery:
Molar ratio of vegetable oil- alcohol mixture
Preheating time
Preheating temperature
Reaction time
Reaction temperature
Type of catalyst
Concentration of catalyst
Degree of proof of alcohol used
Settling time
Method of removal of traces of water from washed ester either by heating or absorbing using a suitable chemical.
The main transesterification reaction of raw neem oil was carried out as per the steps described in Fig. 3.1. Since the recovery of ester
from esterification process is affected by the parameters described above, the process was carried out as per steps described in Fig.

3.1. The effect of process parameters shown in Table 3.1 was studied to standardize the esterification process for estimating recovery
of ester as well as recovering ester of lowest possible viscosity.
In order to standardize the process parameters, three levels of molar ratio (6:1, 8:1 and 10:1), four levels of catalyst (KOH) concentration (1.0%, 1.5%, 2.0% and 2.5%) and two levels of reaction temperature (60°C and 65°C) were set. Esterification was done at the 6:1, 8:1 and 10:1 molar ratios in order to obtain maximum recovery of ester with the lowest possible kinematic viscosity, based on preliminary experiments. Esterification was carried out at the selected molar ratio, at the different levels of catalyst concentration, for 60 minutes at the different reaction temperatures in a shaking water bath, and the mixture was then allowed to settle for 24 h for separation of the lighter ester layer at the top and the heavy glycerol layer at the bottom. In total, 24 ester samples were prepared to study the effect of catalyst concentration, preheating time and reaction temperature on ester recovery and to subsequently measure their kinematic viscosity.
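For clarity, the 24 runs implied by this factorial layout (3 molar ratios x 4 KOH concentrations x 2 reaction temperatures, each with a 60 min reaction and 24 h settling) can be enumerated as in the short Python sketch below. The run numbering simply mirrors the layout of Table 4.1; the sketch is only an illustration, not part of the experimental procedure.

# Enumerate the 24 esterification runs described above:
# 3 molar ratios x 4 catalyst (KOH) concentrations x 2 reaction temperatures.
from itertools import product

molar_ratios = ["6:1", "8:1", "10:1"]
koh_percent = [1.0, 1.5, 2.0, 2.5]
temps_c = [60, 65]

runs = [
    {"run": i + 1, "molar_ratio": mr, "koh_pct": c, "temp_C": t,
     "reaction_min": 60, "settling_h": 24}
    for i, (t, c, mr) in enumerate(product(temps_c, koh_percent, molar_ratios))
]

assert len(runs) == 24
for r in runs:
    print(r)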
Determination of Characteristic Fuel Properties
Relative density and API gravity
The relative density of the selected fuels at 15°C was determined as per IS: 1448 [P: 32]: 1992.
Equation 3.1 was used to calculate the relative density.

Relative density = (Density of the fuel at 15°C) / (Density of the water at 15°C)    ... (3.1)

The API (American Petroleum Institute) gravity, which is an indicator of heat content and lightness of a fuel, was also calculated. The
higher the API gravity, the lighter is the fuel. The following relationship was used to determine the API gravity of diesel, neem oil and
their blends with ethanol.

API gravity = 141.5 / (Relative density at 15°C) - 131.5    ... (3.2)
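As a worked illustration of Equations 3.1 and 3.2 (a sketch only, not part of the original procedure), the Python helper below computes relative density and API gravity. The approximate water density constant is an assumption, and the example call simply reuses the diesel relative density reported later in Table 4.2.

# Relative density (Eq. 3.1) and API gravity (Eq. 3.2) at 15 deg C.
WATER_DENSITY_15C = 999.1  # kg/m^3, approximate density of water at 15 deg C (assumed)

def relative_density(fuel_density_15c, water_density_15c=WATER_DENSITY_15C):
    """Eq. 3.1: ratio of fuel density to water density, both at 15 deg C."""
    return fuel_density_15c / water_density_15c

def api_gravity(rel_density_15c):
    """Eq. 3.2: API gravity = 141.5 / relative density - 131.5."""
    return 141.5 / rel_density_15c - 131.5

if __name__ == "__main__":
    rd = 0.839                        # relative density of diesel reported in this study
    print(round(api_gravity(rd), 2))  # prints 37.15, matching the value in the results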

Kinematic viscosity
A Redwood Viscometer No. 1 of WISWO make was used for measurement of the kinematic viscosity of the selected fuel samples. The instrument measures the time of gravity flow, in seconds, of a fixed volume of the fluid (50 ml) through a specified orifice made in an agate piece as per IS: 1448 [P: 25] 1976. Kinematic viscosity in centistokes was then calculated from the time units using the relationships given by Guthrie (1960).
νk = 0.26 t - 179/t    ... (3.3)    when 34 < t < 100, and
νk = 0.24 t - 50/t    ... (3.4)    when t > 100
where
νk = kinematic viscosity in centistokes, cS
t = time for flow of 50 ml sample, s
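A minimal Python sketch of the Guthrie (1960) conversion given by Equations 3.3 and 3.4 follows; the efflux times used in the example are hypothetical, not measured values from this study.

def redwood_to_centistokes(t_seconds):
    """Convert Redwood No. 1 efflux time (s, for 50 ml) to kinematic viscosity (cS),
    using Eq. 3.3 for 34 < t < 100 and Eq. 3.4 for t > 100."""
    if 34 < t_seconds < 100:
        return 0.26 * t_seconds - 179.0 / t_seconds
    if t_seconds > 100:
        return 0.24 * t_seconds - 50.0 / t_seconds
    raise ValueError("conversion defined only for 34 < t < 100 or t > 100 seconds")

if __name__ == "__main__":
    for t in (40.0, 120.0):  # hypothetical efflux times
        print(t, "s ->", round(redwood_to_centistokes(t), 2), "cS")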


Gross heat of combustion
The heat of combustion or calorific value of a fuel is the heat produced by the fuel within the engine that enables the engine to do the
useful work. The gross heat of combustion of fuel samples was determined as per IS: 1448 [P: 6]: 1984 with the help of a Widson
Scientific Works make Isothermal Bomb Calorimeter. The gross heat of combustion of the fuel samples was calculated using the
equation given below:

Hc = (Wc × T) / Ms    ... (3.5)
where
Hc = heat of combustion of the fuel sample, cal/g
Wc = water equivalent of the calorimeter, cal/°C
T = rise in temperature, °C
Ms = mass of sample burnt, g
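The following minimal sketch applies Equation 3.5 and converts the result to MJ/kg for comparison with the values quoted in the results; the water equivalent, temperature rise and sample mass in the example call are hypothetical, not measurements from this study.

def gross_heat_of_combustion(water_equivalent_cal_per_degC, temp_rise_degC, sample_mass_g):
    """Eq. 3.5: Hc = (Wc x T) / Ms, in cal/g."""
    return (water_equivalent_cal_per_degC * temp_rise_degC) / sample_mass_g

CAL_PER_G_TO_MJ_PER_KG = 4.184e-3  # 1 cal/g = 4.184 J/g = 0.004184 MJ/kg

if __name__ == "__main__":
    hc_cal_per_g = gross_heat_of_combustion(2400.0, 2.85, 0.6)  # hypothetical inputs
    print(round(hc_cal_per_g * CAL_PER_G_TO_MJ_PER_KG, 2), "MJ/kg")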

Cloud and pour points


The Cloud and Pour point is the measure which indicates that the fuel is sufficiently fluid to be pumped or transferred. Hence it holds
significance to engines operating in cold climate. The Cloud and Pour point of fuel samples were determined as per IS: 1448 [P: 10]:
1970 using the Cloud and Pour point apparatus.
Three replications were made for each fuel type.
Flash and fire point
Flash point measures the tendency of the sample to form a flammable mixture with air under controlled laboratory conditions. The flash and fire point of the fuel samples were determined as per IS: 1448 [P: 32]: 1992.
RESULT AND DISCUSSION
Studies were conducted for standardizing transesterification process parameters for Neem oil, determination of compatible fuel
properties of the oil, its methyl ester and their blends with ethanol. The fuel properties such as kinematic viscosity, relative density,
gross heat of combustion, cloud and pour point, flash and fire point, of Neem oil, its methyl ester as well as their blends with ethanol
were compared.
Standardization of Esterification Process Parameters
The effect of the selected levels of parameters mentioned in Table 3.1 was studied to standardize the methyl esterification process for Neem oil, and Table 3.2 gives the esterification process parameters selected to produce Neem methyl ester of 6.65 cS kinematic viscosity. It is seen that the highest recovery of methyl ester, 84 percent, was obtained at the 6:1 molar ratio when the raw Neem oil was reacted with methanol at a 65°C reaction temperature for 60 minutes in the presence of 2.00 percent KOH and then allowed to settle for 24 h. Based on the observed percent recovery of methyl ester, raw Neem oil at a 6:1 molar ratio may be reacted with methanol at 65°C for 60 minutes in the presence of 2.00 percent KOH and then allowed to settle for 24 h; the higher recovery is attributed to the greater polarity available to dissolve the KOH.
Effect of process parameters on kinematic viscosity of recovered esters
Table 4.1 shows the kinematic viscosity of the Neem methyl esters obtained by esterification of raw Neem oil at the selected process conditions (Anjana Srivastava (2007) [6]). It is evident from the table that the kinematic viscosity of the methyl esters obtained from the esterification of raw Neem oil at the different process conditions ranged between 6.65 and 11.99 cS. It is also evident that the kinematic viscosity at the different selected process parameters varied between 9.12 and 13.02 cS and between 10.15 and 13.98 cS for the molar ratios of 6:1 and 8:1 respectively. Based on the observations on recovery and kinematic viscosity, it may be concluded that raw Neem oil at a 6:1 molar ratio may be reacted with methanol at a 65°C reaction temperature for 60 minutes in the presence of 2 percent KOH and then allowed to settle for 24 h in order to get maximum ester recovery with the lowest possible kinematic viscosity.

Fuel properties of Neem oil, methyl esters of Neem oil and their blends with ethanol
The characteristic fuel properties such as kinematic viscosity, relative density, gross heat of combustion, cloud and pour point, flash
and fire point, were measured for different fuels to assess their compatibility with diesel fuel.
Relative density and API gravity
The relative density at 15°C and API gravity of diesel, raw Neem oil and the different blends are shown in Table 4.2. The relative density and API gravity of the diesel used in the experiments were found to be 0.839 and 37.15 respectively. The relative densities of NME90E10, NME80E20, NME70E30, NME60E40, NME50E50 and NME100 were found to be 0.85, 0.84, 0.83, 0.82, 0.82 and 0.86 respectively. The results obtained during the experiments indicate that the relative density of all the ester blends was close to that of diesel.
The API gravity of NME90E10, NME80E20, NME70E30, NME60E40, NME50E50 and NME100 was found to be 34.64, 36.95, 38.98, 41.06, 41.06 and 32.71 respectively.
Kinematic viscosity
Table 4.3 shows the kinematic viscosity of diesel, raw Neem oil and the different blends at 38°C. The kinematic viscosity of diesel was found to be 3.21 cS; the kinematic viscosity of diesel at 38°C may range between 2.0 and 7.5 cS (IS: 1460-1974). The Neem oil had a kinematic viscosity of 48.32 cS at 38°C. NME100, NME90E10, NME80E20, NME70E30, NME60E40 and NME50E50 were found to have kinematic viscosities of 6.47, 4.33, 3.46, 2.82, 2.38 and 1.82 cS respectively.
The kinematic viscosity of diesel as reported by Sandun Fernando (2007) [4] is 3.41 and 3.24 cS. On the basis of the above, it was seen that the observed kinematic viscosities of the selected fuels are in line with the findings reported earlier.
Gross heat of combustion
The gross heat of combustion of diesel, Neem oil and the blends of Neem ester with ethanol mixed in various proportions is shown in Table 4.4. The table indicates that the gross heat of combustion of diesel was found to be 47.80 MJ/kg. The gross heat of combustion of Neem oil was observed as 32.02 MJ/kg, which is 33.0 percent less than that of diesel. The gross heat of combustion was observed as 36.34, 35.90, 34.56, 33.40, 32.89 and 37.12 MJ/kg for NME90E10, NME80E20, NME70E30, NME60E40, NME50E50 and NME100 respectively. The gross heat of combustion of diesel reported by Sandun Fernando (2007) [4] was 49.05, 47.8 and 48.46 MJ/kg. On the basis of the above, it was seen that the observed gross heat of combustion of the selected fuels is in line with the findings reported earlier.
Cloud and pour point
The cloud and pour point of diesel, Neem oil and the Neem ester blends with ethanol are shown in Table 4.5. The table indicates that the cloud and pour point of diesel were 2.6°C and -2°C respectively. The Neem oil had a cloud and pour point of 19°C and 3°C respectively. The cloud point was observed as 4, 18, 20, 21, 22 and 23°C and the pour point as 6.0, 7.0, 8.0, 9.0, 10.0 and 5.0°C for NME90E10, NME80E20, NME70E30, NME60E40, NME50E50 and NME100 respectively.
Flash and fire point
The flash and fire point of diesel, Neem oil and the Neem ester blends with ethanol are shown in Table 4.6. The flash and fire point of diesel were 54.3°C and 59.4°C respectively. The flash and fire point of Neem oil were found to be 152°C and 159°C respectively. The table also reveals that NME90E10, NME80E20, NME70E30, NME60E40, NME50E50 and NME100 had flash points of 39.0, 38.0, 36.0, 35.0, 33.0 and 58°C and fire points of 45.0, 43.0, 40.0, 39.0, 37.0 and 66.0°C respectively. The flash and fire point of the methyl ester of Neem oil were higher than those of diesel, while the blends of Neem methyl ester and ethanol were found to have lower flash and fire points than diesel.
The observed results on flash and fire point are in accordance with earlier findings of a flash point of 152°C for Neem oil. The reduced flash and fire point of the Neem methyl ester-ethanol blends indicates that greater care may be required in handling these fuels during high
ambient temperature conditions. On the basis of above it was seen that the observed flash and fire point of selected fuels are in line
with the findings reported earlier.
Carbon residue
The observed carbon residue content of diesel, Neem oil and the Neem ester blends with ethanol is shown in Table 4.7. The carbon residue content of diesel was found to be 0.16 percent. The maximum recommended carbon residue level in diesel fuel as per IS: 1460-1974 is 0.2 percent. The observed carbon residue content of diesel is in line with the findings of Nurun Nabi (2007) [5], who reported it as 0.1 percent. The Neem oil was found to have a carbon residue content of 3.561 percent. Carbon residue contents of 0.822, 0.789, 0.793, 0.689, 0.630 and 0.933 percent were found for NME90E10, NME80E20, NME70E30, NME60E40, NME50E50 and NME100 respectively.
SUMMARY AND CONCLUSION
On the basis of the results obtained from the whole experiment the following conclusions were drawn:
(i) The recovery of Neem methyl ester of lowest kinematic viscosity (6.65 cS) with 84 percent recovery was possible at the following standardized parametric conditions for the main transesterification process: molar ratio 6:1; type of catalyst KOH; concentration of catalyst 2%; reaction temperature 65°C; reaction time 60 min; settling time 24 h. The recovery of Neem methyl ester was 84 percent with a kinematic viscosity of 6.65 cS.
(ii) The relative density of the Neem oil used in the experiment was 6.07 percent higher than that of diesel, whereas the Neem methyl ester of 6.65 cS viscosity had a relative density 2.57 percent higher than diesel. The relative density of the Neem methyl ester-ethanol blends decreased with increase in the level of ethanol in the blend.
(iii) The cloud and pour point of the diesel used in the experiment were 4.2°C and 1.5°C respectively, and raw Neem oil had a cloud and pour point of 19°C and 3°C respectively. The cloud point was observed as 4, 18, 20, 21, 22 and 23°C and the pour point as 6.0, 7.0, 8.0, 9.0, 10.0 and 5.0°C for NME90E10, NME80E20, NME70E30, NME60E40, NME50E50 and NME100 respectively.
(iv) The flash and fire point of the diesel used in the experiment were observed as 54.3 and 59.4°C respectively, whereas for Neem oil these values were 152 and 159°C respectively. The blends of Neem methyl ester and ethanol were found to have lower flash and fire points than diesel.
(v) The carbon residue content of diesel (0.16 percent) was observed to be within the permissible level specified by the Bureau of Indian Standards, but the raw Neem oil had a very high carbon residue content (3.561 percent); however, esterification of raw Neem oil reduced the carbon residue content to a great extent.


REFERENCES:
1. Ganguli, S. (2002) 'Neem: A therapeutic for all seasons', Current Science, Vol. 82, pp. 1304.
2. Fukuda, H., Kondo, A. and Noda, H. (2001) 'Biodiesel fuel production by transesterification of oils', J Biosci Bioeng, Vol. 5, pp. 405-416.
3. Barnwal, B.K. and Sharma, M.P. (2005) 'Prospects of biodiesel production from vegetable oils in India', Renew Sust Energy Rev 9, Vol. 4, pp. 363-378.
4. Sandun Fernando, Prashanth Karra, Rafael Hernandez and Saroj Kumar Jha (2007) 'Effect of incompletely converted soyabean oil on biodiesel quality', Energy, Vol. 32, pp. 844-851.
5. Nurun Nabi, Md., Shamim Akhter, Mhia Md. and Zaglul Shahadat (2006) 'Improvement of engine emissions with conventional diesel fuel and diesel-biodiesel blends', Bioresource Technology, Vol. 97, pp. 372-378.
6. Anjana Srivastava and Ram Prasad (2004) 'Triglycerides based diesel fuels', Renewable and Sustainable Energy Reviews, Vol. 4, pp. 111-133.
7. Foglia, T.A., Jones, K.C., Haas, M.J. and Scott, K.M. (2000) 'Technologies supporting the adoption of biodiesel as an alternative fuel', The Cotton Gin and Oil Mill Press.

Table 3.1 Process Parameters Selected for Standardization of Esterification Process

Sl. No. | Name of Parameter          | Levels selected
1       | Molar ratio                | 6:1, 8:1 and 10:1
2       | Catalyst concentration (%) | 1.0, 1.5, 2.0 and 2.5
3       | Reaction temperature (°C)  | 60 and 65
4       | Reaction time (hr)         | 1
5       | Catalyst                   | KOH
6       | Settling time (hr)         | 24

Table 3.2 Esterification Process Parameters Selected to Produce Neem Methyl Ester of 6.65 cS Kinematic Viscosity

Sl. No. | Name of Parameter          | Levels selected
1       | Molar ratio                | 6:1
2       | Type of catalyst           | KOH
3       | Concentration of catalyst  | 2%
4       | Reaction temperature (°C)  | 65
5       | Reaction time              | 60 min
6       | Settling time              | 24 hr

Table 4.1 Recovery and Kinematic Viscosity of Neem Methyl Ester under Different Esterification Process Conditions at constant 1 h reaction time and 24 h settling time

Sl. No. | Molar ratio | Catalyst concentration (%) | Reaction temperature (°C) | Ester recovery (%) | Kinematic viscosity (cS)
1  | 6:1  | 1.0 | 60 | Ester not found | -
2  | 8:1  | 1.0 | 60 | Ester not found | -
3  | 10:1 | 1.0 | 60 | Ester not found | -
4  | 6:1  | 1.5 | 60 | 62 | 11.04
5  | 8:1  | 1.5 | 60 | 61 | 11.67
6  | 10:1 | 1.5 | 60 | 49 | 12.30
7  | 6:1  | 2.0 | 60 | 78 | 7.37
8  | 8:1  | 2.0 | 60 | 59 | 7.72
9  | 10:1 | 2.0 | 60 | 64 | 8.06
10 | 6:1  | 2.5 | 60 | 70 | 9.42
11 | 8:1  | 2.5 | 60 | 68 | 10.07
12 | 10:1 | 2.5 | 60 | 70 | 10.70
13 | 6:1  | 1.0 | 65 | Ester not found | -
14 | 8:1  | 1.0 | 65 | Ester not found | -
15 | 10:1 | 1.0 | 65 | Ester not found | -
16 | 6:1  | 1.5 | 65 | 64 | 10.72
17 | 8:1  | 1.5 | 65 | 63 | 11.36
18 | 10:1 | 1.5 | 65 | 59 | 11.99
19 | 6:1  | 2.0 | 65 | 84 | 6.65
20 | 8:1  | 2.0 | 65 | 73 | 7.01
21 | 10:1 | 2.0 | 65 | 74 | 7.37
22 | 6:1  | 2.5 | 65 | 76 | 8.75
23 | 8:1  | 2.5 | 65 | 76 | 9.08
24 | 10:1 | 2.5 | 65 | 71 | 9.75


An Enhanced Detection of Fake Vehicle Identity in VANET

M.SAKTHIVEL1, S.KARTHIKEYINI2
1 ASSISTANT PROFESSOR, DEPARTMENT OF COMPUTER SCIENCE, P.G.P ARTS AND SCIENCE, NAMAKKAL.
2 M.PHIL FULL-TIME RESEARCH SCHOLAR, DEPARTMENT OF COMPUTER SCIENCE, P.G.P ARTS AND SCIENCE, NAMAKKAL.

msakthivelpgp@gmail.com, keyiniskarthi@gmail.com

ABSTRACT - In vehicular networks, moving vehicles are enabled to communicate with each other via inter-vehicle communications as well as with road-side units (RSUs) in the vicinity via roadside-to-vehicle communications. In urban vehicular networks, where privacy, especially the location privacy of vehicles, is highly valued, anonymous verification of vehicles is indispensable. Consequently, an attacker who succeeds in forging multiple hostile identities can easily launch a Sybil attack, gaining a disproportionately large influence. A location-hidden authorized message generation scheme is designed with two objectives: first, RSU signatures on messages are signer-ambiguous so that the RSU location information is concealed from the resulting authorized message; second, two authorized messages signed by the same RSU within the same given period of time (temporarily linkable) are recognizable so that they can be used for identification.
Keywords RSU location information, RSU private key and vehicle public key, partial signature verification, full
signature creation
I. INTRODUCTION
Distributed systems are groups of networked computers, which have the same goal for their work. The terms
"concurrent computing", "parallel computing", and "distributed computing"[1] have a lot of overlap, and no clear
distinction exists between them. The same system may be characterized both as "parallel" and "distributed"; the
processors in a typical distributed system run concurrently in parallel [2]. Parallel computing may be seen as a
particular tightly coupled form of distributed computing, and distributed computing may be seen as a loosely coupled
form of parallel computing. Nevertheless, it is possible to roughly classify concurrent systems as "parallel" or
"distributed" using the following criteria:
In parallel computing, all processors may have access to a shared memory to exchange information between
processors.
In distributed computing, each processor has its own private memory (distributed memory). Information is
exchanged by passing messages between the processors.
The situation is further complicated by the traditional uses of the terms parallel and distributed algorithm that do not
quite match the above definitions of parallel and distributed systems; see the section Theoretical foundations
below for more detailed discussion[3]. Nevertheless, as a rule of thumb, high-performance parallel computation in
a shared-memory multiprocessor uses parallel algorithms while the coordination of a large-scale distributed system uses
distributed algorithms. Distributed computing is a field of computer science that studies distributed systems. A
distributed system is a software system in which components located on networked computers communicate and
coordinate their actions by passing messages. The components interact with each other in order to achieve a
common goal [4]. There are many alternatives for the message passing mechanism, including RPC-like connectors
and message queues. Three significant characteristics of distributed systems are concurrency of components, lack
of a global clock, and independent failure of components. A computer program that runs in a distributed system
is called a distributed program, and distributed programming is the process of writing such programs. Distributed
computing also refers to the use of distributed systems to solve computational problems [5]. In distributed
computing, a problem is divided into many tasks, each of which is solved by one or more computers, which
communicate with each other by message passing. Wireless network [6] refers to any type of computer network that
utilizes some form of wireless network connection. It is a method by which homes, telecommunications networks and
enterprise (business) installations avoid the costly process of introducing cables into a building, or as a connection
between various equipment locations. Wireless telecommunications networks are generally implemented and
administered using radio communication [7]. A wireless ad hoc network is a decentralized type of wireless network. The
network is ad hoc because it does not rely on a pre-existing infrastructure, such as routers in wired networks or access
points in managed (infrastructure) wireless networks. Instead, each node participates in routing by forwarding data
for other nodes, so the determination of which nodes forward data is made dynamically on the basis of network
connectivity. In addition to the classic routing, ad hoc networks can use flooding for forwarding the data.
II. RELATED WORK
VANETs help in defining safety measures for vehicles, streaming communication between vehicles, infotainment and telematics. Vehicular ad-hoc networks are expected to implement a variety of wireless technologies such as Dedicated Short Range Communications (DSRC), which is a type of WiFi. Other candidate wireless technologies are cellular, satellite, and WiMAX. Vehicular ad-hoc networks can be viewed as a component of Intelligent Transportation Systems (ITS). Research on vehicular ad-hoc networks focuses on the optimization of traffic throughput on highways using sensor-enabled cars. Proactive schemes for highway ramps, obstacles, and intersections are developed for further analysis. Sensor-enabled cars monitor the traffic in their vicinity, sensing the distance to the front and rear car as well as their own speed and acceleration. In addition, we also briefly present some of the simulators currently available to VANET researchers for VANET simulations, and we assess their benefits and limitations. Finally, we outline some of the VANET research challenges that still need to be addressed to enable the ubiquitous deployment and widespread adoption of scalable, reliable, robust, and secure VANET architectures, protocols, technologies, and services [8].
Currently, most of the research is focused on the development of a suitable MAC layer, as well as potential
applications ranging from collision avoidance to on board infotainment services [9]. In order to avoid transmission
collisions in VANETs, a reliable and efficient medium access control protocol is needed. But efficient medium sharing is
more difficult due to the high node mobility and fast topology changes of VANETs.
III. MAIN CONTRIBUTIONS
In this way, the RSU location information is concealed from the final authorized message. Second, authorized messages are temporarily linkable, which means two authorized messages issued by the same RSU are recognizable if and only if they are issued within the same period of time. Thus, authorized messages can be used for identification of vehicles even without knowing the specific RSUs which signed these messages. With the temporal limitation on the linkability of two authorized messages, authorized messages used for long-term identification are prohibited. Therefore, using authorized messages for identification of vehicles will not harm the anonymity of vehicles.
IV. PROPOSED SCHEME
They proposed a fully self-organized public-key management system that allows users to generate their public-private key
pairs, to issue certificates, and to perform authentication regardless of the network partitions and without any
centralized services.

Furthermore, their approach does not require any trusted authority, not even in the system initialization phase. By
definition, a mobile ad hoc network does not rely on any fixed infrastructure; instead, all networking functions (e.g.,
routing, mobility management, etc.) are performed by the nodes themselves in a self-organizing manner.
- To find and eliminate Sybil trajectories.
- To preserve the location privacy of vehicles.
- To quickly detect details of failed RSUs in the network.
PROPOSED ALGORITHM
Key Generation
1) Choose two large random prime numbers P and Q of similar length: generate two different large odd prime numbers, called P and Q, of about the same size, where P is greater than Q, such that when multiplied together they give a product that can be represented by the required bit length you have chosen, e.g. 1024 bits.
2) Compute N = P x Q. N is the modulus for both the Public and Private keys.
3) PSI = (P-1)(Q-1) , PSI is also called the Euler's totient function.
4) Choose an integer E, such that 1 < E < PSI, making sure that E and PSI are co-prime. E is the Public key exponent.
5) Calculate D = E^(-1) (mod PSI), normally using the Extended Euclidean algorithm. D is the Private key exponent.
Encryption:
1) Convert the data bytes to be encrypted to a large integer called PlainText.
2) CipherText = PlainText^E (mod N)
3) Convert the integer CipherText to a byte array, which is the result of the encryption operation.
Decryption:
1) Convert the encrypted data bytes to a large integer called CipherText.
2) PlainText = CipherText^D (mod N)
3) Convert the integer PlainText to a byte array, which is the result of the decryption operation.
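A minimal, self-contained Python sketch of the key generation, encryption and decryption steps listed above follows. It uses small illustrative primes rather than the large (e.g., 1024-bit) primes the algorithm calls for, omits padding, and the helper names are our own; it is meant only to make the arithmetic concrete.

# Sketch of the key generation / encryption / decryption steps above.
def egcd(a, b):
    """Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y = g."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(e, m):
    """Modular inverse of e modulo m (exists only when e and m are co-prime)."""
    g, x, _ = egcd(e, m)
    if g != 1:
        raise ValueError("E and PSI are not co-prime")
    return x % m

def generate_keys(p, q, e):
    n = p * q                     # step 2: N = P x Q
    psi = (p - 1) * (q - 1)       # step 3: Euler's totient
    d = modinv(e, psi)            # step 5: private exponent
    return (e, n), (d, n)         # public key, private key

def encrypt(plain_int, public_key):
    e, n = public_key
    return pow(plain_int, e, n)   # CipherText = PlainText^E (mod N)

def decrypt(cipher_int, private_key):
    d, n = private_key
    return pow(cipher_int, d, n)  # PlainText = CipherText^D (mod N)

if __name__ == "__main__":
    pub, priv = generate_keys(p=61, q=53, e=17)  # toy primes for illustration only
    c = encrypt(42, pub)
    assert decrypt(c, priv) == 42
    print("cipher:", c)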
Message Verification

As proof that a vehicle (Vi) was present near a certain RSU (Rk) at a certain time, an authorized message issued for Vi can be verified by any entity (e.g., a vehicle or an RSU) in the system. When an entity needs to verify Vi, Vi signs an authorized message (M) generated by RSU (Rk) using the corresponding key and sends it to the verifying entity. The message verification process consists of the following steps:
Step 1: Check the Vehicle Id
Step 2: Check the private key of RSU (Rk)
Step 3: Check the public key of Vehicle (Vi)
Step 4: Analyze the Entry time
Step 5: Analyze the message as partial signature or Full Signature creation.
Step 6: Verify that the message was signed by legitimate previous RSU
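The six checks above could be arranged as in the Python sketch below. The field names, the placeholder signature check and the freshness window are illustrative assumptions made for the sketch, not definitions from the scheme.

# Illustrative flow for the six message-verification steps above.
from dataclasses import dataclass

MAX_AGE_SECONDS = 300.0  # assumed freshness window for the entry time

@dataclass
class AuthorizedMessage:
    vehicle_id: str
    rsu_public_key: str       # public counterpart of the signing RSU's key
    vehicle_public_key: str
    entry_time: float
    is_full_signature: bool   # partial vs. full signature (Step 5)
    signature: bytes

def signature_is_valid(msg: AuthorizedMessage, legitimate_rsu_keys: set) -> bool:
    # Placeholder for Step 6: a real system would verify the cryptographic
    # signature against the key of a legitimate (previous) RSU.
    return msg.rsu_public_key in legitimate_rsu_keys

def verify_message(msg: AuthorizedMessage, known_vehicle_ids: set,
                   legitimate_rsu_keys: set, now: float) -> bool:
    if msg.vehicle_id not in known_vehicle_ids:                # Step 1
        return False
    if not msg.rsu_public_key or not msg.vehicle_public_key:   # Steps 2 and 3
        return False
    if now - msg.entry_time > MAX_AGE_SECONDS:                 # Step 4
        return False
    # Step 5: a partial signature could be checked with a lighter routine;
    # this sketch routes both cases through the same placeholder check.
    return signature_is_valid(msg, legitimate_rsu_keys)        # Step 6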
V. PERFORMANCE EVALUATION
In the network creation process, a typical vehicular network (with RSUs installed) is shown graphically. Fig. 1.1 shows the deployed RSUs and the neighbouring RSU details for the specified trajectories. The figure represents the vehicular ad-hoc network with RSU and neighbouring-RSU connections. The network process is used to determine the traverse path from one location to another.

FIGURE: 1.1 NETWORK CREATION


A vehicular ad hoc network, or VANET, is a technology that uses moving vehicles as nodes in a network to create a mobile network. A VANET turns every participating vehicle into a wireless router or node, allowing vehicles within approximate range of each other to connect and, in turn, create a network with a wide range. As vehicles fall out of the signal range and drop out of the network,
other vehicles can join in, connecting vehicles to one another. It is estimated that the first systems to integrate this technology will be police and fire vehicles, communicating with each other for safety purposes.
The message module is used to update messages between the Road Side Unit and the vehicle's On Board Unit. Fig. 1.2 shows the trajectory of the desired vehicle; the details include the issued vehicle identity number, the receiving road side unit, the trajectory id, the road side unit number and the entry time of the vehicle.
As proof that a vehicle was present near a certain Road Side Unit (RSU) at a certain time, an authorized message issued for the specified vehicle can be verified by any entity (e.g., a vehicle or an RSU) in the system. If the authorized message passes the ownership verification, the entity further examines whether the signature contained in the authorized message was signed by a legitimate RSU in the system.
The vehicle can use this sequence of authorized messages to identify itself. This method is simple but inefficient, because each time the vehicle needs to be identified in a conversation, all messages in the sequence must be sent to the conversation holder for verification.

FIGURE: 1.2 MESSAGE UPDATION


ACKNOWLEDGMENT
My abundant thanks to Dr. R.K. Vaithiyanathan, Principal, PGP Arts and Science College, Namakkal, who gave me the opportunity to carry out this paper. I express my deep gratitude and sincere thanks to my supervisor M.SAKTHIVEL MCA., M.Phil., Assistant Professor, Department of Computer Science at PGP Arts and Science College, Namakkal, whose valuable suggestions, innovative ideas, constructive criticism and inspiring guidance enabled me to complete this paper successfully.
VI. CONCLUSION
The Sybil attack detection mechanism presented in this paper has much room for extension. First, in this paper it is assumed that all RSUs are trustworthy. However, if an RSU is compromised, it can help a malicious vehicle generate fake legal trajectories (e.g., by inserting link tags of other RSUs into a forged trajectory). In that case, duplicated node ids cannot detect such trajectories. However, the corrupted RSU cannot deny a link tag generated by it, nor forge link tags generated by
other RSUs, which can be utilized to detect a compromised RSU in the system. Cost-efficient techniques can be developed to quickly detect the failure of an RSU. Second, the work can be extended to design better linkable signer-ambiguous signature schemes such that the computation overhead for signature verification and the communication overhead can be reduced.
VII. FUTURE ENHANCEMENTS
In future work, the scenario where a small fraction of RSUs are compromised will be considered. Last, the future work
can validate the design and study its performance under real-complex environments. Improvements will be made based
on the realistic studies before it comes to be deployed in large-scale systems. The future work is to continue to work on
several directions.

REFERENCES:
[1] Borisov, N., 'Computational Puzzles as Sybil Defenses', Proc. Sixth IEEE Int'l Conf. Peer-to-Peer Computing (P2P 06), pp. 171-176, Oct. 2006.
[2] Capkun, S., Buttyan, L., and Hubaux, J., 'Self-Organized Public Key Management for Mobile Ad Hoc Networks', IEEE Trans. Mobile Computing, vol. 2, no. 1, pp. 52-64, Jan.-Mar. 2003.
[3] Castro, M., Druschel, P., Ganesh, A., Rowstron, A., and Wallach, D.S., 'Secure Routing for Structured Peer-to-Peer Overlay Networks', Proc. Symp. Operating Systems Design and Implementation (OSDI 02), pp. 299-314, Dec. 2002.
[4] Dodis, Kiayias, A., Nicolosi, A., and Shoup, V., 'Anonymous Identification in Ad Hoc Groups', Proc. Int'l Conf. Theory and Applications of Cryptographic Techniques (EUROCRYPT 04), pp. 609-626, 2004.
[5] Douceur, J.R., 'The Sybil Attack', Proc. First Int'l Workshop Peer-to-Peer Systems (IPTPS 02), pp. 251-260, Mar. 2002.
[6] Dutertre, B., Cheung, S., and Levy, J., 'Lightweight Key Management in Wireless Sensor Networks by Leveraging Initial Trust', Technical Report SRI-SDL-04-02, SRI Int'l, Apr. 2002; Newsome, J., Shi, E., Song, D., and Perrig, A., 'The Sybil Attack in Sensor Networks: Analysis & Defenses', Proc. Int'l Symp. Information Processing in Sensor Networks (IPSN 04), pp. 259-268, Apr. 2004.
[7] Eriksson, J., Balakrishnan, H., and Madden, S., 'Cabernet: Vehicular Content Delivery Using WiFi', Proc. MOBICOM 08, pp. 199-210, Sept. 2008.
[8] Fukuhara, T., Warabino, T., Ohseki, T., Saito, K., Sugiyama, K., Nishida, T., and Eguchi, K., 'Broadcast methods for inter-vehicle communications system', Proceedings of IEEE Wireless Communications and Networking Conference, pp. 2252-2257, 2005.
[9] Liu, J.K., Wei, V.K., and Wong, D.S., 'Linkable Spontaneous Anonymous Group Signature for Ad Hoc Groups (Extended Abstract)', Proc. Ninth Australasian Conf. Information Security and Privacy (ACISP 04), pp. 325-335, 2004.


A Framework for Routing Assisted Traffic Monitoring


D. Krishna Kumar, B.Sai baba M.Tech
Krishna.desa@gmail.com

Vishnu Institute of Technology college of Engineering and Technology, Bhimavaram, A.P.


Abstract Monitoring transit traffic at one or more points in a network is of interest to network operators for reasons of traffic
accounting, debugging or troubleshooting, forensics, and traffic engineering. Previous research in the area has focused on deriving a
placement of monitors across the network toward the end of maximizing the monitoring utility of the network operator for a given
traffic routing. However, both traffic characteristics and measurement objectives can dynamically change over time, rendering a
previously optimal placement of monitors suboptimal. It is not feasible to dynamically redeploy/reconfigure measurement
infrastructure to cater to such evolving measurement requirements. We address this problem by strategically routing traffic
subpopulations over fixed monitors. We refer to this approach as MeasuRouting. The main challenge for MeasuRouting is to work
within the constraints of existing intradomain traffic engineering operations that are geared for efficiently utilizing bandwidth
resources, or meeting quality-of-service (QoS) constraints, or both. A fundamental feature of intradomain routing, which
makes MeasuRouting feasible, is that intradomain routing is often specified for aggregate flows. MeasuRouting can therefore differentially route components of an aggregate flow while ensuring that the aggregate placement is compliant with the original traffic
engineering objectives. In this paper, we present a theoretical framework for MeasuRouting. Furthermore, as proofs of concept, we
present synthetic and practical monitoring applications to showcase the utility enhancement achieved with MeasuRouting.

Keywords Anomaly detection, intradomain routing,network management, traffic engineering, traffic measurements .
INTRODUCTION

Overview of the project:


Several past research efforts have focused on the optimal deployment of monitoring infrastructure in operational networks for accurate
and efficient measurement of network traffic. Such deployment involves both monitoring infrastructure placement as well as
configuration decisions. An example of the former includes choosing the interfaces at which to install DAG cards, and the latter
includes tuning the sampling rate and sampling scheme of the DAG cards. The optimal placement and configuration of monitoring
infrastructure for a specific measurement objective typically assumes a priori knowledge about the traffic characteristics. Furthermore,
these are typically performed at longer timescales to allow provisioning of required physical resources. However, traffic characteristics
and measurement objectives may evolve dynamically, potentially rendering a previously determined solution suboptimal. MeasuRouting is a new approach proposed to address this limitation.
A simple scenario involves routers implementing uniform sampling or an approximation of it, with network operators being interested
in monitoring a subset of the traffic. MeasuRouting can be used to make important traffic traverse routes that maximize their overall
sampling rate.
Networks might implement heterogeneous sampling algorithms, each optimized for certain kinds of traffic subpopulations. For
instance, some routers can implement sophisticated algorithms to give accurate flow-size estimates of medium-sized flows that
otherwise would not have been captured by uniform sampling. MeasuRouting can then route traffic subpopulations that might have
medium-sized flows across such routers. A network can have different active and passive measurement infrastructure and algorithms
deployed, and MeasuRouting can direct traffic across paths with greater measurement potential.
MeasuRouting can be used to conserve measurement resources. For instance, all packets belonging to a certain traffic subpopulation
can be conjointly routed to avoid maintaining states across different paths. Similarly, if the state at a node is maintained using
probabilistic data structures (such as sketches), MeasuRouting can enhance the accuracy of such structures by selecting the traffic that
traverses the node. This paper presents a general routing framework for MeasuRouting, assuming the presence of special forwarding
mechanisms. We present three flavors of MeasuRouting, each of which works with a different set of compliancy constraints, and we
discuss two applications as proofs of concept. These MeasuRouting applications illustrate the significant improvement achieved by
this additional degree of freedom in tuning how and where traffic is monitored.

Scope of the project:


A routing protocol may impose a constraint that traffic between a pair of nodes may only traverse paths that are along shortest paths
with respect to certain link weights. The scope of MeasuRouting is to work within the constraints of existing intra-domain traffic engineering (TE)
operations that are geared for efficiently utilizing bandwidth resources, or meeting quality-of-service (QoS) constraints, or both.

Objective:
MeasuRouting forwards network traffic across routes where it can be best monitored. This approach is complementary to the
well-investigated monitor placement problem that takes traffic routing as an input and decides where to place monitors to optimize
measurement objectives; MeasuRouting takes monitor deployment as an input and decides how to route traffic to optimize
measurement objectives. Since routing is dynamic in nature (a routing decision is made for every packet at every router),
MeasuRouting can conceptually adjust to changing traffic patterns and measurement objectives. In this paper, our focus is on the
overall monitoring utility, defined as a weighted sum of the monitoring achieved over all flows.
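To make the utility notion concrete, the following minimal Python sketch (our own illustration with hypothetical path sampling rates and flow weights, not the paper's exact optimization formulation) computes the weighted-sum utility of two routings that place the same aggregate load on each path but swap which micro-flowset uses the better-monitored path:

# Illustrative sketch: overall monitoring utility as the weighted sum of the
# sampling each micro-flowset receives on its assigned path.
path_sampling = {"p1": 0.01, "p2": 0.10}        # hypothetical per-path sampling rates
flows = {                                       # micro-flowset -> (packet count, importance weight)
    "f1": (10_000, 10.0),                       # important flow
    "f2": (10_000, 1.0),                        # bulk flow
}

def utility(assignment):
    """Weighted sum of expected sampled packets over all micro-flowsets."""
    return sum(size * weight * path_sampling[assignment[f]]
               for f, (size, weight) in flows.items())

# Both routings put one equal-sized flow on each path, so the aggregate
# placement (and hence TE compliance) is identical; only the utility differs.
naive       = {"f1": "p1", "f2": "p2"}
measurouted = {"f1": "p2", "f2": "p1"}          # important flow over the better-monitored path
print(utility(naive), utility(measurouted))     # 2000.0 vs 10100.0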

REMAINING CONTENTS

MODULES:

Aggregated flows
TE objectives
Macro-flowset
No Routing Loops MeasuRouting (NRL)
Relaxed Sticky Routes MeasuRouting (RSR)
Deep Packet Inspection Trace Capture

MODULES DESCRIPTION:
We now present a formal framework for MeasuRouting in the context of a centralized architecture. A centralized architecture refers
to the case where the algorithm deciding how distributed nodes will route packets using MeasuRouting has global information of: 1)
the TE policy; 2) the topology and monitoring infrastructure deployment; and 3) the size and importance of traffic subpopulations.
Aggregated flows
TE policy is usually defined for aggregated flows. On the other hand, traffic measurement usually deals with a finer level of
granularity. For instance, we often define a flow based upon the five-tuple for measurement purposes. Common intra-domain
protocols (IGPs) like OSPF and IS-IS use link weights to specify the placement of traffic for each origin-destination (OD) pair
(possibly consisting of millions of flows). The TE policy is oblivious of how constituent flows of an OD pair are routed as long as the
aggregate placement is preserved. It is possible to specify traffic subpopulations that are distinguishable from a measurement
perspective but are indistinguishable from a TE perspective. MeasuRouting can, therefore, route our fine-grained measurement traffic
subpopulations without disrupting the aggregate routing.
TE objectives
The second way in which MeasuRouting is useful stems from the definition of TE objectives. TE objectives may be oblivious to the
exact placement of aggregate traffic and only take cognizance of summary metrics such as the maximum link utilization across the
network. An aggregate routing that is slightly different from the original routing may still yield the same value of the summary metric.
Macro-flowset
A macro-flowset may consist of multiple micro-flowsets; there is a many-to-one relationship between micro-flowsets and
macro-flowsets, with each macro-flowset comprising the set of micro-flowsets that map to it.
No Routing Loops MeasuRouting (NRL)


The flow conservation constraints in LTD do not guarantee the absence of loops. In Fig. 1, it is possible that the optimal solution of
LTD may involve repeatedly sending traffic between a set of routers in a loop so as to sample it more frequently while still obeying the
flow conservation and TE constraints. Such routing loops may not be desirable in real-world routing implementations. We therefore
propose NRL, which ensures that the micro-flowset routing is loop-free. Loops are avoided by restricting the set of links along which a
micro-flowset can be routed.
Relaxed Sticky Routes MeasuRouting (RSR)
NRL ensures that there are no routing loops. However, depending upon the exact forwarding mechanisms and routing protocol,
NRL may still not be feasible.
Deep Packet Inspection Trace Capture
In this section, we elucidate a practical application of MeasuRouting using actual traffic traces from a real network and with a
meaningful definition of flow sampling importance. We consider the problem of increasing the quality of traces captured for
subsequent Deep Packet Inspection (DPI). DPI is a useful process that allows post-mortem analysis of events seen in the network and
helps understand the payload properties of transiting Internet traffic. However, capturing payload is often an expensive process that
requires dedicated hardware (e.g., DPI with TCAMs), or specialized algorithms that are prone to errors (e.g., DPI with Bloom Filters),
or vast storage capacity for captured traces. As a result, operators sparsely deploy DPI agents at strategic locations of the network,
with limited storage resources. In such cases, the payload of only a subset of network traffic is captured by the dedicated hardware. Thus,
improving the quality of the captured traces for subsequent DPI involves allocating the limited monitoring resources such that the
representation of more interesting traffic is increased. We can leverage MeasuRouting to increase the quality of the traces captured by
routing interesting traffic across routes where it has a greater probability of being captured.

Screenshots of the implementation (captions only):
1. Home Screen: Server
2. Client 3
3. Traffic 1
4. Traffic Engineering Rules: data packets are distributed equally.
5. PEngine (Monitor)

CONCLUSION
We have mentioned that the performance of MeasuRouting is sensitive to the number of paths present between pairs of nodes. MeasuRouting
leverages the relative difference in measurement capacity across multiple paths between a pair of nodes. This obviously depends upon the
network topology and whether multiple paths exist at all. Additionally, the number of paths available for micro-flowset routing is a function
of the number of paths used in the original routing. MeasuRouting performance will be better if the original routing uses multiple paths
between a single OD pair. The implementation of multiple-path routing depends on the routing protocols. ISPs using OSPF and IS-IS
generally use Equal Cost Multipath (ECMP) [5], which results in multiple paths. In fact, heuristics optimizing link weights seek to leverage
ECMP to split traffic between an OD pair across multiple paths [10]. Other routing algorithms can exist that result in even more multiplicity
of paths between OD pairs.

REFERENCES:
[1] C. Chaudet, E. Fleury, I. G. Lassous, H. Rivano, and M.-E. Voge, "Optimal positioning of active and passive monitoring devices," in Proc. ACM CoNEXT, Toulouse, France, Oct. 2005, pp. 71-82.
[2] K. Suh, Y. Guo, J. Kurose, and D. Towsley, "Locating network monitors: Complexity, heuristics and coverage," in Proc. IEEE INFOCOM, Miami, FL, Mar. 2005, vol. 1, pp. 351-361.
[3] G. R. Cantieni, G. Iannaccone, C. Barakat, C. Diot, and P. Thiran, "Reformulating the monitor placement problem: Optimal network-wide sampling," in Proc. ACM CoNEXT, Lisboa, Portugal, Dec. 2006, Article no. 5.
[4] S. Raza, G. Huang, C.-N. Chuah, S. Seetharaman, and J. P. Singh, "MeasuRouting: A framework for routing assisted traffic monitoring," in Proc. IEEE INFOCOM, San Diego, CA, Mar. 2010, pp. 1-9.
[5] OSPF, The Internet Society, RFC 2328, 1998 [Online]. Available: http://tools.ietf.org/html/rfc2328
[6] IS-IS, The Internet Society, RFC 1142, 1990 [Online]. Available: http://tools.ietf.org/html/rfc1142
[7] C. Wiseman, J. Turner, M. Becchi, P. Crowley, J. DeHart, M. Haitjema, S. James, F. Kuhns, J. Lu, J. Parwatikar, R. Patney, M. Wilson, K. Wong, and D. Zar, "A remotely accessible network processor-based router for network experimentation," in Proc. ACM/IEEE ANCS, San Jose, CA, Nov. 2008, pp. 20-29.
[8] The OpenFlow Switch Consortium, Stanford University, Stanford, CA [Online]. Available: http://www.openflowswitch.org
[9] R. Morris, E. Kohler, J. Jannotti, and M. F. Kaashoek, "The Click modular router," in Proc. ACM SOSP, Charleston, SC, Dec. 1999, pp. 217-231.
[10] B. Fortz and M. Thorup, "Internet traffic engineering by optimizing OSPF weights," in Proc. IEEE INFOCOM, Tel Aviv, Israel, Mar. 2000, vol. 2, pp. 519-528.

Relational Keyword Search System


Pradeep M. Ghige#1, Prof. Ruhi R. Kabra*2
# Student, Department of Computer Engineering, University of Pune, GHRCOEM, Ahmednagar, Maharashtra, India.
* Asst. Professor, Department of Computer Engineering, University of Pune, GHRCEM, Pune, Maharashtra, India.
1 pradip.ghige@gmail.com
2 ruhi.kabra@raisoni.net

Abstract: Extending keyword search to relational data sets has become an active area of research within the database and
Information Retrieval communities. There is no standardization in the process of information retrieval, so existing systems do not clearly show the actual result,
display keyword search results without ranking, and have high execution time. We propose a system for the performance
evaluation of relational keyword search systems. The proposed system combines schema-based and graph-based approaches into
a Relational Keyword Search System that overcomes the mentioned disadvantages of existing systems, manages the
information, and lets users access the information very efficiently.
The objective of this technique is to manage information; the Database and Information Retrieval communities have historically worked
independently and developed their own systems to allow users to access information. We also explore the relationship between
execution time and the factors that influence it. The proposed search technique will overcome the poor performance observed for datasets exceeding tens of
thousands of vertices.

Keywords: keyword search; information retrieval; ranking; relational databases; data mining; database queries; search engine
INTRODUCTION

With the growing use of the Internet, more and more people search for data online. The advent of the Internet made it
possible to store a large amount of information. Several techniques are used for Information Retrieval (IR), and keyword
search is one of them. Keyword search is possible on both structured and semi-structured
databases, as well as on graph structures that combine relational, HTML and XML data. In relational databases,
keyword search is used to find matching tuples by issuing queries. Existing keyword search techniques and algorithms
for storing and retrieving data suffer from low accuracy, do not always give a correct answer, require a long time for searching, and need a large
amount of storage space.
We propose a system to overcome the disadvantages discussed above and provide efficient keyword search. Data mining or
information retrieval is the process of retrieving data from a dataset and transforming it into a form the user can understand, so the user
easily gets that information. One important advantage of keyword search is that the user does not require knowledge of
database query languages. The user simply enters a keyword and gets results related to that keyword. Keyword search on
relational databases finds answer tuples that are connected through database keys such as primary and foreign keys.
We also present the comparative techniques used for keyword search, namely DISCOVER, BANKS, BLINKS, EASE,
and SPARK. Notably, evaluations of existing information retrieval techniques on real-world databases and
experimental results indicate that existing search techniques are not capable of handling real-world information retrieval and data
mining tasks.
I. RELATED WORKS
Relational keyword search systems differ across applications, and retrieval systems differ accordingly.
The requirements of an application change with its use, and the algorithms and techniques vary with those requirements;
one technique does not fulfill the requirements of every dataset. In this section we discuss the research and
techniques available in existing approaches.
A. Schema based approaches
Schema-based approaches support keyword search over relational databases using the execution of SQL commands [1]. These techniques
model the database as a combination of vertices and edges, including tuples and keys (primary and foreign keys). Several existing techniques
follow the schema-based approach.

DISCOVER
DISCOVER is a technique that multiple Information Retrieval approaches follow. DISCOVER allows its users to issue
keyword queries without any knowledge of the database schema or SQL [2]. DISCOVER returns qualified joining networks of tuples,
i.e., sets of tuples that are associated because they join on their primary and foreign keys and that collectively contain all the keywords of
the query.
DISCOVER proceeds in two steps: 1. the candidate network generator generates all candidate networks of relations; 2. the plan
generator builds plans for the efficient evaluation of the set of candidate networks.
DISCOVER uses a greedy algorithm that produces a near-optimal execution plan with respect to the actual cost.
Keyword search enables information discovery without requiring the user to know the schema of the database.
Overall, it proceeds in three steps: 1. it generates the smallest set of candidate networks; 2. a greedy algorithm creates a near-optimal execution plan to evaluate the set of candidate networks; 3. the execution plan is executed by the DBMS.
DISCOVER uses static optimization; in the future it may apply dynamic optimization. DISCOVER uses a monotonic
score aggregation function for ranking results.
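To illustrate the idea of a joining network of tuples (our own minimal sketch with a hypothetical two-table schema, not DISCOVER's actual candidate-network generator), the following Python/SQLite snippet joins tuples on a foreign key so that they collectively contain all query keywords:

# Minimal sketch of evaluating one candidate network in the DISCOVER style,
# on a hypothetical two-table schema (author, paper) joined by a foreign key.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE author(aid INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE paper(pid INTEGER PRIMARY KEY, title TEXT,
                       aid INTEGER REFERENCES author(aid));
    INSERT INTO author VALUES (1, 'Hristidis'), (2, 'Kimelfeld');
    INSERT INTO paper  VALUES (10, 'Keyword search in databases', 1),
                              (11, 'Proximity search', 2);
""")

# Candidate network author <- paper: the joined tuples must collectively
# contain both keywords, one in the author name and one in the paper title.
keywords = ("Hristidis", "keyword")
rows = con.execute("""
    SELECT a.name, p.title
    FROM author a JOIN paper p ON p.aid = a.aid
    WHERE a.name LIKE '%' || ? || '%'
      AND p.title LIKE '%' || ? || '%'
""", keywords).fetchall()
print(rows)   # [('Hristidis', 'Keyword search in databases')]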
SPARK
With the increase of text data stored in relational databases, there is a growing demand for RDBMSs to support
keyword query search on text data. Existing keyword search methods cannot fulfill the requirements of text data search. This
technique focuses on the effectiveness and efficiency of keyword query search [16]. The authors propose a new ranking formula using existing
information retrieval techniques. The major importance of this technique is that it works on large-scale real databases (e.g., a commercial
Customer Relationship Management application) using two popular RDBMSs, evaluating both effectiveness and efficiency.
It uses a top-k join approach that includes two efficient query processing algorithms for the ranking function: 1. dealing with a
non-monotonic scoring function; 2. the Skyline Sweeping Algorithm. The proposed system is a new ranking method that adapts the state-of-the-
art IR ranking functions; two algorithms were also proposed that aggressively minimize database probes. The results confirm that the
ranking method can achieve high precision with high efficiency.

B. Graph Based Approaches


Graph-based approaches assume that the database is modeled as a weighted graph where the weights of the edges indicate the
importance of relationships. Finding a minimal connecting tree in such a weighted graph is related to the Steiner tree problem [5]. Graph-based search techniques are more
general than schema-based techniques and cover XML, relational databases and the Internet [1].
BANKS
BANKS is a system that enables keyword-based search on relational databases together with data and schema browsing. BANKS
enables users to extract information in a simple manner without any knowledge of the schema [7]. A user can get information by typing a few
keywords, following hyperlinks, and interacting with controls on the displayed results. The BANKS algorithm is an efficient heuristic
algorithm for finding and ranking query results.
BANKS focuses on browsing and keyword searching. Keyword searching in BANKS is done using proximity-based ranking on
foreign-key links. It models the database as a graph with tuples as nodes and cross references as edges. BANKS reduces the effort
involved in publishing relational data on the web and making it searchable.

BLINKS
In query processing over graph-structured data, a top-k keyword search query on a graph finds the top k answers according to
some ranking criteria. Prior to BLINKS, existing systems had drawbacks such as poor worst-case performance, not
taking full advantage of indexes, and high memory requirements. To address these problems, the BLINKS (bi-level indexing and query
processing) scheme for top-k keyword search on graphs was introduced [4]. To reduce index space, BLINKS partitions a
data graph into blocks. The bi-level index stores summary information at the block level. The main contributions of BLINKS are a better search
strategy, combining indexing with search, and partitioning-based indexing. BLINKS focuses on
efficiently implementing ranked keyword searches on graph-structured data; it is difficult to directly build indexes for general schema-less graphs, and BLINKS was introduced to address this problem.
Results show that BLINKS improves query performance by more than an order of magnitude. Future work on BLINKS
includes two aspects of index implementation:
1) maintaining the indexes when the graph is updated;
2) by monitoring performance at run time, dynamically changing graph partitions and indexes in response to changing data and
workloads.

II. COMPARATIVE STUDY


This section includes a study of some algorithms that search for information in a database but do not fulfill all the
requirements.

Keyword proximity search in complex data graph


In keyword search over a data graph, an answer is a non-redundant sub-tree that includes the given keywords [3]. Algorithms for
enumerating answers are presented in an architecture with two parts: 1. an engine that generates a set of candidate answers; 2. a
ranker that evaluates their scores. To be effective, the engine must have three fundamental properties: it should not miss
relevant answers, it must generate the answers in an order that is highly correlated with the desired ranking, and it has to be
efficient.
Keyword search in databases is performed over a graph in which nodes are associated with keywords and edges describe
semantic relationships. If the database is an XML document, it can likewise be represented as a graph. The goal is to discover
occurrences of the keywords as well as the semantic relationships between them. The conclusion of proximity search is that the
architecture of a generator and a ranker is essential for overcoming the obstacles that arise when applying keyword search
to complex data graphs. The ranker presented in this work is aimed at eliminating repeated information by incorporating a
global measure.
B. Steiner-tree based search
A relational database can be modeled as a database graph G = (V, E). In this case there is a one-to-one mapping
between a tuple in the database and a node in V. G can be considered a directed graph with two kinds of edges: a forward edge
(u, v) ∈ E if there is a foreign key from u to v, and a back edge (v, u) if (u, v) is a forward edge in E. An edge (u, v) indicates a
close relationship between tuples u and v, and the introduction of the two edge types allows differentiating the importance of u to
v and vice versa. When such separation is not necessary for an application, G becomes an undirected graph.
Most existing methods of keyword search over relational databases find a Steiner tree composed of relevant
tuples as the answer [5]. They identify the Steiner trees by discovering the rich structural relationships between tuples,
and neglect the fact that such structural relationships can be precomputed and indexed. Existing methods identify a single
tuple unit to answer keyword queries. To overcome this problem, this technique studies how to integrate multiple related
tuple units to effectively answer keyword queries. The method has been implemented in real database systems, and the
experimental results show that this approach achieves high search efficiency and accuracy.
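A tiny sketch of this graph construction (our own illustration, with hypothetical tuple identifiers), where each foreign-key reference contributes a forward edge and a back edge:

# Building the database graph G = (V, E) described above: tuples become nodes,
# a foreign-key reference from tuple u to tuple v adds a forward edge (u, v)
# and a back edge (v, u).
foreign_key_refs = [          # (referencing tuple, referenced tuple) - hypothetical
    ("order:1", "customer:7"),
    ("order:2", "customer:7"),
]

graph = {}                    # node -> set of neighbours (directed adjacency)
for u, v in foreign_key_refs:
    graph.setdefault(u, set()).add(v)   # forward edge (u, v)
    graph.setdefault(v, set()).add(u)   # back edge (v, u)

print(graph)   # e.g. {'order:1': {'customer:7'}, 'customer:7': {'order:1', 'order:2'}, ...}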
C. Efficient IR-Style Keyword Search over Relational Databases
Keyword search is the dominant information discovery method for documents. An increasing amount of data is stored
in databases, where plain text coexists with structured data. Until now, information discovery in databases required knowledge of the
schema, knowledge of a query language, and knowledge of the role of the keywords. The goal of this research is to enable IR-style
keyword search over a DBMS without the above knowledge [11].
The advantages of IR-style keyword search are that IR keyword search is well developed for document search and that modern
DBMSs offer IR-style keyword search over individual text attributes. In a query result, each edge corresponds to a primary-foreign key relationship. Several algorithms are used for IR-style keyword search,

including: OR-semantics using the DBMS estimator; AND-semantics probabilistically adjusting the DBMS estimator; the sparse algorithm when at most a few
query results are expected; and the global pipelined algorithm when many query results are expected.
The conclusions of this method are that it extends IR-style ranking to databases, exploits the text-search capabilities of
modern DBMSs to generate results of higher quality, supports both AND and OR semantics, and finally achieves a speedup
over prior work using a pipelined top-k query processing algorithm.
D. Bidirectional Expansion for Keyword Search on Graph Databases
Relational, XML and HTML data can be represented as a graph with entities as nodes and relationships as edges.
Text is associated with nodes and possibly edges. A problem in this setting is how to efficiently extract from the data graph a
small number of the best answer trees. Backward expanding search can perform poorly if some keywords match many nodes, or some node has a
very large degree.
This technique focuses on a new search algorithm, bidirectional search, which improves on backward
expanding search by allowing forward search from potential roots toward leaves [12]. A bidirectional search
algorithm is introduced to handle partial specification of schema and structure using tree patterns with approximate matching. Future work
includes improved look-ahead techniques for bidirectional search, which can reduce the number of nodes touched.
E. QUNITS: queried units for database search
Keyword search against structured databases has become popular because many users find structured queries too
hard to express and prefer a Google-like query box into which search terms can be entered. To overcome this problem, this
research focuses on creating a clear separation between ranking and database querying. The first task is to represent the
database conceptually as a collection of independent queried units (qunits), each of which represents the desired result for some
query against the database.
In qunits-based search, one high-ranking segmentation is used, which joins the inserted query words [13].
The qunits-based approach is a cleaner approach to modeling database search. In the current model of keyword search in
databases, several heuristics are applied to construct a result. The benefit of maintaining a clear separation between ranking and
database content is that structured information can be considered as one source of information amongst many others. This
makes the system easier to extend and enhance with additional IR methods for ranking, such as relevance feedback; it also
allows us to concentrate on making the database more efficient using indices and query optimization.
The conclusion is that IR techniques are not designed to deal with structured, inter-linked data, and database
techniques are not designed to produce results for queries that are under-specified. This research has bridged this gap
through the concept of qunits. The authors presented an algorithm to evaluate keyword queries against such a database of
qunits, based on typing the query. Future work is expected to substantially improve upon the qunit finding and
utility assignment algorithms.

F. EASE: An Effective 3-in-1 Keyword Search Method for Unstructured, Semi-structured and Structured Data
Existing keyword search engines are restricted to a given data model and cannot easily adapt to unstructured,
semi-structured and structured data. This technique proposes an efficient keyword search method called EASE for
indexing and querying large collections of data [18]. To achieve efficiency in processing keyword queries, all
data are first modeled as graphs.
Existing search engines cannot integrate information from multiple interrelated pages to answer
keyword queries meaningfully. Efficient keyword search on structured and semi-structured data is a
challenging problem because traditional approaches use an inverted index to process keyword queries, which is
efficient for unstructured data but inefficient for semi-structured and structured data. For indexing and querying over large
collections of unstructured, structured and semi-structured data, and for ranking the results, a new technique called
EASE is proposed, integrating database and information retrieval techniques. EASE combines efficient query evaluation and
adaptive scoring for ranking results. EASE provides an efficient algorithmic basis for top-k-style processing of large
amounts of data for the discovery of rich structural relationships. EASE integrates an effective ranking mechanism to
improve search effectiveness.
The contributions of this technique are to model structured, unstructured and semi-structured data as graphs and to
propose an efficient keyword search method. It proposes a novel ranking mechanism for effective keyword search, and it also
examines the issues of indexing and ranking and devises a simple and efficient indexing mechanism to index the structural
relationships in the transformed data.
The conclusion is that EASE is an efficient and adaptive keyword search method for answering keyword queries over
structured, unstructured and semi-structured data. The results show that EASE achieves both high search efficiency and
quality for keyword search.
III. Proposed System:
We propose a system for the empirical performance evaluation of relational keyword search systems. The
conclusion of the literature survey is that many existing search techniques do not provide acceptable performance for realistic
retrieval tasks. Existing search techniques require a large amount of memory for the dataset and its processing, so memory consumption is
high. Another important issue of existing systems is that they require more time for query or
keyword execution. In summary, the performance of existing systems is unacceptable. Their important
disadvantages include: 1) keyword search without ranking; 2) high execution time.
In the proposed technique we try to avoid all the disadvantages of existing systems. We combine a
number of algorithms and techniques from data structures and introduce new techniques that can satisfy the
expectations of keyword query search.

Algorithms:
1. Mining Algorithm
FPGROWTH Algorithm
The FPGrowth method indexes the database for fast support computation via the use of an augmented prefix tree called the
frequent pattern tree (FP-tree). Each node in the tree is labeled with a single item, and each child node represents a different item.
Each node also stores the support information for the item set comprising the items on the path from the root to that node. The FP-tree
is constructed as follows. Initially the tree contains as root the null item. Next, for each tuple ⟨t, X⟩ ∈ D, where X = i(t), we insert the
item set X into the FP-tree, incrementing the count of all nodes along the path that represents X. If X shares a prefix with some
previously inserted transaction, then X will follow the same path until the common prefix. For the remaining items in X, new nodes are
created under the common prefix, with counts initialized to 1. The FP-tree is complete when all transactions have been inserted. The
FP-tree can be considered as a prefix compressed representation of D. Because we want the tree to be as compact as possible, we want
the most frequent items to be at the top of the tree. FPGrowth therefore reorders the items in decreasing order of support, that is, from
the initial database, it first computes the support of all single items i ∈ I. Next, it discards the infrequent items, and sorts the frequent
items by decreasing support. Finally, each tuple ⟨t, X⟩ ∈ D is inserted into the FP-tree after reordering X by decreasing item support.
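The following short Python sketch (our own illustration of the construction just described, with a hypothetical toy transaction set) builds an FP-tree by inserting support-ordered transactions along shared prefixes:

# Minimal FP-tree construction sketch: items in each transaction are reordered
# by decreasing global support, then inserted along a shared prefix path whose
# node counts are incremented.
from collections import Counter

class Node:
    def __init__(self, item):
        self.item, self.count, self.children = item, 0, {}

def build_fp_tree(transactions, minsup):
    support = Counter(i for t in transactions for i in t)
    root = Node(None)                                  # root holds the null item
    for t in transactions:
        # keep frequent items only, most frequent first
        items = sorted((i for i in t if support[i] >= minsup),
                       key=lambda i: (-support[i], i))
        node = root
        for i in items:
            node = node.children.setdefault(i, Node(i))
            node.count += 1
    return root

tree = build_fp_tree([{"a", "b"}, {"a", "b", "c"}, {"a", "d"}], minsup=2)
print({i: c.count for i, c in tree.children.items()})   # {'a': 3}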

Tidset Intersection approach: Eclat Algorithm


The support counting step can be improved significantly if we can index the database in such a way that it allows fast
frequency computations. Notice that in the level-wise approach, to count the support, we have to generate subsets of each transaction
and check whether they exist in the prefix tree. This can be expensive because we may end up generating many subsets that do not
exist in the prefix tree.
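As a concrete illustration of the tidset idea behind Eclat (our own sketch, not the original implementation): each item is mapped to the set of transaction ids containing it (its tidset), and the support of an itemset is simply the size of the intersection of its items' tidsets.

# Tidset-intersection support counting (Eclat-style), illustrative sketch.
transactions = {1: {"a", "b"}, 2: {"a", "b", "c"}, 3: {"a", "d"}}

# Vertical representation: item -> set of transaction ids containing it.
tidsets = {}
for tid, items in transactions.items():
    for i in items:
        tidsets.setdefault(i, set()).add(tid)

def support(itemset):
    """Support of an itemset = size of the intersection of its items' tidsets."""
    tids = set.intersection(*(tidsets[i] for i in itemset))
    return len(tids)

print(support({"a", "b"}))   # 2 (transactions 1 and 2)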

2. Clustering Algorithm
In this subsection, we describe the details of our clustering algorithm. The input parameters to our algorithm are
the input data set S containing n points in d-dimensional space and the desired number of clusters k. As we mentioned
earlier, starting with the individual points as individual clusters, at each step the closest pair of clusters is merged to form
a new cluster. The process is repeated until there are only k remaining clusters.
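A minimal sketch of this agglomerative procedure (our own illustration; single-linkage distance between clusters is an assumption, since the text does not specify the linkage):

# Agglomerative clustering sketch: start with singleton clusters and repeatedly
# merge the closest pair until k clusters remain (single-linkage assumed).
import math

def single_linkage(c1, c2):
    return min(math.dist(p, q) for p in c1 for q in c2)

def agglomerate(points, k):
    clusters = [[p] for p in points]               # each point starts as its own cluster
    while len(clusters) > k:
        # find the closest pair of clusters
        i, j = min(((i, j) for i in range(len(clusters))
                           for j in range(i + 1, len(clusters))),
                   key=lambda ij: single_linkage(clusters[ij[0]], clusters[ij[1]]))
        clusters[i].extend(clusters.pop(j))         # merge cluster j into cluster i
    return clusters

print(agglomerate([(0, 0), (0, 1), (5, 5), (6, 5)], k=2))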
Efficient search Engine
As we know, the Google search engine favors the most common usage of a word: when a user searches for
a keyword, Google returns only its most frequently used meaning.
For example, if a user searches for "CAT", Google returns CAT-Exam results, even though CAT is also an animal. This is the main
drawback.
In this technique we overcome this issue: we first define the category of the search word, after which the user selects the
appropriate meaning of the word being searched. After that, the subsequent operations can be performed. The database is added statically
in our technique.
Advantages:
1. Keyword search with ranking.
2. Execution time consumption is less.
3. Less memory required for storing data and processing.

ACKNOWLEDGMENT
We would like to sincerely thank Mrs. Ruhi Kabra, our mentor (Asst. Professor, GHRCEM, Chas, Ahmednagar), for her support and
encouragement.

CONCLUSION
Overall, we have studied the existing techniques available. Each system has some advantages and some
issues. We compared all the techniques and checked their performance. We finally conclude that no existing system can
fulfill all the requirements of keyword query search: they require more space and time, and some techniques are limited
to particular datasets.
The proposed technique satisfies a number of requirements of keyword query search using different algorithms.
The performance of keyword search is also better compared to others, and it shows the actual result rather than a tentative one.
It also shows the ranking of keywords and does not require knowledge of database queries. Compared to existing algorithms
it is a fast process.
As future work, we can look for techniques that are useful for all datasets, so that a single technique
can be used for MONDIAL, IMDb, etc. Further research is necessary to investigate the experimental design decisions that
have a significant impact on the evaluation of relational keyword search systems.

REFERENCES:

[1] Joel Coffman, Alfred C. Weaver, "An Empirical Performance Evaluation of Relational Keyword Search Systems," IEEE Transactions on Knowledge and Data Engineering, 2014.
[2] Vagelis Hristidis, Yannis Papakonstantinou, "DISCOVER: Keyword Search in Relational Databases," 28th VLDB Conference, Hong Kong, China, 2002.
[3] Konstantin Golenberg, Benny Kimelfeld, "Keyword Proximity Search in Complex Data Graphs," SIGMOD'08, Vancouver, BC, Canada, 2008.
[4] Hao He, Haixun Wang, "BLINKS: Ranked Keyword Searches on Graphs," SIGMOD'07, Beijing, China, 2007.
[5] M. L. Shore, L. R. Foulds, "An Algorithm for the Steiner Problem in Graphs," National Chung-Cheng University, China, 2004.
[6] Sharmili C., Rexie J. A. M., "Efficient Keyword Search Methods in Relational Databases," IJERA, 2013.
[7] Gaurav Bhalotia, Arvind Hulgeri, "Keyword Searching and Browsing in Databases using BANKS," University of California, Berkeley.
[8] E. W. Dijkstra, "Two Problems in Connexion with Graphs," 1959.
[9] Bhavan Dalvi, Megha Kshirsagar, "Keyword Search on External Memory Data Graphs," VLDB'08, Auckland, New Zealand, 2008.
[10] Amit Singhal, "Modern Information Retrieval: A Brief Overview," IEEE Computer Society Technical Committee on Data Engineering, 2001.
[11] Vagelis Hristidis, Luis Gravano, "Efficient IR-Style Keyword Search over Relational Databases."
[12] Varun Kocholia, Shashank Pandit, "Bidirectional Expansion for Keyword Search on Graph Databases," University of California, USA, 2005.
[13] Arnab Nandi, H. V. Jagdish, "Qunits: Queried Units for Database Search," 4th Biennial Conference on Innovative Data Systems Research (CIDR), Asilomar, California, USA, 2009.
[14] Li Qin, Jeffrey Xu Yu, "Keyword Search in Databases: The Power of RDBMS," SIGMOD'09, Rhode Island, USA, 2009.
[15] Fang Liu, Clement Yu, "Effective Keyword Search in Relational Databases," SIGMOD, Chicago, USA, 2006.
[16] Yi Luo, Xuemin Lin, "SPARK: Top-k Keyword Query in Relational Databases," SIGMOD'07, Beijing, China, 2007.
[17] Yi Chen, Wei Wang, "Keyword Search on Structured and Semi-Structured Data," SIGMOD'09, Rhode Island, USA, 2009.
[18] Guoliang Li, Beng Chin Ooi, "EASE: An Effective 3-in-1 Keyword Search Method for Unstructured, Structured and Semi-Structured Data," SIGMOD'08, Vancouver, BC, Canada, 2008.
[19] William Webber, "Evaluating the Effectiveness of Keyword Search," IEEE Computer Society Technical Committee on Data Engineering, 2010.


Supplier Selection by Using Multi Criteria Decision Making Methods


P. Murali1, V. Diwakar Reddy2, A. Naga Phaneendra3
Department of Mechanical Engineering, Sri Venkateswara University College of Engineering, Tirupati, India
Email: muralikrishna.781@gmail.com; phani.34suriya@gmail.com

Abstract: In the present study an efficient multi-criteria decision making (MCDM) approach is proposed for quality
evaluation and performance appraisal in supplier selection. Supplier selection is a multi-criteria decision making problem influenced
by multiple performance criteria. These criteria/attributes may be both qualitative and quantitative. Qualitative criteria
estimates are generally based on previous experience and expert opinion mapped onto a suitable conversion scale. This conversion is based on
human judgment; therefore the predicted result may not always be accurate because the method does not use real data. The criteria are
analyzed by TOPSIS (Technique for Order Preference by Similarity to Ideal Solution), PROMETHEE (Preference Ranking Organization
Method for Enrichment Evaluations), etc. In the solution of MCDM problems, a common trend is to convert the quantitative
criteria values into an equivalent single performance index called the multi-attribute performance index. MCDM methods help to choose
the best alternative where many criteria exist; the best one can be obtained by analyzing the different scopes for
the criteria and the weights for the criteria.

Keywords: Supplier selection, MCDM, qualitative, quantitative, weights for the criteria, multi-attribute performance index,
TOPSIS, PROMETHEE.
INTRODUCTION

In any industry, decisions are made from various criteria, so a decision can be made by using weights obtained from
expert groups. MCDM pertains to structuring and solving decision and planning problems involving multiple criteria [1]. The main
objective of this survey is to support decision makers where a huge number of choices exists for the problem to be solved. This survey on
multi-criteria decision making underlines the need for MCDM; many works have been proposed for determining the best optimal solution for a
problem using different methods.

PROPOSED METHODOLOGIES
The proposed methodology for the supplier selection problem, based on the TOPSIS method, consists of three steps. They are as follows:
(1) Identify the criteria to be used in the model;
(2) weigh the criteria by using expert views;
(3) Evaluation of alternatives with TOPSIS and determination of the final rank.
In the first step, with the help of the expertise of experts and the relevant specialized literature, we try to recognize the variables
and effective criteria in supplier selection, and the criteria which will be used in the evaluation are extracted. Thereafter, the list of
qualified suppliers is determined. In the last stage of the first step, the decision criteria are approved by the decision-making team.
After the approval of the decision criteria, we assign weights to them by organizing expert sessions in the second step. In the last
stage of this step, the calculated weights of the criteria are approved by the decision-making team. Finally, ranks are determined using the
TOPSIS method in the third step.

TOPSIS METHOD
The TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method was introduced for the first time by Yoon and Hwang and
has been appraised by surveyors and different practitioners. With the large number of potential vendors available in the current marketing
environment, a full ANP (Analytic Network Process) decision process becomes impractical in some cases [11]. To avoid an
unreasonably large number of pair-wise comparisons, we choose TOPSIS as the ranking technique because of its conceptual ease of use.
A general TOPSIS process with six activities is listed below.
STEP 1: Establish a decision matrix for the ranking. The structure of the matrix can be expressed as follows
            F1     F2     ...    Fn
      B1 [  P11    P12    ...    P1n ]
      B2 [  P21    P22    ...    P2n ]
  D =    [  ...    ...    ...    ... ]                --------------------------- (1)
      Bm [  Pm1    Pm2    ...    Pmn ]

where Bi denotes alternative i, i = 1, ..., m; Fj represents the jth attribute or criterion, j = 1, ..., n, related to the ith alternative; and Pij is a crisp
value indicating the performance rating of each alternative Bi with respect to each criterion Fj.

STEP 2: Calculate the normalized decision matrix Q = [Sij]. The normalized value Sij is calculated as

    Sij = Pij / sqrt( Σi Pij² ),   i = 1, ..., m; j = 1, ..., n                ------------------ (2)

STEP 3: Calculate the weighted normalized decision matrix by multiplying the normalized decision matrix by its associated weights.
The weighted normalized value vij is calculated as:
    Vij = Wj * Sij,   j = 1, ..., n; i = 1, ..., m                ----------- (3)

where Wj represents the weight of the jth attribute or criterion.
STEP 4: Determine the PIS (Positive Ideal Solution) and NIS (Negative Ideal Solution) respectively:
    V+ = (v1+, ..., vn+) = ( (max_i vij | j ∈ J), (min_i vij | j ∈ J') )
    V- = (v1-, ..., vn-) = ( (min_i vij | j ∈ J), (max_i vij | j ∈ J') )

where J is associated with the positive (benefit) criteria and J' is associated with the negative (cost) criteria.
STEP 5: Calculate the separation measures, using the n-dimensional Euclidean distance. The separation measure Ei+ of each
alternative from the PIS is given as:

    Ei+ = sqrt( Σj (vij - vj+)² ),   i = 1, ..., m                -------- (4)

Similarly, the separation measure Ei- of each alternative from the NIS is as follows:

    Ei- = sqrt( Σj (vij - vj-)² ),   i = 1, ..., m                ------- (5)

STEP 6: Calculate the relative closeness to the ideal solution and rank the alternatives in descending order. The relative closeness of
the alternative Ai with respect to the PIS V+ can be expressed as:

    Hi* = Ei- / (Ei+ + Ei-)                --------- (6)

where the index value of Hi* lies between 0 and 1. The larger the index value, the better the performance of the alternative.
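To make these six steps concrete, the short NumPy sketch below (our own illustration; the benefit/cost designation of criterion D10 is inferred from the ideal solutions reported later) applies Eqs. (2)-(6) to the Table 1 weights and Table 2 data. It reproduces the ranking of Table 7; the coefficients differ slightly from the published ones because the paper rounds intermediate values.

# Illustrative TOPSIS sketch using the Table 1 weights and Table 2 data.
# D10 is treated as a cost criterion (to match the reported V+/V- vectors);
# all other criteria are treated as benefit criteria.
import numpy as np

weights = np.array([0.20, 0.08, 0.07, 0.15, 0.10, 0.09, 0.07, 0.05, 0.08, 0.11])
cost = np.array([False]*9 + [True])             # benefit/cost flag per criterion
P = np.array([                                  # rows: suppliers 1-4, cols: D1-D10
    [95, 90, 135, 2800, 5, 12, 46, 650, 0.02, 5],
    [94, 96, 150, 3500, 3, 15, 52, 470, 0.03, 4],
    [96, 94, 145, 3000, 6, 14, 38, 550, 0.01, 6],
    [90, 91, 140, 3100, 3, 10, 40, 700, 0.02, 7],
], dtype=float)

S = P / np.sqrt((P**2).sum(axis=0))                        # Eq. (2): vector normalization
V = weights * S                                            # Eq. (3): weighted normalized matrix
v_plus  = np.where(cost, V.min(axis=0), V.max(axis=0))     # PIS
v_minus = np.where(cost, V.max(axis=0), V.min(axis=0))     # NIS
E_plus  = np.sqrt(((V - v_plus)**2).sum(axis=1))           # Eq. (4)
E_minus = np.sqrt(((V - v_minus)**2).sum(axis=1))          # Eq. (5)
H = E_minus / (E_plus + E_minus)                           # Eq. (6)
print(np.round(H, 3))   # ~[0.54 0.61 0.46 0.29]: Supplier 2 ranks first, as in Table 7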

CASE STUDY
To apply this methodology, we have solved a simulated numerical problem. Assume that the management of the Lanco industry at
Srikalahasthi wants to choose its best suppliers. Based on the proposed methodology, three steps are applied for the assessment and
selection of suppliers. In this part we deal with the application of these steps.
After forming the decision-making team, step 1 starts by developing an updated pool of supplier selection criteria for the industry, using
the accepted criteria given in the literature as well as the criteria recommended by the experts. In this numerical example, the
criteria are selected as shown in Table 1, although the criteria considered in supplier evaluation are industry specific.
Selection of criteria is totally industry specific and depends on each case, and the criteria may be changed and replaced. Opinions of decision
makers on the criteria were aggregated, and the weights of all criteria were calculated by organizing the expert meeting; the results are given in Table 1.
Assuming 4 suppliers are included in the evaluation process, the information for each supplier is given in Table 2. After
normalizing the information and applying the criteria weights, the negative and positive separation measures, based on normalized
Euclidean distance, are calculated for each supplier, and then the final score of each supplier is calculated.

Table 1. Selection criteria for supplier evaluation and weights

  Code   Criteria                               Weight (%)
  D1     Material quality                       0.20
  D2     On-time delivery                       0.08
  D3     Ordering cost                          0.07
  D4     Product price                          0.15
  D5     Financial stability                    0.10
  D6     Delivery lead time                     0.09
  D7     Technical capability                   0.07
  D8     Transportation cost                    0.05
  D9     Rejection of defective product         0.08
  D10    Production facilities and capacity     0.11
Step 1: Developing the decision matrix.

Table 2. Suppliers' information

  Criteria        Supplier 1   Supplier 2   Supplier 3   Supplier 4
  D1 (%)              95           94           96           90
  D2 (%)              90           96           94           91
  D3                 135          150          145          140
  D4                2800         3500         3000         3100
  D5 (Grade)           5            3            6            3
  D6 (Day)            12           15           14           10
  D7 (%)              46           52           38           40
  D8                 650          470          550          700
  D9 (%)            0.02         0.03         0.01         0.02
  D10 (Grade)          5            4            6            7

Step 2: Calculating the normalized decision matrix

    Sij = Pij / sqrt( Σi Pij² )
Table 3. Normalized decision matrix of the suppliers

  Criteria   Supplier 1   Supplier 2   Supplier 3   Supplier 4
  D1            0.51         0.50         0.51         0.48
  D2            0.49         0.52         0.51         0.49
  D3            0.47         0.53         0.51         0.49
  D4            0.45         0.56         0.48         0.50
  D5            0.56         0.34         0.68         0.34
  D6            0.47         0.58         0.54         0.39
  D7            0.52         0.59         0.43         0.45
  D8            0.54         0.39         0.46         0.58
  D9            0.47         0.71         0.24         0.47
  D10           0.45         0.36         0.53         0.62
Step 3: Calculating the weighted normalized decision matrix

    Vij = Wj * Sij

Table 4. Weighted normalized decision matrix of the suppliers

  Criteria   Supplier 1   Supplier 2   Supplier 3   Supplier 4
  D1           0.1020       0.1000       0.1020       0.0960
  D2           0.0392       0.0416       0.0408       0.0392
  D3           0.0329       0.0371       0.0357       0.0343
  D4           0.0675       0.0840       0.0720       0.0750
  D5           0.0560       0.0340       0.0680       0.0340
  D6           0.0423       0.0522       0.0486       0.0351
  D7           0.0364       0.0413       0.0301       0.0315
  D8           0.0270       0.0195       0.0230       0.0290
  D9           0.0376       0.0568       0.0192       0.0376
  D10          0.0495       0.0396       0.0583       0.0682
Step 4: Determining the PIS (Positive Ideal Solution) and NIS (Negative Ideal Solution).

    V+ = {0.1020, 0.0416, 0.0371, 0.0840, 0.0680, 0.0522, 0.0413, 0.0290, 0.0568, 0.0396}
    V- = {0.0960, 0.0392, 0.0329, 0.0675, 0.0340, 0.0351, 0.0301, 0.0195, 0.0192, 0.0682}
Step 5: Calculating the separation measures

    Ei+ = sqrt( Σj (vij - vj+)² ),    Ei- = sqrt( Σj (vij - vj-)² )

Table 5. Positive separation measures of the suppliers

  Supplier    Ei+
  1           0.0320
  2           0.0353
  3           0.0462
  4           0.0534

Table 6. Negative separation measures of the suppliers

  Supplier    Ei-
  1           0.0367
  2           0.0544
  3           0.0388
  4           0.0219

Step 6: Calculating the relative closeness coefficients, Hi* = Ei- / (Ei+ + Ei-)

RESULTS
Table 7. Relative closeness coefficients of the suppliers

  Supplier      Closeness Coefficient   Rank
  Supplier 1          0.534               2
  Supplier 2          0.606               1
  Supplier 3          0.456               3
  Supplier 4          0.290               4

Therefore, the relative closeness coefficients are determined, and the four suppliers are ranked. The obtained results are given in
Table 7. Thus, supplier 2 has the best score amongst the 4 suppliers.

PROMETHEE METHOD
STEP 1: Normalize the decision matrix using the following equation:
    Rij = [Xij - min(Xij)] / [max(Xij) - min(Xij)],   (i = 1, 2, ...; j = 1, 2, ..., m)        -------- (7)

where Xij is the performance measure of the ith alternative with respect to the jth criterion.

STEP 2: Calculate the evaluative difference of the ith alternative with respect to the other alternatives. This step involves the pairwise calculation of
differences in criteria values between different alternatives.

STEP 3: Calculate the preference function Pj(i, i'):

    Pj(i, i') = 0                 if Rij <= Ri'j
    Pj(i, i') = Rij - Ri'j        if Rij > Ri'j

STEP 4: Calculate the aggregate preference function, taking into account the criteria weights:

    π(i, i') = [ Σj Wj * Pj(i, i') ] / Σj Wj        --------------- (8)

where Wj is the relative importance (weight) of the jth criterion.

STEP 5: Determine the leaving and entering outranking flows as follows:

Leaving (positive) flow for the ith alternative:
    φ+(i) = 1/(n-1) Σi' π(i, i'),   i' ≠ i        ---------------------------- (9)

Entering (negative) flow for the ith alternative:
    φ-(i) = 1/(n-1) Σi' π(i', i),   i' ≠ i        ----------------------- (10)

where n is the number of alternatives.
Here each alternative faces (n-1) other alternatives. The leaving flow expresses how much an alternative dominates the other
alternatives, while the entering flow denotes how much an alternative is dominated by the other alternatives. Based on these
outranking flows, the PROMETHEE-I method provides a partial preorder of the alternatives, whereas the PROMETHEE-II method gives
the complete preorder by using the net flow, though it loses some information about the preference relations.
Calculate the net outranking flow for each alternative:
    φ(i) = φ+(i) - φ-(i)        -------------------------- (11)
Determine the ranking of all the considered alternatives depending on the values of φ(i). The higher the value of φ(i), the better the
alternative. Thus the best alternative is the one having the highest φ(i) value.
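A compact Python sketch of these steps follows (our own illustration; it applies Eqs. (7)-(11) to the Table 8 data with equal criteria weights, an assumption made because the paper does not state the weights used in this case study, so the flow values differ from the published tables although the resulting ranking, with Supplier 2 first, is the same):

# Illustrative PROMETHEE-II sketch applied to the Table 8 data (equal weights assumed).
import numpy as np

X = np.array([                   # rows: suppliers 1-4; cols: cost, insertion loss, volume, weight
    [0.590, 0.745, 0.500, 0.500],
    [0.745, 0.665, 0.745, 0.745],
    [0.590, 0.745, 0.590, 0.665],
    [0.590, 0.665, 0.590, 0.590],
])
w = np.ones(X.shape[1]) / X.shape[1]                         # assumed equal criteria weights

R = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))    # Eq. (7)
n = len(R)
pi = np.zeros((n, n))
for i in range(n):
    for k in range(n):
        if i != k:
            P = np.maximum(R[i] - R[k], 0.0)                 # Step 3: preference function
            pi[i, k] = (w * P).sum() / w.sum()               # Eq. (8)

phi_plus  = pi.sum(axis=1) / (n - 1)                         # Eq. (9): leaving flow
phi_minus = pi.sum(axis=0) / (n - 1)                         # Eq. (10): entering flow
phi = phi_plus - phi_minus                                   # Eq. (11): net flow
print(np.round(phi, 3))   # Supplier 2 has the highest net flow, as in Table 13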

CASE STUDY
As a case study, the supplier selection problem in the Lanco industry at Srikalahasthi has been studied. The attributes for supplier selection
are cost (Rs), insertion loss (dB), volume (cc), and weight (kg). The targeted values of each criterion correspond to the elements of the
reference data series for comparison [9]. The target is to minimize cost and achieve high insertion loss, low volume and low weight. For cost,
volume and weight, lower-the-better criteria (LB), and for insertion loss, higher-the-better criteria (HB), have been selected.

Table 8. Objective data for the supplier selection problem

  Supplier      Cost (Rs)   Insertion Loss (dB)   Volume (cc)   Weight (kg)
  Supplier 1      0.590           0.745              0.500         0.500
  Supplier 2      0.745           0.665              0.745         0.745
  Supplier 3      0.590           0.745              0.590         0.665
  Supplier 4      0.590           0.665              0.590         0.590

Table 9. Normalized decision matrix (entries not recoverable from the source are marked –)

  Supplier      Cost (Rs)   Insertion Loss (dB)   Volume (cc)   Weight (kg)
  Supplier 1        –               –                  –             –
  Supplier 2        –               –                  –             –
  Supplier 3        –               –                0.3673        0.6734
  Supplier 4        –               –                0.3673        0.6734
Table 10. Preference functions for all pairs of alternatives (entries not recoverable from the source are marked –)

  Supplier pair   Cost (Rs)   Insertion Loss (dB)   Volume (cc)   Weight (kg)
  (1,2)              –               –                  –             –
  (1,3)              –               –                  –             –
  (1,4)              –               –                  –             –
  (2,1)              –               –                  –             –
  (2,3)              –               –                0.6327        0.3266
  (2,4)              –               –                0.6327        0.6327
  (3,1)              –               –                0.3627        0.6734
  (3,2)              –               –                  –             –
  (3,4)              –               –                  –           0.3061
  (4,1)              –               –                0.3673        0.3673
  (4,2)              –               –                  –             –
  (4,3)              –               –                  –             –
Table 11. Aggregate preference function

  π(i, i')      Supplier 1   Supplier 2   Supplier 3   Supplier 4
  Supplier 1       –           0.300        0.000        0.300
  Supplier 2     0.700           –          0.57859      0.61074
  Supplier 3     0.1214        0.300          –          0.03214
  Supplier 4     0.08926       0.000        0.000          –
Table 12. Leaving and entering flows for the different suppliers

  Supplier      Leaving Flow   Entering Flow
  Supplier 1      0.200          0.30355
  Supplier 2      0.62978        0.2000
  Supplier 3      0.15118        0.19286
  Supplier 4      0.02975        0.31429

RESULTS
Table 13. Net outranking flow values for the different suppliers

  Supplier      Net Outranking Flow   Ranking
  Supplier 1        -0.1036              3
  Supplier 2         0.4298              1
  Supplier 3        -0.0417              2
  Supplier 4        -0.2846              4
Therefore, the net outranking flows for the different suppliers are determined, and the four suppliers are ranked. Thus, Supplier 2 has the best score
amongst the 4 suppliers.

CONCLUSION
For an industry it is necessary to maintain good coordination between management and suppliers in terms of material quality,
quantity, cost and time. From the above mathematical treatment it is clear that supplier selection for an industry involves multiple criteria,
which play an important role in the selection of suppliers. The approach allows the decision makers to rank the candidate alternatives more efficiently
and easily. The present study explores the use of the PROMETHEE and TOPSIS methods in solving a supplier selection problem, and the
results obtained can be valuable to the decision maker in framing supplier selection strategies.

REFERENCES:
[1] William Ho, Xiaowei Xu, Prasanta K. Dey, "Multi-criteria decision making approaches for supplier evaluation and selection," European Journal of Operational Research, Vol. 202, Issue 1, Elsevier, 2010, pp. 16-24.
[2] Albadvi, A., Chaharsooghi, S. K., and Esfahanipour, A., "Decision making in stock trading: An application of PROMETHEE," European Journal of Operational Research, 177 (2007), pp. 673-683.
[3] Brans, J. P., Vincke, P. H., and Mareschal, B., "How to select and how to rank projects: The PROMETHEE method," European Journal of Operational Research, 14 (1986), pp. 228-238.
[4] Charles A. Weber, John R. Current, W. C. Benon, "Vendor selection criteria and methods," European Journal of Operational Research, 50 (1991), pp. 2-18, North-Holland.
[5] Chen-Tung Chen, Ching-Torng Lin, Sue-Fn Huang, "A TOPSIS approach for supplier evaluation and selection in supply chain management," International Journal of Production Economics, Vol. 102, Issue 2, August 2006, pp. 289-301.
[6] C. Elanchezhian, B. Vijaya Ramnath, Dr. R. Kesavan, "Vendor Evaluation Using Multi Criteria Decision Making," International Journal of Computer Applications (0975-8887), Vol. 5, No. 9, August 2010.
[7] H. K. Sim, Mohamed K. Omar, W. C. Chee, N. T. Gan, "A Survey on Supplier Selection Criteria in the Manufacturing Industry in Malaysia," The 14th Asia Pacific Regional Meeting of the International Foundation for Production Research, Melaka, 7-10 December 2010.
[8] J. P. Brans, Ph. Vincke, B. Mareschal, "How to select and how to rank projects: The PROMETHEE method," European Journal of Operational Research, 24 (1986), pp. 228-238.
[9] Mohammad Saeed Zaeri, Amir Sadeghi, Amir Naderi, Abolfazl Kalanaki, Reza Fasihy, Seyed Masoud Hosseini Shorshani, and Arezou Poyan, "Application of multi criteria decision making technique to evaluation of suppliers in supply chain management," African Journal of Mathematics and Computer Science Research, Vol. 4 (3), pp. 100-106, March 2011.
[10] M. Behzadian, R. B. Kazemzadeh, A. Albadvi, M. Aghdasi, "PROMETHEE: A comprehensive literature review on methodologies and applications," European Journal of Operational Research, 200 (2010), pp. 198-215.
[11] M. Diaby, J. M. Martel, "Preference structure modelling for multi-objective decision making: A goal programming approach," Journal of Multi-Criteria Decision Analysis, 6 (1997), pp. 150-154.
[12] Pragati Jain and Manisha Jain, "Fuzzy TOPSIS Method in Job Sequencing Problems on machines of unequal efficiencies," Canadian Journal on Computing in Mathematics, Natural Sciences, Engineering and Medicine, Vol. 2, No. 6, June 2011.
[13] Vijay Manikrao Athawale, Shanker Chakraborty, "Facility location selection using PROMETHEE method," Proceedings of the 2010 International Conference on Industrial Engineering and Operations Management, Dhaka, Bangladesh, January 9-10, 2010.


Vehicle Collision Recognition and Monitoring System Based on AVR Platform


Apurva Mane1, Prof. Jaideep Rana2
1 Dept. of Electronics, apurvamane91@yahoo.com, JNEC, Aurangabad
2 Dept. of Electronics, jaideeprana2003@yahoo.co.in, JNEC, Aurangabad

Abstract: Security while travelling is a primary concern for everyone. The demand for automobiles has increased traffic, thereby causing
more accidents on the road. People lose their lives because of poor emergency facilities. Preventing accidents on
the roads is not possible, but at least their after-effects can be reduced. The proposed system ensures that emergency facilities are made available to
accident victims as early as possible by letting people, a hospital, a rescue team or a police station know the accident spot with the help
of an accelerometer embedded in the vehicle. As the vehicle rolls over on the road, thereby changing the position of the accelerometer, a message
is sent to the first-aid center, rescue team or police station using GSM. With the help of the satellite navigation system, GPS locates the
position of the vehicle where the accident has occurred. So, this paper deals with a vehicle fitted with a GSM-GPS module, an accelerometer and an ATMEGA
16 microcontroller, which, even after meeting with an accident, lets people know the accident location and provides
situational awareness. This project is about making cars more intelligent: they provide basic information, covering geographical co-ordinates, to the police station.

Keywords: Vehicle accident, GPS, GSM, accelerometer, vehicle tracking, ATMEGA 16, message.
INTRODUCTION

There are more and more traffic jams as the demand for vehicles gets higher day by day. Transportation therefore needs more
improvement; as demand increases, there is a greater possibility of vehicle accidents [1]. Vehicle accidents are one of the leading
causes of fatalities. There will be serious consequences if people cannot get help at the right time. Poor emergency response is a major
cause of the death rate in our country [2]. Even with awareness campaigns, this problem is still rising due to riders' drunk driving and speed
driving. Major automobile manufacturers have developed safety devices such as the seat belt to protect riders from accidental injuries [3].
The life-saving measure of electronic stability control also reduces injuries. Crash analysis studies have shown that traffic accidents
could have been prevented with the use of this advanced life-saving measure [4]. This design focuses on providing basic information
about the accident site to the hospital or police station. As a result of this prompt help, lives may be saved [5]. In this work, an ADXL
accelerometer (a three-axis accelerometer) and a GPS tracking system are used for accident monitoring. This design detects accidents in
less time and sends this information to the hospitals. In this case, GSM will send a short message to the hospital or police station. This
message will contain the geographical co-ordinates of the accident spot obtained with the help of GPS. And, as the location has now been traced by
GPS, emergency medical service can be given to the accident victims as soon as possible. Using this method, traffic fatalities can
be reduced because the time between when an accident occurs and when responders are dispatched to the accident scene is reduced. The accelerometer sensor
embedded in a car determines the severity of the accident from how much the car has rolled [6].
As soon as the accident occurs, an alert message including the latitude and longitude position, the date and time of accident occurrence
and finally a link to Google Maps is sent automatically to the rescue team or to the police station. This message is sent with the
help of the GSM module, and the accident location is detected through the GPS module. The accident can be recognized precisely with the
help of the ADXL accelerometer sensor, which also acts as a vibration sensor. The angle of tilt of the car can be shown on an LCD. This
design provides a solution to road accidents in the most feasible way [7].


Fig.1 Vehicle accident recognition and monitoring system


Key features of this design include:
1. Real-time monitoring of the vehicle by sending its position (longitude and latitude), the time and a link to Google Maps to the people, hospital or police station whose phone number is provided, so that medical help can be arranged if an accident occurs.
2. After receiving the message from the vehicle that has had the accident, the user has access to the real-time position of the vehicle.

LITERATURE SURVEY
Many efforts, applications and approaches have been proposed to provide security and safety in case of accidents. A novel approach to increasing the safety of road travel using the concepts of wireless sensor networks and the Bluetooth protocol has been proposed; it discussed how vehicles can form mobile ad-hoc networks and exchange data sensed by on-board sensors [8]. The platform of the Android operating system (OS) and its software development environment proved an optimum solution for public safety in case of road accidents [9]. A good survey of using a personal mobile phone, a microcontroller, Bluetooth and Java technology has been provided in [10]; it developed an integrated system to manage, control and monitor all the accessories inside the vehicle in order to realise an intelligent car able to use a personal mobile phone as a remote interface. Smartphone-based accident detection, which can reduce overall traffic congestion and improve the awareness of emergency responders, has also been proposed [11]. A new design was developed containing a vehicle tracking and control system to control the vehicle through an Android-based smartphone [12]. Another application provided a solution using a mobile phone to monitor an SMS-based GPS tracker, especially in locations where GPRS may not be available [13]. The general mechanism is to obtain the real-time position of a vehicle using a GPS receiver and send this information to a GSM centre through software; this is done by a monitoring centre working as a control unit, connected not only by optical cable but also wirelessly through TCP/IP protocols. The monitoring centre distributes the data to the client in an understandable format, stores the travelling records and displays the information about the vehicle on an electronic map through a GIS system [14]. In another approach, the vehicle terminal includes a GPS receiver which extracts position information from GPS satellites and sends it through the GSM network to a control centre, which reads the information, saves it in a database and, on user demand, displays it on an electronic map [15]. A different application localizes the vehicle by receiving its real-time position through GPS and sending this information through a GSM module via SMS, with an added feature of GPRS transmission to the monitoring centre over the internet [16]; that project was also designed using the AT89S52 microcontroller and used an EEPROM to store the phone numbers. Others have designed mobile technology using smartphones to find the leading vehicle, making collision warning systems more affordable and portable, as well as an efficient driving assistant that uses the features of the smartphone to accurately determine the driver's driving style from the point of view of energy consumption and generate eco-driving tips to correct bad driving habits.


SYSTEM ARCHITECTURE

Fig.2 Block diagram of Vehicle Collision Recognition & Monitoring System Based on AVR Platform
This design includes hardware consisting of the ADXL accelerometer, the ATmega16 microcontroller, the GSM and GPS modules and an LCD. The whole system works on a 5 V or 9 V DC regulated power supply. The GPS receiver module is interfaced with the USART of the ATmega16 and gives speed and location information. Whenever an accident occurs, the vibrations are sensed by the accelerometer sensor and these signals are given to the controller. If the car rolls over, the angle of the rollover is detected by the ADXL sensor and given as input to the controller for further processing. When the input is received by the controller, a message is sent to the rescue team with the help of the GSM module, and the rescue team reaches the accident spot with the help of the location given in the message. An LCD display shows the tasks being carried out as well as the accelerometer readings. The GSM-GPS module is interfaced to the AVR controller using serial communication. All the components are interfaced precisely so that accident detection and message sending are fully automated and the warning time is reduced significantly.

COMPONENT DESCRIPTION
1. GSM-GPS MODULE: The GPS Tracker GPS303-B can locate accurately and is used here for vehicle accident detection; it offers multiple functions covering security, positioning, monitoring, surveillance, emergency alarms and tracking. It is based on the existing GSM/GPRS (850/900/1800/1900 MHz) network and GPS satellites, and can locate and monitor any remote target by SMS, computer or PDA.
GSM Module- Global System for Mobile communications (GSM) is the popular wireless standard for mobile phones in the
world. GSM module allows transmission of Short message service (SMS) in text mode.

GPS Module- The Global Positioning System (GPS) is a space-based radio navigation system that provides
positioning, navigation, and timing services to users on a continuous worldwide basis. For anyone with a GPS receiver, the
system will provide location and time. GPS provides accurate location and time information for number of people in all
weather, day and night, anywhere in the world. A GPS navigation device is a device that accurately calculates geographical
location by receiving information from GPS satellites. The GPS concept is based on time. The satellites carry atomic clocks
which are synchronized and very stable; any drift from true time maintained on the ground is corrected daily. Likewise, the
satellite locations are monitored precisely. GPS satellites transmit data continuously which contains their current time and
position.
2. ADXL ACCELEROMETER: The ADXL335 is a 3-axis accelerometer with signal-conditioned voltage outputs. The product measures acceleration with a minimum full-scale range of ±3 g. It can measure the static acceleration of gravity in tilt-sensing
applications, as well as dynamic acceleration resulting from motion, shock, or vibration. An accelerometer measures
acceleration. Acceleration is a measure of how quickly speed changes. Accelerometer sensor is used to measure static (earth
Gravity) or dynamic acceleration in all three axes, forward/backward, left/right and up/down. Accelerometer is used in this
design for collision detection. Accelerometers operate on the piezoelectric principle: a crystal generates a low voltage or
charge when stressed, for example during compression. (The Greek root word piezein means to squeeze.) Motion in the
axial direction stresses the crystal due to the inertial force of the mass and produces a signal proportional to acceleration of that
mass. This accelerometer also acts as vibration sensor to measure vibrations whenever vehicle collides with another vehicle.
Accelerometer is interfaced to the ADC 1 and ADC 2 of the microcontroller.
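To illustrate the tilt computation that such a three-axis accelerometer makes possible, the short sketch below (not part of the original design; the scale, threshold and variable names are assumptions for illustration) converts raw axis readings into roll and pitch angles, which is the kind of tilt information the design shows on the LCD and uses to judge a rollover.

```python
import math

def tilt_angles(ax_g, ay_g, az_g):
    """Return (roll, pitch) in degrees from accelerations expressed in g.

    Assumes a static (or quasi-static) vehicle so that gravity dominates;
    these are the standard roll/pitch estimates from a 3-axis accelerometer,
    not a reconstruction of the paper's exact firmware.
    """
    roll = math.degrees(math.atan2(ay_g, az_g))
    pitch = math.degrees(math.atan2(-ax_g, math.hypot(ay_g, az_g)))
    return roll, pitch

# Example: a vehicle tipped well past a plausible rollover threshold.
roll, pitch = tilt_angles(0.1, 0.95, 0.3)
ROLLOVER_THRESHOLD_DEG = 60.0   # illustrative value, not from the paper
print(f"roll={roll:.1f} deg, pitch={pitch:.1f} deg,",
      "rollover" if abs(roll) > ROLLOVER_THRESHOLD_DEG else "normal")
```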
3. ATMEGA 16: The ATmega16 is a low-power CMOS 8-bit microcontroller based on the AVR enhanced RISC architecture. By executing powerful instructions in a single clock cycle, the ATmega16 achieves throughputs approaching 1 MIPS per MHz, allowing the system designer to optimize power consumption versus processing speed. The microcontroller unit receives information from the accelerometer and the GPS module and sends it to the rescue team and police station using GSM.

WORKING OF THE SYSTEM

Fig. 3 Working of the system



Whenever an accident occurs, the accelerometer sensor detects it and sends the signals to the microcontroller; using GPS, the particular location where the accident has occurred is obtained, and GSM then sends a message to the person whose phone number is configured.
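The alert flow can be summarised in a few lines of code. The sketch below is a minimal illustration of the decision-and-alert logic described above, not the actual AVR firmware; the thresholds, phone number, message layout and the serial-port object (anything with a write() method, such as a pyserial port) are assumptions, while AT+CMGF and AT+CMGS are the standard GSM text-mode commands.

```python
def build_alert_sms(lat, lon, when):
    """Compose the alert text with co-ordinates and a Google Maps link."""
    return (f"Accident detected at {when}. "
            f"Location: {lat:.6f}, {lon:.6f}. "
            f"Map: https://maps.google.com/?q={lat:.6f},{lon:.6f}")

def gsm_send_sms(serial_port, number, text):
    """Send an SMS through a GSM modem using standard text-mode AT commands."""
    serial_port.write(b"AT+CMGF=1\r")                    # text mode
    serial_port.write(f'AT+CMGS="{number}"\r'.encode())  # destination number
    serial_port.write(text.encode() + b"\x1a")           # message body + Ctrl+Z

def on_sensor_sample(tilt_deg, vibration, gps_fix, modem):
    # Illustrative thresholds; the paper does not state the exact values.
    if abs(tilt_deg) > 60.0 or vibration > 2.5:
        sms = build_alert_sms(gps_fix["lat"], gps_fix["lon"], gps_fix["time"])
        gsm_send_sms(modem, "+910000000000", sms)  # placeholder rescue-team number
```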

SYSTEM SOFTWARE DESIGN


The software used for the development of the system is Proteus 7.8, with the C programming language. The flow chart of the system is shown in Figure 4. The system is initialized on power ON. When the system state is detected to be abnormal, it can be concluded that an accident has occurred; the vibration/acceleration of the vehicle is then checked to confirm the cause. As soon as the accident is confirmed, the message is sent automatically to the rescue team after the location has been determined by the GPS.

Fig.4 Flow chart of Vehicle Collision Recognition & Monitoring System Based on AVR Platform


Acknowledgment
I would like to express my gratitude to the following people for their support and guidance in helping me complete this work. I express my deep sense of gratitude to Prof. J.G. Rana, Head of the Department of Electronics and my respected project guide, JNEC, Aurangabad, India. I am deeply indebted to Dr. S.D. Deshmukh, Principal, JNEC, Aurangabad, India.

CONCLUSION
This paper presents a design with the advantages of low cost, portability and small size. The platform of the system is the AVR along with the accelerometer sensor, GPS and GSM; their interfacing reduces the alarm time to a large extent and locates the site of the accident accurately. The system overcomes the lack of an automated system for accident location detection. Consequently, the time spent searching for the location is reduced and the victim can be treated as soon as possible, which will save many lives. The system will have broad application prospects as it integrates positioning systems with the network of medical-based services.

REFERENCES:
[1] Zhang Wen and Jiang Meng, Design of Vehicle Positioning System Based on ARM,ISBN-number 978-1-61284-109-0, 2011
IEEE
[2] Wang Wei and Fan Hanbo,Traffic Accident Automatic Detection And Remote Alarm Device, 978-1-4244-8039-5, 2011 IEEE
[3] Vikas Desai,Design And Implementation of GSM and GPS Based Vehicle Accident Detection System, IJIT, Vol 01, Issue 03
2013
[4] Wiiliam Evanco, The Impact of Rapid Incident Detection on Freeway Accident Fatalities, Mitretek Systems
[5] Saurabh S. Chakole, Vivek R. Kapur and Y.A.Suryawanshi, ARM Hardware Platform for Vehicular Monitoring and Tracking,
IEEE proc.CSNT, 2013
[6] S. Rauscher, G. Messner, P. Baur, J. Augenstein, K. Digges, E. Perdeck, G. Bahouth, and O. Pieske, Enhanced Automatic
Collision Notification System- Improved Rescue Care Due To Injury Prediction, 2009.
[7] Varsha Gaud and V. Padmja, Vehicle Accident Automatic Detection and Remote Alarm Device, , IJRES, Vol.1 No.2, 2012.

[8] Hemjit Sawant, Jindong Tan, Qingyan Yang QiZhi Wang, Using Bluetooth and Sensor Networks for Intelligent Transportation
Systems, In proceeding of: Intelligent Transportation Systems, 2004.

[9] J. Whipple, W. Arensman and M.S Boler, A Public Safety Application of GPS-Enabled smart phones and the Android Operating
System, IEEE Int. Conf. on System, Man and Cybernetics, San Antonio, 2009.
[10] Helia Mamdouhi, Sabira Khatun and Javad Zarrin, Bluetooth Wireless Monitoring, Managing and Control for Inter Vehicle in
Vehicular Ad-Hoc Networks, Journal of Computer Science, Science Publications, 2009.
[11] Jules White, Brian Dougherty, Adam Albright, and Douglas C, Using Smartphone to Detect Car Accidents and Provide
Situational Awareness to Emergency Responders Chris Thompson, Mobile Wireless Middleware, Operating Systems, and
Applications -3rd International Conference, July 2010.
[12] Rajesh Borade , Aniket Kapse, Prasad Bidwai , Priya Kaul, Smartphone based Vehicle Tracking and Control via Secured
Wireless Networks, International Journal of Computer Applications, vol. 68 No.7, April 2011.
[13] Lawal Olufowobi, SMS Based Android Asset Tracking System 2011.
[14] Ioan Lita, Ion Bogdan Cioc, Daniel Alexandru Visan, A New Approach of Automobile Localization System Using GPS and
GSM/GPRS Transmission, ISSE St. Marienthal, Germany,2006.
[15] Mrs.RamyaKulandaivel, P.Ponmalar, B.Geetha, G.Saranya, GPS and GSM Based Vehicle Information System, IJCE, Vol.1
No.1,2012.
[16] M.AL-Rousan, A. R. AI-Ali and K. Darwish, ), GSM-Based Mobile Tele Monitoring and Management System for Inter-Cities
Public Transportations, ICIT,2004


Hybrid Communication System using MIMO-OFDM
K.Swathi1, S Mallikarjun Rao2
1

Department of E.C.E, Andhra Loyola Institute of Engineering and Technology,


Vijayawada, Andhra Pradesh, India
hanok.swathi@gmail.com
2
Assistant Professor, Department of E.C.E,
Andhra Loyola Institute of Engineering and Technology,
Vijayawada, Andhra Pradesh, India
smr.aliet@gmail.com

Abstract: The enormous growth of wireless broadband services and the huge uptake of mobile technology demand higher data-rate services. For a communication network to be successful, provisioning high data rates alone is not sufficient; it should also provide wide coverage. Neither satellite nor terrestrial networks can guarantee this on their own. This incapability is attributed to capacity and coverage issues in densely populated areas for satellites, and to the lack of infrastructure in rural areas for terrestrial networks. Therefore, a hybrid architecture between terrestrial and satellite networks based on MIMO-OFDM with frequency reuse is employed here. In the hybrid architecture, users are able to avail themselves of services through both the terrestrial and the satellite networks: users located in urban areas are served by the existing terrestrial mobile networks, while those located in rural areas are served through the satellite networks, providing global connectivity. The combination of multiple-input multiple-output (MIMO) signal processing with orthogonal frequency division multiplexing (OFDM) is regarded as a promising solution for enhancing the data rates of next-generation wireless communication systems.
However, this frequency reuse introduces severe co-channel interference (CCI) at the satellite end. To mitigate CCI, we propose an OFDM-based adaptive beamformer implemented on board the satellite with pilot reallocation at the transmitter side. The system performance is simulated in MATLAB; the experimental results show the efficiency of the MIMO-OFDM communication system.
Keywords: Adaptive beamforming, Co-channel interference (CCI), Multiple-input multiple-output (MIMO),Orthogonal frequency
division multiplexing (OFDM).

1. INTRODUCTION
In today's world, the need for communication has driven the growing demand for multimedia services, and the growth of Internet-related content has led to increasing interest in high-speed communication with higher data rates. This has led to the evolution of 4G networks [1], which employ Orthogonal Frequency Division Multiplexing (OFDM) [2] at the physical layer. In parallel, Multiple Input Multiple Output (MIMO) technology [3-5] has emerged as the most significant breakthrough in modern communication, providing higher capacity by utilising multiple antenna arrangements at both the transmitter and the receiver side. The combination of OFDM with advanced antenna systems thus forms an intuitive and formidable solution towards higher-capacity communication systems.
The success of a communication network does not only depend on the provisioning of high data rates, but also on the coverage it can offer. Future networks also need to incorporate global connectivity to ensure a rich customer base. Standalone terrestrial mobile networks are unable to provide such coverage due to the lack of infrastructure in rural areas. This is where satellite networks are favoured, as they have the potential to offer true global coverage as well as rapid network deployment. However, satellite links suffer from reduced signal penetration and capacity/coverage issues in urban areas as well as at lower elevation angles; hence users located in urban areas are served by the existing terrestrial system.
In order to deal with the respective disadvantages of both satellite and terrestrial networks, a hybrid architecture based on an OFDM system using MIMO is modelled and presented in Fig. 1. In this architecture, users located in rural areas are served directly from the satellite spot beam due to the lack of terrestrial infrastructure [2][3], while users located in urban areas are served by the existing terrestrial system, as the satellite signal cannot penetrate buildings [4]. Likewise, the spectrum is shared
between the two networks, terrestrial and satellite, to provide connectivity throughout, and the arrangement of multiple antennas at the transmitter and the receiver (MIMO) provides high data rates at reasonable cost.
Here the satellite, in order to support a rich customer base by providing all-time connectivity, employs frequency reuse. However, this frequency reuse introduces severe co-channel interference (CCI) at the satellite end. To mitigate CCI, we propose an OFDM-based adaptive beamformer implemented on board the satellite with pilot reallocation at the transmitter side.
The rest of this paper is structured as follows: Section 2 introduces MIMO in OFDM; Section 3 describes the system model; Section 4 presents simulation results obtained using MATLAB and discusses them; Section 5 presents the conclusion, and the references are given in the last section.

2. MIMO in OFDM
All wireless links are affected by three common problems: speed, range and reliability. These parameters are interlinked by strict rules; no one of them can be improved independently. Speed can be increased only by sacrificing range and reliability, range can be extended at the expense of speed and reliability, and reliability can be improved by reducing speed and range. An improvement in one parameter is obtained at the cost of the other two [6].
MIMO-OFDM, however, provides an "all in one package" by delivering speed, range and reliability simultaneously. Nortel defines the OFDM-MIMO combination as follows (Herman, 2006): "With OFDM, a single channel within a spectrum band can be divided into multiple, smaller sub-signals that transmit information simultaneously without interference [7]. Because MIMO technology is able to link together many smaller antennas to work as one, it can receive and send these OFDM's multiple sub-signals in a way that allows the bandwidth to be substantially increased to each user as required".

Figure 1: Hybrid System Scenario


OFDM is the technique used to mitigate the multipath propagation problem and MIMO is used for efficient usage of the spectral bandwidth; combining these techniques results in a wireless system that has the best spectral coverage and reliable transmission in highly obstructive environments with higher data rates.
OFDM creates slowly time-varying channel streams and MIMO has the capacity to transmit signals over multiple channels by using an array of antennas, so the combination of OFDM and MIMO can generate extremely beneficial results.

3. SYSTEM MODEL
The block diagram of the MIMO-OFDM system model [1] is presented in Fig. 2, with the beamforming at the satellite end. The data generation block, or source, generates random data and sends it to the modulator, which modulates the generated data according to the modulation scheme used. Here we use both BPSK and QPSK and analyse their performance. After modulation, pilot insertion takes place. Pilots are data known to the receiver, used to estimate the channel [5]; they can be inserted with a specific period, uniformly between the data sequence [8]. In our data sequence five pilots have been
inserted. After pilot insertion, the mapped output of the data in the frequency domain for the multiuser case, taking one symbol at a time, is expressed as

x(., j) = [x(1, j), x(2, j), ..., x(N, j)]^T                          (1)

where x(n, j) denotes the nth subcarrier of the jth user, with n = 1, 2, ..., N and j = 1, 2, ..., J, and (.)^T represents the transpose. The mapped data signal in the frequency domain is then transformed into the time domain by using the IFFT:

x_t(., j) = F^H x(., j)                                               (2)

Eq. (2) is the time-domain symbol of the OFDM system, where F is the FFT matrix and (.)^H denotes the Hermitian transpose. After the transformation of the signal from the frequency domain to the time domain comes the block of cyclic prefix extension, to overcome the effect of ISI (Inter Symbol Interference) [9][10]. The guard interval is introduced using the following equation [11]:

x_g(., j) = [I_{N,G}^T  I_N^T]^T x_t(., j)                            (3)

In (3), x_g(., j) = [x(N-G+1, j), x(N-G+2, j), ..., x(N, j), x(1, j), x(2, j), ..., x(N, j)]^T is the OFDM symbol with a cyclic prefix of one quarter of the symbol length, and I_{N,G} contains the last G rows of the matrix I_N, which is an identity matrix of size N. After this, the parallel-to-serial converter (P/S) converts the data to serial form and transmits it over the channel. The channel effect can be expressed as [11]

y_k = h_k * x_k                                                       (4)

where k is the time index and * denotes the action of the channel impulse response. Passing through the channel, the signal is received by the receiver from the desired source and from other sources of interference; in this paper the channel effect is not considered. When the signal is received at the satellite antenna elements, the signal matrix for one OFDM symbol after removing the cyclic prefix can be represented as [12]

V = A Y + N                                                           (5)
In (5), A is the array response matrix, Y is the matrix of received OFDM symbols of the users, N is the noise, and V is the resulting signal matrix at the antenna elements.

Figure 2: MIMO-OFDM System model

The important point to notice here is that the beamformer takes one OFDM symbol at a time, so the noise for that one symbol is randomly generated.
Similarly, the received signal for the jth user and nth subcarrier is given by y(j, n), an element of the Y matrix. Likewise, n(s, n) and v(s, n) are the elements of the matrices N and V respectively, representing the noise and the received signal at the sth antenna element for the nth OFDM subcarrier, and the element of the A matrix giving the array response of the sth antenna element to the jth user is

a(s, j) = e^{-j2π(s-1)(d_a/λ) sin θ_j}                                (6)

where the antenna elements are indexed s = 1, 2, ..., S, the inter-element distance is d_a, the direction of arrival (DOA) of the jth user is θ_j and the carrier wavelength is λ. Since we have modelled a linear array with inter-element distance d_a = λ/2, the array response reduces to a(s, j) = e^{-jπ(s-1) sin θ_j}.
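As a concrete illustration of the transmit chain described by (1)-(3), the sketch below builds one OFDM symbol in Python/NumPy: QPSK mapping, uniform pilot insertion, IFFT and cyclic-prefix extension. The subcarrier count, cyclic-prefix length, pilot positions and pilot values are assumptions for illustration and do not come from the paper.

```python
import numpy as np

N, G = 64, 16                      # subcarriers and cyclic-prefix length (assumed)
rng = np.random.default_rng(0)

# QPSK mapping of random bits onto the subcarriers (frequency-domain symbol, eq. (1)).
bits = rng.integers(0, 2, size=(N, 2))
x = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Uniformly spaced pilots known to the receiver (positions/values are illustrative).
pilot_idx = np.arange(0, N, N // 8)
x[pilot_idx] = 1 + 0j

# Eq. (2): frequency domain -> time domain via the IFFT.
x_t = np.fft.ifft(x)

# Eq. (3): prepend the last G time-domain samples as the cyclic prefix.
x_g = np.concatenate([x_t[-G:], x_t])

print(x_g.shape)                   # (N + G,) samples ready for P/S and transmission
```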
3.1. Adaptive Beamforming
Beamforming (BF) is a type of spatial filtering provided by an array of antenna elements to mitigate interference. Use of BF (spatial filtering) with an array of antenna elements offers two principal advantages:
1) In view to the CCI problem, the capability of interference mitigation is directly proportional to the size (or length) of the spatial
aperture. An array of antenna elements or sensors is able to synthesize a much larger spatial aperture as compared to a single physical
antenna.
2) The more important advantage is that BF gives the ability to perform active signal suppression. This can be done by adaptively
changing the spatial filtering functions to effectively track the desired user and mitigate the interference.
A beamformer is analogous to a Finite impulse response (FIR) filter in the sense that an FIR filter linearly combines temporally
sampled data whereas a beamformer linearly combines spatially sampled data. Therefore, beamformer response can be defined as a
function of location and frequency.
The beamformer processes the output of the antenna elements by applying complex weights to the symbols. The complex weights
of the beamformer can only be considered as a constant value if the statistics of the signal at the input of the beamformer remain
unchanged. In wireless communication systems, the users are not bound to a constant position. Moreover, users can be present in any
location within the service region and hence weights cannot be hard-wired. Hence the beamformer should have the capability of
changing its weights depending on the DOA of desired and interference signals. This requires computation of weights at frequent
intervals, and the corresponding class of BF is referred to as adaptive BF. The process is expressed as

r = w^H V                                                             (7)

where r in (7) is the weighted beamformer output, r = [r(1), r(2), ..., r(N)], and w is the vector of complex weights. After the beamformer, the received data is applied to the serial-to-parallel (S/P) converter and converted into a parallel sequence. Finally, the obtained parallel sequence is converted into the frequency domain by applying the FFT:

r_f = F r                                                             (8)

In (8), r_f = [r_f(1), r_f(2), ..., r_f(N)] is the received OFDM symbol in the frequency domain. In order to update the weights for the next symbol, the beamformer takes the transmitted pilot sequence and the received pilots, and from these it calculates the error vector [12]. Based on this error vector, an adaptive algorithm built on the Mean Square Error (MSE) computes the weights for the next symbol [13][14]. The error vector is

e = p_t - p_r                                                         (9)

where p_t and p_r denote the transmitted and the received pilot sequences. The error vector obtained is in the frequency domain, while pre-FFT beamforming is done in the time domain as explained in the previous section, so this error vector needs to be converted to the time domain [15]. The transformation of the error vector from the frequency domain to the time domain is expressed as [16]

e_p = F_p^H e                                                         (10)

Here e_p is the error in the time domain and F_p is the IFFT matrix which transforms the error vector from the frequency domain to the time domain. After the error calculation, the Least Mean Square (LMS) algorithm is used to update the beamformer's complex weights [11][17]. A new weight vector is thus calculated for the next symbol, and the process is repeated until the weights for all the symbols have been calculated and the desired data is extracted.
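A minimal sketch of the pilot-driven LMS update described above is given below. It assumes one complex weight per antenna element, one known pilot symbol per update and an illustrative step size mu; it is a sketch under those assumptions, not the paper's MATLAB implementation.

```python
import numpy as np

def lms_beamformer(V, pilots, mu=0.01):
    """Pilot-driven LMS weight adaptation for an S-element array.

    V      : complex array of shape (num_snapshots, S), one array snapshot per pilot.
    pilots : complex array of shape (num_snapshots,), the known transmitted pilots.
    Returns the final weight vector and the per-snapshot error magnitudes.
    """
    num_snapshots, S = V.shape
    w = np.zeros(S, dtype=complex)
    errors = np.empty(num_snapshots)
    for k in range(num_snapshots):
        y = np.vdot(w, V[k])            # beamformer output w^H v
        e = pilots[k] - y               # error against the known pilot
        w = w + mu * np.conj(e) * V[k]  # standard complex LMS update
        errors[k] = abs(e)
    return w, errors

# Tiny usage example with synthetic data (values are illustrative only).
rng = np.random.default_rng(1)
S, K = 4, 200
steering = np.exp(-1j * np.pi * np.arange(S) * np.sin(np.deg2rad(20)))
pilots = np.ones(K, dtype=complex)
snapshots = np.outer(pilots, steering) + 0.1 * (rng.standard_normal((K, S))
                                                + 1j * rng.standard_normal((K, S)))
w, err = lms_beamformer(snapshots, pilots)
print(f"final error magnitude: {err[-1]:.3f}")
```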

4. SIMULATION RESULTS
This section gives the simulation results generated from the implementation of the single-user and multiuser OFDM system.


Figure 3: BER vs Eb/No for BPSK


Fig. 3 shows the performance of the system in terms of BER versus Eb/No. The simulated curve is plotted against the theoretical curve, which shows that the OFDM system works properly with BPSK.

Figure 4: BER vs Eb/No for QPSK



Fig. 4 shows the performance of the system in terms of BER versus Eb/No. The simulated curve is plotted against the theoretical curve, which shows that the OFDM system works properly with QPSK.

Figure 5: BER vs Eb/No for 8-PSK


Fig. 5 shows the performance of the system in terms of BER versus Eb/No. The simulated curve is plotted against the theoretical curve, which shows that the OFDM system works properly with 8-PSK.

Figure 6: BER vs Eb/No for 16QAM



Fig. 6 shows the performance of the system in terms of BER versus Eb/No. The simulated curve is plotted against the theoretical curve, which shows that the OFDM system works properly with 16-QAM.

Figure 7: System performance vs desired user Eb/No for MIMO and M-PSK {M = 2, 4, 8}
Fig. 7 compares the performance of the system in terms of BER versus the desired user's Eb/No for 8-PSK, QPSK, BPSK and the MIMO configuration; the MIMO configuration has the lowest BER with respect to SNR compared with the other methods.

Figure 8: MSE vs SNR for MIMO and M-PSK{M=2,4,8}


Fig. 8 compares the performance of the system in terms of MSE versus SNR for 8-PSK, QPSK, BPSK and the MIMO configuration; the MIMO configuration achieves a better (lower) MSE with respect to SNR than the other methods.

5. CONCLUSION
MIMO-OFDM is the solution for obtaining significantly higher data rates and increasing range performance at the same time. It increases the link capacity by simultaneously transmitting multiple data streams using multiple transmit and receive antennas, making it possible to reach data rates several times larger than the current highest rate of 54 Mbps.
With the introduction of MIMO-OFDM wireless LAN and the advent of the MIMO-OFDM based 802.11n standard, the performance of wireless LAN in terms of throughput and range is brought to a significantly higher level, enabling new applications outside the traditional wireless LAN area. Greater spectral efficiency translates into higher data rates, greater range, an increased number of users, enhanced reliability, or any combination of the preceding factors. By multiplying spectral efficiency, MIMO-OFDM opens the door to a variety of new applications and enables more cost-effective implementation of existing applications.

REFERENCES:
1. Farman Ullah, Nadia N Qadri, Muhammad Asif Zakriyya, Aamir Khan, Hybrid Communication System based on OFDM, I.J. Information Technology and Computer Science, Nov 2013, vol. 12, pp. 31-38
[1] Ammar H. Khan, Muhammad A. Imran and Barry G. Evans, OFDM based Adaptive Beamforming for Hybrid Terrestrial-Satellite Mobile System with Pilot Reallocation, International Workshop on Satellite and Space Communications, IEEE, pp. 201-205
[2] Litwin, L. and Pugel, M, The Principles of OFDM, RF signal processing Magazine Jan 2001 pp 30-48, www.rfdesign.com
Accessed [ 21 April 2010]
[3] Shaoping Chen and Cuitao Zhu, ICI and ISI Analysis and Mitigation for OFDM Systems with Insufficient Cyclic Prefix in
Time-Varying Channels, IEEE Transaction on Consumer Elecronics, vol. 50, No. 1, Feb 2004
[4] C. K. Kim, K. Lee and Y.S. Cho, Adaptive beamforming algorithm for OFDM systems with antenna arrays, vol. 46, no. 4, pp.
1052-1058, Nov. 2000.
[5] http://ofdm-n-mimo.blogspot.in/2008/04/ofdm-mimo-operational-characteristics.html
[6] Herman, W. (2006, October). The Magic of OFDM-MIMO for Maximizing Airwaves. Retrieved December 5, 2007.
[7] C. K. Kim, K. Lee and Y.S. Cho, Adaptive beamforming algorithm for OFDM systems with antenna arrays, vol. 46, no. 4, pp.
1052-1058, Nov. 2000.
[8] H. Matsuoka and H. Shoki, Comparison of pre-FFT and Post-FFT processing adaptive arrays for OFDM system in the presence
of co-channel interference, in Proc. 14th IEEE on Personal, Indoor and Mobile Radio Communications PIMRC 2003, vol.2, 7-10
Sept. 2003, pp. 1603-1607.
[9] D. Zheng and P. D. Karabinis, Adaptive beamforming with interference suppression in MSS with ATC, [Online]. Available:
www.msvlp.com
[10] William Y. Zou and Yiyan Wu, COFDM: An Overview, IEEE Transactions on Broadcasting, vol. 41, NO, 1, March 1995
[11] P.D. Karabinis, Systems and methods for terrestrial reuse of cellular frequency spectrum, USA patent 6 684 057, January 27,
2004
[12] P.D. Karabinis, S. Dutta, and W. W. Chapman, Interference potential to MSS due to terrestrial reuse of satellite band
frequencies, [online]. Available: www.msvlp.com
[13] W. G. Jeon, K H. Chuang, and Y. S. Cho, An equalization technique for orthogonal frequency division multiplexing

systems in time-variant multipat channels, IEEE Trans. Commun., vol. 49, pp. 1185- 1191, January 1999

[14] (S. Colieri, M. Ergen, A. Piro, Bahai. A study of channel estimation in OFDM systems Vehicular Technology Conference, 2002.
Proceedings VTC 2002-Fall. 2002 IEEE 56th, Vol. 2 (10 December 2002), pp. 894-898 Vol. 2
[15] L. Hanzo, W. Webb, and T. Keller, Single and Multi-carrier Quadrature Amplitude Modulation: Principles and Applications for
Personal Communications, WLANs and Broadcasting,. John Wiley and IEEE Press 2000
[16] F.Mueller Roemer, Directions in audio broadcasting, Journal of the Audio Engineering Society, vol.44, pp.158-173, March
1993 and ETSI, Digital Audio Broadcasting (DAB),2nd ed.,May 1997. ETSI 300 401
[17] ETSIDigital Video Broadcast (DVB);Framing structure, channel coding and modulation for digital terrestrial television, August
1997. EN 300 744 V1.1.

ANDROID SUBURBAN BUS TICKET SYSTEM


Kushal Pal Singh1, Saurabh Kulkarni2, Harshdev Singh Randhawa3, Deepesh Jain4, Prof. L.J. Sankpal5
1,2,3,4 Student, 5 Guide, Department of Computer Engineering, Sinhgad Academy of Engineering, Pune, Maharashtra, India

Abstract- In cities like Pune and Mumbai, buses are the nerves of the city, but they also offer an open invitation to evil minds to cause mishaps, because no record of passenger data is maintained. Tickets cost odd amounts, and there are many other problems while buying them. In this advanced world we are still dependent on paper tickets and cannot even book a ticket in advance, which does not seem fair, so here is a solution. In our proposed system a ticket can be bought with just a smartphone application, and you can carry your suburban tickets in your smartphone as a QR (Quick Response) code. The system uses the smartphone's GPS facility to validate and delete your ticket automatically after a specific interval of time, once the user reaches the destination. The user's ticket information is stored in a cloud database for security, something that is missing in the present suburban system. Also, the ticket checker is provided with a checker application to search for the user's ticket, by ticket number, in the cloud database for checking purposes.
Keywords: Android; SQLite; Cloud Database; QR code

Introduction
In the past few years there have been many advances in the field of technology. Considering the bus department, an e-ticket facility was introduced in which users browse a government website and book their long-journey bus tickets, which can be printed out after confirmation and shown to the checker when needed. In foreign countries the use of Oyster cards and Octopus cards has become common during travel, and even the PMPML and BEST bus systems have the facility of monthly passes. But we suffer if we forget our travel cards, and we still have to stand in the queue for local suburban tickets, which is exactly where e-ticketing and m-ticketing can lay their footprints. Android Suburban Bus (ASB) ticketing mainly targets the suburban tickets, which are the most challenging. An ASB ticket can be bought with just a smartphone application, and you can carry your suburban bus tickets in your smartphone as a QR (Quick Response) code. The application uses the smartphone's GPS facility to validate and delete your ticket automatically after a specific interval of time, once the user reaches the destination. The user's ticket information is stored in a cloud database for security, something missing in the present suburban system. Also, the ticket checker is provided with a checker application to search for the user's ticket with the ticket number in the cloud database for checking purposes.
SQLite implements most of the SQL standard, using a dynamically and weakly typed SQL syntax that does not guarantee domain integrity. SQLite operations can be multitasked, though writes can only be performed sequentially. The source code for SQLite is in the public domain. SQLite has bindings to many programming languages and is the most widely deployed database engine.
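To make the role of SQLite concrete, the sketch below stores the kind of passenger record described later in Section 4.1 using Python's built-in sqlite3 module; the table name and columns are assumptions for illustration, not the app's actual schema (the Android app itself would use Android's SQLite API rather than Python).

```python
import sqlite3

conn = sqlite3.connect("asb_tickets.db")
conn.execute("""CREATE TABLE IF NOT EXISTS passenger (
                    id INTEGER PRIMARY KEY,
                    first_name TEXT, last_name TEXT,
                    date_of_birth TEXT, city TEXT, state TEXT)""")
# Writes are serialized by SQLite, as noted above.
conn.execute("INSERT INTO passenger (first_name, last_name, date_of_birth, city, state) "
             "VALUES (?, ?, ?, ?, ?)",
             ("Asha", "Kumar", "1990-01-15", "Pune", "Maharashtra"))
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM passenger").fetchone()[0])
conn.close()
```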
3. Present System
Android Cloud to Device Messaging (C2DM) is a service that helps developers to send data from servers to their applications on
Android devices. This service provides a simple and lightweight mechanism that servers can use to tell mobile applications to contact
the server directly, to fetch updated application or user data. The C2DM service handles all aspects of queuing messages and delivery
to the target application running on target device. A QR code is a type of matrix-barcode (or 2-D code). The code consists of black
modules arranged in a square pattern on a white background. The information encoded can be made up of four standardized modes of
data (numeric, alphanumeric, Kanji, byte/binary).
Encryption:
An Android app manages encryption and decryption of QR codes using the DES algorithm (56 bits); Japanese immigration, for example, uses encrypted QR codes when placing visas in passports.
Error correction:


Code words are 8 bits long and use the Reed-Solomon error correction algorithm, with four error correction levels. The higher the error correction level, the lower the storage capacity. The approximate error correction capabilities at each of the four levels are as follows:
Level L - 7% of code words can be restored.
Level M - 15% of code words can be restored.
Level Q - 25% of code words can be restored.
Level H - 30% of code words can be restored.
Due to the design of the codes and the use of 8-bit code words, an individual code block cannot exceed 255 code words in length. The larger QR symbols therefore contain much more data, and it becomes necessary to break the message up into multiple blocks. The QR specification defines the block sizes so that no more than 15 errors need be corrected within each block; this limits the complexity of certain steps in the decoding algorithm. The code blocks are then interleaved together, making the symbol less susceptible to localized damage.

4. PROPOSED SYSTEM
In this proposed system we show that a ticket-booking system for suburban buses is possible using an Android mobile application. The checker is provided with a checker application which can validate a ticket either by referring to the unique code carried in the ticket or by scanning the Quick Response (QR) code. Every ticket is given a unique code, and this code is converted to a QR code using the Google API, which produces a secure code. The interface can be created using Eclipse; a simple interface can be developed with the development tools for Android 2.2 (API level 8), and the user's phone should have at least this Android version and be GPS-enabled. The checker's application is a simple decoder application which decodes the QR code and verifies the ticket against the cloud database. In case the QR code cannot be decoded because of some defect, the checker can enter the unique ticket code and do the verification. The database of the system is created in the cloud and a reference to it is maintained. We also propose a GPS ticket-verification scheme: the ticket is stored in the cloud, and when the user travelling by bus enters the bus, the ticket is activated and the application tracks the user from source to destination. As the user reaches the destination, the ticket is deleted from the cloud or, in technical terms, its status flag is changed to "used". This is the proposed system, which is capable of issuing the ticket and operating on it.

Figure 4.1: Block Diagram for ASB ticket booking



4.1 Individual Details


The installation of the application starts with personal information. It gathers customer information such as first name, last name, date of birth, city, state etc., and all this information is stored in the user's mobile SQLite database. Whenever the user buys a ticket, this information is also sent to the database. This process is basically used for security purposes and QR generation.
4.2 Ticket Buying
First the user selects the source point, destination point, class, number of child and adult tickets, and the ticket type (return or single etc.). Then the user browses the list of options and chooses either the credit-buy option or the coupon-buy option, which simplifies the buying process by remembering the credit card details. Once the user selects one of these options, the application moves on to the PIN code validation module.
4.3 Pin Code Justification:
When the customer presses the buy button, PHP code on the bus server validates the PIN number and password; if this is successful, it saves both the journey details and the customer information in the server's MySQL database. After this, the ticket number and time of purchase are generated by the PHP code and the remaining credit value is displayed.

4.4 QR Code Generation:


Once the PHP code has generated the ticket number, the time of purchase and the details saved in the MySQL database are sent to the Google Chart API engine in order to generate the QR code. All the individual particulars and ticket information are transformed into a QR code, sent back to the user's mobile as the HTTP response and saved in the application's memory.
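As a rough illustration of this step, the snippet below builds a request to the (now deprecated) Google Chart API QR endpoint from the ticket fields; the payload layout, field names and image size are assumptions for illustration, not the paper's exact server code.

```python
from urllib.parse import urlencode

def qr_chart_url(ticket_no, name, src, dst, bought_at, size="300x300", ec_level="H"):
    """Return a Google Chart API URL that renders the ticket data as a QR code.

    cht=qr selects the QR chart type, chs the image size, chl the payload and
    chld the error-correction level; the payload format itself is illustrative.
    """
    payload = f"ASB|{ticket_no}|{name}|{src}->{dst}|{bought_at}"
    params = {"cht": "qr", "chs": size, "chl": payload, "chld": f"{ec_level}|0"}
    return "https://chart.googleapis.com/chart?" + urlencode(params)

print(qr_chart_url("T12345", "A. User", "Swargate", "Hadapsar", "2014-10-01 09:30"))
```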
4.5. GPS Ticket Justification:
GPS plays the role of the checker: when the user buys the ticket, the source geopoints, destination geopoints, ticket type, and expiry time and date are stored in the mobile SQLite database. This facility checks the user's current location against the destination geopoints, after which the ticket type is checked; the ticket is then deleted if the type is single, or updated if the type is return.
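The sketch below illustrates one way such a GPS check could be made: compare the current fix with the stored destination geopoints using the haversine distance and act on the stored ticket type. The arrival radius, field names and the delete/update actions are hypothetical and only stand in for the app's SQLite operations.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def check_ticket(current_fix, ticket, arrival_radius_m=200.0):
    """Delete a single ticket or mark a return ticket once the destination is reached."""
    dist = haversine_m(current_fix["lat"], current_fix["lon"],
                       ticket["dest_lat"], ticket["dest_lon"])
    if dist > arrival_radius_m:
        return "still travelling"
    if ticket["type"] == "single":
        return "delete ticket"          # stands in for the SQLite delete
    return "mark outward leg used"      # stands in for the SQLite update
```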
4.6. Examining QR code with QR Reader
In this part the checker has a QR code reader and inspects the QR code with the application in order to authenticate the code and validate the journey ticket, particularly its time and date.
4.7. Read-through with Database:
If the user's display is damaged, or the QR code cannot be examined for other reasons such as battery failure, the checker can use another reliable option: checking the ticket by querying the ticket database with the user's ticket number for verification purposes.

Figure 4.2 Block Diagram for ticket checking


CONCLUSION
The system will add a more secure and systematic way to travel in suburban buses. The system will be very adaptive in nature and can
be implemented in different cities with ease. Also the system will give an alternative to the conventional ticket booking and will
change the style of ticket booking.

ACKNOWLEDGEMENTS
We would like to sincerely thank Prof. L.J. Sankpal, our guide from Sinhgad Academy of Engineering, for her support and encouragement.

REFERENCES
[1] Damon Oehlman and Sebastien Blanc (2011), Pro Android Web Apps: Develop for Android using HTML5, CSS3 & JavaScript, Apress Publications.
[2] Dave Smith and Jeff Friesen (2011), Android Recipes: A Problem-Solution Approach, Apress Publications.
[3] Jeff "JavaJeff" Friesen (2010), Learn Java for Android Development, Apress Publications.
[4] Lauren Darcey and Shane Conder (2010), Sams Teach Yourself Android Application Development, Sams Publications.
[5] Mark Murphy (2011), Beginning Android 3, Apress Publications.
[6] Reto Meier (2009), Professional Android Application Development, Wiley Publishing Inc.
[7] Satya Komatineni (2009), Pro Android, Apress Publications.
[8] Shawn Van Every (2009), Pro Android Media: Developing Graphics, Music, Video and Rich Media Apps for Smartphones and Tablets, Apress Publications.
[9] Wallace Jackson (2011), Android Apps for Absolute Beginners, Apress Publications.
[10] Wei-Meng Lee (2011), Beginning Android Application Development, Wiley Publishing Inc.


Limnological Study of Dal Lake, Srinagar, Jammu and Kashmir


Muzaffar U Zaman Khan1, Ishtiyaq Majeed Ganaie2, Kowsar Hussain Bhat3
Email: Muzaffarkhan722@gmail.com

ABSTRACT: Dal Lake, known as the jewel of Kashmir, is a world-famous lake in the city of Srinagar, the summer capital of the Jammu and Kashmir state, India. The current study aimed at analysing various physico-chemical parameters of the lake and assessing its water quality using the methodology given in APHA 1998. The results revealed higher values for nitrogen, reflecting the excessive use of nitrogenous fertilizers in the floating gardens of the lake and in the agricultural fields surrounding it. Phosphate values also remained on the higher side, indicating the eutrophic nature of the lake. The values for iron were also high, while the results for alkalinity revealed that the lake waters fall under the category of hard waters.
KEY WORDS: Dal Lake, APHA, nitrogen, phosphate, fertilizers, floating gardens, eutrophic, alkalinity, hard waters.
INTRODUCTION
Fresh water bodies constitute only 3% of the total volume of water on earth, the remaining 97% being found in the oceans. These water bodies have a bearing on the economy as they provide potable water, fish and fodder. Fresh waters are, however, the most vulnerable habitats, most likely to be changed by the activities of man, because lakes act as sinks for sewage and waste disposal while rivers are natural drains for the removal of waste to the sea. The useful aspects of these water bodies and their vulnerability to man's activities have necessitated their study, and the study of the various geological, physico-chemical and biological aspects of these water bodies comes under the scope of limnology.
From the standpoint of their nutrient content, lakes range from eutrophic (well nourished) and mesotrophic (moderately nourished) to oligotrophic (little nourished). Eutrophication results from an increase in essential plant nutrients such as nitrogen, phosphorus, iron and carbon.
Cultural eutrophication differs from natural eutrophication in that it is greatly accelerated in the sense of geological time; as the term implies, man is the causative agent for the enrichment of natural water in various ways.
An accelerated nutrient cycle and faster transport of soil constituents increase sedimentation rates and lead to enrichment of surface waters with nutrients, followed by changes in the chemical and biological composition of the aquatic habitats. Obviously, a lake ecosystem has physical, chemical, biological and geological inputs and outputs, and a substantial amount of the disposals and discards of society reaches lakes. The quality of a water body generally reflects the range of human activities within its catchment area; in a broader sense, potential perturbations of lakes may be related to population density and energy dissipation in the drainage area of the lake. Deposition of sediments is a continuous process, as is seen in the Dal, Anchar, Wular, Hokersar and Nilnag lakes, and can fill a lake basin completely. For instance, in Dal Lake the denudation of the mountainous catchment results in heavy quantities of silt flowing into the lake system. The total volume of silt brought into the lake has been estimated at 80,000 tonnes per year, out of which 70% is brought by the
feeding channel, Telbal Nallah. In addition, another 40,000 to 50,000 tonnes of dead weed and allochthonous material are added every year (ENEX, 1978).
This state of affairs calls for a detailed study of freshwater bodies, not only to analyze the dynamics of and interactions among their structural components but also to identify the causes behind cultural eutrophication. Only then can we go for the conservation of these natural resources for the multiple purposes of food supply, irrigation, drinking, recreation and tourist attraction.
GENERAL INFORMATION ABOUT THE PLACE OF WORK
Dal Lake is known as the Jewel of Kashmir and is the second biggest lake of Jammu and Kashmir. It lies on the east side of the ancient city of Srinagar.
The Dal, having a catchment of about 316 km2, is divided into several distinct basins, namely Gagribal, the Bod Dal (large lake), the Hazratbal basin and Lokat Dal. During its survey in 1870, the Dal extended 5 to 6 miles from north to south and 2 to 3 miles from east to west. Presently, the Dal has shrunk and is devoid of its former depth and of fresh, transparent waters owing to the high rate of pollution.
"Perhaps in the whole world," wrote Walter Lawrence in his famous book The Valley of Kashmir, "there is no corner so pleasant as the Dal Lake."
Dal Lake area (km2)

S.No.   Division            Open water basin   Marshy land   Total area
01      Hazratbal           5.6                3.2           8.8
02      Bod Dal Basin       4.2                -             4.2
03      Gagribal Basin      1.3                -             1.3
04      Boulevard Basin     0.3                0.2           0.5
05      Floating Gardens    0.3                4.5           4.8
        Total               11.7               7.9           19.6

Source: Geography of Jammu and Kashmir (Hussain, M.), 2003


LAKE CLIMATE
The climatic conditions of the lake are temperate for the major part of the year. The climate is characterized by warm summers with maximum temperatures of 33 °C and cold winters with sub-freezing temperatures.

Kaul (1979) states that the climate of Kashmir is highly variable and does not conform to any definite type; according to the author, the winter is spread over a longer period than the summer.

METHODOLOGY:
For the present study, two sampling stations, (i) the littoral zone (site I) and (ii) the central zone (site II) of the Hazratbal basin, were selected for detailed investigations. The water samples were analyzed for various physico-chemical parameters following the methodology in APHA, 1998.
PHYSICAL PARAMETERS:
TEMPERATURE:
The atmospheric temperature at the sampling site was recorded with the help of a Celsius thermometer, avoiding exposure of the mercury bulb to direct sunlight.
WATER TRANSPARENCY:
Light penetration through the lake water was measured with a Secchi disc of 20 cm diameter, painted black and white on the upper surface and black on the lower surface.
T = (X + Y) / 2

where T, X and Y represent the transparency in cm, the depth at which the disc just disappeared while lowering the rope, and the depth at which the disc reappeared while raising the rope (Poole and Atkin, 1929).
CHEMICAL ANALYSIS:
The chemical analysis of the water samples for various characteristics was carried out using the methods outlined in APHA, 1998.
HYDROGEN ION CONCENTRATION:
The pH of the water samples was measured using an Elico digital pH meter. Before use, the pH meter was calibrated each time against buffer solutions of known hydrogen ion concentration, usually at pH 4, pH 7 or pH 9.
SPECIFIC CONDUCTIVITY:
The specific conductivity of the water samples was determined using a Systronics direct-reading conductivity meter. The instrument was calibrated using N/10 KCl solution at 25 °C. The results are expressed in µS at 25 °C.
DISSOLVED OXYGEN:
To a sample collected in a 250 ml glass bottle, 1 ml each of manganous sulphate solution and alkaline iodide-azide solution were added, one after the other, with separate pipettes. The precipitate (manganous hydroxide floc) formed was dissolved after about four minutes with the help of concentrated sulphuric acid. The fixed samples were carried to the laboratory, where they were titrated against 0.025 N sodium thiosulphate solution using starch solution as indicator; the end point was noted at the first disappearance of the blue colour. The amount of DO present was then calculated using the formula

DO (mg/l) = (Volume of titrant × 0.2 × 1000) / Volume of sample

where the factor 0.2 represents that 1 ml of the sodium thiosulphate titrant is equivalent to 0.2 mg of oxygen.
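For convenience, the calculation above can be expressed as a small helper; the function below simply encodes the stated formula, with the example volumes chosen purely for illustration.

```python
def dissolved_oxygen_mg_per_l(titrant_ml, sample_ml, mg_o2_per_ml_titrant=0.2):
    """DO (mg/l) = (volume of titrant x 0.2 x 1000) / volume of sample."""
    return titrant_ml * mg_o2_per_ml_titrant * 1000.0 / sample_ml

# Example: 4.1 ml of 0.025 N thiosulphate used for a 100 ml aliquot (illustrative values).
print(round(dissolved_oxygen_mg_per_l(4.1, 100.0), 2))  # -> 8.2 mg/l
```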
ALKALINITY:
For estimation of phenolphthalein alkalinity (i.e., alkalinity due to OH and CO3), a sample volume of 50 ml was titrated against 0.02 N H2SO4 in the presence of phenolphthalein indicator till the disappearance of the pink colour, and the volume of titrant used was noted. Then, for estimation of total alkalinity (i.e., alkalinity due to OH, CO3 and HCO3), the same sample was titrated further with the 0.02 N acid in the presence of methyl orange indicator till the colour changed from yellow to orange, and the total volume of titrant was noted. When no pink colour formed after the addition of phenolphthalein indicator, the sample was run through the same procedure with methyl orange indicator, as mentioned above, for total alkalinity only. Phenolphthalein alkalinity (P) and total alkalinity (T) were then calculated using the formulas given below:

Phenolphthalein alkalinity (P) as mg/l CaCO3 = (Volume of titrant used × N × 50,000) / Volume of sample

Total alkalinity (T) as mg/l CaCO3 = (Volume of titrant used × N × 50,000) / Volume of sample
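A small sketch of this calculation is given below; it just restates the two formulas, and the normality and volumes in the example are illustrative only.

```python
def alkalinity_mg_caco3_per_l(titrant_ml, normality, sample_ml):
    """Alkalinity (mg/l as CaCO3) = (volume of titrant x N x 50,000) / volume of sample."""
    return titrant_ml * normality * 50000.0 / sample_ml

# Illustrative example with 0.02 N H2SO4 and a 50 ml sample:
p = alkalinity_mg_caco3_per_l(1.2, 0.02, 50.0)   # phenolphthalein alkalinity
t = alkalinity_mg_caco3_per_l(5.5, 0.02, 50.0)   # total alkalinity
print(round(p, 1), round(t, 1))                  # -> 24.0 110.0
```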

CALCIUM HARDNESS:
To 25 ml of water sample, 2 ml of 1 N NaOH buffer and a spatula of Solochrome dark blue powder were added. Titration against EDTA was continued till the colour of the sample changed from light pink to blue. The calcium hardness was then calculated using the formula given below:

Calcium hardness as mg/l CaCO3 = (Volume of titrant used (V2) × 1000 × 1.05 (mol. wt. of CaCO3)) / Volume of sample

MAGNESIUM HARDNESS:
To 25 ml of water sample, 25 ml of distilled water and 1 ml of magnesium buffer were added one after the other, followed by 2 drops of Eriochrome Black T indicator. Titration against EDTA was continued till the colour of the sample changed from purple to blue. The magnesium content of the water sample was then estimated by the following formula:
Mg (mg/l) = [Total hardness (as mg CaCO3/l) - Calcium hardness (as mg CaCO3/l)] × 0.243
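The same relation can be checked numerically; a minimal Python sketch with illustrative hardness figures:

def magnesium_mg_per_l(total_hardness_caco3, calcium_hardness_caco3):
    # Magnesium from the difference of total and calcium hardness (both as mg CaCO3/l)
    return (total_hardness_caco3 - calcium_hardness_caco3) * 0.243

# Example: total hardness 47 mg/l and calcium hardness 29.5 mg/l as CaCO3
print(round(magnesium_mg_per_l(47, 29.5), 2))  # 4.25 mg/l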
CHLORIDE:
To 100 ml of water sample, 1 ml of potassium chromate indicator was added. Once the yellow colour had formed, the sample was titrated against standard silver nitrate solution (0.0141 N) to the formation of a faint brick-red colour. Then, in accordance with the formula given in APHA (1998), the chloride content of the sample was calculated as:
Chloride (mg/l) = (Volume of titrant used × 35.46 × 0.0141 × 1000) / Volume of sample
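A short Python sketch of the argentometric calculation above (the titre value is illustrative):

def chloride_mg_per_l(titrant_ml, sample_ml, agno3_normality=0.0141):
    # Equivalent weight of chloride taken as 35.46, as in the formula above
    return titrant_ml * 35.46 * agno3_normality * 1000 / sample_ml

# Example: 3.2 ml of 0.0141 N AgNO3 for a 100 ml sample
print(round(chloride_mg_per_l(3.2, 100), 1))  # 16.0 mg/l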
SULPHATE:
To 100 ml of the water sample, add 20 ml of sulphate buffer and take the absorbance at 420 nm. Then add one spatula of barium chloride. Stir well with a magnetic stirrer till turbidity develops. Now again take the absorbance at 420 nm (post absorbance).

IRON:
To 25 ml of the water sample, add 0.5 ml of 1:1 HCl, 1 drop of bromine water and 0.5 ml of potassium thiocyanate, one after the other. Record the reading at 480 nm.
NITRATE (NO3-N):
Free chlorine interferes with the nitrate determination. If the sample has residual chlorine, remove it by addition of 0.05 ml (one drop) of sodium arsenite solution for each 0.1 mg of chlorine. Take 10 ml of sample, or an aliquot diluted to 10 ml, in a 50 ml test tube. Put all the tubes in a wire rack. Place the rack in a cool water bath and add 2 ml of NaCl solution. Add 10 ml of H2SO4 solution after mixing the contents thoroughly by swirling by hand. Now add 0.5 ml of brucine and mix thoroughly. Place the rack in a hot water bath with boiling water for exactly 20 minutes. Cool the contents again in a cold water bath and take the reading at 410 nm.
NITRITE (NO2-N):
To 50 ml of colourless filtered sample, add 1 ml each of EDTA, sulphanilic acid, α-naphthylamine hydrochloride and sodium acetate solutions in sequence. A wine-red colour will appear in the presence of nitrites. Take the reading at 520 nm.
AMMONIA (NH3-N):
To 100 ml of the sample, add 1 ml of zinc sulphate (ZnSO4) and 0.5 ml of 6 N NaOH, one after the other. Allow it to stand for 10 minutes till the whole precipitate settles to the bottom of the flask. Now take 50 ml of the supernatant carefully, so that no precipitate is carried over in this solution, and add one drop of EDTA and 2 ml of Nessler reagent to this aliquot. Allow it to stand for 10 minutes. Take the reading at 520 nm.
ORTHOPHOSPHATE (PO4-P):
To 100 ml of water sample, add one drop of phenolphthalein indicator. If a pink colour develops, add strong acid to decolorize it. Then add 4 ml of ammonium molybdate and 0.5 ml of stannous chloride, one after the other, and allow it to stand for 10 minutes to develop colour. Note the reading at 690 nm.
TOTAL PHOSPHORUS:
To 25 ml of water sample, add 1 ml of H2SO4 and 5 ml of HNO3, one after the other. Digest it over the hot plate. After cooling the flasks, add 20 ml of distilled water and one drop of phenolphthalein indicator. Titrate it against 1 N NaOH till a pink colour appears. Raise the sample volume up to 100 ml by adding distilled water and add 1-2 drops of strong acid solution to discharge the pink colour. Now add 4 ml of ammonium molybdate and 0.5 ml of stannous chloride to this sample and allow it to stand for 10 minutes to develop colour. Note the reading at 690 nm.
RESULTS
The results obtained for the various physico-chemical parameters are shown below in Tables 1 to 16.
Physico-chemical parameters:
Table 1. Variation in air and water temperature (°C) at two sites of Dal Lake

            Site I              Site II
            Air      Water      Air      Water
Maximum     18.5     16.5       21.2     17.1
Minimum     8.0      6.3        6.7      6.1
Average     15.35    11.9       14.1     12.2

Table 2. Variation in Secchi transparency (m) at two sites of Dal Lake

            Site I    Site II
Maximum     2.15      2.5
Minimum     1.5       1.75
Average     1.78      2.06

Table 3. Variation in conductivity (µS/cm) at two sites of Dal Lake

            Site I                Site II
            Surface    Bottom     Surface    Bottom
Maximum     346        351        319        331
Minimum     327        333        262        275
Average     334        341        299        315

Table 4. Column variations in pH at two sites of Dal Lake

            Site I                Site II
            Surface    Bottom     Surface    Bottom
Maximum     8.06       7.81       7.97       7.9
Minimum     7.4        7.3        7.60       7.58
Average     7.79       7.5        7.7        7.7


Table 5. Column variation in total alkalinity (mg/l) at two sites of Dal Lake

            Site I                Site II
            Surface    Bottom     Surface    Bottom
Maximum     129        144        120        129
Minimum     91         100        96         77
Average     108        118        112        102

Table 6. Variations in D.O. (mg/l) at two sites of Dal Lake

            Site I                Site II
            Surface    Bottom     Surface    Bottom
Maximum     12.4       12.8       12.0       11.2
Minimum     6.0        4.8        7.5        6.0
Average     8.4        8.3        9.8        9.1

Table 7. Variations in calcium hardness (mg/l) at two sites of Dal Lake

            Site I                Site II
            Surface    Bottom     Surface    Bottom
Maximum     32         38         32         35.2
Minimum     24         30         25.6       25.6
Average     29.5       36         29.6       31.2

Table 8. Column variations in Mg (mg/l) at two sites of Dal Lake

            Site I                Site II
            Surface    Bottom     Surface    Bottom
Maximum     4.8        7.7        7.7        7.7
Minimum     3.8        3.8        4.8        1.9
Average     4.3        5.2        5.7        4.5


Table 9. Column variations in chloride (mg/l) at two sites of Dal Lake

            Site I                Site II
            Surface    Bottom     Surface    Bottom
Maximum     34         15         10         11
Minimum     9.5        8.5        6.5        7.0
Average     15.8       10.6       9.1        10.2

Table 10. Column variations in sulphate (mg/l) at two sites of Dal Lake

            Site I                Site II
            Surface    Bottom     Surface    Bottom
Maximum     41         40         28.5       30.9
Minimum     13.6       14.3       12.2       12.2
Average     22.8       22.7       20.2       21.3

Table 11. Column variations in iron (µg/l) at two sites of Dal Lake

            Site I                Site II
            Surface    Bottom     Surface    Bottom
Maximum     556        201        143        191
Minimum     47.9       86.3       86.3       57.5
Average     215.6      131.6      83.5       93.1

Table 12. Column variations in nitrate nitrogen (µg/l) at two sites of Dal Lake

            Site I                Site II
            Surface    Bottom     Surface    Bottom
Maximum     844        1147       873        850
Minimum     728        401        699        745
Average     783        739        776        794

Table 13. Column variations in nitrite nitrogen (µg/l) at two sites of Dal Lake

            Site I                Site II
            Surface    Bottom     Surface    Bottom
Maximum     58.7       50.9       88.7       39.0
Minimum     19.3       23.8       16.0       18.4
Average     39.8       37.9       36.3       25.1

Table 14. Column variations in ammonia nitrogen (µg/l) at two sites of Dal Lake

            Site I                Site II
            Surface    Bottom     Surface    Bottom
Maximum     152        209        172        156
Minimum     12.9       10         12.9       28.7
Average     57         104        121        115

Table 15. Column variations in PO4-P (µg/l) at two sites of Dal Lake

            Site I                Site II
            Surface    Bottom     Surface    Bottom
Maximum     67         122        105        99
Minimum     36         52         74         24
Average     52.6       86.4       92.7       53


Table 16. Column variations in total phosphorus (µg/l) at two sites of Dal Lake

            Site I                Site II
            Surface    Bottom     Surface    Bottom
Maximum     308        280        304        280
Minimum     48         135        96         86
Average     175        201        167        151

DISCUSSION
The various parameters were investigated in Dal Lake for a period of four months; the seasonal fluctuations of the different physical and chemical factors have been treated separately and their influence upon one another discussed.
The water temperature followed air temperature rather closely, as is common for small bodies of water (Wetzel, 1975). In an aquatic ecosystem, water transparency is one of the most important features controlling the energy relationships at different trophic levels. Transparency of Dal Lake ranged from 1.5 to 2.15 m. Rawson (1960) and Pechlaner (1968-71) have used water transparency as an index of eutrophication in lakes. Low transparency in the lakes could be due to silt content in the water which comes through inflow channels. A similar situation was reported by Zutshi (1968) in the case of Anchar Lake.
According to Zutshi et al. (1980), light penetration in water is considerably reduced either as a result of high plankton density or due to large quantities of suspended matter.

At site I, the pH of Dal Lake ranged from 7.4 to 8.06 units, while at the bottom it ranged from 7.58 to 7.9. The lake did not exhibit much difference in its pH values. The lake water seems to be well buffered, as no abrupt changes were observed in the pH values. This is in conformity with the findings of Khan and Zutshi (1980): when total alkalinity is greatest, the bicarbonate system prevails and the pH range is usually on the alkaline side, as is the case with the present findings.

Increased specific conductivity values have been suggested as an indication of a high trophic level (Berg et al., 1958). Many workers have related an increase in electrical conductivity to the state of enrichment. Applying this criterion to Dal Lake (327 to 346 µS/cm), it is observed that the lake is at a higher level of enrichment. The overall values of conductivity come close to those of other semi-drainage lakes of the region, as reported by Zutshi and Vass (1977). There was not much difference between bottom and surface conductivity values at either site.


The average value of D.O. in the bottom waters of Dal Lake was less than that of the surface water at both sites. The higher concentration of oxygen in the surface water could be attributed to aerial saturation. Also, metabolic activities such as microbial respiration consume dissolved oxygen and hence reduce the bottom oxygen concentration.
Dissolved oxygen changes in the water column have been regarded as the most reliable parameter in assessing the trophic status and the magnitude of eutrophication in an aquatic ecosystem (Edmondson, 1966). In the present study, the lack of a hypolimnetic O2 deficit may be due to the shallow depth, resulting in mixing of the water mass.

At site I, the alkalinity varied from 91 to 129 (X = 108). At site II, the range was 96 to 120 (X = 112) for the period of 4 months. Earlier, Moyle classified lake waters as soft, medium and hard on the basis of total alkalinity values. According to this classification, waters having alkalinity up to 40 mg l-1 are soft, those with 40-90 mg l-1 are medium, and those above 90 mg l-1 are hard. If this categorization is applied, Dal Lake falls under the hard water type, with bicarbonate alkalinity prevailing throughout the year. Bottom alkalinity values of the lake were comparatively higher at site I, which may probably be due to the precipitation of carbonate from sediments by aquatic organisms and their subsequent conversion to bicarbonate by carbonic acid.

Chloride content of Dal Lake varied from a minimum value of 9.5 mg l-1 to a maximum value of 34 mg l-1 (X = 15.8). Thresh et al. (1944) attributed high chloride concentration of water to organic pollution of animal origin. Cole (1975) concluded that human and animal excretion contains on an average 5 g Cl-1. On the basis of water quality criteria, the chloride content of Dal Lake falls within the acceptable limit (< 200 mg l-1), but it was generally high when compared to the other valley lakes, with the exception of Trigam Lake, Kashmir, in which high concentrations of chloride have been reported (Khan, 1978).

Calcium is generally the dominant cation in Kashmir lakes (Zutshi, 1980). Zutshi and Khan (1977) recorded a ratio of 4:1 for calcium and magnesium in some Kashmir lakes. In the present study, a ratio of 7:1 for Ca and Mg was obtained in Dal Lake. Calcium is predominant in the lake because of the predominance of lime-rich rocks in its catchment. It is also related to the agricultural fertilizers (lime and superphosphate) used in the floating gardens and in the paddy cultivation in the lake catchment. The lake may be classified as calcium rich, as it depicted a range of 24 to 32 mg l-1 (X = 29.5 mg l-1). But the Mg content remains generally low (mean = 4.3 mg l-1) for Dal Lake. The low concentration may be due to uptake of Mg2+ by plants in the formation of chlorophyll, the magnesium porphyrin metal complex, and in enzymatic transformations (Wetzel, 1975).

The ammoniacal nitrogen in Dal Lake ranged between 10 and 209 µg l-1 (X = 104) at site I and between 12.9 and 172 µg l-1 (X = 121) at site II. The ammonia concentration of the lake falls within the acceptable limit of 0.5 mg l-1.


Ellis et al. (1946) stated that the amount of ammonia and ammonia compounds in unmodified natural waters is very small (0.1 mg l-1), while quantities of more than 1 mg l-1 are indicative of organic pollution. Rybok and Sikorsha (1976) also observed more than 99% of total N to be constituted by ammonia in sewage-affected zones of the lake.

The nitrate content in Dal Lake ranged between 728 and 844 µg l-1 (X = 783) at site I during the study period. At site II it ranged from 699 to 873 µg l-1 (X = 776). High levels of nitrogen in the lakes may be due to the use of fertilizers by Dal dwellers for their floating gardens and by local inhabitants for their agricultural land on the lake shores. Cooke (1966) observed a rapid increase in nitrate content usually after rains and strong winds. Caulton (1970) suggested that nitrate nitrogen probably comes predominantly from the atmosphere, entering the water body via rains. Nitrate nitrogen is an unstable product of either nitrification of free ammonia or denitrification of nitrates. The concentration in the water depends on the relative abundance of nitrifying and denitrifying bacteria and their activity (Munawar, 1970).
The orthophosphate concentration ranged from 36 to 67 µg l-1 (X = 52.6) at site I and 74 to 105 µg l-1 (X = 92.7) at site II. Surface waters of the lake recorded slightly higher values of PO4-P (X = 92.7) at site II, which may be attributed to the regeneration of PO4 from decaying plant and animal remains, as pointed out by Cooper (1958). The phosphate concentration remained low during the month of February; this may be due to the quick uptake and subsequent storage of phosphate by the plankton, the locking up of phosphate in the dense macrophytic vegetation that abounds in the lake, and the formation of insoluble calcium phosphate complexes, the lake basically being a small lake. Sawyer et al. (1945) suggested that 0.3 mg l-1 of PO4-P and 0.15 mg l-1 of NO3-N are critical levels beyond which algal blooms may appear, indicating cultural eutrophication.

At site I the range of total phosphate in Dal Lake was 48 to 308 µg l-1 (X = 175 µg l-1) and at site II between 96 and 304 µg l-1 (X = 167 µg l-1). On the basis of this parameter, Dal Lake seems to be enriched. Welch (1952) and Ruttner (1953) reported smaller amounts of phosphorus in waters free from contaminating effluents. Hutchinson (1957) related the increase in phosphorus to sewage contamination. Schindler et al. (1971) believe total phosphorus to be the nutrient most frequently controlling eutrophication. Vollenweider (1972) regarded phosphorus as a key element in the processes of eutrophication. The increase in the phosphorus content may be related to the migratory flocks of birds and the perturbations caused by man in the catchment.

Next to phosphorus and nitrogen, iron is often considered to be one of the most important chemical factors for the development of phytoplankton (Rodhe, 1948). As to the range of concentration of iron suitable for algae, the information is scarcer than for nitrogen and phosphorus. Chu (1942) gives a range of 0.02-0.8 mg Fe l-1, and the values for Dal Lake (0.047-0.55 mg l-1) fall within the same range. These values are much higher than the permissible level of 0.05 mg l-1 (Water Resources Centre, Canada, 1968) beyond which the water is unfit for human consumption.


The above study makes it clear that the various physico-chemical parameters of the lake are not in their normal range; as such, the lake water is not suitable for consumption. The lake's diversity is also under serious threat, mainly as a result of various anthropogenic activities which are changing the water quality and threatening its very existence.
Dal Lake Restoration
Acknowledging the socio-economic importance of the lake and the grave concern voiced by the people, several proposals have been put forth from time to time. They include the Srinagar Master Plan of 1971, the Lake Area Master Plan by Stein (1972), the ENEX Conservation Report (1978), the Dal Lake Development Report by Riddle (1985), the Dal Lake Conservation Plan (Iram Consultants, New Delhi, 1997) and yet another project report of the Alternate Hydro Electric Department, University of Roorkee (2000).
Some common measures suggested by these reports are:

- An effective mechanism for monitoring the Dal Lake conservation programme, which should comprise eminent limnologists, researchers and experts who have been closely associated with Dal Lake studies, together with some international agency/organization well known for lake conservation and management practices.
- Prioritization of the works for Dal conservation and completion of jobs within a stipulated time frame.
- Rehabilitation of Dal dwellers keeping in view their socio-economic conditions.
- Effective enactment of laws against encroachments and for the conservation of open water bodies with floating gardens and land masses.
- Restriction of vegetable gardens, floating gardens, growing of lily pads and Nadru cultivation, and demarcation of zones for such activities.
- Aeration, oxidation and ozonization of water affected by algal blooms, under close scientific monitoring.
- Designing of an eco-tourism development plan based on the carrying capacity of the lake and with proper alignment of houseboats as envisaged in the project.
- Exploring the possibility of biotech methods (constructed wetland treatment compartments) for wastewater treatment, as in vogue in Canada, Italy, the US and other European countries.
- Devising methods for houseboat sanitation through floating septic tanks, with proper solid waste management through NGOs and eco-activists.
- Awareness campaigns through NGOs and educational institutions, and strict adherence to the Water Act, 1974.
- Making executing agencies accountable before the monitoring cell overseeing the Dal conservation programme.
- Promoting research and development activities through on-line monitoring devices, as in Japan.
- Cleansing and training of peripheral springs and diversion of their fresh waters to supplement the water budget.
- Empowerment of LAWDA.

REFERENCES:
A.P.H.A. (1998): Standard methods for the examination of water and wastewater. American Public Health Association, New York, 16th edition.
Balkhi, M.H., Yousuf, A.R. and Qadri, M.Y. (1987): Hydrobiology of Anchar Lake, Kashmir.
Chiaudani, G. (1974): The N:P ratios and tests with Selenastrum to predict eutrophication in lakes. Water Res.: 1053-59.
Chu, S.P. (1942): The influence of the mineral composition of the medium on the growth of planktonic algae. I. Methods and culture media. J. Ecol. 30: 204-325.
Edmondson, W.T. and Hutchinson, G.E. (1934): Yale North India expedition, Article 9. Report on Rotatoria. Mem. Conn. Acad. Sci. 91: 153-186.
Enex (1978): Pollution of Dal Lake, Srinagar, Kashmir (India). Report submitted by the Enex consortium to the Jammu and Kashmir government.
Gundroo, N.A. (1989): Dal Lake: Action for its restoration and development. Bhagiroth 36 (4).
Hassan, G.S. (1833): Tarikh Hassan, Vol. 1. Directorate of Research and Publication, Srinagar (Rep. 1954).
Khan, M.A. and Zutshi, D.P. (1979): Relative contribution by nanno- and net plankton towards primary production of two Kashmir Himalayan lakes. J. Indian Bot. Soc. 58: 263-67.
Kundangar, M.R.D., Sarwar, S.G. and Shah, M.A. (1994a): Limnological features of Dal Lake 1991-1992. Technical Report No. 2(B) submitted to Govt. of Jammu and Kashmir.


Lawrence, R.W. (1895): The Valley of Kashmir. Oxford University Press.
Qadri, M.Y. and Yousuf, A.R. (1979): Physico-chemical features of Beehama spring. Geobios 6: 212-214.
Qadri, M.Y. and Yousuf, A.R. (1980): Influence of physico-chemical factors on the seasonality of Cladocera in Lake Manasbal. Geobios 7: 273-276.
Trisal, C.L. (1987): Ecology and conservation of Dal Lake, Kashmir. Butterworth and Co. (Publishers) Ltd.
Vollenweider, R.A. (1970): Water management research: Scientific fundamentals of the eutrophication of lakes and flowing waters, with particular reference to nitrogen and phosphorus as factors in eutrophication. 27-40.
Zutshi, D.P. and Vass, K.K. (1982): Limnological studies on Dal Lake: chemical features. Ind. J. Ecol. 5: 90-97.


GPS Based Advanced Soldier Tracking with Emergency Messages and Communication System
Mr. Palve Pramod
M.E (VLSI & Embedded System)
G. H. Raisoni College of Engineering & Management, Chas,
Ahmednagar, India Contact No: 7709946039
E-mail: pramod.palave@gmail.com

Abstract - In today's world, enemy warfare is an important factor in any nation's security. National security mainly depends on the army (ground), navy (sea) and air force (air). An important and vital role is played by the army soldiers, and there are many concerns regarding their safety. As soon as any soldier enters the enemy lines, it is vital for the army base station to know the location as well as the health status of all soldiers. In our project, the soldier can ask for directions to the army base unit in case he feels that he is lost. By using the location sent by the GPS, the base station can guide the soldier to a safe area, and GSM helps the soldier unit communicate with the base unit. Getting the exact location of soldiers also helps them discuss their war strategies and take guidance from the base unit. Various health sensors, such as a temperature sensor, heart rate sensor, humidity sensor and gas detection sensor, help to decide the health status of a particular soldier.

Keywords Tracking, GPS, Sensors, Navigation, GSM, Differential GPS, EGPRS, UMTS.
INTRODUCTION

The infantry soldier of tomorrow promises to be one of the most technologically advanced that modern warfare has ever seen. The challenge is to integrate these piecemeal components into a lightweight package that can achieve the desired result without being too bulky and cumbersome or requiring too much power. It is necessary for the base station to guide the soldier onto the correct path if he is lost on the battlefield; around the world, various research programs are currently being conducted, such as in the United States. One of the fundamental challenges in military operations is that the soldiers are not able to communicate with the control room station. In addition, proper navigation between soldier organizations plays an important role, as it is useful for the control room station to know the exact location of each soldier and guide him accordingly. High-speed, short-range, soldier-to-soldier wireless communication is also needed to relay information on situational awareness, GPS navigation, bio-medical sensors and wireless communication with a large amount of data. If we have to copy this data onto another flash drive, it can be done using this small device, which can be handled easily. As shown in the figure, the user can transfer the data from source to destination. We are also able to select which folder is to be transferred with the help of a user-interface LCD with up/down arrows and option and select buttons. Thus it makes the device more flexible. By the addition of some extra software, it may be possible to add functions like deleting or copying a single file or folder to the base system.
REMAINING CONTENTS

The system block diagram consists of two units: the Soldier Unit and the Base Unit.
I. SOLDIER UNIT:
In this module, we have come up with an idea of tracking the soldier as well as giving the health status of the soldier during the war, which enables the army personnel to plan the war strategies. The soldier can also ask for directions to the army base unit in case he feels that he is lost. By using the location sent by the GPS, the base station can guide the soldier to a safe area. Here, to find the health status of the soldier, we are using a body temperature sensor as well as a pulse rate sensor. These sensors measure the body temperature and the pulse rate of the soldier, and the readings are stored in the microcontroller memory. The GPS signals, travelling at the speed of light, are intercepted by the GPS receiver, which calculates how far away each satellite is based on how long it took for the messages to arrive. These sensors help to sense the physical parameters and inform the base station through GSM.
This unit is placed on the soldier. It has mainly 4 parts:
Biomedical sensors
Key pad
GPS Receiver
GSM Modem

Fig 1. Soldier Unit


II. BASE UNIT:
In this unit, upon receiving the SMS, the VB software shows the soldier's location on Google Maps based on the GPS co-ordinates, and the health status is also displayed. In this way the army officials can keep track of all their soldiers.

Fig 2. Base Unit


III. SIMULATION RESULTS

The above figure shows the interfacing of the graphical LCD with the ARM processor and the hardware arrangement. To perform this, we have written the code in the Keil software environment, and Proteus is used to obtain the simulation results.
ACKNOWLEDGMENT
I wish to thank Prof. Vijaykumar Joshi, who helped in selecting this topic and gave valuable guidance for the completion of this project in a good manner.

CONCLUSION
The following conclusions can be drawn from the above implementation:
Continuous communication is possible: soldiers can communicate anywhere using RF, DS-SS and FH-SS, which can help a soldier to communicate among his squad members whenever in need.
Less complex circuit and lower power consumption: use of the ARM processor and low-power peripherals reduces the overall power usage of the system. The modules used are smaller in size and also lightweight, so that they can be carried around.

REFERENCES:
[1] Matthew J. Zieniewicz, Douglas C. Johnson, Douglas C. Wong, and John D. Flat, "The Evolution of Army Wearable Computers," Research, Development and Engineering Center, US Army Communications, October-December 2002.
[2] Wayne Soehren and Wes Hawkinson, "Prototype Personal Navigation System," IEEE A&E Systems Magazine, April 2008.
[3] Simon L. Cotton and William G. Scanlon, "Millimeter-wave Soldier-to-Soldier Communications for Covert Battlefield Operation," Defence Science and Technology Laboratory, IEEE Communications Magazine, October 2009.
[4] Alexandros Pantelopoulos and Nikolaos G. Bourbakis, "A Survey on Wearable Sensor-Based Systems for Health Monitoring and Prognosis," IEEE Transactions on Systems, Man and Cybernetics, Vol. 40, No. 1, January 2010.
[5] Audrey Giremus, Jean-Yves Tourneret, Senior Member, IEEE, and Arnaud Doucet, "A Fixed-Lag Particle Filter for the Joint Detection/Compensation of Interference Effects in GPS Navigation," December 2011.
[6] Hock Beng Lim, "A Soldier Health Monitoring System for Military Applications," 2010 International Conference on Body Sensor Networks (BSN).
[7] Jouni Rantakokko, Joakim Rydell and Peter Stromback, "Accurate and Reliable Soldier and First Responder Positioning: Multisensor Systems and Co-operative Localization," April 2011.
[8] Ravindra B. Sathe and A.S. Bhide, "GPS-Based Soldier Tracking and Health Monitoring System," Conference on Advances in Communication and Computing, April 2011-12.
[9] Vincent Pereira, Audrey Giremus and Eric Grivel, "Modeling of Multipath Environment Using Copulas for Particle Filtering Based GPS Navigation," June 2012.
[10] Warwick A. Smith, ARM Microcontroller Interfacing: Hardware and Software.
[11] Hock Beng Lim, "A Health Monitoring System for Military Applications," 2010 International Conference on Body Sensor Networks (BSN).
[12] Peter Stromback, "Accurate and Reliable Soldier and Co-operative Localization," April 2011.

An Efficient Implementation of Speed-PFC of Interleaved Converter Fed BLDC Motor Drive System
SuthesKumar.P1, Prabhakaran.S2
PG Scholar1, Associate Professor2
suthesss@gmail.com
(Department of EEE, NandhaEngineering College, India)

Abstract - This project proposes a power factor corrected (PFC) interleaved converter-fed brushless direct current (BLDC) motor drive. A back-EMF sensing method is proposed, which reduces switching loss with no attenuation. Speed control is done by a controller which generates the corrected PWM signals, and efficiency is also improved by using the interleaved converter.

Keywords Interleaved converter, Back Emf sensing, PID controller


INTRODUCTION

Efficiency and cost are the major concerns in the development of low-power motor drives targeting household applications such as fans, water pumps, blowers, mixers, etc. The use of the brushless direct current (BLDC) motor in these applications is becoming very common due to features of high efficiency, high flux density per unit volume, low maintenance requirements, and low electromagnetic-interference problems. These BLDC motors are not limited to household applications, but are also suitable for other applications such as medical equipment, transportation, HVAC, motion control, and many industrial tools. A BLDC motor has three-phase windings on the stator and permanent magnets on the rotor. The BLDC motor is also known as an electronically commutated motor because an electronic commutation based on rotor position is used rather than a mechanical commutation, which has disadvantages like sparking and wear and tear of the brushes and commutator assembly. Power quality problems have become important issues to be considered due to the recommended limits of harmonics in supply current by various international power quality standards
issues to be considered due to the recommended limits of harmonics in supply current by various international power quality standards
such as the International Electro technical Commission (IEC). For class-A equipment (<600 W, 16 A per phase) which includes
household equipment, IEC 61000-3-2 restricts the harmonic current of different order such that the total harmonic distortion (THD) of
the supply current should be below 19%. A BLDC motor when fed by a diode bridge rectifier (DBR) with a high value of dc link
capacitor draws peaky current which can lead to a THD of supply current of the order of 65% and power factor as low as 0.8. Hence, a
DBR followed by a power factor corrected (PFC) converter is utilized for improving the power quality at ac mains. Many topologies
of the single-stage PFC converter are reported in the literature which has gained importance because of high efficiency as compared to
two-stage PFC converters due to low component count and a single switch for dc link voltage control and PFC operation. The choice
of mode of operation of a PFC converter is a critical issue because it directly affects the cost and rating of the components used in the
PFC converter. The continuous conduction mode (CCM) and discontinuous conduction mode (DCM) are the two modes of operation
in which a PFC converter is designed to operate. In CCM, the current in the inductor or the voltage across the intermediate capacitor
remains continuous, but it requires the sensing of two voltages (dc link voltage and supply voltage) and input side current for PFC
operation, which is not cost-effective. On the other hand, DCM requires a single voltage sensor for dc link voltage control, and
inherent PFC is achieved at the ac mains, but at the cost of higher stresses on the PFC converter switch; hence, DCM is preferred for
low-power applications. The conventional PFC scheme of the BLDC motor drive utilizes a pulsewidth-modulated voltage source
inverter (PWM-VSI) for speed control with a constant dc link voltage. This offers higher switching losses in VSI as the switching
577

www.ijergs.org

International Journal of Engineering Research and General Science Volume 2, Issue 6, October-November, 2014
ISSN 2091-2730

losses increase as a square function of switching frequency. As the speed of the BLDC motor is directly proportional to the applied dc
link voltage, hence, the speed control is achieved by the variable dc link voltage of VSI. This allows the fundamental frequency
switching of VSI (i.e., electronic commutation) and offers reduced switching losses. Singh and Singh [11] have proposed a
interleavedconverter feeding a BLDC motor based on the concept of constantdc link voltage and PWM-VSI for speed control which
has high switching losses.

FIG. 1. PROPOSED BLDC MOTOR DRIVE WITH FRONT-END INTERLEAVED CONVERTER.
The circuit diagram for the efficient implementation of speed-PFC of the interleaved converter fed BLDC motor drive system is shown in Fig. 1. The diagram consists of an AC source, interleaved converter, PWM generator, back-EMF sensing, error amplifier, PID controller, and the BLDC motor drive. The AC source voltage is applied to the interleaved converter, which converts AC into a DC supply; the DC output is then converted into a three-phase AC supply by a three-phase inverter and fed to the BLDC motor drive.
This paper presents an interleaved converter-fed BLDC motor drive with variable dc link voltage of the VSI for improved power quality at ac mains with reduced components.
INTERLEAVED CONVERTER:
It is a method of paralleling converters.
It has additional benefits when comparing the general approaches of paralleling converters.
Widely used in personal computer industry to power central processing units.


TIME INTERLEAVING:
For very-high-speed applications, time interleaving increases the overall sampling speed of a system by operating two or more data converters in parallel. This sounds reasonable and straightforward but actually requires much more effort than just paralleling two ADCs. Before discussing this arrangement in detail, compare the sampling rate of a time-interleaved system with that of a single converter. As a rule of thumb, operating N ADCs in parallel increases the system's sampling rate by approximately a factor of N. Thus, the sampling (clock) frequency for an interleaved system that hosts N ADCs can be described as fSYSTEM_CLK = N × fCLK, i.e., each converter runs at fCLK = fSYSTEM_CLK / N.
The simplified block diagram in Figure 1 illustrates a single-channel, time-interleaved DAQ system in which two ADCs double the system's sampling rate. This rate (fSYSTEM_CLK) is a clock signal at twice the rate of fCLK1 = fCLK2. Because fCLK1 is delayed with respect to fCLK2 by the period of fSYSTEM_CLK, the two ADCs sample the analog input signals alternately, producing an overall sample rate equal to fSYSTEM_CLK. Each converter operates at half the sampling frequency. This simplified block diagram depicts a two-step, time-interleaved ADC system for high-speed data acquisition.
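As a rough illustration of the interleaving idea described above, the Python sketch below (with assumed, illustrative clock rates; not part of the proposed drive) splits an ideal full-rate sample grid between two half-rate converters and re-interleaves their outputs:

import numpy as np

def tone(t):
    # 5 kHz test tone used as the analog input
    return np.sin(2 * np.pi * 5e3 * t)

f_clk = 50e3              # per-ADC sampling rate (illustrative)
n_adc = 2
f_system = n_adc * f_clk  # effective interleaved rate, fSYSTEM_CLK = N x fCLK

t_system = np.arange(0, 1e-3, 1 / f_system)   # full-rate sample instants
adc1 = tone(t_system[0::2])                   # ADC1 takes the even system samples
adc2 = tone(t_system[1::2])                   # ADC2 takes the odd (delayed) samples

# Re-interleave the two half-rate streams into one full-rate record
combined = np.empty(adc1.size + adc2.size)
combined[0::2], combined[1::2] = adc1, adc2

print(np.allclose(combined, tone(t_system)))  # True: full-rate capture recovered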
BRUSHLESS DC ELECTRIC MOTOR:
(BLDC motors, BL motors) also known as electronically commutated motors (ECMs, EC motors) are synchronous motors that are
powered by a DC electric source via an integrated inverter/switching power supply, which produces an AC electric signal to drive the
motor. In this context, AC, alternating current, does not imply a sinusoidal waveform, but rather a bi-directional current with no
restriction on waveform. Additional sensors and electronics control the inverter output amplitude and waveform (and therefore percent
of DC bus usage/efficiency) and frequency (i.e. rotor speed).
PWM GENERATION:
Pulse-width modulation (PWM), or pulse-duration modulation (PDM), is a modulation technique that controls the width of the pulse,
formally the pulse duration, based on modulator signal information. Although this modulation technique can be used to encode
information for transmission, its main use is to allow the control of the power supplied to electrical devices, especially to inertial loads
such as motors. In addition, PWM is one of the two principal algorithms used in photovoltaic solar battery chargers, the other being

MPPT. The average value of voltage (and current) fed to the load is controlled by turning the switch between supply and load on and off at a fast pace. The longer the switch is on compared to the off periods, the higher the power supplied to the load.
POWER FACTOR CORRECTION:
PFC (power factor correction; also known as power factor controller) is a feature included in some computer and other power supply boxes that reduces the amount of reactive power drawn by a computer. Reactive power operates at right angles to true power and energizes the magnetic field. Reactive power has no real value for an electronic device, but electric companies charge for both true and reactive power, resulting in unnecessary charges. PFC is a required feature for power supplies shipped to Europe. In power factor correction, the power factor (represented as "k") is the ratio of true power (kW) to apparent power (kVA). The power factor value is between 0.0 and 1.00. If the power factor is above 0.8, the device is using power efficiently. A standard power supply has a power factor of 0.70-0.75, and a power supply with PFC has a power factor of 0.95-0.99. Is there a way to correct this inefficient use of current? The answer is yes, by using power factor correction capacitors. These capacitors are wired in parallel with the load. They may be installed at the service entrance of the building or be dedicated to a specific device with a low power factor. PF correction capacitors are sized by the amount of kVAR they are able to correct. To determine proper sizing, the PF for the building or the device must be measured under normal operating conditions, and a target PF such as 95 percent is selected.
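A common sizing relation, not stated explicitly above, is Q = P x (tan(acos(PF1)) - tan(acos(PF2))); the Python sketch below applies it to illustrative figures and should be read as an assumption-laden example of capacitor sizing, not as the paper's own method:

import math

def correction_kvar(true_power_kw, pf_initial, pf_target):
    # kVAR of capacitance needed to raise the power factor from pf_initial to pf_target
    return true_power_kw * (math.tan(math.acos(pf_initial)) - math.tan(math.acos(pf_target)))

# Example: a 100 kW load at PF 0.72 corrected towards a 0.95 target
print(round(correction_kvar(100, 0.72, 0.95), 1))  # about 63.5 kVAR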

REMAINING CONTENTS

SIMULATION RESULT OF SPEED-PFC OF INTERLEAVED CONVERTER FED BLDC MOTOR DRIVE SYSTEM:

The performance of the proposed BLDC motor drive is simulated in MATLAB/Simulink environment using the Sim-Power-System
toolbox. The performance evaluation of the proposed drive is categorized in terms of the performance of the BLDC motor and BL
interleaved converter and the achieved power quality indices obtained at ac mains. The parameters associated with the BLDC motor
such as speed (N), electromagnetic torque (Te), and stator current (ia) are analyzed for the proper functioning of the BLDC motor.

Parameters such as supply voltage (Vs), supply current (is), dc link voltage (Vdc), inductor currents (iLi1, iLi2), switch voltages (Vsw1, Vsw2), and switch currents (isw1, isw2) of the PFC BL interleaved converter are evaluated to demonstrate its proper functioning. Moreover, power quality indices such as power factor (PF), displacement power factor (DPF), crest factor (CF), and THD of the supply current are analyzed for determining power quality at ac mains.
Steady-State Performance: In the steady-state behaviour of the proposed BLDC motor drive for two cycles of supply voltage at rated condition (rated dc link voltage of 200 V), discontinuous inductor currents (iLi1 and iLi2) are obtained, confirming the DICM operation of the BL buck-boost converter. The performance of the proposed BLDC motor drive was studied for speed control by varying the dc link voltage from 50 to 200 V. The harmonic spectra of the supply current at rated and light load conditions, i.e., dc link voltages of 200 and 50 V, are also shown in Fig. 7(a) and (b), respectively, which show that the THD of supply current obtained is within the acceptable limits of IEC 61000-3-2. The dynamic behaviour of the proposed drive system was examined during starting at 50 V, a step change in dc link voltage from 100 to 150 V, and a supply voltage change from 270 to 170 V.

CONCLUSION
A PFC interleaved converter-based VSI-fed BLDC motor drive has been proposed targeting low-power applications. A new method of
speed control has been utilized by controlling the voltage at dc bus and operating the VSI at fundamental frequency for the electronic
commutation of the BLDC motor for reducing the switching losses in VSI. The front-end BL interleaved converter has been operated
in DICM for achieving an inherent power factor correction at ac mains. A satisfactory performance has been achieved for speed
control and supply voltage variation with power quality indices within the acceptable limits of IEC 61000-3-2. Moreover, voltage and
current stresses on the PFC switch have been evaluated for determining the practical application of the proposed scheme. Finally, an
experimental prototype of the proposed drive has been developed to validate the performance of the proposed BLDC motor drive under
speed control with improved power quality at ac mains. The proposed scheme has shown satisfactory performance, and it is a
recommended solution applicable to low-power BLDC motor drives.

REFERENCES:
1. Vashist Bist and Bhim Singh, "An Adjustable-Speed PFC Bridgeless Buck-Boost Converter-Fed BLDC Motor Drive," Apr. 2014.
2. J. Moreno, M. E. Ortuzar, and J. W. Dixon, "Energy-management system for a hybrid electric vehicle, using ultracapacitors and neural networks," IEEE Trans. Ind. Electron., vol. 53, no. 2, pp. 614-623, Apr. 2006.
3. Y. Chen, C. Chiu, Y. Jhang, Z. Tang, and R. Liang, "A driver for the single-phase brushless dc fan motor with hybrid winding structure," IEEE Trans. Ind. Electron., vol. 60, no. 10, pp. 4369-4375, Oct. 2013.
4. X. Huang, A. Goodman, C. Gerada, Y. Fang, and Q. Lu, "A single sided matrix converter drive for a brushless dc motor in aerospace applications," IEEE Trans. Ind. Electron., vol. 59, no. 9, pp. 3542-3552, Sep. 2012.


5. H. A. Toliyat and S. Campbell, DSP-Based Electromechanical Motion Control. Boca Raton, FL, USA: CRC Press, 2004.
6. S. Singh and B. Singh, "A voltage-controlled PFC Cuk converter based PMBLDCM drive for air-conditioners," IEEE Trans. Ind. Appl., vol. 48, no. 2, pp. 832-838, Mar./Apr. 2012.
7. B. Singh, S. Singh, A. Chandra, and K. Al-Haddad, "Comprehensive study of single-phase ac-dc power factor corrected converters with high-frequency isolation," IEEE Trans. Ind. Informat., vol. 7, no. 4, pp. 540-556, Nov. 2011.
8. A. A. Fardoun, E. H. Ismail, M. A. Al-Saffar, and A. J. Sabzali, "New real bridgeless high efficiency ac-dc converter," in Proc. 27th Annu. IEEE APEC Expo., Feb. 5-9, 2012, pp. 317-323.


Nutritional characterization of Indian traditional Puranpoli


Nitin G. Suradkar1, Deepika Kamble3 and Varsha Fulpagar2
1. Department of Food Science and Technology, CFT, VNMKV, Parbhani, Maharashtra.
2 & 3. Department of Food Technology LIT Nagpur, Maharashtra.
E mail- nsuradkar21@yahoo.com

Abstract- An effort has been made to prepare puranpoli using the traditional method and to analyze its nutritional composition. The main ingredients of puranpoli are wheat flour, maida and Bengal gram flour. This unique combination of cereals and legume fulfils the nutritional requirement of essential amino acids like methionine and lysine, has good nutritive value, and is quite rich in carbohydrates, accompanied by sufficient protein, ash, crude fibre, fat, and minerals like calcium and iron.

Keywords- Puranpoli, cereals, bengal gram, amino acid, crude fibers, minerals, methionine
INTRODUCTION-
Indian cuisine is distinguished by its sophisticated use of spices and herbs and the influence of the long-standing and widespread practice of vegetarianism in Indian society. The most popular dessert of Maharashtra is puranpoli, and it is made in each and every house during the festivals. Puranpoli is a type of chapatti, i.e. pan-baked bread. The shelf-life of freshly baked chapatti is 24-36 hrs, after which it becomes unfit for consumption due to development of mould growth, ropiness and texture deterioration, depending upon storage conditions. Puranpoli is a Marathi dish, which is a dessert, considered a rich food and traditionally made only during auspicious occasions and important Indian festivals. Puranpoli is called by different names in different languages, like Poli in Tamil, Lanchipoli in Malayalam, Bobbatlu in Andhra Pradesh, Vermi/wermi in Gujarati, Bakshalu in Telugu, and Holige in Karnataka [1]. The general appearance of puranpoli is like a classical chapatti [2]. The human body requires various macro- and micronutrients, such as protein, carbohydrate and fat or lipid as macronutrients, and vitamins, minerals, water and fibre as micronutrients. Likewise, human beings require a number of complex organic compounds as added caloric requirements to meet the need for their muscular activities [3].

MATERIAL AND METHODS-
Raw materials- Puranpoli is a legume-based sweet product prepared from Bengal gram dal, wheat flour, maida, sugar, salt and spices in proper proportion. The raw materials required for the preparation of puran were evaluated for their physico-chemical characteristics and nutritive value.

Recipe for the Puranpoli-
Table 1. Recipe for preparation of puranpoli

Materials           Quantity (g)
Bengal gram dal     500
Sugar               385
Wheat flour         175
Maida               325
Spices              10


Method of preparation of Puran traditionally


Bengal gram dal + Water
Cooking in Pressure cooker at 15 psi for
20 to 25 minutes
Cooked dal
Addition of Sugar, Salt & Spices
Cooking for 10 to 12 minutes with continuous mixing
Cooling
Grinding using wet grinder
Puran of thick consistency

Fig.1 Flow chart preparation of Puran traditionally


Method of preparation of dough from wheat flour and Maida
Mixing of Wheat Flour and Maida (1:3) in Dough mixer
Addition of Hot water ( To avoid clumping)
Mixing for 1 min
Addition of salt (1-1.5%)
Addition of Normal water
Mixing in Dough mixer (up to desired consistency and proper mixing time)
Dough

Fig.2 Flow chart preparation of dough from wheat flour and Maida
Method of preparation of puranpoli
Rounding and weighing of dough (60g)
Weighing and rounding of puran (30g)
Stuffing of puran in dough and rounding of dough
Flatting of dough by Wheat Sheet Spreader Machine
like Chapati (18cm diameter)
Cooking of Chapati on frying pan up to desired
color and flavor
Puranpoli

Fig.3 Flow chart preparation of Puranpoli



ANALYTICAL METHODS-
Moisture content [4]- About 5 g of sample was weighed accurately on the balance, spread uniformly into a Petri dish and put in a hot oven at 105 ± 1 °C for 3-4 hrs. After drying, the covered dish was transferred to the desiccator and weighed soon after it reached room temperature. The procedure was repeated till a constant weight of dried matter was obtained. The loss in weight was recorded as the moisture content.

% Moisture content = (Weight loss / Weight of sample) × 100
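A minimal Python sketch of the moisture calculation above (the sample weights are illustrative):

def moisture_percent(initial_weight_g, dried_weight_g):
    # Oven-drying method: loss in weight expressed as a percentage of the sample weight
    weight_loss = initial_weight_g - dried_weight_g
    return weight_loss / initial_weight_g * 100

# Example: a 5.00 g sample drying down to 3.46 g
print(round(moisture_percent(5.00, 3.46), 1))  # 30.8 %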

Fat content- The sample was transferred to a thimble and the top of the thimble was plugged with cotton. The thimble was next placed in the fat extraction chamber of the Soxhlet apparatus. A previously weighed flask was filled with solvent, that is petroleum ether, and was attached to the extraction chamber. The condenser was attached to the assembly. Extraction was carried out at the proper temperature, that is about 40-60 °C (for 6 hrs). The excess petroleum ether was recovered by boiling it further. Then the flask was dried and the weight was recorded. The fat percentage was calculated by estimating the amount of fat per g of sample.

% Crude fat = (Weight of dried ether-soluble material / Weight of sample) × 100

Protein content [5]


Protein in the sample is determined by estimating the percentage nitrogen by Kjeldahl's method and further calculating the protein content. Exactly 1-2 g of defatted sample was weighed in the Kjeldahl flask. 10 g K2SO4 and 0.5 g CuSO4 were added, followed by 20 ml conc. H2SO4, which raises the boiling point of the mixture and ensures complete reaction. The flask was gently heated on a digestion stand in an inclined position until frothing ceased, and then it was boiled strongly until the liquid was clear. 2-3 glass beads were added to avoid bumping during boiling. Along with the experimental digestion, a blank determination was done by digesting 0.5 g of soluble starch instead of sample in exactly the same manner and diluting to 250 ml in a volumetric flask. About 20 ml of the diluted sample was taken in the semi-micro-distillation unit, followed by rapid addition of 10 ml of 50% NaOH. The stopcock was closed and steam distillation was allowed to proceed for about 20 min. The tip of the delivery tube was dipped in a flask containing a known volume of 0.1 N H2SO4. The same procedure was repeated for the blank. The unutilized H2SO4 was then titrated with 0.05 N NaOH. The percentage nitrogen was calculated as:
% N = (Ts - Tb) × Normality of acid × molecular weight of nitrogen / 2

Protein = 6.25 × % N
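The expression above is abbreviated; the sketch below uses the standard Kjeldahl back-titration relation (14.007 mg N per milliequivalent of acid consumed, protein = 6.25 × % N). The variable names, dilution handling and titre figures are assumptions for illustration, not taken from the paper:

def kjeldahl_protein_percent(sample_titre_ml, blank_titre_ml, acid_normality,
                             sample_weight_g, dilution_factor=1.0):
    # Milliequivalents of acid consumed by the liberated ammonia
    meq_nitrogen = (sample_titre_ml - blank_titre_ml) * acid_normality * dilution_factor
    percent_n = meq_nitrogen * 14.007 * 100 / (sample_weight_g * 1000)
    return 6.25 * percent_n  # general nitrogen-to-protein factor

# Example: 18.7 ml (sample) vs 0.2 ml (blank) of 0.1 N acid for a 1.0 g sample
print(round(kjeldahl_protein_percent(18.7, 0.2, 0.1, 1.0), 1))  # about 16.2 %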
Ash content by AOAC method- 5 g of finely ground sample was weighed in a silica crucible and ignited on a low flame. Then it was transferred to a muffle furnace and heated at 550 °C for 5 to 6 hr for complete oxidation of the organic matter. It was then transferred to a desiccator and the weight was taken.

% Ash content = (AW / IW) × 100

where AW = weight of ash and IW = initial weight of dry matter.

Sugar content by the Lane-Eynon method [6]-
A. Reagents
1. Fehling A solution: 69.28 g of CuSO4.5H2O was dissolved in water, diluted to 1000 ml and filtered through filter paper. 2. Fehling B solution: 346 g of sodium potassium tartrate and 100 g of NaOH were mixed in 1000 ml water and filtered through filter paper. 3. Standard glucose solution: 0.4 g of glucose or sugar was dissolved in distilled water and the volume made up to 100 ml. 4. Methylene blue indicator: 1 g of methylene blue was dissolved in 100 ml of water.
B. Standardization of Fehling solution- 10 ml of Fehling solution (5 ml each of Fehling A and B solutions) was taken in a conical flask and a few glass beads were added to avoid bumping. Approximately 20-25 ml of distilled water was added. The mixture was boiled for 2 minutes without removing the flame. The sugar solution was added from the burette and 1-2 drops of methylene blue indicator were added. After complete reduction of the copper, the methylene blue turned colourless and a red precipitate formed. The burette reading was noted. The titration was repeated for consecutive readings and should be completed within a total of three minutes.

Dextrose factor = (Weight of dextrose × Burette reading) / Volume made up
Determination of reducing sugar- Sample preparation:
50 g of sample was taken in a 500 ml beaker and 400 ml of water was added. The solution was neutralized with 1 N NaOH using phenolphthalein indicator. It was then boiled gently for 1 hr with occasional stirring, boiling water being added to maintain the original level. It was then cooled and transferred to a 500 ml volumetric flask. The volume was made up and filtered through Whatman paper No. 4. A 100 ml aliquot was taken, neutral lead acetate solution was added, and the mixture was made up with 200 ml of water. It was allowed to stand for 10 min, and the excess lead was then precipitated with potassium oxalate solution. Make up to the mark and filter.

Reducing sugar (%) = (Volume made up × glucose equivalent × 100) / (Burette reading × Weight of sample)
Determination of total sugar- Sample preparation:
20 ml of the clarified sample solution was taken in a 250 ml conical flask and 10 ml of 10% HCl was added. This solution was boiled gently for 10 min to complete the inversion of sucrose, and then cooled. It was then transferred to a 250 ml volumetric flask and neutralized with 1 N NaOH using phenolphthalein as indicator. The volume was made up.

Total invert sugar (%) = (Volume made up × glucose equivalent × 100) / (Titre × Weight of sample)
Sucrose (%) = (Total invert sugar (%) - Reducing sugar (%)) × 0.95
Total sugar (%) = Reducing sugar (%) + Sucrose (%)
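A minimal Python sketch of the Lane-Eynon arithmetic above (all titration figures are invented for illustration):

def reducing_sugar_percent(volume_made_up_ml, glucose_equivalent_g,
                           burette_reading_ml, sample_weight_g):
    # Reducing sugar from the titre against standardized Fehling solution
    return volume_made_up_ml * glucose_equivalent_g * 100 / (burette_reading_ml * sample_weight_g)

def total_sugar_percent(reducing_sugar_pct, total_invert_sugar_pct):
    # Sucrose = (total invert - reducing) x 0.95; total sugar = reducing + sucrose
    sucrose_pct = (total_invert_sugar_pct - reducing_sugar_pct) * 0.95
    return reducing_sugar_pct + sucrose_pct

# Example: a 50 g sample made up to 500 ml, illustrative glucose equivalent and titres
rs = reducing_sugar_percent(500, 0.047, 24.7, 50)
print(round(rs, 1))                              # about 1.9 %
print(round(total_sugar_percent(1.9, 54.5), 1))  # about 51.9 %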
Preparation of ash solution for the determination of minerals- The ash solution was prepared from the sample by the wet digestion method of Jackson (1967) to determine the content of trace elements and total phosphorus.
Estimation of total iron content [7]- Iron in the reduced condition reacts with orthophenanthroline at a pH of 4.5 to give a colour that is proportional to the concentration of iron present.

Reagents- 1. Standard iron solution: 0.3512 g of ferrous ammonium sulphate was dissolved in water, a few drops of HCl were added and the solution was diluted to 100 ml; 5 ml of this solution was then diluted to 250 ml with water in a volumetric flask, so that the final concentration was 0.01 mg iron per ml. 2. Hydroxylamine hydrochloride solution: a weighed amount of 10 g of hydroxylamine hydrochloride was dissolved in glass-distilled water and diluted to 100 ml. 3. Acetate buffer solution: accurately 8.3 g of anhydrous sodium acetate (previously dried at 100 °C) was weighed and dissolved in glass-distilled water; after transferring 12 ml of glacial acetic acid, the solution was diluted to 100 ml with glass-distilled water. 4. Orthophenanthroline: 0.1 g of orthophenanthroline was dissolved in about 80 ml of hot distilled water (80 °C), then cooled, diluted to 100 ml and stored in a refrigerator.
Preparation of standard iron curve for the estimation of total iron content- A set of different amounts of the standard iron solution was prepared by taking 1 to 5 ml in each test tube. A blank was also prepared without iron solution. To each test tube 0.5 ml of hydroxylamine hydrochloride solution was added, followed by 2.5 ml of acetate buffer and 0.5 ml of orthophenanthroline solution. The blank was prepared in the same fashion but without sample. The contents were mixed well and the readings were noted using a spectrophotometer at 530 nm. The standard graph was plotted by taking the optical density values against the concentration of iron on a graph paper.
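A minimal Python sketch of fitting and reading such a standard curve; the absorbance readings below are invented for illustration (the paper reads the graph manually), and a linear Beer-Lambert response is assumed:

import numpy as np

# Illustrative absorbance readings for 0.01-0.05 mg Fe standards at 530 nm
iron_mg = np.array([0.01, 0.02, 0.03, 0.04, 0.05])
absorbance = np.array([0.055, 0.108, 0.165, 0.218, 0.272])

# Fit the straight-line standard curve
slope, intercept = np.polyfit(iron_mg, absorbance, 1)

def iron_in_aliquot_mg(sample_absorbance):
    # Read an unknown off the fitted standard curve
    return (sample_absorbance - intercept) / slope

print(round(iron_in_aliquot_mg(0.150), 3))  # mg Fe in the measured aliquot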
Estimation of calcium content- Calcium was precipitated as calcium oxalate. The precipitate was dissolved in hot dilute sulphuric acid and titrated with standard potassium permanganate.
Reagents- 1. Ammonium oxalate solution: saturated solution. 2. Methyl red indicator: 0.5 g of methyl red was dissolved in 100 ml of 95% alcohol. 3. Dil. ammonium hydroxide (1:4). 4. Dil. acetic acid (1:4). 5. Dil. H2SO4 (1:4): acid was added slowly with constant stirring. 6. 0.1 N potassium permanganate: 3.35 g of dry KMnO4 was weighed accurately, dissolved in water and diluted to 1 litre in a volumetric flask. 7. 0.01 N potassium permanganate (working standard): 10 ml of the 0.1 N potassium permanganate solution was diluted to 100 ml with water; a fresh solution was prepared before use (1 ml of 0.01 N KMnO4 solution = 0.2 mg of calcium).
Calcium estimation
An aliquot (20 ml) of the ash solution was taken in a 250 ml beaker. 10 ml of saturated ammonium oxalate solution and 2 drops of methyl red indicator were added; the solution was made slightly alkaline by the addition of dilute ammonia and then slightly acidic with a few drops of acetic acid until the colour was faint pink (pH 5.0). The solution was heated to the boiling point and allowed to stand at room temperature overnight. It was then filtered through Whatman No. 42 paper and washed with water until the filtrate was chloride free (tested with silver nitrate). The point of the filter was broken with a platinum wire or pointed glass rod. The precipitate was washed first with hot dilute H2SO4 and then with hot water, and the titration was carried out while still hot (70-80°C) with 0.01 N KMnO4 to the first permanent pink colour. Finally, the filter paper was added to the solution and the titration was completed.

Calcium (%) = (Titre value x N of KMnO4 x Total volume of ash solution x 100) / (Volume taken for estimation x Weight of sample)
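As an illustration of the calcium formula above, the short sketch below plugs in placeholder numbers for the titre, normality, solution volumes and sample weight; the values are assumptions chosen for demonstration, not results from this study.

def calcium_percent(titre_ml, normality_kmno4, total_ash_vol_ml, aliquot_ml, sample_wt_g):
    """Calcium (%) from the permanganate titration, following the formula above."""
    return (titre_ml * normality_kmno4 * total_ash_vol_ml * 100) / (aliquot_ml * sample_wt_g)

# Placeholder inputs: 4.2 ml titre of 0.01 N KMnO4, 100 ml total ash solution,
# 20 ml aliquot, 5 g sample
print(round(calcium_percent(4.2, 0.01, 100, 20, 5), 4))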

RESULTS AND DISCUSSION
Proximate analyses of raw materials

Proximate analyses of the raw materials were done using standard analytical methods and include the estimation of moisture content, protein, carbohydrate, fat, ash content, etc. The traditional puranpoli was prepared with basic ingredients, namely Bengal gram dhal and sugar. In this research project an effort has been made to improve the nutritional status of the traditional puranpoli by the use of wheat flour and Bengal gram dhal. The proximate analysis of Bengal gram dhal and wheat flour, presented in Table 2 and Fig. 4, shows that Bengal gram dhal contains 9.10% moisture with 16.40% protein, 4.60% fat, 57.42% carbohydrate (by difference) and 6.28% crude fibre. This indicates that the legume has a good amount of protein and carbohydrate.

Table 2. Proximate analyses of raw materials

Parameter        Bengal gram dhal (%)    Wheat flour (%)
Moisture         9.10                    11.00
Ash              2.65                    1.25
Protein          16.40                   13.70
Carbohydrate     57.42                   72.57
Fat              4.60                    1.87
Crude Fiber      6.28                    10.70

Fig. 4 Proximate analyses of raw materials (bar chart of the values in Table 2)
The table and figure above show that wheat flour contains 11.00% moisture, 1.25% ash, 13.70% protein, 72.57% carbohydrate, 1.87% fat and 10.70% crude fibre, i.e. wheat flour contains a larger amount of carbohydrate.
Proximate analysis of puranpoli

Table 3. Proximate analysis of puranpoli

Constituent                   Puranpoli
Moisture (%)                  30.80
Ash (%)                       1.38
Protein (%)                   16.40
Fat content (%)               27.17
Total sugar (%)               51.80
Reducing sugar (%)            1.90
Non-reducing sugar (%)        49.90
Calcium content (mg/100 g)    79
Iron content (mg/100 g)       2.98


Fig 5. Proximate analysis of puranpoli (bar chart of the constituents listed in Table 3)
Puranpoli contains 30.80% moisture with 16.40% protein, 27.17% fat and 51.80% total sugar. It contains a high amount of non-reducing sugar, about 49.90%, and only a negligible amount of reducing sugar, about 1.90%.

CONCLUSION
From the obtained results it was concluded that puranpoli has good nutritional value because it contains cereal flour, pulse flour, sugar and spices like cardamom. Puranpoli therefore contains a good amount of carbohydrate, fat, essential amino acids like methionine and lysine, and minerals like calcium and iron, and it can help fulfil the nutritional requirement of a person.

REFERENCES:
[1] Satyanarayana Rao, T.S., Kaverappa, N.M., Hemaprakash Reddy, T. and Jayaraman, K.S. (1990). Development of ready-to-eat traditional Indian sweet dishes based on jaggery and coconut. Journal of Food Science and Technology, India, 27(5), 355-358.
[2] Vijayalakshmi Inamdar, Bharati, V. and Ramanaik (2005). Nutrient composition of traditional festival foods of north Karnataka. J. Hum. Ecol., 18(1), 43-48.
[3] William Benton (1972). Encyclopaedia Britannica Inc., vol. 16: 802-805.
[4] AOAC (1984). Official Methods of Analysis, 15th Ed. Association of Official Analytical Chemists, Washington, DC.
[5] AOAC (1984). Official Methods of Analysis: Sugar Products. Association of Official Analytical Chemists, Arlington.
[6] AOAC (1975). Official Methods of Analysis, 14th Ed. Association of Official Analytical Chemists, Washington, DC.
[7] Skoog and West, Fundamentals of Analytical Chemistry, 2nd Ed., Chapter 29.


A Study of Fault Management Algorithms and Recovery of Faulty Nodes Using the FNR Algorithm for Wireless Sensor Networks

Mr. Takale Dattatray(1), Mr. Priyadarshi Amrit(2)
(1) Student of ME (Information Technology), DGOFFE, University of Pune, Pune
(2) Asst. Professor, Department of Computer Engineering, DGOIFE, Bhigwan, University of Pune, Pune

Abstract - In a wireless sensor network every sensor node has a tendency to shut down due to limited computation power, hardware failure, software failure, environmental conditions or energy depletion. Fault tolerance is therefore a major problem in wireless sensor networks, and fault management is a key part of network management. Fault management is divided into fault detection, fault diagnosis and fault recovery. Fault detection schemes are classified into two types: the centralized approach and the distributed approach. Fault diagnosis covers the whole process of fault management and answers three questions: where the fault is located, what type of fault it is (e.g., node failure), and how the fault occurred. Fault recovery is the last phase of the fault management process, and various algorithms, such as the FNR algorithm, are available for recovering faulty nodes. The Fault Node Recovery (FNR) algorithm increases the lifetime of a WSN when sensor nodes die; it is based on the genetic algorithm combined with the grade diffusion algorithm. The algorithm results in fewer replacements of sensor nodes and more reused routing paths. It also increases the number of active nodes, reduces the rate of data loss and reduces energy consumption.
Keywords - Wireless sensor networks, genetic algorithm, grade diffusion algorithm, fault node recovery algorithm.
INTRODUCTION:

A wireless sensor network (WSN) is a collection of sensor nodes organized into a cooperative network. Each sensor node has the capability to sense data, process it and transfer the live data to a base station or data collection centre, but it has only limited computational power and energy to do so. Every sensor node therefore has a tendency to shut down due to limited computation power, hardware failure, software failure, environmental conditions or energy depletion. Fault tolerance is one of the critical issues in WSNs: the existing fault tolerance mechanisms either consume significant extra energy to detect and recover from failures or need additional hardware and software resources. Fault management is a key part of network management and is divided into fault detection, fault diagnosis and fault recovery. Fault detection schemes are classified into two types: the centralized approach and the distributed approach. Fault diagnosis covers the whole process of fault management and answers three questions: where the fault is located, what type of fault it is (e.g., node failure), and how the fault occurred. Fault recovery is the last phase of the fault management process, and various algorithms, such as the FNR algorithm, are available for recovering faulty nodes.
The aim is to provide energy-efficient and cost-effective communication in wireless sensor networks. The proposed algorithm enhances the lifetime of the sensor nodes when a sensor node shuts down; it is based on the grade diffusion algorithm combined with the genetic algorithm. The algorithm results in fewer replacements of sensor nodes and more reused routing paths. It also increases the number of active nodes, reduces the rate of data loss and reduces energy consumption.
1. Fault Management Framework:
Fault tolerance is a major problem in wireless sensor networks, and fault management is a key part of network management. Fault management algorithms [10] are divided into fault detection, fault diagnosis and fault recovery. Fault detection schemes are classified into two types: the centralized approach and the distributed approach. Fault diagnosis covers the whole process of fault management and answers three questions: where the fault is located, what type of fault it is (e.g., node failure), and how the fault occurred. Fault recovery is the last phase of the fault management process, and various algorithms, such as the FNR algorithm, are available for recovering faulty nodes.
A) Fault Detection:
Fault detection is the first phase of network management. Fault detection schemes are classified into two types: the centralized approach and the distributed approach.

a. Centralized Approach:
In this approach, the base station is responsible for the whole network management and is assumed to have unlimited energy. A centralized framework called MANNA was presented for fault management; in MANNA a sensor node is assigned the role of a manager, and the manager collects all the information of every sensor node. The centralized approach provides good fault management but is not suitable for large-scale networks. Another drawback is that the central controller becomes a single point of data traffic concentration and hence consumes a large amount of the nodes' energy. Third, this central controller becomes a single point of failure for the entire network.

b. Distributed Approach:
In the distributed approach, the big network is divided into small sub-networks. Each sub-network has a central manager that detects the faults in that sub-network. A distributed fault management framework called WSNDiag was designed to identify faulty nodes. In the distributed approach, techniques such as neighbour coordination, clustering and node-level measurement are used.

B) Fault Diagnosis:
Fault diagnosis covers the whole process of fault management and answers three questions: where the fault is located, what type of fault it is (e.g., node failure), and how the fault occurred. The identification of the root cause is the main task required to repair the fault.
C) Fault Recovery:
Fault recovery is the last phase of the fault management system; in this phase the network is reconstructed. A number of techniques are available to identify the faulty node and replace it.
Song Jia et al. proposed a recovery algorithm [12] in 2013 based on the minimum distance redundant node. The Minimum Distance Redundant Node (MDRN) recovery algorithm is applied at the sink node, which has unconstrained energy and knows the locations of all active nodes and redundant nodes in the WSN. The algorithm gives good recovery accuracy and coverage quality and also increases the lifetime of the wireless sensor network.
Rajashekhar Biradar in 2013 proposed [11] an Active node based Fault Tolerance using Battery power and Interference model (AFTBI) in WSNs to identify faulty nodes using a battery power model and an interference model. Fault tolerance against low battery power is designed through a hand-off mechanism in which the faulty node selects the neighbouring node having the highest power and transfers to it all the services that were to be performed by the faulty node. Fault tolerance against interference is provided by a dynamic power level adjustment mechanism that allocates time slots to all the neighbouring nodes. If a particular node wishes to transmit sensed data, it enters the active status and transmits the packet with maximum power; otherwise it enters the sleep status with the minimum power that is sufficient to receive hello messages and maintain connectivity.
Ting Yang et al. in 2013 proposed [3] novel rectification algorithms (a greedy negative pressure push algorithm and a dynamic local stitching algorithm) to cooperatively repair broken transmitting paths in wireless sensor networks. Using adjacency information, the greedy negative pressure push algorithm can efficiently grow the transmitting path to achieve the minimum energy consumption for the relay model. These algorithms only stitch the broken fragments of the original path.
The main challenge in a wireless sensor network is to improve the fault tolerance of each node and also provide an energy-efficient, fast data routing service. An energy-efficient node fault diagnosis and recovery scheme for wireless sensor networks, referred to as the fault tolerant multipath routing scheme for energy efficient wireless sensor networks (FTMRS), is based on multipath data routing. One shortest path is used for the main data routing, and two other backup paths are used as alternative paths for a faulty network and to handle overloaded traffic on the main channel; shortest-path data routing ensures energy-efficient data routing.
In a wireless sensor network all sensor nodes have an equal probability of failure, and therefore data delivery in sensor networks is inherently faulty and unpredictable. Most sensor network applications need reliable data delivery to the sink instead of point-to-point reliability. It is therefore vital to provide fault tolerant techniques for distributed sensor network applications. Rehena, Z. et al. in 2013 presented [8] a robust recovery mechanism for node failure in a certain region of the network during data delivery. It dynamically finds a new node to route data from the source nodes to the sink. The proposed algorithm is easily integrated into data delivery mechanisms in which area failure in a certain geographical region is not considered. This

recovery mechanism is focused on a multiple-sink partitioned network. It is found that it quickly selects an alternative node from its 1-hop neighbour list when there are no forwarding nodes available and establishes a route from source to sink.
W. Guowei et al. proposed a Dynamical Jumping Real-time Fault-tolerant Routing Protocol (DMRF) [4]. When a sensor node fails, or network congestion or a void region occurs, the transmission switches to a jumping transmission mode, reducing the transmission delay and guaranteeing that the data are sent to the destination within the specified time limit. Each node can dynamically adjust its jumping probabilities to increase the ratio of successful data transmission by using a feedback mechanism. This algorithm reduces the effect of failed nodes, congestion and void regions, reduces the transmission delay and the number of control packets, and gives a higher ratio of successful transmissions. A feasibility proof and performance analysis are presented to testify to the superiority of DMRF.
PROPOSED SYSTEM:
The aim is to provide energy-efficient and cost-effective communication in wireless sensor networks. The proposed algorithm enhances the lifetime of the sensor nodes when a sensor node shuts down; it is based on the grade diffusion algorithm combined with the genetic algorithm. The algorithm results in fewer replacements of sensor nodes and more reused routing paths. It also increases the number of active nodes, reduces the rate of data loss and reduces energy consumption.
A. Directed Diffusion Algorithm:
The Directed Diffusion (DD) algorithm was presented by C. Intanagonwiwat in 2003. The DD algorithm reduces the data transmission count and the energy consumption. It is a query-driven transmission protocol in which the sensor nodes send data back to the sink node only when the data fit the queries. The main disadvantages of the DD algorithm are that its energy consumption is still high and the routing paths are not reused, which is why the algorithm is not popular.
B. Grade Diffusion Algorithm:
The Grade Diffusion (GD) algorithm was presented by H.C. Shih in 2012. The GD algorithm identifies the routing path of every sensor node and also identifies the set of neighbour nodes of every sensor node to reduce the transmission loading. The GD algorithm also creates a grade value, routing table, payload value and set of neighbour nodes for every sensor node. The GD algorithm updates the routing paths in real time in the wireless sensor network, so the data are quickly and correctly updated.
C. System Architecture:
The Fault Node Recovery (FNR) algorithm is based on the grade diffusion algorithm combined with the genetic algorithm. The grade diffusion algorithm is used within the FNR algorithm to create the grade value, payload value, neighbour set and routing table of every sensor node. While the wireless sensor network is in operation, the FNR algorithm calculates the number of non-functioning sensor nodes, and the parameter B-th is calculated according to Equation (1); this equation finds the B-th value of the sensor network.
In (1), Grade is the grade value of a sensor node, Ni_original is the number of sensor nodes with grade value i in the original deployment, and Ni_now is the number of sensor nodes with grade value i still functioning at the current time. A threshold parameter, set by the user with a value between 0 and 1, fixes the acceptable fraction of surviving nodes per grade: if the fraction of functioning sensor nodes for a grade falls below this threshold, the corresponding indicator Ti becomes 1, and B-th becomes larger than zero. If the B-th value is larger than zero, the FNR algorithm replaces non-functional sensor nodes in the sensor network with functional ones using the genetic algorithm. The parameters are encoded as a binary string which serves as the chromosome for the GA. The elements (or bits), i.e. the genes, in the binary string are adjusted to minimize or maximize the fitness value. The fitness function generates the fitness value, which is composed of multiple variables to be optimized by the GA; in each iteration of the GA, a predetermined number of individuals produce fitness values associated with their chromosomes.
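As a rough illustration of this threshold test, the sketch below counts surviving nodes per grade and flags the grades whose surviving fraction has dropped below the user-set threshold; the exact form of Equation (1) is given in [1], so the aggregation used here (a simple count of flagged grades) is an assumption for demonstration only.

def recovery_needed(original_per_grade, functioning_per_grade, threshold):
    """Return (B_th, flags): flags[grade] = 1 when a grade has lost too many nodes.

    original_per_grade / functioning_per_grade: dicts mapping grade value -> node count.
    threshold: user-set value between 0 and 1, the minimum acceptable surviving fraction.
    """
    flags = {}
    for grade, n_original in original_per_grade.items():
        n_now = functioning_per_grade.get(grade, 0)
        flags[grade] = 1 if (n_now / n_original) < threshold else 0
    b_th = sum(flags.values())   # simple aggregation; the paper's Eq. (1) may weight the grades
    return b_th, flags

b_th, flags = recovery_needed({1: 20, 2: 35, 3: 50}, {1: 18, 2: 20, 3: 49}, 0.7)
print(b_th, flags)   # grade 2 has only 20/35 surviving (< 0.7), so recovery is triggered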

D. Genetic Algorithm:
The genetic algorithm (GA) is a search technique used in computing to find true or approximate solutions to optimization and search problems. Genetic algorithms are categorized as global search heuristics. The GA is based on the concept of natural genetics and is a directed random search technique introduced in 1975.
There are five steps in the genetic algorithm: initialization, evaluation, selection, crossover and mutation. These steps are the most important part of our algorithm and are executed after a faulty node is detected in the wireless sensor network.
1) Initialization:
In the initialization step, the genetic algorithm creates the chromosomes. Every chromosome is an expected solution or result. The number of chromosomes is determined according to the population size, which is defined by the user. Each gene takes the value 1 or 0, and the length of a chromosome equals the number of non-functional sensor nodes.

Fig 2 : Chromosome and its gene.


In the figure the length of the chromosome is 10 and each gene is either 1 or 0: a 1 means the corresponding node is replaced and a 0 means it is not replaced. The ten non-functional nodes are identified by their node numbers, e.g. 6, 9, 12, 27, 81, 57, 34, 53, 66, etc.
2) Evaluation:
The fitness value is calculated according to the fitness function, whose arguments are the chromosome and its genes. The fitness function is defined over the genetic representation and measures the quality of the represented solution. Its variables are:
Ni = number of replaced sensor nodes with grade value i;
Pi = number of reusable routing paths from sensor nodes with grade value i;
TN = total number of sensor nodes in the original WSN;
TP = total number of routing paths in the original WSN.
A high fitness value is sought, because the WSN is looking for the largest number of available routing paths and the smallest number of replaced sensor nodes.
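The exact fitness expression is given in [1]; the sketch below is only an illustrative stand-in that rewards reused routing paths and penalizes replacements using the four quantities defined above, so the weights and the functional form are assumptions for demonstration, not the authors' equation.

def fitness(replaced_per_grade, reusable_paths_per_grade, total_nodes, total_paths):
    """Illustrative fitness: favour many reusable routing paths, few replaced nodes.

    replaced_per_grade[i]       -> Ni, replaced sensor nodes with grade value i
    reusable_paths_per_grade[i] -> Pi, reusable routing paths with grade value i
    total_nodes -> TN, total_paths -> TP
    """
    replaced_fraction = sum(replaced_per_grade.values()) / total_nodes
    reused_fraction = sum(reusable_paths_per_grade.values()) / total_paths
    return reused_fraction - replaced_fraction   # higher is better

print(fitness({1: 2, 2: 1}, {1: 10, 2: 7}, total_nodes=100, total_paths=40))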
3) Selection:
In the selection step, the chromosomes with the lower fitness values are discarded. We use an elitism strategy: the half of the chromosomes with good fitness values is kept and placed in the mating pool, while the poorer chromosomes are deleted and replaced by new chromosomes after the crossover step.


Fig 3: Selection Step


4) Crossover:
The crossover step is used in the genetic algorithm to change the individual chromosomes. In this algorithm we use the one-point crossover strategy to create new chromosomes. Two individual chromosomes are chosen from the mating pool to produce two new offspring. A crossover point is selected between the first and last genes of the parent individuals; then the fraction of each individual on either side of the crossover point is exchanged and concatenated. The choice of parents is made by roulette-wheel selection according to the fitness values.

Fig 4: Crossover Step
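A minimal sketch of the one-point crossover described above; the parent bit strings and the randomly selected crossover point are illustrative only.

import random

def one_point_crossover(parent_a, parent_b):
    """Swap the tails of two equal-length bit strings at a random cut point."""
    point = random.randint(1, len(parent_a) - 1)   # a point between the first and last genes
    child_1 = parent_a[:point] + parent_b[point:]
    child_2 = parent_b[:point] + parent_a[point:]
    return child_1, child_2

parents = ("1100110010", "0011001101")
print(one_point_crossover(*parents))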

5) Mutation:
In this algorithm the mutation step flips a gene in the chromosome at random. In the final chromosome, the sensor nodes whose genes are 1 are replaced in order to extend the network lifetime.

Fig 5 : Mutation Step


6) Simulation:
In this project the Grade Diffusion (GD) algorithm, the genetic algorithm (GA) and the Fault Node Recovery (FNR) algorithm are implemented. Various parameters, such as energy consumption in mJ, power consumption in mW, the number of active nodes, the number of dead nodes and the time taken per node, are calculated, and the three algorithms are compared on these parameters. Simulation of the proposed algorithm will be performed with the help of NS-2, and the simulation results will show how the faulty sensor nodes are recovered by using the most reused paths; these results are compared with existing models.


CONCLUSION
Various recovery algorithms were studied through a number of research papers. In real wireless sensor networks, each sensor node has a battery power supply and thus has limited energy resources. The proposed algorithm enhances the lifetime of the sensor nodes when a sensor node shuts down; it is based on the grade diffusion algorithm combined with the genetic algorithm. The algorithm results in fewer replacements of sensor nodes and more reused routing paths. It also increases the number of active nodes, reduces the rate of data loss and reduces energy consumption.

REFERENCES:
[1] Fault Node Recovery Algorithm for a Wireless Sensor Network, Hong-Chi Shih, Student Member, IEEE, Jiun-Huei Ho,
Bin-Yih Liao, Member, IEEE, and Jeng-Shyang Pan, Senior Member, IEEE IEEE SENSORS JOURNAL, VOL. 13, NO. 7,
JULY 2013.
[2] Directed diffusion for wireless sensor networking, C. Intanagonwiwat, R. Govindan, D. Estrin, J. Heidemann, and F. Silva, IEEE/ACM Trans. Netw., vol. 11, no. 1, pp. 2-16, Feb. 2003.
[3] DLS: A dynamic local stitching mechanism to rectify transmitting path fragments in wireless sensor networks, Ting
Yang, Yugeng Sun, Javid Taheri and Albert Y. Zomaya Journal of Network and Computer Applications, vol.36, pp. 306
315,2013.
[4] Dynamical Jumping Real-Time Fault-Tolerant Routing Protocol for Wireless Sensor Networks. Guowei Wu, Chi Lin,
Feng Xia, Lin Yao, He Zhang and Bing. Sensors,vol.10,pp.2416-2437, 2010.
[5] Grade diffusion algorithm, J. H. Ho, H. C. Shih, B. Y. Liao, and J. S. Pan, in Proc. 2nd Int. Conf. Eng. Technol. Innov., 2012, pp. 2064-2068.
[6] Mobile Ad Hoc Networking (MANET): Routing Protocol Performance Issues and Evaluation Considerations. S. Corson
and J. Macker, New York, NY, USA: ACM, 199.
[7] Fault Tolerant multipath routing scheme for energy efficient Wireless Sensor Networks,P.Chanak,T.Samanta and I.
Banerjee, International Journal of Wireless & Mobile Networks, vol. 5, No. 2, pp. 33-45, 2013.
[8] Handling area fault in multiple-sink Wireless Sensor Networks, Rehena, Z.; Das, D.; Roy, S.; Mukherjee, N.Advance
Computing Conference (IACC), 2013 IEEE 3rd International , pp.458-464, 2013.
[9] A self-managing fault management in WSNs, Beenu Baby and Joe Mathew Jacob, ISSN vol 1 July 2013.
[10] Comparative Study of Fault Management Algorithms in Wireless Sensor Networks, C. Virmani, K. Garg, IJERT, ISSN: 2278-0181, Vol. 1, Issue 3, May 2012.
[11] Fault tolerance in wireless sensor network using hand-off and dynamic power adjustment approach, Rajashekhar Biradar, Journal of Network and Computer Applications, vol. 36, pp. 1174-1185, 2013.
[12] An Efficient Recovery Algorithm for Coverage Hole in WSNs, Song Jia, Wang Bailing and Peng Xiyuan, in Proc. 2nd ICASP, ASTL Vol. 18, pp. 5-9, 2013.


Calculation Methodologies for The Design of Piping Systems


Mr. Suyog U. Bhave.
M.E. Student, D. Y Patil Institute of Engineering and Technology, Pune, India.
Principal Piping Engineer, Petrofac Engineering India Pvt. Ltd, Mumbai, India
Mail: Suyog.bhave@gmail.com

Abstract - Piping systems are constantly present in industrial facilities, being in some cases associated with the transport of fuels, the processing of crude oils and chemical plants. Due to the nature of those fluids, the design of the piping system that transports them is a task of great responsibility, which must follow codes and standards to guarantee the system's structural integrity. Many times the piping systems operate at a temperature higher than the temperature at which they are assembled, leading to the thermal expansion of the system's pipes, and since no piping system is free to expand, the thermal expansion will lead to stresses. Besides the stresses caused by thermal expansion, the studied systems are also subjected to constant loads caused by their weight, as well as occasional loads such as wind and earthquake. In this perspective, calculation methodologies were developed in order to do quick analyses of the most common configurations, according to codes like ASME B31.3, allowing improvements in the flexibility of the projected systems.
Although the methodology developed may only be used on simple systems and gives very conservative results, in practical cases it can be used to analyse complex systems by dividing them into simpler cases.
[6, 8, 10]

Keywords - Piping Systems, Flexibility, Stress Analysis, Thermal Expansion, ASME B31.3, design methodology, expansion loop.

I. Introduction

The first piping systems were constructed between 3000 B.C. and 2000 B.C. in ancient Mesopotamia, to be used in the irrigation of large areas of cultivated land. Initially used in agriculture, due to the growing need to cultivate larger areas, piping systems also had a crucial role in the development of big cities and, during the industrial revolution, with the discovery of steam power. Piping systems also turned out to be essential in the exploration of oil.
In the present civilization, piping systems are constantly present, both in residential and commercial buildings and in industrial facilities. In oil refineries and other industrial process plants, pipelines represent between 25% and 50% of the total cost of the facilities.
Since piping systems are associated with facilities with a high degree of responsibility, stress analysis represents a fundamental stage of piping design, in order to prevent failures and accidents. Taking into account that piping systems are subjected to multiple loads, stress analysis is a complex task. Besides the stresses caused by the weight of the pipe, the fluid and the insulation, piping systems are also subjected to temperature changes, internal and external pressure, and occasional events such as water hammer, wind and earthquakes.
Due to the temperature variations that occur in piping systems between the installation and operating temperatures, they will be subject to expansion and contraction. In general terms, both contraction and expansion are called thermal expansion. Since every piping system has restrictions that prevent free expansion, thermal expansion will always create stresses; but, if the system is flexible enough, the expansion may be absorbed without creating undue stresses that could damage the system, the supports and the equipment to which the pipes are connected.
One of the greatest challenges in pipe stress analysis is to provide the system with enough flexibility to absorb the thermal expansions. Even nowadays, when pipe stress analysis covers much more than flexibility analysis, it is still one of the main tasks of the engineers who work in this area. Many times, due to the inexistence of a quick method that allows verification of the flexibility of projected systems, they turn out to be too stiff or too flexible.
Engineers constantly face the need to minimize costs and at the same time obtain a system with enough flexibility, without sacrificing the safety requirements. The shorter the system, the lower the price, since it will use less material, but this configuration will have flexibility problems in the majority of cases, due to the incapacity to absorb thermal expansion. On the other hand, systems that are too long may have problems due to pressure drop. An increase in a system's flexibility may be obtained through

changes in direction, although, in cases where the flexibility obtained that way isn't enough, additional flexibility may be obtained using pipe loops and expansion joints. The attention given to pipe stress analysis has increased in the last decades, due to the high safety requirements of modern process plants. For that reason, access to an efficient computer program, such as CAESAR II, ANSYS or AUTOPIPE, to perform the stress calculations reduces the design costs, since it decreases the time necessary to perform the analysis. In order to prove the structural integrity of a piping system, it is necessary to follow the procedures and specifications of the piping codes. There are several codes that cover the design of piping systems, but the ones most often used are the ASME B31 codes. There are, however, design limitations stated in the codes, and these need to be considered when making analysis assumptions and deciding on a methodology.
It is extremely important to make a correct design of piping systems, avoiding failures, which may cause huge material damage and even loss of human lives. The objective of this paper is to present calculation methodologies for the design of piping systems.
[1, 4, 5, 8]

II. CODES AND STANDARDS

In order to satisfy the safety requirements, local regulations and the design constraints of the client, piping systems have to be designed and built according to specific codes and standards.
In the United States, the American Society of Mechanical Engineers (ASME) has assumed the leadership in the formation of the committees that have elaborated the Piping Code. The Piping Code is constituted by a set of requirements that assure the correct and safe operation of piping systems. The ASME B31 code establishes the allowable stresses, the design, the fabrication, the erection, the tests, the fatigue resistance and the operation of non-nuclear piping systems. For this paper we are particularly interested in the facilities covered by code B31.3.
ASME B31.3 Process Piping: for piping systems used in process plants, such as petrochemical plants, this is the code that covers almost all the requirements for the design, erection and testing of piping systems. The stress analysis requirements detailed in this code can be applied to all the plants designed according to it. The code is constantly improved considering the latest industrial trends and feedback from the various committee members, consultants and industrial operation groups.
There are other codes and standards from different countries, such as the UK, the EU and Japan, which are also used and applied to stress analysis, and there are consultancy practices which provide guidelines and rules to design piping systems.
[3, 6, 8]

III. THERMAL EXPANSION AND FLEXIBILITY

Most piping systems work at temperatures higher than the installation temperature. This temperature rise will lead to the thermal expansion of the pipes, which for the cases of interest of this paper will always be metallic pipes. The thermal expansion of a material is evaluated through its thermal expansion coefficient α. The thermal expansion of a pipe may be calculated by the following expression:

δ = α · L0 · ΔT    (1)

where L0 is the pipe length at the reference temperature (usually the installation temperature), ΔT is the temperature change and δ is the resulting expansion.
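A quick numerical sketch of this expression, using a carbon-steel-like expansion coefficient and an arbitrary pipe run as placeholder inputs (both are assumptions, not values from the paper):

def thermal_expansion_mm(length_m, alpha_per_degC, t_install_degC, t_operating_degC):
    """delta = alpha * L0 * (T_operating - T_install), returned in millimetres."""
    delta_m = alpha_per_degC * length_m * (t_operating_degC - t_install_degC)
    return delta_m * 1000.0

# e.g. 30 m of pipe, alpha ~ 12e-6 per degC, installed at 20 degC, operating at 250 degC
print(round(thermal_expansion_mm(30.0, 12e-6, 20.0, 250.0), 1), "mm")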
If a piping system does not have enough flexibility to compensate for the thermal expansions, the resulting stresses may damage the system, as well as the equipment to which it is connected.

Fig. 1
Consider piping connecting Tank T1 to Tank T2. If the piping is straight, as shown in Fig. 1, and fluid at a higher temperature flows through it, the section of pipe between T1 and T2 will expand. However, there is no space for it to expand, which results in high loads on the nozzles. If we provide the loop shown in Fig. 2, the nozzle loads can be decreased.
In this case, flexibility may be increased by adding additional sections of pipe perpendicular to the original section to absorb the expansion; that is the principle of pipe loops. In other words, by using pipe loops, the thermal expansion is absorbed by the bending of the perpendicular sections of pipe.

There are several methods to increase a system's flexibility, the most often used being the installation of a pipe loop as depicted in Fig. 2. In some cases, due to spatial constraints, expansion joints are the alternative to the installation of pipe loops. There are several types of expansion joints, but all of them are elements much more sophisticated than pipe loops, which are nothing more than additional sections of pipe. In addition, expansion joints are subject to breakdowns and require maintenance. For these reasons, the design of pipe loops to increase the system's flexibility is preferable to the use of expansion joints.

Fig.2 Pipe loop


Besides the use of pipe loops being much simpler than the use of expansion joints, it still matters to know the best pipe loop configurations, in order to maximize their potential to increase flexibility.

In the first place, in order to keep forces and moments balanced, the loop depicted in Fig. 2 must be symmetric. Concerning the relation between the loop dimensions, there is some divergence between companies: while some established design guidelines define a different ratio between L3 and L2, others define the best configuration as the one that follows the relation:

L3 = (1/2) L2    (2)

To calculate the length L2 necessary to absorb the thermal expansion without damaging the pipe, the following expression, which derives from the guided cantilever method, may be used:

L2 = √(3 · Eh · δ · D / SA)    (3)

where Eh is the modulus of elasticity of the material at the operating temperature, δ is the thermal expansion to be absorbed, D is the outer diameter of the pipe and SA is the allowable expansion stress.
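The sketch below sizes the perpendicular loop leg with this guided-cantilever expression; the pipe diameter, modulus, allowable stress and expansion are placeholder values chosen only to show the unit handling (everything is kept in N and mm so the result comes out in mm).

import math

def loop_leg_length_mm(E_hot_MPa, expansion_mm, pipe_od_mm, allowable_stress_MPa):
    """L2 = sqrt(3 * Eh * delta * D / SA); with MPa (= N/mm^2) and mm, L2 comes out in mm."""
    return math.sqrt(3.0 * E_hot_MPa * expansion_mm * pipe_od_mm / allowable_stress_MPa)

# Placeholder inputs: Eh ~ 190000 MPa, 80 mm of expansion to absorb,
# 219.1 mm OD (8 in pipe), SA ~ 150 MPa
print(round(loop_leg_length_mm(190000.0, 80.0, 219.1, 150.0) / 1000.0, 2), "m")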
Relative to loop locations, loops must be centred between anchorages, i.e. L1 = L5. In cases where it is not possible to centre the loop, the pipe sections at each side of the loop should have dimensions as close as possible.
Besides the anchorages at the ends of the pipe loops, in many cases there are also intermediate guides and vertical supports. The function of the vertical supports is to support the pipe's weight, assuring the allowable pipe span. The guides are used to control the thermal expansion, assuring that the loops play their role correctly, since they direct the expansion to the sections defined by the legs of length L2 and L4.
[5, 9, 11, 12]

IV. CODE STRESS REQUIREMENTS

According to the ASME B31.3 code, only the maximum stresses are calculated, which is implicit in the stress intensification factors (SIF) that derive fundamentally from fatigue tests. The code requirements are specified with cautions and limitations which need to be carefully considered while designing a piping system; the code even specifies the actions to take when a condition cannot fully comply with the code requirements. The expressions to calculate the stresses presented in ASME B31.3 are only influenced by the moments, ignoring the forces. This is due to the fact that the stresses originated by forces are usually very low when compared with the stresses originated by moments. Before calculating the stresses, the moments have to be reoriented according to the planes of the component that is under analysis, due to the different SIF in each direction. The SIF calculation is the most crucial step and requires careful consideration; the in-plane and out-of-plane moments need to be considered. The stresses developed are higher in cases like direction changes, inside diameter changes and sudden obstructions. The example of an elbow, shown in Fig. 3, illustrates the in-plane and out-of-plane moments.

Fig. 3 Moments in bends


Stresses are calculated at the nodes located at the ends of each element. For the shear (torsional) stresses, the maximum value at the outer surface of the pipe is given by (ASME B31.3):

St = Mt / (2Z)    (4)

where Mt is the torsional moment and Z is the section modulus.


The bending stresses acting in the two different planes can be combined; consequently the combined bending stress acting in the longitudinal direction is given by (ASME B31.3):

Sb = √[(ii·Mi)² + (io·Mo)²] / Z    (5)

where Mi and Mo are respectively the in-plane and out-of-plane bending moments, and ii and io are respectively the in-plane and out-of-plane stress intensification factors (Fig. 3).
The flexibility analysis is done by comparing the combined effect of the multidimensional stresses with the allowable stress. The ASME B31.3 code uses the Tresca criterion to obtain the combined stress effect SE, also called the expansion stress:

SE = √(Sb² + 4·St²)    (6)
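Putting equations (4)-(6) together, the sketch below evaluates the torsional, bending and expansion stresses at one node; the moments, section modulus and SIFs are placeholder numbers, not data from any real line.

import math

def expansion_stress(m_in, m_out, m_torsion, section_modulus, sif_in, sif_out):
    """Return (St, Sb, SE) following the combination in Eqs. (4)-(6).

    Moments in N*mm and section modulus in mm^3 give stresses in MPa.
    """
    s_t = m_torsion / (2.0 * section_modulus)                            # Eq. (4)
    s_b = math.hypot(sif_in * m_in, sif_out * m_out) / section_modulus   # Eq. (5)
    s_e = math.sqrt(s_b ** 2 + 4.0 * s_t ** 2)                           # Eq. (6)
    return s_t, s_b, s_e

# Placeholder node loads: Mi = 2.0e6 N*mm, Mo = 1.5e6 N*mm, Mt = 0.8e6 N*mm,
# Z = 8.5e4 mm^3, ii = 2.3, io = 1.9
print([round(s, 1) for s in expansion_stress(2.0e6, 1.5e6, 0.8e6, 8.5e4, 2.3, 1.9)])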

According to the ASME B31.3 code, the stresses to which a piping system is subjected may be separated into three main classes, for which the code establishes limits:
A) the stresses caused by sustained loads;
B) the stresses caused by occasional loads;
C) the stresses caused by thermal expansion.

A. Sustained stresses:
Sustained stresses in piping systems are caused by weight, pressure and any other constant load (ASME B31.3). ASME B31.3 establishes a limit on the sustained stress:

SL ≤ Sh · W    (7)

where W is the weld joint strength reduction factor, SL is the sum of the longitudinal stresses due to sustained loads such as pressure and weight, and Sh is the allowable stress at the operating temperature.

B. Occasional stresses

Occasional stresses are caused by occasional events, such as water hammer, earthquakes and wind. According to ASME B31.3, the sum of the longitudinal sustained stresses and the stresses produced by occasional loads should be lower than 1.33·Sh.
C. Thermal expansion stresses
Thermal expansion usually leads to fatigue failure, so the system's integrity depends on the stress range and on the number of operation cycles. In the case of ASME B31.3, the stresses caused by thermal expansion must satisfy the following condition:

SE ≤ SA    (8)

where
SA = f (1.25 Sc + 0.25 Sh)
or
SA = f [1.25 (Sc + Sh) - SL] if Sh > SL.
In both cases f is the stress range reduction factor defined by the code, Sc is the allowable stress at the cold (installation) temperature, Sh is the allowable stress at the operating temperature and SA is the allowable expansion stress range.
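A short sketch of the allowable-range check in equation (8), using placeholder allowable stresses and a cycle-dependent factor f supplied by the caller (looking f up from the code's cycle table is outside the scope of this illustration):

def allowable_expansion_range(f, S_c, S_h, S_L=None):
    """SA = f*(1.25*Sc + 0.25*Sh); when Sh > SL, the liberal form f*(1.25*(Sc + Sh) - SL)."""
    if S_L is not None and S_h > S_L:
        return f * (1.25 * (S_c + S_h) - S_L)
    return f * (1.25 * S_c + 0.25 * S_h)

# Placeholder values in MPa: Sc = 138, Sh = 120, sustained stress SL = 60, f = 1.0
S_A = allowable_expansion_range(1.0, 138.0, 120.0, 60.0)
S_E = 64.4   # expansion stress, e.g. from the previous sketch
print(S_A, "MPa ->", "OK" if S_E <= S_A else "over-stressed")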
[2, 3, 5, 6, 9]

V. Thermal Expansion Calculations

There are several methods to calculate the stresses caused by thermal expansion. The method used in this paper is the Spielvogel method, which is based on the theory of the elastic centre and on Castigliano's theorem.
The work done to deform a pipe of length L, subjected to an axial force P and a bending moment M, is given by:

U = ∫ [P²/(2AE) + M²/(2EI)] ds, integrated over the pipe length L    (9)

where E is the modulus of elasticity of the material, A is the cross-sectional area of the pipe and I is the moment of inertia of the cross section.
Considering a piping system in the x-z plane, with both ends anchored and with no intermediate restrictions, it is known that, due to thermal expansion, each end will be subjected to a pair of forces, Fx and Fz, and one moment My. Considering the equations of static equilibrium, the forces will have the same modulus at both ends.
From equation (9), the theory of the elastic centre and the principle described above, the following system of equations may be deduced:

Fx·Ixx - Fz·Ixz = E·I·Δx    (10)
Fz·Izz - Fx·Ixz = E·I·Δz    (11)

where Ixx, Izz and Ixz are the line moments of inertia of the system about the centroidal axes of the x-z plane, and Δx and Δz are respectively the thermal expansion in the x direction and in the z direction.
Solving the system of equations (10) and (11), the forces Fx and Fz at the centroid of the system are obtained. The value of these forces is the same at any point of the system. In order to obtain the bending moment at any point of the system, the following expression can be used:

M = Fx·z' - Fz·x'    (12)

where x' and z' are the coordinates of the point in question, in the reference frame with origin at the centroid.
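Under the sign convention adopted in the reconstruction of equations (10)-(12) above, the sketch below solves the two-equation system for the anchor forces and then evaluates the bending moment at a point; the line moments of inertia, expansions and coordinates are placeholder numbers, not a worked case from the paper.

def elastic_centre_forces(Ixx, Izz, Ixz, EI_dx, EI_dz):
    """Solve Fx*Ixx - Fz*Ixz = EI_dx and Fz*Izz - Fx*Ixz = EI_dz for (Fx, Fz)."""
    det = Ixx * Izz - Ixz * Ixz
    fx = (EI_dx * Izz + EI_dz * Ixz) / det
    fz = (EI_dz * Ixx + EI_dx * Ixz) / det
    return fx, fz

def bending_moment(fx, fz, z_rel, x_rel):
    """Eq. (12): moment at a point given by its coordinates relative to the centroid."""
    return fx * z_rel - fz * x_rel

# Placeholder inputs (any consistent unit system)
fx, fz = elastic_centre_forces(Ixx=120.0, Izz=90.0, Ixz=15.0, EI_dx=3.0e3, EI_dz=1.5e3)
print(round(fx, 2), round(fz, 2), round(bending_moment(fx, fz, 2.5, -1.0), 2))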

A. Reactions
According to ASME B31.3, the calculation of the stresses caused by thermal expansion shall be done using the cold modulus of elasticity Ea. However, according to the same code, the maximum value of the reactions must be considered both at the installation condition and at the maximum expansion condition.
For systems with both ends anchored and no intermediate restraints, the maximum values of the reactions depend on the level of cold spring and are given by (ASME B31.3):

For the condition of maximum thermal expansion:
Rm = R (1 - 2C/3) Em/Ea    (13)

For the installation condition:
Ra = max {C·R, C1·R}    (14)

where C is the cold-spring factor (varying from 0 for systems with no cold spring to 1 for systems with 100% cold spring), R is the reaction computed with Ea, Rm is the maximum reaction, Em is the modulus of elasticity at the operating temperature, C1 is the estimated self-spring factor defined by the code, and Ra is the reaction at the installation condition.
B. Stresses in loops with intermediate restraints
The Spielvogel method (1961) was developed for loops without intermediate restraints; the maximum expansion stress in loops with intermediate guides, such as the one depicted in Fig. 4, is obtained by the expression (15) established by the Grinnell Corporation (1981), which relates the maximum expansion stress for the loop with intermediate guides to the maximum expansion stress of a loop of the same size with anchors at the points where the guides are.

Fig. 4 Loop with intermediate guides


[1, 6, 11]

VI. Sensitivity Analysis

In this section the results of the sensitivity analysis of symmetrical pipe loops with both ends anchored are reported. The objective of the analysis is to verify how the reactions and the stresses caused by the thermal expansion vary with modifications of the operating conditions, namely the temperature, and with variations of the loop and pipe dimensions.
In addition, it is intended to compare the difference between the results for the forces, moments and stresses obtained with the different methods: the Spielvogel method and the commercial software CAESAR II.
In the case of loops without guides, the reactions caused by thermal expansion will only be a force in the x direction, Fx, and a bending moment My, as illustrated in Fig. 5.

Fig. 5 Free body diagram of a pipe loop, considering only the thermal expansion

In this case, the maximum stress due to thermal expansion will occur at the corner C (Fig. 4). For the stress intensification factor, bends with a radius of 1.5 times the pipe diameter were considered.
First, the variation of segment L2 is studied (and consequently the variation of L4, since the loop is symmetric), fixing the remaining dimensions; the charts for the variation of the reactions are shown in figures 6 and 7.
Looking at the charts it can be concluded that with the increase of L2 the amplitude of the reactions decreases and the results of the three methods converge. The results of the Spielvogel method and of the Grinnell method are very similar, with a relative difference of about 1%. Besides that, the results of these two methods are much more conservative than the results of the software CAESAR II, with a relative difference of around 38% for the forces and 23% for the moments.
Regarding the evolution of the maximum stresses SE,max, illustrated in Fig. 8, the reduction of the stresses with the increase of L2 can be observed. Once again these methods give very similar results, with a relative difference of 2.2%. The relative difference between the CAESAR results and the other methods is about 44%.
From this analysis, it can be concluded that, of the configurations suggested by different authors, the most advantageous in terms of stresses and reactions is the one that follows the relation L3 = (1/2) L2.

Fig. 6 Chart of Fx Vs. L2

Fig. 7 Chart of My Vs. L2

Fig. 8 Chart of SE,max Vs. L2


[4, 6, 9, 11]

Acknowledgment
I would like to thank Prof. Pavan Sonawne of D. Y. Patil Institute of Engineering & Technology for guiding me and providing valuable suggestions to complete this paper.

Conclusions
The objective of this work is to summarize calculation methodologies for the design of piping systems for fuels, steam and process plants. There are several codes and standards that can be used to assure the integrity of the systems, ASME B31.3 being the most used.
According to the ASME B31.3 code, the stresses to which a piping system is subjected may be separated into three main classes, for which the code establishes limits: the stresses caused by sustained loads, the stresses caused by occasional loads and the stresses caused by thermal expansion. Since the stresses due to occasional loads are only verified in very specific cases, the methodologies developed are only for sustained loads and thermal expansion. Besides the code stress requirements, it is also important to analyze the systems in the operating conditions, namely the loads on the supports and the displacements. The determination of the loads caused by thermal expansion is a much more complex task. The method used to calculate the forces and the moments due to thermal expansion is the Spielvogel method; it is more versatile than other methods, which depend on tables and charts. Pipe loops are a very effective way to increase a system's flexibility. It has been concluded that, of the different configurations suggested by different authors for pipe loops, the best is the one that follows the relation L2 = 2·L3, i.e. L3 = (1/2) L2.
These methodologies give results very similar to CAESAR's results, but more conservative, due to neglecting the curvature of the direction changes, the more detailed investigation of SIFs in the software, the more specific parameters based on actual conditions and the fewer assumptions made in the analysis software. Even though there are limitations to the codes, rather than over-designing, a conservative approach may be applied based on actual conditions and specific analysis so that material and cost can be saved. Selection of the proper methodology is the key to design optimization.
REFERENCES:
[1] Sharma P., Design and Analysis of a Process Plant Piping System, International Journal of Current Engineering and
Technology, 2014.
[2] ukanovi D, ivkovi M, Jakovljevi A, Applying Numerical Method In The Strength Calculation Of High Pressure
Steamline, IIPP, 2013.
[3] Satyanarayana T., Sreenivasulu V., Kiran C., Modeling and Stress Analysis of Flare Piping, International Journal of Latest Trends in Engineering and Technology (IJLTET), 2013.
[4] Miranda J and Luis A, Piping Design: The Fundamentals, UNU-GTP and LaGeo, 2011.
[5] Peng, L.C., and Peng, T.L., Pipe Stress Engineering, Houston, Texas, USA, ASME Press, 2009.
[6] ASME B31.3,Process Piping. ASME, American Society of Mechanical Engineers, 2008.
[7] Frankel, M., Facility Piping Systems Handbook, 2nd ed. , New York, McGraw-Hill, 2002.
[8] Nayyar, M. L., Piping Handbook, 7th ed., McGraw-Hill, 2000.
[9] Woods G. E. and Baguley, R. B., Practical Guide to ASME B31.3 Process Piping, Alberta, CASTI Publishing Inc., 1997.
[10] Kannappan, S., Introduction to Pipe Stress Analysis, Knoxville, Tennesse, John Wiley & Sons, 1985.
[11] Spielvogel, S. W., Piping Stress Calculations Simplified, 5th ed., New York, Byrne Associates, Inc., 1961.
[12] Kellogg, The M.W., Company, Design of Piping Systems, 2nd ed., New York, John Wiley & Sons, 1956.
[13] Crocker, S. and McCutchan A., Piping Handbook, McGraw-Hill, 1945


Effect of Carbon Nanotubes Addition on Fracture Toughness in Aluminium Silicon Carbide Composite

Binoy C.N*, Sijo M.T**
* M. Tech Research Scholar, SCMS School of Engineering & Technology, Karukutty
** PhD Research Scholar, GEC Thrissur

Abstract - In this paper, the effect of carbon nanotube addition on the fracture toughness of an aluminium silicon carbide composite is analyzed. Although properties like specific strength, specific stiffness, elevated temperature strength, and wear and corrosion resistance have been proven to increase, the fracture toughness decreases. In this work, aluminium silicon carbide in the proven composition of 25% silicon carbide and 75% aluminium was cast using the stir casting process to standard dimensions (ASTM E399). With the same dimensions, an aluminium silicon carbide composite with added carbon nanotubes was also cast in the ratio 0.2% multi-walled carbon nanotubes, 24.8% silicon carbide and 75% aluminium. All the test pieces were tested using a universal testing machine and the stress intensity factor KIC was found. Results showed that the fracture toughness of the aluminium silicon carbide with added carbon nanotubes is 16% higher than that of the 25% silicon carbide and 75% aluminium composite.
Keywords - composite; metal matrix composite; aluminium silicon carbide; carbon nanotube; stir casting; fracture toughness; stress intensity factor.

INTRODUCTION

Composite materials are important engineering materials due to their outstanding mechanical properties. Composites are materials in which the desired properties of separate materials are combined by mechanically or metallurgically binding them together. Each component retains its structure and characteristics, but the composite generally exhibits better properties than the components separately. Composite materials offer properties superior to conventional alloys for various applications, as they have high stiffness, strength and wear resistance. Composite materials arose from the continuous effort to improve the various properties of engineering materials; they are composed of a combination of two or more distinctly different micro or macro constituents that differ in composition and are insoluble in each other. There are mainly two constituents in composites: one is called the reinforcing phase, and the one in which it is embedded is called the matrix. The matrix is larger in quantity, as the reinforcement is embedded into it. The reinforcing phase material may be in the form of fibres, particles or flakes. In this work aluminium is chosen as the matrix and silicon carbide is chosen as the reinforcement.
From previous studies on the aluminium silicon carbide composite, it is clear that adding silicon carbide to the aluminium matrix as reinforcement increases the mechanical properties of the composite relative to pure aluminium. It increases properties such as hardness, strength, damping capacity and impact strength. However, it has been shown by Lihe Quian et al. that the addition of reinforcement particles can drastically degrade the ductility and fracture toughness of composite materials. Fracture toughness is a property which describes the ability of a material containing a crack to resist fracture, and it is one of the most important properties of any material for virtually all design applications; in other words, fracture toughness is a quantitative way of expressing a material's resistance to brittle fracture when a crack is present. If a material has a large fracture toughness value it will probably undergo ductile fracture; materials with lower fracture toughness values undergo brittle fracture. Technically, fracture toughness is an indication of the amount of stress required to propagate a pre-existing flaw. Since fracture mechanics holds that there are no perfect materials without cracks, the topic is all the more relevant. As composite materials are used in more crucial parts, fracture toughness is a very essential property. Based on this problem of fracture toughness, studies were made. The high cost and difficulty of processing these composites is also recognised as a problem in making composite materials. Among the various methods, the stir casting route is simple, less expensive, causes no damage to the reinforcement particles and is used for mass production.
Aluminium LM6 metal was chosen as the matrix and silicon carbide with a grain size of 20 µm was chosen as the reinforcement. The reinforcement particle size of 20 µm was selected with reference to the best combination of aluminium silicon carbide metal matrix composites. Another factor affecting fracture toughness is the reinforcement volume and size. The carbon nanotube is an allotrope of carbon with a cylindrical structure; carbon nanotubes have unusual mechanical and electrical properties, so they find many applications as additives in various structural materials. According to the American Society for Testing and Materials, ASTM E399 is used for testing fracture toughness, i.e. it is the standard

test method for linear-elastic plane-strain fracture toughness (KIC) of metallic materials. A study made by Prathap Chandran et al. proved that carbon nanotubes (CNT) have a positive effect on increasing fracture toughness, but this has not yet been tested in the aluminium silicon carbide composite. In this paper, the effect of carbon nanotubes in enhancing the fracture toughness is studied experimentally. The driving force behind the fabrication of nano composites is to achieve high functional properties for high-end applications. As aluminium silicon carbide metal matrix composites are used in various fields and crucial parts, like aerospace, aircraft, underwater and automobile applications, substrates in electronics, golf clubs, turbine blades and brake pads, it is very important that they have good fracture toughness. Thus the composite can be used in places where fracture toughness is needed along with the normal properties of metal matrix composites.
LITERATURE SURVEY

Conventional monolithic materials have limitations in achieving a good combination of stiffness, strength, density and toughness. To overcome these problems and to meet the ever-increasing demands of modern technology, the most promising materials of present interest are composites. Composites are used in many present-day applications because they can be manufactured by tailoring the different properties without making many compromises in the required properties. For example, the fuselage and wings of an aircraft must be lightweight, strong, stiff and tough; natural rubber alone is relatively weak, but by adding carbon black its strength can be improved.
Amol D. Sable et al. found that hardness and impact strength increase with increasing SiC content, with the best results obtained at a 25% weight fraction of SiC particles; homogeneous dispersion of SiC particles in the Al matrix showed an increasing trend in samples prepared by manual stirring and by the two-step stir casting technique respectively [1]. Brian G. Falzon et al. found that the addition of CNTs in a carbon fibre composite improved the average fracture toughness by 161% [3]. S. Balasivanandha Prabu et al. found that uniform distribution is obtained at processing temperatures of 750 °C and 800 °C, and that the ultimate strength of the metal matrix composite decreases with increasing holding time, the best result being obtained at 20 min holding time; the viscosity of the Al matrix decreases with increasing processing temperature. In the tension test, ultimate strength increased gradually up to 800 °C and then started to decrease because of improper distribution of SiC in the Al matrix; holding time influences the viscosity and particle distribution, and hardness increases with processing temperature from 750 °C to 800 °C at 20 minutes holding time [4]. J. Stein et al. found that a 2% CNT dispersion in a high-performance atomized aluminium alloy (Al, Mg, N) showed a 5% increase in Young's modulus, a 9% increase in yield strength and a 15% increase in tensile strength with respect to pure aluminium under the same conditions; these increases are attributed to CNT/matrix load transfer and to the generation of additional dislocations [5]. Khalid Mahmood Ghauri et al., who produced SiC/Al composites by reinforcing aluminium with various proportions of SiC (5, 10, 15, 25 and 30%) using stir casting, found that as the volume fraction of SiC is gradually increased, the hardness and toughness increase; beyond 25-30 percent SiC the results show a decreasing tendency, are not very consistent, and depend largely on the uniformity of distribution of SiC in the aluminium matrix [6].
Lihe Qian et al. found that adding silicon carbide to an aluminium matrix as reinforcement increases the mechanical properties of the composite compared with pure aluminium, but that adding reinforcement particles can drastically degrade the ductility and fracture toughness of the composite [7]. B. R. Sridhar et al. found a reasonable increase in hardness and a decrease in ductility with increasing silicon carbide content, which can be attributed to the increasing volume fraction of silicon carbide in the alloy; SiC particles attach strongly to the aluminium particles in the final stage, a 5% addition of SiC increases hardness by 15% and a 10% addition by 17% over pure aluminium, and with increasing silicon carbide content the material fails in a brittle mode [8]. Prathap Chandran et al. studied the commonly held view that the dispersion of carbon nanotubes in a composite has a profound effect on its properties. Ball milling was carried out using two different parameters to obtain distinctly different degrees of dispersion of carbon nanotubes (4 wt.%) in Al-9 wt.% Si powders, and composite disks 80 mm in diameter having good and bad dispersions of carbon nanotubes were obtained by hot pressing. Optical micrographs and Raman spectroscopy showed larger carbon nanotube clusters in the badly dispersed sample; transmission electron microscopy confirmed the large clusters in the badly dispersed sample, while the well dispersed sample showed individual carbon nanotubes in the Al matrix. The authors found that the dispersion of carbon nanotubes in the Al-Si alloy composite has a profound effect on the mechanical properties (41% increase in hardness, 27% increase in elastic to plastic work ratio, 185% increase in compression yield strength and 109% increase in fracture strength) [10]. Rohit Kumar et al. incorporated SiC particles of 20 μm at four volume fractions (0, 5, 10 and 15%) into the alloy in the liquid state by stir casting followed by extrusion; microstructure analysis was carried out using optical and electron microscopy, and it was found that the tensile strength and yield strength of the composites decrease with increasing SiC volume fraction, while hardness increases [11]. Rabindra Behera et al. found that hardness increases with increasing SiC addition and is highest in the middle section compared with the end sections; the forgeability of the composites decreases with increasing wt% of SiC, although mechanical properties such as hardness increase [12]. Sourav Kayal et al., who added 20 μm SiC particles to an aluminium alloy from 2.5 to 15 wt% in steps of 2.5 wt%, showed that the cooling rate decreases with increasing SiC content while hardness increases [13]. T. Lyashen Ko et al. inserted a treated carbon nanotube-reinforced epoxy leaf at the midplane of laminates and measured the fracture properties by end-notched-flexure and three-point bend tests; an 85% improvement in mode II fracture toughness was found with the addition of a small amount of SP1 protein-treated carbon nanotubes at the midplane of the carbon fabric (epoxy laminate) [14].
T. Laha et al. found that the presence of nanosized grains in the Al-Si alloy matrix and carbon nanotubes provides excellent interfacial bonding between the Al alloy matrix and the CNTs, with higher elastic modulus and hardness. Two thermal spraying techniques were used, plasma spray forming and the high velocity oxy-fuel spray technique; the high velocity oxy-fuel sprayed composite had a denser and more compact microstructure, and both spray deposits contained nanosized grains in the Al-Si alloy matrix [15]. A. A. Cerit et al. found that the hardness of Al-SiC can be increased by raising the volume fraction of SiC at constant particle size, that hardness decreases with increasing particle size, and that the best results were obtained for SiC of 20 μm and 40% volume fraction. S. Das and D. Chatterjee found that stir casting of aluminium composites usually inherits porosity, and that the degree of porosity depends on processing parameters such as pouring temperature, volume fraction of reinforcement and the type of matrix chosen.
METHODOLOGY

A EXPERIMENTAL METHOD
As the aim of this study was to find the effect of carbon nanotubes on the fracture toughness of an aluminium silicon carbide composite, the preferred method was experimental testing.
The main steps followed in the method were:
Casting of aluminium silicon carbide composite samples and testing of their fracture toughness.
Casting of aluminium silicon carbide composite samples with added carbon nanotubes and testing of their fracture toughness.
Comparing both results.


SELECTION OF COMPONENTS

There are many possible matrices and reinforcements. From among the available matrices, aluminium was selected for its ready availability. It is lightweight: aluminium weighs less by volume than most other metals, about one third the weight of steel, iron, brass or copper. Aluminium profiles can be made as strong as needed for most applications, and aluminium is particularly well suited to cold-weather applications because, as the temperature falls, it actually becomes stronger. It is also relatively inexpensive. From among the available reinforcements, silicon carbide was selected for its availability; it raises the strength-to-weight ratio to about three times that of mild steel and improves wear resistance and thermal stability. Carbon nanotubes, which have a very high strength-to-weight ratio, have been proven to enhance the mechanical properties and fracture toughness of a magnesium alloy composite, but have not yet been tested in an aluminium metal matrix composite. Carbon nanotubes have useful properties such as high tensile strength, high thermal and electrical conductivity, a low thermal expansion coefficient and a high aspect ratio, which make them useful in many sectors. In the expectation that carbon nanotubes would enhance the fracture toughness, they were also added in making the composite. From the several casting techniques available, such as squeeze casting and powder metallurgy, stir casting was chosen because it is the most cost-effective method that produces high-strength material; it can be called a fabrication method for high-strength, low-cost materials. It can offer a wide range of shapes and large sizes up to 500 kg, and in this method there is no damage to the reinforcement.
EXPERIMENT AND TESTING

Stir casting involves incorporation of reinforcement particulate into liquid aluminium melt and allowing the mixture to solidify at
normal conditions. Here, the crucial thing is to create good wetting between the particulate reinforcement and the liquid aluminium
melt. The simplest and most commercially used technique is known as vortex technique or stir-casting technique.
A APPARATUS AND PROCEDURE FOR STIR CASTING
The apparatus used for the stir casting method is shown below. It has the following parts:
Motor - drives the stirrer blade
Dimmer - controls the speed of the motor
Shaft - connects the stirrer and motor
Graphite crucible - holds the molten aluminium
Furnace - for heating and melting
Thermocouple - for temperature measurement
The aluminium scraps and silicon carbide powder are charged into the graphite crucible in the electric furnace. First, the aluminium scraps were preheated for 3 to 4 hours at 450 °C and the silicon carbide powder was preheated to 900 °C; the two preheated materials were then mechanically mixed with each other below their melting points, and this Al-SiC mixture was placed in the graphite crucible.
The crucible is then put into the electric furnace at a temperature of 760 °C. Once the melt is ready, the slurry is poured into the sand mould within thirty seconds and allowed to solidify.

Figure 1 Apparatus for Stir Casting


B CASTING OF ALUMINIUM SILICON CARBIDE
Composites have been a subject of investigation for a long time and are still improving; the present study examines the effect of carbon nanotube addition. The raw aluminium was collected by crushing scrap motorcycle pistons, which are made of aluminium. Silicon carbide was supplied by Carborundum Universal, Naalukettu, carbon nanotubes by Amrita University, and the stir casting machine was made available by Karunya University, Coimbatore.
The stir casting process starts with placing the empty crucible in the muffle. At first the heater temperature is set to 500 °C and then it is gradually increased to 900 °C. The high temperature of the muffle helps to melt the aluminium quickly, reduces oxidation and enhances the wettability of the reinforcement particles in the matrix metal. The required quantity of aluminium alloy is cleaned to remove dust and placed in the crucible. The silicon carbide is preheated to remove moisture and is then added continuously into the molten aluminium as reinforcement; the reinforcement is heated for half an hour at 500 °C. When the matrix is fully molten, stirring is started and the stirrer speed is gradually increased from 0 to 300 RPM with the help of the speed controller. Silicon carbide is then allowed to fall into the molten aluminium at a uniform rate. The heater temperature is set to 630 °C, which is below the melting temperature of the matrix, and a uniform semisolid state of the molten matrix is achieved by stirring at 630 °C. Pouring preheated reinforcement into the matrix at the semisolid stage enhances the wettability of the reinforcement and reduces particle settling at the bottom of the crucible. The dispersion time was 5 minutes. After stirring for 5 minutes at the semisolid stage, the slurry was reheated and held at 900 °C to make sure it was fully liquid. The stirrer speed was then gradually lowered to zero, the stir casting apparatus was manually set aside, and the molten composite slurry was poured into the metallic mould.

Figure 2 Casting of the metal matrix composite


The mould is preheated to 500 °C before the molten slurry is poured, which ensures that the slurry remains molten throughout pouring. While pouring the slurry into the mould, the flow is kept uniform to avoid trapping gas. The casting is then quickly quenched with air to reduce the settling time of the particles in the matrix.
C CASTING OF ALUMINIUM SILICON CARBIDE WITH CARBON NANOTUBES
The procedure for casting aluminium silicon carbide with carbon nanotubes is the same except in the preheating stage. Carbon nanotubes have a very high melting point, almost 3500 °C, which is higher than that of silicon carbide. During preheating, the silicon carbide and 0.2 weight percent of carbon nanotubes are first mechanically mixed thoroughly for 20 minutes and then transferred to the graphite crucible; the same half an hour of preheating used for silicon carbide is employed for the silicon carbide-carbon nanotube mix at 500 °C. The mould is preheated to a temperature of 500 °C. When the matrix is fully molten, stirring is started and the stirrer speed is gradually increased from 0 to 300 RPM with the help of the speed controller. The silicon carbide-carbon nanotube mix is then allowed to fall into the molten aluminium at a uniform rate. The heater temperature is set to 630 °C, which is below the melting temperature of the matrix. The dispersion time was 5 minutes. After stirring for 5 minutes at the semisolid stage, the slurry was reheated and held at 900 °C to make sure it was fully liquid. The stirrer speed was then gradually lowered to zero, the stir casting apparatus was manually set aside, and the molten composite slurry was poured into the metallic mould. Thus the test pieces of aluminium silicon carbide with added carbon nanotubes were made.
D SHAPING TEST PIECES FOR FRACTURE TOUGHNESS TESTING
As the material for testing is aluminium and fracture toughness is to be tested, the relevant American Society for Testing and Materials standard, ASTM E399, was chosen. The standard size for the test piece was calculated and the mould prepared. The mould is clamped in a bench vice after being preheated at 500 °C for 30 minutes. After the composite is made by stir casting and is at the right temperature for pouring, the slurry is poured into the preheated mould so that the chance of trapping gas in the casting is reduced. The cast test piece is then allowed to cool normally at room temperature. After the required test pieces are made, two holes are drilled and a V-groove is made to the dimensions required for fracture toughness testing. The drill hole centres were calculated and marked, and drilling followed. After the holes were made, the next step was to make the V-groove: the groove dimensions were calculated and marked for the cutting operation, cutting followed the marking, and the standard groove dimensions were achieved by filing with flat and triangular files.
E TEST FOR FRACTURE TOUGHNESS
After shaping the standard test pieces, the next step is to test for fracture toughness. As fracture toughness cannot be measured directly as a single value, we need to find the maximum load that each test piece can sustain while resisting crack propagation; the groove acts as the crack. By definition, fracture toughness is the ability of a material containing a crack to resist propagation of that crack. In each prepared test piece there is a crack, and we need to find how much load it can take while resisting crack propagation.
For that purpose, a universal testing machine is used. Jaws for clamping the test piece were prepared and the test piece was clamped properly (Figure 3). After the test piece was set, the load was applied gradually; no sudden loading was applied, because the exact load at which the material fails to resist crack growth has to be found.
Figure 3 Test Plate with Fixture


By gradual loading, the load at which each test piece fails to resist crack propagation was found and recorded. The same procedure was followed for every test piece: each piece was fixed in the jaws and gradual loading was applied until the cracked material failed to withstand the load. The failure load of each test piece was recorded, and these data were used to find the stress intensity factor of the composite, which characterizes its fracture toughness.

Figure 4 Test Plate after reaching maximum load


F COMPARING RESULTS
Test pieces were cast to the standard and cooled, and the drill holes and groove were made to the required dimensions. Fracture toughness testing was carried out in a universal testing machine using a suitable fixture for holding the standard test piece. The maximum load for each standard test piece is used in the formula for the stress intensity factor (KIC); the piece having the highest KIC has the higher fracture toughness.

CALCULATION AND RESULTS

A ANALYSIS
Following the ASTM E399 standard for finding the stress intensity factor, aluminium silicon carbide composite test pieces were prepared with and without added carbon nanotubes. This experiment builds on previous studies of another composite in which the addition of carbon nanotubes increased the fracture toughness; on the same basis, an attempt was made with the aluminium silicon carbide composite. Each standard test piece, shaped according to the standard, was clamped into the universal testing machine using a fixture made by cutting and drilling operations on mild steel. The pieces were fixed properly in a universal testing machine of 100 kN capacity. The reading taken for each test piece is the load at which it fails to withstand the crack.
The load at which each test piece failed was recorded. The details are given below.
Table 1 Load at which each test piece failed

SL NO | Aluminium % in Composite | Silicon Carbide % in Composite | Carbon Nanotube % in Composite | Maximum Load at which Test Piece Failed
1 | 75 | 25   | -   | 8 kN
2 | 75 | 25   | -   | 7.5 kN
3 | 75 | 25   | -   | 7.75 kN
4 | 75 | 24.8 | 0.2 | 9 kN
5 | 75 | 24.8 | 0.2 | 8.5 kN
6 | 75 | 24.8 | 0.2 | 9.5 kN

B CALCULATION
The average load at which breaking occurs for the 25% silicon carbide, 75% aluminium composite is 7.75 kN.
The average load at which breaking occurs for the 24.8% silicon carbide, 75% aluminium composite with 0.2% carbon nanotubes is 9 kN.

The stress intensity factor is obtained from

KIC = (P / (B √W)) f(a/W)

where
P = maximum load in newton
B = thickness of the test plate
W = width of the test plate
a = vertical distance from the centre of the loading hole to the crack

For a = 32 mm and W = 84 mm, f(a/W) = 6.9 [16].


Table 2 Calculation for Stress Intensity Factor

Composite | Thickness of Test Plate (B) in m | Width of Test Plate (W) in m | Average Breaking Load in Newton | Stress Intensity Factor (KIC) in MPa√m
25% SiC, 75% Al composite | 0.01 | 0.084 | 7750 | 18.450
24.8% SiC, 75% Al composite with 0.2% CNT | 0.01 | 0.084 | 9000 | 21.426
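For reproducibility, the short Python sketch below evaluates the same stress intensity factor expression using the geometry and average failure loads listed in Tables 1 and 2 and the shape factor f(a/W) = 6.9 taken from [16]; the function and variable names are chosen here only for illustration.

import math

def stress_intensity_factor(P, B, W, f_aW):
    """KIC = (P / (B * sqrt(W))) * f(a/W); P in N, B and W in m, result in Pa*sqrt(m)."""
    return P * f_aW / (B * math.sqrt(W))

B, W, f_aW = 0.01, 0.084, 6.9   # thickness 10 mm, width 84 mm, shape factor from [16]

k_plain = stress_intensity_factor(7750, B, W, f_aW)   # 25% SiC / 75% Al, average load 7.75 kN
k_cnt   = stress_intensity_factor(9000, B, W, f_aW)   # 24.8% SiC / 0.2% CNT / 75% Al, 9 kN

print(f"KIC without CNT: {k_plain / 1e6:.3f} MPa*sqrt(m)")   # about 18.45
print(f"KIC with CNT   : {k_cnt / 1e6:.3f} MPa*sqrt(m)")     # about 21.43
print(f"Improvement    : {100 * (k_cnt - k_plain) / k_plain:.1f} %")  # about 16 %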


C RESULTS
All the cast test pieces were tested in a universal testing machine of 100 kN capacity. A higher value of the stress intensity factor indicates higher fracture toughness. Comparing the results confirms that the test piece reinforced with carbon nanotubes has higher fracture toughness than the best combination of the plain aluminium silicon carbide mix. It was found that carbon nanotube addition has a positive effect on the fracture toughness of the aluminium silicon carbide composite: with added carbon nanotubes (24.8% SiC, 0.2% CNT and 75% Al) there is a 16% increase in fracture toughness over the best combination of 25% SiC and 75% Al.
ACKNOWLEDGMENT
The authors gratefully acknowledge the support provided by Karunya University, SCMS College, Amrita University, the Marvadi group of Institutions and Carborundum Universal, who helped to bring this work to a successful completion.
CONCLUSION
In this study, the aim was to find the effect of carbon nanotube addition on an aluminium silicon carbide composite. The literature survey on this topic made clear that, although metal matrix composites have many advantages over the pure metal used as the matrix, the fracture toughness decreases as the reinforcement increases. Since fracture mechanics starts from the premise that there are no perfect materials without cracks, the topic is highly relevant, and as composite materials may be used in more critical parts, good fracture toughness is essential. Aluminium silicon carbide, with and without carbon nanotubes, was cast and shaped into standard test pieces according to ASTM E399, the standard used for fracture toughness testing; the standard piece is a square plate with two drilled holes and a V-groove which acts as a pre-crack. All the test pieces were tested in a universal testing machine of 100 kN capacity and the stress intensity factor was found. A higher value of the stress intensity factor indicates higher fracture toughness. It was found that carbon nanotube addition has a positive effect on the fracture toughness of the aluminium silicon carbide composite: with added carbon nanotubes (24.8% SiC, 0.2% CNT and 75% Al) there is a 16% increase in fracture toughness compared with the best combination of 25% SiC and 75% Al.

REFERENCES:
[1] Amol D. Sable, Dr. S. D. Deshmukh, "Characterization of AlSiC metal-matrix by stir casting", IJMET, Volume 3, Issue 2, July-December 2012.
[2] Amol D. Sable, Dr. S. D. Deshmukh, "Preparation of metal-matrix composites by stir casting method", IJMET, Volume 3, Issue 2, July-December 2012.
[3] Brian G. Falzon, Stephen C. Hawkins, Chi P. Huynb, Racim Radjet, Callum Brown, "An investigation of mode I and mode II fracture toughness enhancement using aligned carbon nanotubes forests at the crack interface", Composite Structures 106 (2013), pp. 65-73.
[4] G. G. Sozhamannan, S. Balasivanandha Prabu, V. S. K. Venkatagalapathy, "Effect of processing parameters on metal matrix composites: stir casting process", Journal of Surface Engineered Materials and Advanced Technology, 2012, 2, pp. 11-15.
[5] J. Stein, B. Lenczowski, N. Frety, E. Anglaret, "High performance metal matrix composites reinforced by carbon nanotubes", 18th International Conference on Composite Materials.
[6] Khalid Mahmood Ghauri, Liaqat Ali, Akhlaq Ahmad, Rafiq Ahmad, Kashif Meraj Din, Ijaz Ahmad Chaudhary, Ramzan Abdul Karim, "Synthesis and characterization of Al/SiC composite made by stir casting method", Pak. J. Engg. & Appl. Sci., Vol. 12, Jan. 2013.
[7] Lihe Qian, Toshiro Kobayashi, Hiroyuki Toda, Takashi Goda, Zhong-guang Wang, "Fracture toughness of a 6061Al matrix composite reinforced with fine SiC particles", Materials Transactions, Vol. 43, No. 11 (2002), pp. 2838-2842, Japan Institute of Metals.
[8] Mohan Vanarotti, B. R. Sridhar, S. A. Kori, Shrishail B. Padasalgi, "Synthesis and characterization of aluminium alloy A356 and silicon carbide metal matrix composite", 2012 2nd International Conference on Industrial Technology and Management (ICITM 2012).
[9] Muhammad Hayat Jokhio, Muhammad Ibrahim Panhwar, Mukhtiar Ali Unar, "Manufacturing of aluminum composite material using stir casting process", Mehran University Research Journal of Engineering & Technology, Volume 30, January 2011.
[10] Prathap Chandran, Tadepalli Sirimuvva, Niraj Nayan, A. K. Shukla, S. V. S. Narayana Murty, S. L. Pramod, S. C. Sharma, Srinivasa R. Bakshi, "Effect of carbon nanotube dispersion on mechanical properties of aluminum-silicon alloy matrix composites", Journal of Materials Engineering and Performance, Volume 23(3), March 2014, p. 1028.
[11] Rohit Kumar, Ravi Ranjan, R. K. Thyagi, "Investigation on the mechanical performance of silicon carbide reinforced using stir cast and grain refined 6069 matrix composites", Journal of Machinery Manufacturing and Automation, Mar. 2013, Vol. 2, Issue 1, pp. 6-15.
[12] Rabindra Behera, S. Das, D. Chatterjee, G. Sutradhar, "Experimental investigation on the effect of reinforcement particles on the forgeability and the mechanical properties of aluminum metal matrix composites", Materials Sciences and Applications, 2010, 1, pp. 310-316.
[13] Sourav Kayal, R. Behera, T. Nandi, G. Sutradhar, "Solidification behavior of stir casting Al alloy metal matrix composites", International Journal of Applied Engineering Research, Dindigul, Volume 2, No. 2, 2011.
[14] T. Lyashen Ko, N. Lerman, A. Wolf, H. Harel, G. Marom, "Improved mode II delamination fracture toughness of composite materials by selective placement of protein-surface treated CNT", Composites Science and Technology 85 (2013), pp. 29-35.
[15] T. Laha, Y. Liu, A. Agarwal, "Carbon nanotube reinforced aluminum nanocomposite via plasma and high velocity oxy fuel spray forming", Journal of Nanoscience and Nanotechnology, Vol. 7, pp. 1-10, 2007.
[16] Prashanth Kumar, "Elements of Fracture Mechanics", Tata McGraw-Hill Education, 2009.


Leachate Pollution of Cocoyam and Pawpaw Crops Grown Around Anaekie Obiakor Illegal Dumpsite, Awka, Anambra State, Nigeria
Onwuka, Shalom U.1 Solomon, Bessie E. 2 Ikekpeazu, Felix Osita 3
(Environment Management, Nnamdi Azikwe University, Awka)1
(Environment Management, Nnamdi Azikwe University, Awka) 2
(Department of Architecture, Nnamdi Azikwe University, Awka) 3

ABSTRACT: Leachate pollution around the Anaekie Obiakor illegal dumpsite, Awka was studied. Leachate and plant samples (cocoyam and pawpaw) were collected from the waste dumpsite and a control site during the wet season and analysed for the concentration of ten heavy metals (Cd, Zn, Cu, Ni, Pb, As, Hg, Cr, Fe and Ag) using Atomic Absorption Spectrophotometry (AAS). The results were compared with the control and with the WHO/FAO standard. It was observed that all the heavy metals present in the plants were within the WHO/FAO safe limits with the exception of Pb, Hg, As and Cd. The study revealed that the presence of leachate on agricultural soil results in heavy metal accumulation in soils and bioaccumulation in plants. The work therefore recommends that the agricultural farmland on which this illegal waste dumping is going on should be recovered by stopping the illegal dumping and by using bioremediation and phytoremediation to extract the pollutants already accumulating in the soil.
Keywords: Cocoyam, Heavy metals, Leachate, Pawpaw, Waste Dumpsite
INTRODUCTION
Solid wastes that are disposed of on land are buried in the soil, especially in humid areas. The wastes are subjected to leaching by percolating rain water. The leaching process is accompanied by chemical reactions that tend to consume all available oxygen while releasing carbon dioxide, methane, ammonium, bicarbonate, chloride, sulphate and heavy metals. The liquid mix of these constituents is referred to as leachate. Leachate emanating from a dumpsite contains contaminants and toxic constituents derived from solid wastes as well as from liquid and industrial waste (Odukoya, Bamgbose and Arowolo, 2007; Oni, 1987). Leachate from dumpsites is of particular interest when it contains potentially toxic heavy metals. These metals are known to bioaccumulate in soil and have long persistence times through interaction with soil components, and consequently enter the food chain through plants or animals (Dosumu, Salami and Adekola, 2003). Household and industrial garbage may contain toxic materials such as lead, arsenic, copper, nickel, cadmium, mercury and iron from batteries, insect sprays, nail polish, cleaners, plastics, polyethylene or PVC (polyvinyl chloride) bottles and other assorted products. Inorganic chemical contamination of the environment is due essentially to anthropogenic sources, improper disposal and lack of awareness of the health risk created by such indiscriminate disposal.

The Awka economy is reasonably dependent on the crops that are grown in the region; these include yams, cassava, corn, kernels and palm oil (Egboka and Okpoko, 1984). Studies on refuse dumpsites in Awka revealed that refuse dumps can substantially increase the environmental burden of heavy metals in Awka Municipality (Nduka, Orisakwe, Ezenweke, Chendo and Ezenwa, 2008). Moreover, soil contamination can adversely affect human health when contaminated agricultural produce grown in and around dumpsites is ingested, or when infiltration and surface run-off contribute to ground and surface water contamination.


The uncontrolled input of heavy metals in soils is undesirable because once accumulated in the soil, the metals
are generally very difficult to remove (Smith, Hopmans and Cook, 1996). However, it is a common practice of
small farmers to make use of abandoned waste dumpsite for crop production due to lack of resources to acquire
fertilizers for getting meaningful harvest (Okoronkwo et al., 2005). Chaney (1980) and Smith et al (1996)
cautioned on the use of waste in crop production since it may be possible for heavy metals from waste to
accumulate in the soil and thereby enter the food chain and cause health hazard. To this effect, this study
assesses the leachate pollution of the agricultural produce of Anaekie Obiakor Illegal Dumpsite in Awka.
1.1 The Problem of the Study

In Awka, there are only two recognized waste dumpsites, namely the Agukwa and Umoeke waste dumpsites. Unfortunately, illegal waste dumping is going on in Anaekie Obiakor Lane on a large scale, and this has resulted in all kinds of environmental and health problems for the inhabitants of the area.
Crops such as cassava, plantain, vegetables, corn, yam, cocoyam, potatoes and fruits are cultivated in the area. Crops absorb whatever is present in the soil medium and use it for photosynthesis; therefore these hazardous pollutants, especially the absorbed heavy metals, become bioaccumulated in the roots, stems, fruits, grains and leaves of the crops (Fatoki, 2000) and are finally transferred to man through the food chain. Also, during the dry season, when the wind blows it carries dust particles emitted from the dumpsite onto the leaves of food crops planted around the area; plants around the dumpsite are observed to have a blanket deposit of fine particles on their leaf surfaces after rainfall.
This is worrisome to the researchers, considering that heavy metal pollution may constitute a hazard to the health of the inhabitants of Anaekie Obiakor, who grow and consume crops grown around the dumpsite. Heavy metals are not easily metabolized in the human body. According to Usman, Nda-Umar, Gobi, Abdullahi and Jonathan (2012), heavy metals become toxic in humans when they are not metabolized by the body and accumulate in the soft tissues, causing health problems. Therefore, there is a great need to assess the heavy metal content of the plants of the area in order to ascertain the degree of leachate pollution of the agricultural land.
1.2 Aims and Objectives

The aim of this work is to assess the leachate pollution of the selected agricultural products of Aniekie Obiakor
Illegal Dumpsite, Awka.
To achieve the above aim, the following objectives were set:

To determine the heavy metals concentration of some plants and leachate of Aniekie Obiakor dumpsite.
To ascertain the level of interaction between the leachate and the farm products
To suggest possible ways of managing or reclaiming contaminated soil.

1.3 Research Hypothesis

Ho: The leachate of Anaekie Obiakor Dumpsite does not interact with the agricultural products of the area.


Methodology

Experimental design was used to derive information used for the study. Laboratory analyses of the leachate
samples collected from Anaekie Obiakor dumpsite were carried out. This methodology was chosen because the
data needed for the study include heavy metals concentration of the leachate.

The roots of two plant samples, pawpaw and cocoyam, were collected from the farm which shares a boundary with the dumpsite, stored in polythene bags and taken to the laboratory for analysis. The sample obtained from ashing was dissolved with 50 cm3 of concentrated hydrochloric acid (HCl) and made up to 100 cm3 with distilled water; this was filtered into a plastic sample bottle using filter paper.
The leachate sample, also collected from the experimental site, was transferred into a beaker and 5 cm3 of nitric acid was added. The beaker with its content was placed on a hot plate and evaporated down to about 20 cm3. The beaker was cooled and another 5 cm3 of nitric acid was added. The heating was continued, and small portions of nitric acid were added until the solution appeared light coloured and clear. The beaker wall and watch glass were washed with water and the sample was filtered to remove any insoluble materials that could clog the atomizer. The volume was adjusted to 100 cm3 with distilled water (Ademoroti, 1996). The heavy metals studied were Cd, Hg, As, Pb, Cr, Fe, Ag, Zn, Ni and Cu (Table 2.1).
The heavy metal content of the leachate and plants was determined using an Atomic Absorption Spectrophotometer (AAS), Unican 969 instrument. The trace metals in the samples were determined with aliquots of the digest. The quantity of each trace metal in each sample was calculated by proportion using the standard curve method; the absorbance and concentration were read from a calibration curve drawn by the computer software attached to the AAS.
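As a minimal illustration of the standard curve method mentioned above, the sketch below fits a straight calibration line to absorbance readings of standards of known concentration and converts a sample absorbance into a concentration. The numerical absorbance values are invented placeholders, not measurements from this study.

import numpy as np

# Hypothetical calibration standards for one metal (concentration in mg/l, absorbance unitless)
std_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
std_abs  = np.array([0.002, 0.051, 0.104, 0.201, 0.398])

# Least-squares straight line: absorbance = slope * concentration + intercept
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def concentration_from_absorbance(absorbance):
    """Read a concentration off the calibration curve, as the AAS software does."""
    return (absorbance - intercept) / slope

sample_abs = 0.176   # absorbance measured for a digested sample (placeholder value)
print(f"Concentration: {concentration_from_absorbance(sample_abs):.2f} mg/l")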


Table 2.1 Concentration of Heavy Metals (mg/kg for plant roots, mg/l for leachate) in Leachate and Plant Roots from Anaekie Obiakor Dumpsite

Heavy Metals | Dumpsite Pawpaw | Dumpsite Cocoyam | Leachate | Control Pawpaw | Control Cocoyam
As    | 0.20 | 0.10 | 0.40 | 0.03 | 0.02
Cd    | 0.76 | 0.81 | 1.75 | 0.22 | 0.30
Zn    | 0.20 | 0.20 | 0.06 | ND   | ND
Pb    | 0.55 | 0.51 | 1.88 | 0.21 | 0.10
Cr    | 0.34 | 0.25 | 0.41 | 0.12 | 0.02
Hg    | 0.96 | 0.79 | 0.48 | 0.51 | 0.30
Cu    | 0.10 | ND   | 0.30 | ND   | ND
Ni    | 0.12 | 0.20 | 0.35 | 0.01 | 0.01
Fe    | 0.56 | 0.71 | 0.81 | 0.31 | 0.41
Ag    | ND   | ND   | ND   | ND   | ND
TOTAL | 3.79 | 3.57 | 6.44 | 1.41 | 1.16

ND: Not Detected
Source: Authors' Laboratory Analysis and Computation (2012)

DISCUSSION

3.1 DISTRIBUTION OF HEAVY METALS IN ANAEKIE OBIAKOR ILLEGAL DUMPSITE

Table 2.1 shows the distribution of heavy metals in the plant roots and leachate of the Anaekie Obiakor dumpsite. Silver (Ag) was not detected in any of the samples. The results revealed that the cocoyam and pawpaw plants from the dumpsite had the highest total concentrations of heavy metals when compared with the control. This implies that the influx of leachate through the soil is gradually affecting the agricultural products around the dumpsite through increasing heavy metal concentrations. The high levels of heavy metals in the dumpsite leachate and plants can be attributed to the huge amounts of waste dumped at the dumpsite.
The distribution pattern of heavy metals among the plant species was highly variable (Table 2.1), which may be attributed to differences in exposure time and in the absorption capacity of the plants for different heavy metals. Pb, Cd, Hg and Fe are the most abundant heavy metals in the leachate and plants of the dumpsite. There is therefore a need to alert residents of the area to the health effects of ingesting agricultural products planted around the dumpsite.

3.2 Test of interaction (relationship) between leachate and the Pawpaw plant in the dumpsite.
The analysis of the interaction (relationship) between the leachate and the Pawpaw plant in the dumpsite was
carried out using simple regression at 5% level of significance as displayed in Table 3.1.

Table 3.1 Summary of Regression Result for Hypothesis

Variables    | Coefficients | Std. Error | t-statistics | Probability
Constant     | 0.190        | 0.121      | 1.571        | 0.153
Leachate (X) | 0.294        | 0.135      | 2.183        | 0.050

Dependent Variable (Y): Pawpaw (polluted)
Source: Authors' Computation (2012)

Mathematically, the estimated regression model is:

Y = 0.190 + 0.294X
SEM = (0.121) (0.135)
t = (1.571) (2.183)
P = (0.153) (0.050)
R2 = 0.373
The coefficient of determination (R2) from the model result is 0.373, or 37.3%. This is quite low, indicating a weak positive relationship between the leachate and the agricultural product of the area (pawpaw). Since the p-value (0.050) is equal to 0.05, the null hypothesis is rejected and it is concluded that there is a weak positive interaction between the leachate and the pawpaw crop of the dumpsite. Although the interaction is weak, there is still a positive interaction between the leachate and the farm produce (pawpaw); the weak relationship may be the result of the short time of interaction between the two.
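The regressions of Tables 3.1 and 3.2 can be reproduced with standard statistical tooling. The Python sketch below regresses the dumpsite plant-root concentrations on the leachate concentrations from Table 2.1, treating ND as 0 and using one data point per metal, a choice that reproduces the reported coefficients; it is a reconstruction for illustration, not the authors' own computation code.

from scipy import stats

# Heavy metal concentrations from Table 2.1 (ND taken as 0), in the order
# As, Cd, Zn, Pb, Cr, Hg, Cu, Ni, Fe, Ag
leachate = [0.40, 1.75, 0.06, 1.88, 0.41, 0.48, 0.30, 0.35, 0.81, 0.0]
pawpaw   = [0.20, 0.76, 0.20, 0.55, 0.34, 0.96, 0.10, 0.12, 0.56, 0.0]
cocoyam  = [0.10, 0.81, 0.20, 0.51, 0.25, 0.79, 0.00, 0.20, 0.71, 0.0]

for name, y in (("Pawpaw", pawpaw), ("Cocoyam", cocoyam)):
    res = stats.linregress(leachate, y)   # ordinary least squares: y = intercept + slope * x
    print(f"{name}: Y = {res.intercept:.3f} + {res.slope:.3f}X, "
          f"R^2 = {res.rvalue**2:.3f}, p-value = {res.pvalue:.3f}")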
3.3 Test of Interaction (Relationship) between Leachate and the Cocoyam Plant in the Dumpsite

The analysis of the interaction (relationship) between the leachate and the Cocoyam plant in the dumpsite was
carried out using simple linear regression at 5% level of significance as displayed in Table 3.2.
Table 3.2 Summary of Regression Result for Hypothesis

Variables    | Coefficients | Std. Error | t-statistics | Probability
Constant     | 0.149        | 0.115      | 1.286        | 0.234
Leachate (X) | 0.324        | 0.129      | 2.511        | 0.036

Dependent Variable (Y): Cocoyam (polluted)
Source: Authors' Computation (2012)

Mathematically, the estimated regression model is:

Y = 0.149 + 0.324X
SEM = (0.115) (0.129)
t = (1.286) (2.511)
P = (0.234) (0.036)
R2 = 0.441
The coefficient of determination (R2) from the model result is 0.441, or 44.1%. This is fairly low, indicating a weak positive relationship between the leachate and the agricultural produce of the area (cocoyam). Since the p-value (0.036) < 0.05, Ho is rejected and we conclude that the leachate of Anaekie Obiakor dumpsite does interact with this agricultural produce of the area (cocoyam). Although the null hypothesis is rejected, it is still necessary to note that the positive relationship between the leachate and the cocoyam of the place is weak. This low coefficient of determination can be traced to the short time of interaction between the leachate and the cocoyam, which implies that a shorter time of interaction yields only a weak positive interaction.
Fig. 3.1 presents a positive trend between the leachate and the agricultural produce of the area (Pawpaw and
Cocoyam) in a graph. From the figure, the leachate has the highest heavy metal concentration, followed by the
pawpaw and cocoyam plant as also shown by the statistical analyses.
Fig. 3.1 Line Graph Displaying the Percentage Heavy Metal Concentration of the Leachate and the Agricultural Products of the Dumpsite (Pawpaw and Cocoyam)
3.3 STATISTICAL ANALYSIS

Hypothesis: From the calculations made, tested at the 5% significance level, the calculated p-values of 0.050 (pawpaw) and 0.036 (cocoyam) are less than or equal to the critical value of 0.05. This shows that there is a weak positive interaction between the leachate and the farm produce of Anaekie Obiakor dumpsite.
The implication of this is that the leachate actually pollutes the agricultural products of Anaekie Obiakor Dumpsite. The weak interaction could be a result of the short-term exposure of the plants to heavy metal pollution.
4 CONCLUSION AND RECOMMENDATIONS
The leachate and plants of Anaekie Obiakor Dumpsite contain heavy metals, and the leachate interacts with the agricultural products of the area.

The paper makes the following recommendations:


There is a need for satisfactory soil and food quality monitoring procedures so as to prevent the potential health hazards of heavy metal contamination of agricultural land.
The agricultural farmland on which this illegal waste dumping is going on should be recovered by stopping the illegal waste dumping and by using bioremediation and phytoremediation to extract the pollutants already accumulating in the soil.
REFERENCES:
[1] Odukoya, O.O., Bamgbose, O. and Arowolo, T.A. (2007). Heavy Metals Pollution from Leachates in Aquatic and Terrestrial Environments. Journal of Pure and Applied Science, pp. 467-472.
[2] Dosumu, O. O., Salami, N. and Adekola, F. A. (2003). Comparative Study of Trace Element Levels. Bull. Chem. Soc. Ethiop., 17(1), pp. 107-112.
[3] Egboka and Okpoko (1984). Gully Erosion in the Agulu-Nanka Region of Anambra State, Nigeria. Challenges in African Hydrology and Water Resources, Proceedings of the Harare Symposium, ISAHS Publications, 144 p.
[4] Smith, Hopmans and Cook (1996). Accumulation of Cr, Pb, Cu, Ni, Zn and Cd in Soil following Irrigation with Untreated Urban Effluents in Aust. Environ. Pollut., 94(3), pp. 317-323.
[5] Okoronkwo, N.E., Ano, A.O. and Odoemenam (2005). Environment, Health and Risk Assessment with the Use of an Abandoned Municipal Waste Dumpsite for Food Crop Production. African Journal of Biotechnology, 4(11), pp. 1217-1221.
[6] Chaney, R.L. (1980). Health Risk Associated with Toxic Metals in Municipal Sludge. In: Bitton, G. et al. (eds), Sludge - Health Risk of Land Application. Ann Arbor Sci. Publ., MI, pp. 59-83.
[7] Fatoki, O.S. (2000). Trace Zinc and Copper Concentration in Roadside Vegetation and Surface Soils: A Measurement of Local Atmospheric Pollution in Alice, South Africa. International Journal of Environmental Studies, 57, pp. 501-513.
[8] Usman, I.N., Gobi, S.N., Abdullahi, M. and Jonathan, Y. (2012). Assessment of Heavy Metal Species in Some Decomposed Municipal Solid Wastes in Bida, Niger State, Nigeria. Advances in Analytical Chemistry, 2(1), pp. 6-9.
[9] Egbokhare, Francis and Oyetade, Oluwole (2002). Harmonization and Standardization of Nigerian Languages. CASAS, p. 106.
[10] Egboka, G., Nwankwor, P. Orajaka and A. O. Ejiofor (1989). Principles and Problems of Environmental Pollution of Groundwater Resources with Case Examples from Developing Countries. Environmental Health Perspectives, 83, pp. 39-68.
[11] Sullivan, A. and Kreiger (1992). Hazardous Materials Toxicology: Clinical Principles of Environmental Health. 57, pp. 617-633.
[12] Ademoroti, C.M.A. (1996). Environmental Chemistry and Toxicology. Foludex Press Ltd., Ibadan.


Improved Reliable Based Node Classification Using Dissect Method in Mobile AD HOC Network
Mr. N. SENTHIL KUMARAN, M.C.A., M.Phil.,1 Assistant Professor & Head, Department of Computer Application, Vellalar College for Women, Erode, India. n.senthilkumaran@hotmail.com
R. MOHANA PRIYA,2 M.Phil Full-Time Research Scholar, Department of Computer Science, Vellalar College for Women, Erode, India. monarangan@gmail.com, 9789692346

Abstract: Certificate revocation mechanisms play an important role in securing a network [6]. Malicious nodes directly threaten the robustness of the network as well as the availability of nodes, so protecting legitimate nodes from malicious attacks must be considered in MANETs [1]. The main challenge for certificate revocation is to revoke the certificates of malicious nodes promptly and accurately. This paper additionally uses two concepts, a fixed window model and a sliding window model, of which the latter produces the best output with a slightly increased calculation overhead. In monitoring-based intrusion detection, each node monitors the forwarding behavior of its neighboring nodes. The proposed scheme is based upon an improved reliable based node classification scheme, which outperforms other techniques in terms of being able to quickly revoke attackers' certificates and recover falsely accused certificates.

Keywords: Mobile Ad Hoc Networks, False Positive, False Accusation, Malicious Node, Intrusion Detection, Certificate Authority
I. INTRODUCTION
An ad hoc network is a collection of mobile nodes forming a temporary network without the aid of any centralized administration [5]. A wireless ad hoc network is a decentralized type of wireless network which does not rely on a pre-existing infrastructure, such as routers in wired networks or access points in managed (infrastructure) wireless networks [3]. Instead, each node participates in routing by forwarding data for other nodes, so the determination of which nodes forward data is made dynamically on the basis of network connectivity. The dynamic infrastructure of MANETs and the absence of centralized administration make such networks more vulnerable to many attacks [3]. Mobile ad hoc networks have attracted much attention due to their mobility and ease of deployment [1]. Each device in a MANET is free to move independently in any direction and will therefore change its links to other devices frequently [5]. Each device must forward traffic unrelated to its own use, and therefore act as a router. The primary challenge in building a MANET is equipping each device to continuously maintain the information required to properly route traffic [5]. Such networks may operate by themselves or may be connected to the larger Internet.
Security is one crucial requirement for these network services [1]. The existing works maintain two different lists, a warning list and a blacklist, in order to guard against malicious nodes. Determining the threshold, however, remains a challenge: if it is much larger than the network degree, nodes that launch attacks cannot be revoked and can keep communicating with other nodes. The proposed work aims to improve the reliability with which warned nodes can take part in the certificate revocation process. To enhance accuracy, the proposed system uses a threshold based mechanism to assess and vindicate warned nodes as legitimate or not before recovering them, and the performance of the scheme is evaluated. The proposed system uses two concepts, a fixed window model and a sliding window model, of which the latter produces the best output with a slightly increased calculation overhead. In monitoring-based intrusion detection, each node monitors the forwarding behavior of its neighboring nodes; in most cases, a node only monitors its next hop in a route.
If any node in the network performs some malicious activity, the certificate of that node is revoked; in other words, that node is removed from all communication links.
The main contributions of this paper are:

To provide certificate revocation and secure communications.

To ensure that any attack is identified as soon as possible.

To verify that a public key belongs to an individual and to prevent tampering and forging.

To mitigate malicious attacks on the network.

The rest of this paper is organized as follows: Section 2 presents the related work. Section 3 describes the structure of the cluster based scheme and introduces the certificate revocation process. Section 4 gives a brief explanation of the proposed approach. Section 5 presents the performance evaluation of our scheme. Finally, Section 6 concludes the paper.

II. RELATED WORK


Mobile ad hoc network gets much attention because in mobile ad hoc network topologies are dynamically formed, it is
infrastructure less and mobile in nature[5]. Because of the mobility in nature it is difficult to provide security in the mobile ad hoc
networks. Provide security in mobile ad-hoc network we forming the cluster and providing the valid certificate to each node present in
that network[2]. By using that certificate nodes are securely communicating with each other. Suppose any node doing some malicious
activity then in that case certificate of that node is revoked. For revoking the certificate of misbehaving node there are mainly two
techniques: voting based technique and non-voting based technique [1]. The decision processes to satisfy the condition of certificate
revocation is, however, slow.
In monitoring-based intrusion detection, each node monitors the forwarding behavior of its neighboring nodes; in most cases, a node only monitors its next hop in a route. For monitoring purposes, a node keeps track of a window of the packets it recently sent to its next hop. Two types of window can be used for this: a fixed window or a sliding window.
To understand the similarities and differences between the fixed and sliding windows, assume that noise does not impact the overhearing of transmissions within a node's radio range. In such a scenario, with a fixed window a malicious node can drop up to L-1 packets out of every W on average without risking suspicion by neighbors, and the instantaneous drop rates can differ from this average. The sliding window approach is free of this deficiency, since in any W consecutive transmitted packets a malicious node may drop at most L-1 packets without risking suspicion by its neighbors. The state of sliding window-based monitoring is modelled using a discrete-time Markov chain.
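To make the comparison above concrete, the Python sketch below implements a sliding-window drop monitor that suspects a neighbor as soon as any W consecutive observations contain L or more un-overheard packets. The values of W and L and the encoding of observations are illustrative assumptions, not parameters fixed by the cited schemes.

from collections import deque

class SlidingWindowMonitor:
    """Suspects a neighbor when any W consecutive observations contain at least L drops."""

    def __init__(self, W=50, L=5):
        self.W, self.L = W, L
        self.window = deque(maxlen=W)   # 1 = overheard (forwarded), 0 = not overheard (dropped)

    def observe(self, forwarded):
        self.window.append(1 if forwarded else 0)
        drops = len(self.window) - sum(self.window)
        return drops >= self.L          # True means the neighbor is now suspected

# A burst of L drops is caught immediately, wherever it falls relative to window boundaries;
# a fixed, non-overlapping window of the same size would miss a burst that straddles a boundary.
monitor = SlidingWindowMonitor(W=50, L=5)
trace = [True] * 48 + [False] * 5 + [True] * 47   # 5 consecutive drops straddling packet 50
print("suspected:", any(monitor.observe(f) for f in trace))   # prints: suspected: True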

III MODEL OF THE CLUSTER-BASED SCHEME


In this section, we introduce the model of the cluster-based revocation scheme, which can quickly revoke attacker nodes upon receiving only one accusation from a neighboring node. The scheme maintains two different lists, a warning list and a blacklist, in order to prevent malicious nodes from further framing other legitimate nodes. Moreover, by adopting the clustering architecture, the cluster head can address false accusations and revive falsely revoked nodes.
CLUSTER FORMATION
A cluster-based mechanism [20] is used for the formation of clusters in the mobile ad hoc network. Each cluster has a cluster head (CH) and cluster members, and each group of clusters has one CA. When any node comes into the range of a cluster, it must first obtain a valid
certificate from the CA. These cluster heads are responsible for the routing process. A gateway is a node that has two or more cluster
heads. Each cluster head has several cluster members. Due to the clustered structure there will be less traffic, because route requests
will only be passed between cluster heads.

The steps for cluster formation are:

Step 1: A new node first obtains a valid certificate from the CA.
Step 2: The new node sends a CHP packet to its neighboring nodes.
Step 3: a) If the node does not get any reply within a time period T, it becomes a CH.
b) If any node sends a CMP packet to the new node, the new node becomes the CH; otherwise it becomes a cluster member.
Step 4: If a CM moves out of range, it waits for a CHP; if it receives a CHP within the time period T it remains a cluster member, otherwise it declares itself a cluster head. (A sketch of this election logic is given below.)
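A compact sketch of the election logic in the steps above follows, assuming the CHP/CMP packet names and timeout T from the description; the function name, states and the way replies are collected are hypothetical illustrations rather than the protocol's actual message handling.

from enum import Enum

class Role(Enum):
    CLUSTER_HEAD = 1
    CLUSTER_MEMBER = 2

def elect_role(has_certificate, replies_within_T):
    """Decide the role of a newly joining node from the replies received within time T.

    replies_within_T is the list of packet types ('CHP' or 'CMP') heard before the timeout.
    """
    if not has_certificate:
        raise ValueError("node must first obtain a valid certificate from the CA")  # Step 1
    # Step 2 happens outside this function: the node broadcasts a CHP packet to its neighbors.
    if not replies_within_T:          # Step 3a: no reply within T
        return Role.CLUSTER_HEAD
    if 'CMP' in replies_within_T:     # Step 3b: a CMP packet was received
        return Role.CLUSTER_HEAD
    return Role.CLUSTER_MEMBER

print(elect_role(True, []))           # no reply -> Role.CLUSTER_HEAD
print(elect_role(True, ['CHP']))      # an existing CH answered -> Role.CLUSTER_MEMBER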
CERTIFICATION AUTHORITY
The CA is a trusted third party with the authority to provide valid certificates to the nodes present in the cluster as well as to newly joining nodes. It is also responsible for maintaining the updated warned list and blacklist: the warned list contains the accuser nodes and the blacklist contains the accused nodes present in the network [20]. Another responsibility of the CA is to broadcast the updated warned list and blacklist to every node in the network.
CLASSIFICATION OF NODES
In our scheme, nodes are classified into three types:
1. Normal node / legitimate node: a secure node that communicates securely with other nodes. It also has the authority to revoke the certificate of an accused node.
2. Malicious node: a node that is insecure for communication. It performs malicious activity in the network and has no authority to revoke the certificate of any node in the network.
3. Warned node: a node that has accused another node is considered a warned node. It has no authority to revoke the certificates of malicious nodes; it is only permitted to communicate with other nodes under some restrictions.

CERTIFICATE REVOCATION
In a cluster, any node that wants to communicate with another node does so securely using its valid certificate. Suppose a node performs some malicious activity and this is detected by a neighboring node; the neighboring node first checks its local blacklist [8]. If the misbehaving node is already present in the blacklist, its certificate is directly revoked by the neighboring node using the non-voting based scheme [1]; otherwise the neighbor sends an accusation packet (AP) to the CA. The accusation packet contains all the information of the accuser, the accused node and the destination. On receiving this information, the CA checks the certificate of the accuser [18]; if it is valid, the CA puts the accuser into the WL and the accused node into the BL, and the lists are updated. Finally, the updated information is sent to all nodes present in the network [6]. If the accused node is malicious, it awaits certificate revocation; once the node is revoked, it is removed from the network.
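The accusation handling described above can be sketched as follows. The warning list (WL), blacklist (BL) and accusation packet fields follow the text, while the class, method names and data structures are illustrative assumptions rather than the paper's implementation.

class CertificationAuthority:
    """Holds the warning list (WL) and blacklist (BL) and processes accusation packets."""

    def __init__(self, valid_certificates):
        self.valid_certificates = set(valid_certificates)
        self.WL = set()   # accuser nodes
        self.BL = set()   # accused (to-be-revoked) nodes

    def handle_accusation(self, accuser, accused):
        # The accusation is accepted only if the accuser itself holds a valid certificate.
        if accuser not in self.valid_certificates:
            return False
        self.WL.add(accuser)
        self.BL.add(accused)
        self.broadcast_lists()        # disseminate the updated WL and BL to all nodes
        return True

    def broadcast_lists(self):
        # Placeholder for sending the updated lists to every node in the network.
        print(f"WL = {sorted(self.WL)}, BL = {sorted(self.BL)}")

ca = CertificationAuthority(valid_certificates={"A", "B", "C", "D"})
ca.handle_accusation(accuser="A", accused="M")   # node A reports misbehaving node M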
COPING WITH FALSE ACCUSATION
This scheme enables the CH to detect false accusations within the cluster. To find a falsely accused node in the cluster, the CH first monitors all the attacks carried out by the CMs [7]. After monitoring, it sends a recovery packet to the CA to recover the certificate of the falsely accused node. The CA verifies the recovery packet received from the CH and moves the falsely accused node from the BL to the WL.
from BL to WL. But here is another problem is the numbers of nodes in WL are increases and accuracy of revoking the certificate of
malicious node in the network decreases, because no of normal nodes in the network are less.
The recovery of a falsely accused node proceeds as follows (a sketch of the CA-side handling follows the steps):
Step 1: The CA disseminates the information of the WL and BL to all nodes in the network.
Step 2: CHs E and F update their WL and BL, and determine that node B was framed.
Step 3: E and F send a recovery packet to the CA to revive the falsely accused node B.
Step 4: Upon receiving the first recovery packet (e.g., from E), the CA removes B from the BL, holds B and E in the WL, and then disseminates the information to all the nodes.
Step 5: The nodes update their WL and BL to recover node B.
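A corresponding sketch of the CA-side recovery handling in Steps 4-5, again with assumed packet fields and helper methods rather than the paper's actual code:

```python
# CA processing of a recovery packet for a framed node (e.g. node B sent by CH E).
def ca_process_recovery(ca, recovery_packet):
    framed = recovery_packet["framed"]                # the falsely accused node, e.g. B
    sender = recovery_packet["src"]                   # the cluster head that vouches, e.g. E
    if ca.verify_recovery(recovery_packet) and framed in ca.black_list:
        ca.black_list.discard(framed)                 # remove B from the BL ...
        ca.warned_list.update({framed, sender})       # ... and hold B and E in the WL
        ca.broadcast({"WL": ca.warned_list, "BL": ca.black_list})   # Step 5: nodes update
```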

IV. PROPOSED SCHEME


The proposed system contains the complete implementation of the existing system. In addition, it extends the protocol model to consider the drop-detection packet. A Markov model is used to determine analytically the expected time for a monitoring node to suspect its next hop. Markov models are commonly used, for example, to analyze the expected time to encounter a bug in a software system.
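To make the Markov-model evaluation concrete, the sketch below computes the expected number of steps to reach an absorbing "suspect next hop" state using the standard fundamental-matrix relation; the transition probabilities shown are invented for illustration and are not taken from the paper.

```python
# Expected time to absorption in a discrete-time Markov chain: for the
# transient-to-transient block Q, the expected-steps vector t solves (I - Q) t = 1.
import numpy as np

# Example chain (assumed, not from the paper): states 0 and 1 are transient
# monitoring states, state 2 is the absorbing "suspect the next hop" state.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.0, 0.0, 1.0]])

Q = P[:2, :2]                                    # transient-to-transient block
t = np.linalg.solve(np.eye(2) - Q, np.ones(2))   # expected steps from each transient state
print(t)                                         # expected monitoring rounds before suspicion
```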
Fixed Window Protocol [FWP]
The fixed window protocol monitors packet-drop detection by checking the front and rear of each packet, so drops can be detected effectively. The sliding window protocol, in contrast, monitors packet-drop detection over a sequence of packets. The number of monitoring-induced false positives can be reduced by using higher threshold values, by allowing a node to exceed the not-overheard threshold multiple times before being labeled as suspicious, or both. This mitigates the false-positive problem in normal networks without attacks.
Sliding Window Protocol [SWP]
The sliding window protocol is a mechanism for reliable message delivery in a distributed setting where a sender and a receiver communicate over lossy channels. Any message delivery protocol has the same four basic components: a sender, a receiver, a channel from the sender to the receiver, and a channel from the receiver to the sender.
In addition to the four components we model, there are two external users. The user on the sender side inputs the data to be sent by the sender, and the user on the receiver side gets the data that the receiver delivers. The basic idea of the sliding window protocol is that a window of size n > 0 determines how many successive packets of data can be sent in the absence of a new acknowledgment. The window size may be fixed or may vary depending on the conditions of the different components of the protocol; in our model we assume that n varies nondeterministically. Each packet of data is sequentially numbered, so the sender is not allowed to send packet i + n before packet i has been acknowledged. Thus, if i is the sequence number of the packet most recently acknowledged by the receiver, there is a window of data numbered i + 1 to i + n which the sender can transmit. As successively higher-numbered acknowledgments are received, the window slides forward.
The acknowledgment mechanism is cumulative: if the receiver acknowledges packet k, where k >= i + 1, it means it has successfully received all packets up to k. Packet k is acknowledged by sending a request for packet k + 1. Typically, transmitted data is kept in a retransmission buffer until it has been acknowledged; thus, when k is acknowledged, packets with sequence number less than or equal to k are removed from the retransmission buffer. Packets that are not acknowledged are eventually retransmitted. For our modeling of the sliding window protocol we assume that sequence numbers are unbounded, and we do not assume that the channels are FIFO.
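A minimal sketch of such a sliding-window sender is given below. The channel object, the packet layout, and the use of a fixed window size n (rather than the nondeterministically varying one assumed in the model) are simplifying assumptions made only for the example.

```python
# Sliding-window sender: at most n unacknowledged packets may be outstanding,
# and a cumulative ACK for packet k releases everything up to k.
from collections import OrderedDict

class SlidingWindowSender:
    def __init__(self, channel, window_size):
        self.channel = channel
        self.n = window_size            # window size (fixed here for simplicity)
        self.next_seq = 0               # sequence number of the next new packet
        self.base = 0                   # lowest unacknowledged sequence number
        self.retransmission_buffer = OrderedDict()

    def send(self, data):
        # The sender may not send packet i + n before packet i has been acknowledged.
        if self.next_seq >= self.base + self.n:
            return False                # window full, caller must wait for ACKs
        self.retransmission_buffer[self.next_seq] = data
        self.channel.send({"seq": self.next_seq, "data": data})
        self.next_seq += 1
        return True

    def on_ack(self, k):
        # Cumulative ACK: a request for packet k+1 acknowledges all packets <= k.
        for seq in [s for s in self.retransmission_buffer if s <= k]:
            del self.retransmission_buffer[seq]
        self.base = max(self.base, k + 1)

    def retransmit_unacked(self):
        # Packets that are never acknowledged are eventually retransmitted.
        for seq, data in self.retransmission_buffer.items():
            self.channel.send({"seq": seq, "data": data})
```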

V. PERFORMANCE EVALUATION
The experiments were conducted using the cluster-based certificate revocation scheme. The result of the proposed Markov chain model is discussed and compared with the existing routing protocol. The performance of the work is evaluated in terms of the packet-drop threshold, attacker detection, and throughput.
Under the packet-drop threshold, data is sent normally from source to destination. The existing delivery/drop packet behaviour is compared with the proposed Markov chain model, hop by hop, for the multi-hop fixed window protocol [MFWP] and the multi-hop sliding window protocol [MSWP] in mobile ad hoc network communication. The packet drops were calculated; their values are greater under the fixed and sliding window protocols, but under the multi-hop fixed and sliding window protocols the drop count is lower. With a static threshold [1], the accuracy of releasing normal nodes from the WL is low. For the performance of cluster-based certificate revocation with a dynamic threshold, we first design a simplified mechanism to determine the number of neighboring nodes for any given node. Within time Tv, the given node crosses through an area and meets a number of neighbors N. Since mobile nodes are assumed to be uniformly distributed in the network, we may approximate N in terms of the transmission range r of a node, its velocity v, and the node density p (a hedged reconstruction of this expression is given below). Based on the obtained number of neighboring nodes N, we can confirm the value of the threshold K. By this mechanism it is possible to remove a normal node from the WL and allow that normal node to take part in the revocation process.
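The expression itself did not survive in the source; a plausible reconstruction, assuming the usual swept-area estimate for a node with transmission range r moving at velocity v for time Tv through a network of node density p, is:

```latex
% Hedged reconstruction (an assumption, not recovered from the source):
% the node's coverage disc plus the area swept during T_v, times the node density.
N \approx p \left( \pi r^{2} + 2\, r\, v\, T_{v} \right)
```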
A comparison of the experimental results of the existing and proposed systems is given in Figs 1.1 and 1.2. Fig 1.1 represents the experimental result for the existing system: the time (in seconds) taken to find malicious nodes and revoke their certificates. Fig 1.2 represents the corresponding experimental result for the proposed system.

FIG 1.1 FWP-SWP - Number of Attackers: revocation time (sec) versus number of malicious nodes for the existing FWP-SWP scheme.

FIG 1.2 MFWP-MSWP - Number of Attackers: revocation time (sec) versus number of malicious nodes for the proposed MFWP-MSWP scheme.

ACKNOWLEDGMENT
I am greatly obliged to Dr. (Mrs.) S. K. JAYANTHI, Associate Professor and Head, Department of Computer Science, Vellalar College for Women (Autonomous), Erode-12, for providing me all the facilities and the inspiration to complete this research paper. I take this golden opportunity to express my profound and grateful thanks to my beloved guide, Mr. N. SENTHIL KUMARAN, M.C.A., M.Phil., Assistant Professor & Head, Department of Computer Applications, Vellalar College for Women (Autonomous), Erode-12, for his exemplary guidance and help, with valuable and creative suggestions and constant encouragement for the successful completion of the paper.

VI.CONCLUSION
In this paper, we have enhanced our previous clustering-based certificate revocation scheme, which allows for fast certificate revocation. In order to address the issue of the number of normal nodes being gradually reduced, we have developed a threshold-based mechanism to restore the accusation function of nodes in the WL. The effectiveness of our proposed certificate revocation scheme in mobile ad hoc networks has been demonstrated through extensive results. The proposed work detects attackers and improves the certificate authority in the mobile ad hoc network; in this model, several certificate issues are solved for communication between nodes and cluster members. The proposed MFWP and MSWP protocol models applied to the mobile ad hoc network improve and make effective the management of the BL and WL during the communication process. In contrast to the existing system, we propose an improved reliability-based node classification scheme that combines the merits of both voting-based and non-voting-based mechanisms to revoke the certificates of malicious nodes and solve the problem of false accusation.

REFERENCES:
[1] Wei Liu, Hiroki Nishiyama, Nirwan Ansari, Jie Yang, and Nei Kato, Cluster Based Certificate Revocation with Vindication
Capability for Mobile Ad Hoc Network, IEEE Transactions on Parallel and Distributed Systems, Vol.24, No.2, Feb 2013.
[2] Pratik Gite, Sanjay Thakur, Different Security issues over MANET, International Journal of Computer Science Engineering & Information Technology Research, Vol.3, Issue 1, March 2013.

[3] Gagandeep, Aashima, Pawan Kumar, Analysis of Different Security Attacks in MANETs on Protocol Stack A-Review,
International Journal of Engineering and Advanced Technology (IJEAT), Vol.1, Issue-5, June 2012.
[4] Scalable Network Technologies: Qualnet, http://www.scalable-networks.com, 2012.
[5] Priyanka Goyal, Vinti Parmer, Rahul Rishi, MANET: Vulnerabilities, Challenges, Attacks, Applications, International Journal
of Computational Engineering and Management, Vol.11, Jan 2011.
[6] W. Liu, H. Nishiyama, N. Ansari, and N. Kato, A Study on Certificate Revocation in Mobile Ad Hoc Network, Proc. IEEE
Intl Conf. Comm. (ICC), June 2011.
[7] K. Park, H. Nishiyama, N. Ansari, and N. Kato, Certificate Revocation to Cope with False Accusations in Mobile Ad Hoc
Networks, Proc. IEEE 71st Vehicular Technology Conf. (VTC 10), May 16-19, 2010.
[8] Claude Crépeau and Carlton R. Davis, A Certificate Revocation Scheme for Wireless Ad Hoc Networks, School of Computer Science, McGill University, Montreal, QC, Canada H3A 2A7.

[9] P. Sakarindr and N. Ansari , Security Services in Group Communications Over Wireless Infrastructure, Mobile Ad Hoc, and
Wireless Sensor Networks, IEEE Wireless Comm., vol. 14, no. 5, pp. 8-20, Oct. 2007.
[10] A.M. Hegland, E. Winjum, C. Rong, and P. Spilling, A Survey of Key Management in Ad Hoc Networks, IEEE Comm.
Surveys and Tutorials, vol. 8, no. 3, pp. 48-66, Third Quarter 2006.
[11] H. Yang, J. Shu, X. Meng, and S. Lu, SCAN: Self-Organized Network-Layer Security in Mobile Ad Hoc Networks, IEEE J.
Selected Areas in Comm., vol. 24, no. 2, pp. 261-273, Feb. 2006
[12] A. Shamir, Identity-Based Cryptosystems and Signature Schemes, Proc. CRYPTO 84, 1984, pp. 47-53.
[13] Kong, X. Hong, Y. Yi, J.-S. Park, J. Liu, and M. Gerla, A Secure Ad-hoc Routing Approach Using Localized Self-Healing
Communities, Proc. Sixth ACM Intl Symp. Mobile Ad hoc Networking and Computing , pp. 254-265. 2005.
[14] P. Yi, Z. Dai, Y. Zhong, and S. Zhang, Resisting Flooding Attacks in Ad Hoc Networks, Proc. Intl Conf. Information
Technology: Coding and Computing, 2005.
[15] H. Yang, H. Luo, F. Ye, S. Lu, and L. Zhang, Security in Mobile Ad hoc Networks: Challenges and Solutions, IEEE Wireless
Comm., vol. 11, no. 1, pp. 38-47, Feb. 2004
[16] J. Newsome, E. Shi, D. Song, and A. Perrig, The Sybil Attack in Sensor Network: Analysis & Defenses, Proc. Third Intl
Symp. Information Processing in Sensor Networks, pp. 259-268, 2004.

[17] C. Bettstetter, G. Resta, and P. Santi, The Node Distribution of the Random Waypoint Mobility Model for Wireless Ad Hoc
Net-works, IEEE Trans. Mobile Computing, vol. 2, no. 3, pp. 257-269, July-Sept. 2003.
[18] C. Gentry, Certificate-Based Encryption and the Certificate Revocation Problem, EUROCRYPT: Proc. 22nd Intl Conf. Theory and Applications of Cryptographic Techniques, pp. 272-293, 2003.
[19] IEEE-SA Standards Board, IEEE Std. 802.15.4, IEEE, 2003.
[20] L. Zhou, F.B. Schneider, and R. Van Renesse, COCA: A Secure Distributed Online Certification Authority, ACM Trans. Computer Systems, vol. 20, no. 4, pp. 329-368, Nov. 2002.


Implementation of Fast Normalized Cross Correlation Algorithm for


Large Scale Image Search
MR.B.MATHANKUMAR, MRS.S.JEYANTHI
DEPARTMENT OF MCA, PSNA COLLEGE OF ENGG & TECH, DINDIGUL, INDIA,MATHAN81@GMAIL.COM
Department of CSE, PSNA College of Engg & Tech, Dindigul, India,sk.jeya@gmail.com

Abstract - In recent years, there has been growing interest in mapping visual features into compact binary codes for applications in large-scale image search. Encoding high-dimensional data as compact binary codes reduces the memory cost of storage; besides, it benefits computational efficiency since similarity can be measured efficiently by the Hamming distance. The correlation between two signals (cross correlation) is a standard approach to feature detection as well as a component of more sophisticated techniques. Unfortunately, the normalized form of correlation (correlation coefficient) preferred in template matching does not have a correspondingly simple and efficient frequency-domain expression. For this reason normalized cross-correlation has traditionally been computed in the spatial domain. Due to the computational cost of spatial-domain convolution, several inexact but fast spatial-domain matching methods have also been developed. To obtain normalized cross correlation from transform-domain convolution, the Fast Normalized Cross Correlation algorithm is introduced. This new algorithm provides an order of magnitude speedup over spatial-domain computation of normalized cross correlation.

Keywords -Image, Cross correlation, normalized, domain, feature, spatial domain, coefficient, pattern
INTRODUCTION
In recent years, content-based image search has attracted more and more attention in the multimedia and computer vision communities [1][19]. Many approaches are based on the Bag-of-Visual-Words (BoVW) model [8] and adopt invariant local features for image representation. In the BoVW model, an image is represented by a visual word vector. The visual words are usually generated by clustering the extracted local features; some widely used unsupervised clustering algorithms are standard k-means and hierarchical k-means (HKM) [7].
The correlation between two signals (cross correlation) is a standard approach to feature detection [6,7] as well as a
component of more sophisticated techniques (e.g. [3]). Textbook presentations of correlation describe the convolution theorem and the
attendant possibility of efficiently computing correlation in the frequency domain using the fast Fourier transform. Unfortunately the
normalized form of correlation (correlation coefficient) preferred in template matching does not have a correspondingly simple and
efficient frequency domain expression. For this reason normalized cross-correlation has been computed in the spatial domain (e.g., [7],
p. 585). Due to the computational cost of spatial domain convolution, several inexact but fast spatial domain matching methods have
also been developed [2]. This paper describes a recently introduced algorithm [10] for obtaining normalized cross correlation from
transform domain convolution. The new algorithm in some cases provides an order of magnitude speedup over spatial domain
computation of normalized cross correlation.
Since we are presenting a version of a familiar and widely used algorithm no attempt will be made to survey the literature on
selection of features, whitening, fast convolution techniques, extensions, alternate techniques, or applications. The literature on these
topics can be approached through introductory texts and handbooks [16,7,13] and recent papers such as [1,19]. Nevertheless, due to
the variety of feature tracking schemes that have been advocated it may be necessary to establish that normalized cross-correlation
remains a viable choice for some if not all applications.
The rest of the paper is organized as follows. Section II reviews the pipeline of a content-based large-scale image search system with local features. Section III discusses the proposed Fast Normalized Cross Correlation algorithm. Our transform-domain computation is introduced in Section IV. Phase correlation and normalization are given in Section V and Section VI. Finally, the conclusion is drawn in Section VII.


RELATED WORKS
In content-based large-scale image retrieval with local features, the Bag-of-Visual-Words (BoVW) model has been widely
adopted. Generally, most approaches follow a pipeline which consists of several key steps, including feature representation, feature
quantization, image indexing, image scoring and post-processing. In this section, we make a review of the pipeline and discuss related
works in each step. Invariant local features have been popularly adopted for image representation owing to their invariance to various
transformations and robustness to occlusions and background changes. The extraction of local features usually involves two steps,
namely interest point detection and feature description. The interest point detection identifies some key points that have high
repeatability over various changes. The commonly used interest point detectors include Difference of Gaussian (DOG) [5], Hessian
affine [25], and MSER [26]. Then, a descriptor is constructed to capture the visual appearance of the local region corresponding to an
interest point. The descriptor is usually designed to be invariant to rotation and scale changes and robust to affine distortion and the
addition of noise, etc. Some commonly used descriptors include FNCC, SURF [5], [27] and some recently developed descriptors [28]
[30]. To obtain a compact representation of an image for scalable indexing and retrieval, after extracting the local features, a visual
vocabulary is usually built and the local features are quantized to visual words. The visual vocabulary is generated by unsupervised
clustering algorithms, such as hierarchical k-means (HKM) [7], approximate k-means (AKM) [20]. With the visual vocabulary
defined, each local feature is quantized to a visual word in the visual vocabulary. Usually, to speed up quantization process,
approximate nearest neighbor (ANN) approaches are adopted, such as k-d tree [31], vocabulary tree [7]. In [24], a scalar quantization
approach is proposed to suppress the quantization error. After the features are quantized to visual words, an image can be represented
by a visual word vector [8]. The similarity between two images can be measured by the L1 or L2 norm distance between their visual
word vectors. And inspired by the success of text search engines, the inverted file structure has been successfully used for large-scale
image search [8],[10], [20], [32]. In the inverted file structure, each visual word is followed by a list of entries. Each entry records the
image ID and some other clues to verify the feature matching, such as geometric clues in [20] and [32], binary signatures in [33]. As
the BoVW model ignores the spatial context information between local features, some researchers propose to conduct geometric
verification to the ranked candidates list returned by the BoVW model [20], [32], [34], [35]. In [20], a transformation model between
the query image and the candidate image is estimated and those matches that do not fit the model well are filtered out. In [32], a
verification map is built to filter out the false matches.
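As a hedged illustration of the inverted file structure described above (the class and field names are invented for the example, not taken from any particular system), each visual word points to a list of entries recording the images that contain it, so only the posting lists of the query's visual words need to be scanned:

```python
# Toy inverted file for BoVW retrieval: one posting list per visual word, each
# entry recording an image ID (extra clues such as binary signatures or
# geometric data could be stored alongside each entry).
from collections import defaultdict

class InvertedFile:
    def __init__(self):
        self.postings = defaultdict(list)          # visual word id -> [image ids]

    def index_image(self, image_id, visual_words):
        for w in set(visual_words):
            self.postings[w].append(image_id)

    def query(self, visual_words):
        scores = defaultdict(int)                  # image id -> number of shared words
        for w in set(visual_words):
            for image_id in self.postings[w]:
                scores[image_id] += 1
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

index = InvertedFile()
index.index_image("img1", [3, 17, 42])
index.index_image("img2", [17, 99])
print(index.query([17, 42]))    # img1 shares two words, img2 shares one
```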

Fig. 1. Two examples of matches obtained with the 64-bit binary FNCC code generated by the proposed algorithm. The matched features are connected by solid red lines. (a) One example from the PDup dataset [32]. (b) One example from the UKBench dataset [7].
Unlike the above geometric verification approaches, the bag-of-spatial-features [10] explicitly embeds the spatial context into the representation based on visual words. Some works explore the semantic information of images [36], [37]. In [36], a sparse graph-based semi-supervised learning approach is used to infer image semantic concepts from community-contributed images and their associated tags.
associated tags. In [37], the semantic gap measure is introduced into the active learning process to handle the user interaction.


FAST NORMALIZED CROSS CORRELATION ALGORITHM


Template Matching by Cross-Correlation: The use of cross-correlation for template matching is motivated by the distance measure (squared Euclidean distance)

$d^2_{f,t}(u,v) = \sum_{x,y} [f(x,y) - t(x-u, y-v)]^2$

(where $f$ is the image and the sum is over $x, y$ under the window containing the feature $t$ positioned at $u, v$). In the expansion of $d^2$,

$d^2_{f,t}(u,v) = \sum_{x,y} [f^2(x,y) - 2 f(x,y)\, t(x-u, y-v) + t^2(x-u, y-v)]$ ..........(1)

the term $t^2(x-u, y-v)$ is constant. If the term $f^2(x,y)$ is approximately constant, then the remaining cross-correlation term

$c(u,v) = \sum_{x,y} f(x,y)\, t(x-u, y-v)$

is a measure of the similarity between the image and the feature.

There are several disadvantages to using (1) for template matching:
- If the image energy $f^2(x,y)$ varies with position, matching using (1) can fail. For example, the correlation between the feature and an exactly matching region in the image may be less than the correlation between the feature and a bright spot.
- The range of $c(u,v)$ is dependent on the size of the feature.
- Eq. (1) is not invariant to changes in image amplitude such as those caused by changing lighting conditions across the image sequence.

The correlation coefficient overcomes these difficulties by normalizing the image and feature vectors to unit length, yielding a cosine-like correlation coefficient

$\gamma(u,v) = \dfrac{\sum_{x,y} [f(x,y) - \bar f_{u,v}]\,[t(x-u, y-v) - \bar t]}{\left\{ \sum_{x,y} [f(x,y) - \bar f_{u,v}]^2 \, \sum_{x,y} [t(x-u, y-v) - \bar t]^2 \right\}^{0.5}}$ .............(2)

where $\bar t$ is the mean of the feature and $\bar f_{u,v}$ is the mean of $f(x,y)$ in the region under the feature. We refer to (2) as normalized cross-correlation.
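A minimal NumPy sketch of Eq. (2), evaluating the coefficient at a single offset (u, v); the array layout (rows indexed by y, columns by x) and the function name are implementation choices made for the example, not part of the paper.

```python
# Direct evaluation of the normalized cross-correlation coefficient (Eq. 2)
# at one offset (u, v); f is the image, t the feature/template.
import numpy as np

def ncc_at(f, t, u, v):
    h, w = t.shape
    window = f[v:v + h, u:u + w].astype(float)   # image region under the feature
    tz = t.astype(float) - t.mean()              # zero-mean feature
    wz = window - window.mean()                  # zero-mean image window
    denom = np.sqrt((wz ** 2).sum() * (tz ** 2).sum())
    return (wz * tz).sum() / denom if denom > 0 else 0.0
```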

Feature Tracking Approaches and Issues: It is clear that normalized cross-correlation (NCC) is not the ideal approach to feature tracking since it is not invariant with
respect to imaging scale, rotation, and perspective distortions. These limitations have been addressed in various schemes including
some that incorporate NCC as a component. This paper does not advocate the choice of NCC over alternate approaches. Rather, the
following discussion will point out some of the issues involved in various approaches to feature tracking, and will conclude that NCC
is a reasonable choice for some applications.
SSDA. The basis of the sequential similarity detection algorithm (SSDA) [2] is the observation that full precision is only needed
near the maximum of the cross-correlation function, while reduced precision can be used elsewhere. The authors of [2] describe
several ways of implementing `reduced precision'. An SSDA implementation of cross-correlation proceeds by computing the
summation in (1) in random order and uses the partial computation as a Monte Carlo estimate of whether the particular match location
will be near a maximum of the correlation surface. The computation at a particular location is terminated before completing the sum if
the estimate suggests that the location corresponds to a poor match.
The SSDA algorithm is simple and provides a significant speedup over spatial domain cross-correlation. It has the disadvantage
that it does not guarantee finding the maximum of the correlation surface. SSDA performs well when the correlation surface has
shallow slopes and broad maxima. While this condition is probably satisfied in many applications, it is evident that images containing
arrays of objects (pebbles, bricks, other textures) can generate multiple narrow extrema in the correlation surface and thus mislead an
SSDA approach. A secondary disadvantage of SSDA is that it has parameters that need to be determined (the number of terms used to
form an estimate of the correlation coefficient, and the early termination threshold on this estimate).
Gradient Descent Search : If it is assumed that feature translation between adjacent frames is small then the translation (and
parameters of an affine warp in [19]) can be obtained by gradient descent [12]. Successful gradient descent search requires that the
interframe translation be less than the radius of the basin surrounding the minimum of the matching error surface. This condition may
be satisfied in many applications. Images sequences from hand-held cameras can violate this requirement, however: small rotations of
the camera can cause large object translations. Small or (as with SSDA) textured templates result in matching error surfaces with
narrow extrema and thus constrain the range of interframe translation that can be successfully tracked. Another drawback of gradient
descent techniques is that the search is inherently serial, whereas NCC permits parallel implementation.
Snakes: Snakes (active contour models) have the disadvantage that they cannot track objects that do not have a definable contour.
Some ``objects'' do not have a clearly defined boundary (whether due to intrinsic fuzzyness or due to lighting conditions), but
nevertheless have a characteristic distribution of color that may be trackable via cross-correlation. Active contour models address a
more general problem than that of simple template matching in that they provide a representation of the deformed contour over time.
Cross-correlation can track objects that deform over time, but with obvious and significant qualifications that will not be discussed
here. Cross-correlation can also easily track a feature that moves by a significant fraction of its own size across frames, whereas this
amount of translation could put a snake outside of its basin of convergence.
Wavelets and other multi-resolution schemes : Although the existence of a useful convolution theorem for wavelets is still a
matter of discussion (e.g., [11]; in some schemes wavelet convolution is in fact implemented using the Fourier convolution theorem),
efficient feature tracking can be implemented with wavelets and other multi-resolution representations using a coarse-to-fine multiresolution search. Multi-resolution techniques require, however, that the images contain sufficient low frequency information to guide
the initial stages of the search. As discussed in section 6, ideal features are sometimes unavailable and one must resort to poorly
defined ``features'' that may have little low-frequency information, such as a configuration of small spots on an otherwise uniform
surface.
Each of the approaches discussed above has been advocated by various authors, but there are fewer comparisons between
approaches. Reference [19] derives an optimal feature tracking scheme within the gradient search framework, but the limitations of
this framework are not addressed. An empirical study of five template matching algorithms in the presence of various image
distortions [4] found that NCC provides the best performance in all image categories, although one of the cheaper algorithms performs
nearly as well for some types of distortion. A general hierarchical framework for motion tracking is discussed in [1]. A correlation
based matching approach is selected though gradient approaches are also considered.
Despite the age of the NCC algorithm and the existence of more recent techniques that address its various shortcomings, it is
probably fair to say that a suitable replacement has not been universally recognized. NCC makes few requirements on the image
sequence and has no parameters to be searched by the user. NCC can be used `as is' to provide simple feature tracking, or it can be
used as a component of a more sophisticated (possibly multi-resolution) matching scheme that may address scale and rotation
invariance, feature updating, and other issues. The choice of the correlation coefficient over alternative matching criteria such as the
sum of absolute differences has also been justified as maximum-likelihood estimation [18]. We acknowledge NCC as a default choice
in many applications where feature tracking is not in itself a subject of study, as well as an occasional building block in vision and
pattern recognition research (e.g. [3]). A fast algorithm is therefore of interest.
TRANSFORM DOMAIN COMPUTATION
Consider the numerator in (2) and assume that we have images $f'(x,y) = f(x,y) - \bar f_{u,v}$ and $t'(x,y) = t(x,y) - \bar t$ in which the mean value has already been removed:

$\gamma'(u,v) = \sum_{x,y} f'(x,y)\, t'(x-u, y-v)$ ..... (3)
For a search window of size $M^2$ and a feature of size $N^2$, (3) requires approximately $N^2 (M - N + 1)^2$ additions and $N^2 (M - N + 1)^2$ multiplications.

Eq. (3) is a convolution of the image with the reversed feature $t'(-x,-y)$ and can be computed by

$\mathcal{F}^{-1}\{ \mathcal{F}(f')\, \mathcal{F}^{*}(t') \}$ ..... (4)

where $\mathcal{F}$ is the Fourier transform. The complex conjugate accomplishes reversal of the feature via the Fourier transform property $\mathcal{F}[f^{*}(-x)] = \mathcal{F}^{*}(\omega)$.

Implementations of the FFT algorithm generally require that $f'$ and $t'$ be extended with zeros to a common power of two. The complexity of the transform computation of (3) is then $12 M^2 \log_2 M$ real multiplications and $18 M^2 \log_2 M$ real additions/subtractions. When $M$ is much larger than $N$ the complexity of the direct 'spatial' computation (3) is approximately $N^2 M^2$ multiplications/additions, and the direct method is faster than the transform method. The transform method becomes relatively more efficient as $N$ approaches $M$ and with larger $M$, $N$.
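A hedged NumPy sketch of evaluating the numerator (3) through the transform-domain expression (4); the padding strategy shown is one possible realization of the zero extension mentioned above, not necessarily the one used by the authors.

```python
# Transform-domain computation of the NCC numerator: correlate the image with
# the zero-mean feature via the FFT (Eq. 4). Multiplying by the conjugate of
# the feature spectrum reverses the feature, turning convolution into correlation.
import numpy as np

def correlation_numerator(f, t):
    t = t.astype(float) - t.mean()                  # zero-mean feature t'
    shape = (f.shape[0] + t.shape[0] - 1,           # common zero-padded size
             f.shape[1] + t.shape[1] - 1)
    F = np.fft.rfft2(f.astype(float), shape)
    T = np.fft.rfft2(t, shape)
    # Correlation surface over the zero-padded frame (offsets wrap cyclically).
    return np.fft.irfft2(F * np.conj(T), shape)
```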
Fast Convolution
There are several well known fast convolution algorithms that do not use transform domain computation [13]. These approaches
fall into two categories: algorithms that trade multiplications for additional additions, and approaches that find a lower point on the
O(N2) characteristic of (one-dimensional) convolution by embedding sections of a one-dimensional convolution into separate
dimensions of a smaller multidimensional convolution. While faster than direct convolution these algorithms are nevertheless slower
than transform domain convolution at moderate sizes [13] and in any case they do not address computation of the denominator of (2).
PHASE CORRELATION
Because (4) can be efficiently computed in the transform domain, several transform domain methods of approximating the image
energy normalization in (2) have been developed. Variation in the image energy under the template can be reduced by high-pass
filtering the image before cross-correlation. This filtering can be conveniently added to the frequency domain processing, but selection
of the cutoff frequency is problematic--a low cutoff may leave significant image energy variations, whereas a high cutoff may remove
information useful to the match.
A more robust approach is phase correlation [9]. In this approach the transform coefficients are normalized to unit magnitude
prior to computing correlation in the frequency domain. Thus, the correlation is based only on phase information and is insensitive to
changes in image intensity. Although experience has shown this approach to be successful, it has the drawback that all transform
components are weighted equally, whereas one might expect that insignificant components should be given less weight. In principle
one should select the spectral pre-filtering so as to maximize the expected correlation signal-to-noise ratio given the expected second
order moments of the signal and signal noise. This approach is discussed in [16] and is similar to the classical matched filtering
random signal processing technique. With typical image correlation the best pre-filtering is approximately Laplacian rather than a pure
whitening.
NORMALIZING
Examining again the numerator of (2), we note that the mean of the feature can be precomputed, leaving

$\gamma'(u,v) = \sum_{x,y} f(x,y)\, t'(x-u, y-v) - \bar f_{u,v} \sum_{x,y} t'(x-u, y-v)$

Since $t'$ has zero mean, and thus zero sum, the term $\bar f_{u,v} \sum_{x,y} t'(x-u, y-v)$ is also zero, so the numerator of the normalized cross-correlation can be computed using (4).
Examining the denominator of (2), the length of the feature vector can be precomputed in approximately $3N^2$ operations (small compared to the cost of the cross-correlation), and in fact the feature can be pre-normalized to length one. The problematic quantities are those in the expression $\sum_{x,y} [f(x,y) - \bar f_{u,v}]^2$. The image mean and local (RMS) energy must be computed at each $(u,v)$, i.e. at $(M-N+1)^2$ locations, resulting in almost $3N^2 (M-N+1)^2$ operations (counting add, subtract, multiply as one operation each). This computation is more than is required for the direct computation of (3), and it may considerably outweigh the computation indicated by (4) when the transform method is applicable. A more efficient means of computing the image mean and energy under the feature is desired.
These quantities can be efficiently computed from tables containing the integral (running sum) of the image and image square over the search area, i.e.,

$s(u,v) = f(u,v) + s(u-1,v) + s(u,v-1) - s(u-1,v-1)$
$s^2(u,v) = f^2(u,v) + s^2(u-1,v) + s^2(u,v-1) - s^2(u-1,v-1)$

with $s(u,v) = s^2(u,v) = 0$ when either $u, v < 0$. The energy of the image under the feature positioned at $(u,v)$ is then

$e_f(u,v) = s^2(u+N-1,\, v+N-1) - s^2(u-1,\, v+N-1) - s^2(u+N-1,\, v-1) + s^2(u-1,\, v-1)$

and similarly for the image sum under the feature.

The problematic quantity $\sum_{x,y} [f(x,y) - \bar f_{u,v}]^2$ can now be computed with very few operations since it expands into an expression involving only the image sum and sum squared under the feature. The construction of the tables requires approximately $3M^2$ operations, which is less than the cost of computing the numerator by (4) and considerably less than the $3N^2 (M-N+1)^2$ required to compute $\sum_{x,y} [f(x,y) - \bar f_{u,v}]^2$ at each $(u,v)$.
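A short sketch of this running-sum technique using NumPy cumulative sums, returning the windowed image sum and energy at every feasible offset; the padding details are an implementation choice, not taken from the text.

```python
# Running-sum (integral image) tables for the image and image square, and the
# windowed sum/energy under an N x N feature at every offset (u, v).
import numpy as np

def window_sums(f, N):
    f = f.astype(float)
    s  = np.pad(f,     ((1, 0), (1, 0))).cumsum(0).cumsum(1)   # s(u, v)
    s2 = np.pad(f * f, ((1, 0), (1, 0))).cumsum(0).cumsum(1)   # s^2(u, v)
    def box(tab):
        # sum over the N x N window whose top-left corner is at (u, v)
        return tab[N:, N:] - tab[:-N, N:] - tab[N:, :-N] + tab[:-N, :-N]
    return box(s), box(s2)    # windowed image sum and windowed energy e_f(u, v)
```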
This technique of computing a definite sum from a precomputed running sum has been independently used in a number of fields;
a computer graphics application is developed in [5]. If the search for the maximum of the correlation surface is done in a systematic
row-scan order it is possible to combine the table construction and reference through state variables and so avoid explicitly storing the
table. When implemented on a general purpose computer the size of the table is not a major consideration, however, and flexibility in
searching the correlation surface can be advantageous. Note that the $s(u,v)$ and $s^2(u,v)$ expressions are marginally stable, meaning that their z-transform $H(z) = 1/(1 - z^{-1})$ (one-dimensional version here) has a pole at $z = 1$, whereas stability requires poles to be strictly inside the unit circle [14]. The computation should thus use large integer rather than floating point arithmetic.
Figure 1: Measured relative performance of transform domain versus spatial domain normalized cross-correlation as a function of
the search window size (depth axis) and the ratio of the feature size to search window size.

Table 1: Two tracking sequences from Forest Gump were re-timed using both direct and fast NCC algorithms using identical features and search windows on a 100 Mhz R4000 processor. These times include a 16² sub-pixel search [17] at the location of the best whole-pixel match. The sub-pixel search was computed using Eq. (2) (direct convolution) in all cases.

Search Window(s)        Length        Direct NCC    Fast NCC
168 x 86                896 frames    15 hours      1.7 hours
115 x 200, 150 x 150    490 frames    14.3 hours    57 minutes

Table 2: Comparison with a commercial compositing system (Flint [20]) on the same feature tracking task.

Feature Size    Search Window(s)    Flint                Fast NCC
40²             110²                1 min. 40 seconds    16 seconds (subpixel=1)
40²             110²                n/a                  21 seconds (subpixel=8)
The performance of this algorithm will be discussed in the context of special effects image processing. The integration of
synthetic and processed images into special effects sequences often requires accurate tracking of sequence movement and features.
The use of automated feature tracking in special effects was pioneered in movies such as Cliffhanger, Forest Gump, and Speed.
Recently cross-correlation based feature trackers have been introduced in commercial image compositing systems such as Flame/Flint
[20], Matador, Advance [21], and After Effects [22].
The algorithm described in this paper was developed for the movie Forest Gump (1994), and has been used in a number of
subsequent projects. Special effects sequences in that movie included the replacement of various moving elements and the addition of
a contemporary actor into historical film and video sequences. Manually picked features from one frame of a sequence were
automatically tracked over the remaining frames; this information was used as the basis for further processing.
The relative performance of our algorithm is a function of both the search window size and the ratio of the feature size to search
window size. Relative performance increases along the window size axis (Fig. 1); a higher resolution plot would show an additional
ripple reflecting the relation between the search window size and the bounding power of two. The property that the relative
performance is greater on larger problems is desirable. Table 1 illustrates the performance obtained in a special effects feature tracking
application. Table 2 compares the performance of our algorithm with that of a high-end commercial image compositing package.
Note that while a small (e.g. 10²) feature size would suffice in an ideal digital image, in practice much larger feature sizes and
search windows are sometimes required or preferred:
The image sequences used in film and video are sometimes obtained from moving cameras and may have considerable translation
between frames due to camera shake. Due to the high resolution required to represent digital film, even a small movement across
frames may correspond to a distance of many pixels.
The selected features are of course constrained to the available features in the image; distinct ``features'' are not always available at
preferred scales and locations.
Many potential features in a typical digitized image are either out of focus or blurred due to motion of the camera or object (Fig. 2).
Feature match is also hindered by imaging noise such as film grain. Large features are more accurate in the presence of blur and noise.

As a result of these considerations feature sizes of 20² and larger and search windows of 50² and larger are often employed.
The fast algorithm in some cases reduces high-resolution feature tracking from an overnight to an over-lunch procedure. With
lower proxy resolution and faster machines, semi-automated feature tracking is tolerable in an interactive system. Certain applications
in other fields may also benefit from the algorithm described here.

CONCLUSION
For example, image stabilization is a common feature in recent consumer video cameras. Although most such systems are stabilized
by inertial means, one manufacturer implemented digital stabilization and thus presumably used some form of image tracking. The
algorithm used leaves room for improvement however: it has been criticized as being slow and unpredictable and a product review
recommended leaving it disabled [15].

REFERENCES:
[1] P. Anandan, ``A Computational Framework and an Algorithm for the Measurement of Visual Motion'', Int. J. Computer Vision, 2(3), p. 283-310, 1989.
[2] D. I. Barnea, H. F. Silverman, ``A class of algorithms for fast digital image registration'', IEEE Trans. Computers, 21, pp. 179-186, 1972.
[3] R. Brunelli and T. Poggio, ``Face Recognition: Features versus Templates", IEEE Trans.Pattern Analysis and Machine
Intelligence, vol. 15, no. 10, pp. 1042-1052, 1993.
[4] P. J. Burt, C. Yen, X. Xu, ``Local Correlation Measures for Motion Analysis: a Comparitive Study'', IEEE Conf. Pattern
Recognition Image Processing 1982, pp. 269-274.
[5] F. Crow, ``Summed-Area Tables for Texture Mapping'', Computer Graphics, vol 18, No. 3, pp. 207-212, 1984.
[6] R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, New York: Wiley, 1973.

[7] R. C. Gonzalez and R. E. Woods, Digital Image Processing (third edition), Reading, Massachusetts: Addison-Wesley, 1992.
[8] A. Goshtasby, S. H. Gage, and J. F. Bartholic, ``A Two-Stage Cross-Correlation Approach to Template Matching'', IEEE Trans.
Pattern Analysis and Machine Intelligence, vol. 6, no. 3, pp. 374-378, 1984.
[9] C. Kuglin and D. Hines, ``The Phase Correlation Image Alignment Method,'' Proc. Int. Conf.Cybernetics and Society, 1975, pp.
163-165.
[10] J. P. Lewis, ``Fast Template Matching'', Vision Interface, p. 120-123, 1995.
[11] A. R. Lindsey, ``The Non-Existence of a Wavelet Function Admitting a Wavelet Transform Convolution Theorem of the
Fourier Type'', Rome Laboratory Technical Report C3BB, 1995.
[12] B. D. Lucas and T. Kanade, ``An Iterative Image Registration Technique with an Application to Stereo Vision'', IJCAI
1981.
[13] S. K. Mitra and J. F. Kaiser, Handbook for Digital Signal Processing, New York: Wiley,1993.
[14] A. V. Oppenheim and R. W. Schafer, Digital Signal Processing, Englewood Cliffs, New Jersey: Prentice-Hall, 1975.
[15] D. Polk, ``Product Probe'' - Panasonic PV-IQ604, Videomaker, October 1994, pp. 55-57.
[16] W. Pratt, Digital Image Processing, John Wiley, New York, 1978.
[17] Qi Tian and M. N. Huhns, ``Algorithms for Subpixel Registration'', CVGIP 35, p. 220-233, 1986.
[18] T. W. Ryan, ``The Prediction of Cross-Correlation Accuracy in Digital Stereo-Pair Images'', PhD Thesis,University of
Arizona, 1981.
[19] J. Shi and C. Tomasi, ``Good Features to Track'',Proc.IEEE Conf. on Computer Vision and Pattern Recognition,1994.
[20] Flame effects compositing software, Discreet Logic, Montreal, Quebec.

[21] Advance effects compositing software, Avid Technology, Inc., Tewksbury, Massachusetts.
[22] After Effects effects compositing software, Adobe (COSA), Mountain View, California


Sensory Evaluation of optimized and Stabilized Sugarcane Juice


Suman Singh1*, P.K. Omre1, Kirtiraj Gaikwad2
1 Dept. of Post Harvest Process and Food Engineering, G.B. Pant University of Agriculture & Technology, Pantnagar 263145, India
2 Food Packaging Laboratory, Department of Packaging, Yonsei University, Wonju, Gangwon-do 220-710, South Korea
* Simanki.singh27@gmail.com

Abstract: Sugarcane (Saccharum officinarum) is a giant grass belonging to the family Gramineae. Mythological texts of India dating back over 3000 years mention the name of sugarcane and its products. The Sanskrit word SARKARA, from which the word SACCHARUM seems to have been derived, also indicates the ancient knowledge of sugarcane in India (Lakshmikantham, 1983). In general, sugarcane juice spoils quickly because of the sugars it contains (Krishnakumar and Devadas 2006). In the present investigation an attempt has been made to preserve sugarcane juice with the help of hurdle technology. Sugarcane variety CoP 32320 was selected for preparing the juice, and fresh sugarcane juice was preserved with the help of the optimized parameters and hurdle technology. The quality of the sugarcane beverage was evaluated by sensory evaluation at intervals of 15 days for 180 days; since the beverage was hot-filled and aseptically packed in air-tight jars, samples were withdrawn aseptically to avoid contamination. The sensory parameters of colour, flavour, taste and overall acceptability were evaluated by 10 trained panelists on a 9-point hedonic rating scale, with the maximum score considered the best for the optimized sugarcane juice.
Keywords: Sugarcane juice, sensory evaluation, quality
Introduction
The total production of sugarcane in India increased from 355 M tonnes during 2012-13 to 360 M tonnes during 2013-14. Within India, the maximum cane area is found in Uttar Pradesh among the different states of the country. In 2013-14, sugarcane was planted on 5.35 million hectares across the country, of which 1 million hectares were in Maharashtra and over 2 million hectares in Uttar Pradesh, official estimates show (Directorate of Economics and Statistics, Ministry of Agriculture). Uttar Pradesh and Maharashtra are the two largest sugarcane producing states in the country, accounting for more than 80 per cent of the annual crop production. The sugarcane plant is composed of four principal parts: the leaf, the stalk, the root system and the flower. The stalk is approximately cylindrical and is composed of a number of sections or internodes (King et al. 1965). The sugar content of cane is dissolved in juice contained in millions of plant cells, each one of which
must be ruptured for the juice to be expressed (Mathur, 1975). However, processing and marketing of sugarcane
juice is limited by its rapid deterioration (Prasad & Nath, 2002; Yusof, Shian, & Osman, 2000). Development
of effective treatments or procedures to keep the fresh quality of sugarcane juice would allow it to be more
widely marketed, and would enhance its quality and safety as well. Considerable efforts have been aimed at
stabilizing the juice quality during processing and distribution. The most widely used method for delaying
deterioration is blanching before juice extraction (Margherita & Giussani, 2003) and addition of antioxidant
agents (Ozoglu & Bayindirli, 2002). Blanching treatment is usually performed by exposing vegetables or fruits
to hot or boiling water for several seconds or minutes (Kidmose & Martens, 1999; Margherita & Giussani,
2003; Severini, Baiano, De Pilli, Romaniello, & Derossi, 2003). The most widespread antioxidant and acidify
agent used in juice processing is ascorbic acid (Choi, Kim, & Lee, 2002; Pizzocarno, Torreggiani, & Gilardi,
1993). In view of above information, the present investigation was envisaged to select a suitable high yielding
variety of sugarcane for juice production and evaluate the juice quality on the basis of sensory parameters.
MATERIALS AND METHODS
SOURCE OF MATERIAL
Sugarcane variety COP3230 was collected from the Crop Research Centre (CRC), Pantnagar, Udham Singh Nagar. After pretreatment of the sugarcane stalks, the cane was crushed in a three-roller crusher to obtain the raw juice. Brix and total solids were measured using standard methods, the refractometer method and the colorimetric method, respectively. Deola, a natural clarificant, was procured from the local market of Pantnagar. The citric acid, ascorbic acid and pectin were purchased from R.K. Scientific, Rudrapur.
Pre-treatment and Extraction of Sugarcane Juice
The fully matured sugarcane stalks were harvested from the Crop Research Centre of Pantnagar. The stalks were cut into small pieces to make the pre-treatment process convenient. After cutting, the small pieces were peeled and scrubbed with a knife, and the stalk pieces were then washed and blanched in hot water at a temperature of 100 °C for 5 minutes in order to inactivate enzymatic activity during processing of the juice and to prevent discolouration of the sugarcane juice. The pre-treated stalk pieces were passed through the rollers of a sugarcane juice extractor, and the juice was collected in stainless steel and filtered through a double layer of muslin cloth. The filtered juice was used for the further processing of stabilization.
Experiments were conducted to stabilize the sugarcane juice by the hot-filling method and to identify the process variables and their experimental ranges. The sugarcane stalks were treated with hot water (blanching) to suppress enzymatic activity, the juice was extracted with the crusher and filtered through muslin cloth, and the filtered juice was then treated with ascorbic acid, citric acid and deola, after which pectin was added in an amount of 0.05 mg/100 ml. A magnetic stirrer was used to achieve homogeneous mixing of all the components in the juice. The juice was then heated to a temperature of 80 °C in a closed environment to suppress aroma and flavour losses of the fresh sugarcane juice. When the juice temperature reached 80 °C, the hot juice was filtered through filter paper and rapidly transferred into Borosil glass bottles while still hot; the bottles were then sealed and rapidly cooled to 20 °C by spraying water on them, and placed at storage temperatures of 10 °C, 20 °C and 30 °C in an incubator.

Table 1. Independent Variables in RSM


Independent variables          Code    Coded level
Ascorbic acid (mg/100 ml)      X1      -1    +1
Citric acid (mg/100 ml)        X2      -1    +1
Deola (ml/100 ml)              X3      -1    +1
Storage Temperature (°C)       X4      -1    +1

RESULTS AND DISCUSSION


Storage study of stabilized sugarcane juice for sensory parameters
Color
The initial colour score for the sugarcane juice samples of Expt. 6, Expt. 13, Expt. 14 and Expt. 15 ranged from 7.1 to 8.5 at 0 days after the treatment, while the control sample had a colour score of 7. The colour decreased significantly (P<0.01) during storage of the sugarcane juice between 0 and 180 days, but the effect was very small: the combination of blanching of the stems and addition of ascorbic acid and citric acid showed an enhancing effect in preventing colour change, as indicated by the smallest change in score; a similar result was found by Lin Chun Mao et al. (2007) during their study of the preservation of sugarcane juice.
In the control sample, the sensory colour score was 7 at 0 days and was found to be 3 after 15 days, because browning was observed in the control with a rapid decrease. Fresh sugarcane juice appeared olive-green and showed clear signs of degreening during processing and storage. Visually, juice extracted from unblanched stems was a little darker in colour than that from blanched stems. The change of colour score during storage, as compared to the control sample, is shown in Fig 4.110.


Table 4.50 Color score for sugarcane juice for sensory evaluation

No. Days   0     15    30    45    60    75    90    105   120   135   150   165   180
Expt. 6    7.1   7.1   6.9   6.84  6.62  6.51  6.24  6.05  5.92  5.71  5.43  5.31  5.3
Expt. 13   8.2   8.34  7.85  7.68  7.54  7.32  6.87  6.53  6.21  5.87  5.52  5.1   -
Expt. 14   7.8   7.8   7.74  7.71  7.68  7.65  7.6   7.59  7.54  7.51  7.47  7.38  7.3
Expt. 15   8.5   8.5   8.41  8.39  8.28  8.21  8.19  8.13  8.02  7.96  7.93  7.89  7.8
Control    7     3.5   -     -     -     -     -     -     -     -     -     -     -

Fig 4.110 Changes in color score of sugarcane juice during storage (color score versus storage period in days for Expt. 6, Expt. 13, Expt. 14, Expt. 15 and the control).


Flavour
The initial flavour score for the sugarcane juice samples of Expt. 6, Expt. 13, Expt. 14 and Expt. 15 ranged from 7.2 to 8.1 at 0 days after the treatment, while the control sample had a flavour score of 6. The flavour decreased significantly (P<0.01) during storage of the sugarcane juice between 0 and 180 days, but the effect was very small: the combination of blanching of the stems and addition of ascorbic acid and citric acid showed an enhancing effect, as indicated by the smallest change in score; a similar result was found by Lin Chun Mao et al. (2007) during their study of the preservation of sugarcane juice.
In the control sample, the flavour score decreased from 6 to 2.8 after 15 days. This decrease could be due to the high level of acid reacting with the product to give an unpleasant volatile odour, and to slight fermentation of the juice with gas production. There was a significant decline in the flavour score of the sugarcane juice; a similar result was found by Reddy (2004), who stated that volatile aromatic substances responsible for flavour are lost. The presence of preservatives also led to significant changes. The change of flavour score during storage, as compared to the control sample, is shown in Fig 4.111.

Table 4.51 Flavour score for sugarcane juice for sensory evaluation

No. Days   0     15    30    45    60    75    90    105   120   135   150   165   180
Expt. 6    7.2   7.15  7.02  6.95  6.91  6.87  6.75  6.41  6.38  6.25  5.87  5.52  5.1
Expt. 13   -     6.97  6.91  6.87  6.82  6.71  6.65  6.14  5.98  5.74  5.24  5.12  -
Expt. 14   8.1   7.97  7.84  7.71  7.58  7.45  7.32  7.19  7.06  6.93  6.8   6.67  6.5
Expt. 15   8.1   8.03  7.96  7.89  7.82  7.75  7.68  7.61  7.54  7.47  7.4   7.33  7.2
Control    6     2.8   -     -     -     -     -     -     -     -     -     -     -

Fig 4.111 Changes in flavour score of sugarcane juice during storage (flavour score versus storage period in days for Expt. 6, Expt. 13, Expt. 14, Expt. 15 and the control).

Taste
The initial taste score for the sugarcane juice samples of Expt. 6, Expt. 13, Expt. 14 and Expt. 15 ranged from 7.3 to 9.2 at 0 days after the treatment, while the control sample had a taste score of 7. The taste decreased significantly (P<0.01) during storage of the sugarcane juice between 0 and 180 days, but the effect was very small: the combination of blanching of the stems and addition of ascorbic acid and citric acid showed an enhancing effect, as indicated by the smallest change in score; a similar result was found by Lin Chun Mao et al. (2007) during their study of the preservation of sugarcane juice.
In the control sample, the taste score decreased from 7 to 3.4 after 15 days. This decrease could be due to the loss of volatile aromatic substances responsible for taste and to the decrease in pH, as the juice became more acidic, as stated by Reddy (2004). The presence of preservatives also led to significant changes. The change of taste score during storage, as compared to the control sample, is shown in Fig 4.112.
Table 4.52 Taste score for sugarcane juice for sensory evaluation

No. Days   0     15    30    45    60    75    90    105   120   135   150   165   180
Expt. 6    7.3   7.15  6.85  6.7   6.55  6.4   6.25  6.1   5.95  5.8   5.65  5.3   -
Expt. 13   8.7   8.45  8.2   7.95  7.7   7.45  7.2   6.95  6.7   6.45  6.2   5.95  4.6
Expt. 14   8.1   8.05  7.95  7.9   7.85  7.8   7.75  7.7   7.65  7.6   7.55  7.1   -
Expt. 15   9.2   9.13  9.06  8.99  8.92  8.85  8.78  8.71  8.64  8.57  8.5   8.43  7.6
Control    7     3.4   -     -     -     -     -     -     -     -     -     -     -

Fig 4.112 Changes in taste score of sugarcane juice during storage (taste score versus storage period in days for Expt. 6, Expt. 13, Expt. 14, Expt. 15 and the control).

Appearance
The initial appearance score for the sugarcane juice samples of Expt. 6, Expt. 13, Expt. 14 and Expt. 15 ranged from 7.2 to 8.5 at 0 days after the treatment, while the control sample had an appearance score of 6.5. The appearance score decreased significantly (P<0.01) during storage of the sugarcane juice between 0 and 180 days, but the effect was very small: the combination of blanching of the stems and addition of ascorbic acid and citric acid showed an enhancing effect in preventing colour change, as indicated by the smallest change in score; a similar result was found by Lin Chun Mao et al. (2007) during their study of the preservation of sugarcane juice.
In the control sample, the appearance score was 6.5 at 0 days and 2.85 after 15 days of storage, because browning occurred in the sugarcane juice due to increasing PPO activity and invert sugar; as the colour of the juice became darker, its appearance score decreased, and a similar result was found by Lin Chun Mao et al. (2007). The change of appearance score during storage, as compared to the control sample, is shown in Fig 4.113.

Table 4.53 Appearance score for sugarcane juice for sensory evaluation

Storage (days):   0      15     30     45     60     75     90     105    120    135    150    165    180
Expt. 6:          7.2    7.09   6.98   6.87   6.76   6.65   6.54   6.43   6.32   6.21   6.1    5.99   4.52
Expt. 13:         7.5    7.29   7.08   6.87   6.66   6.45   6.24   6.03   5.82   5.61   5.4    5.19   4.9
Expt.14:          7.9    7.77   7.64   7.51   7.38   7.25   7.12   6.99   6.86   6.73   6.6    6.47   6.31
Expt. 15:         8.5    8.47   8.44   8.41   8.38   8.35   8.32   8.29   8.26   8.23   8.2    8.17   7.8
Control:          6.5    2.85   (not scored thereafter)


[Figure: appearance score plotted against storage period in days for Expt. 6, Expt. 13, Expt.14, Expt. 15 and control]


Fig 4.113 Changes in Appearance score of sugarcane juice during storage

Overall acceptability
The overall acceptability of the treated samples changed only a little, owing to the small changes in colour, taste, flavour
and appearance; the change was very minute because of the treatment given to the sugarcane juice.
In the control sample the overall acceptability was 7 and it decreased to 3, since the score of the control sample declined
significantly during storage owing to oxidative reactions that deteriorated the scores of colour, flavour, appearance as well
as taste. These findings were in accordance with Chauhan et al. (2002).
This decrease could be due to the high level of acid that reacts with the product to give an unpleasant volatile odour, and to
slight fermentation of the juice with gas production. There was a significant decline in the taste score of the sugarcane
juice; a similar result was found by Reddy (2004), who stated that it was caused by the loss of volatile aromatic substances
responsible for taste. The presence of preservatives also led to significant changes. The change in overall acceptability
during storage, as compared with the control sample, is shown in Fig 4.114.
Table 4.54 Overall Acceptability score for sugarcane juice for sensory evaluation

Storage (days):   0      15     30     45     60     75     90     105    120    135    150    165    180
Expt. 6:          7.2    7.08   6.96   6.84   6.72   6.6    6.48   6.36   6.24   6.12   --     5.88   5.125
Expt. 13:         7.56   7.38   7.2    7.02   6.84   6.66   6.48   6.3    6.12   5.94   5.76   5.58   4.9
Expt.14:          8.52   8.45   8.38   8.31   8.24   8.17   8.1    8.03   7.96   7.89   7.82   7.75   7.125
Expt. 15:         8.92   8.82   8.72   8.62   8.52   8.42   8.32   8.22   8.12   8.02   7.92   7.82   7.675
Control:          6.5    3.12   (not scored thereafter)
(--: value not legible in the source table)

[Figure: overall acceptability plotted against storage period in days for Expt. 6, Expt. 13, Expt.14, Expt. 15 and control]


Fig 4.114 Changes in Overall Acceptability of sugarcane juice during storage
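The storage-related decline reported in this section can also be summarised numerically from the tabulated scores. The short
Python sketch below fits a straight line to the overall acceptability values of two samples, re-typed from Table 4.54, to
express the decline as points lost per 15-day interval; it is only an illustrative calculation and not the statistical
analysis used in the study.

    import numpy as np

    days = np.arange(0, 195, 15)                 # 0, 15, ..., 180 days of storage
    scores = {                                   # overall acceptability, re-typed from Table 4.54
        "Expt. 6":  [7.2, 7.08, 6.96, 6.84, 6.72, 6.6, 6.48, 6.36, 6.24, 6.12, np.nan, 5.88, 5.125],
        "Expt. 15": [8.92, 8.82, 8.72, 8.62, 8.52, 8.42, 8.32, 8.22, 8.12, 8.02, 7.92, 7.82, 7.675],
    }
    for name, y in scores.items():
        y = np.array(y, dtype=float)
        ok = ~np.isnan(y)                        # skip the one illegible cell
        slope, intercept = np.polyfit(days[ok], y[ok], 1)
        print(f"{name}: score falls by about {abs(slope)*15:.3f} points per 15 days of storage")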


REFERENCES:
1) Azra Yasmin, Shahid Masood and Hamida Abid (2010). Biochemical analysis and sensory evaluation of naturally preserved sugarcane juice. Pak. J. Biochem. Mol. Biol., 43(3): 144-145.
2) Barocci, S., Re, L., Capotani, C., Vivani, C., Ricci, M. and Rinaldi, L. (1999). Effects of some extracts on the acetyl-choline release at the mouse neuromuscular joint. Pharmacological Research, 39: 239-245.
3) Choi, M. H., Kim, G. H. and Lee, H. S. (2002). Effects of ascorbic acid retention on juice colour and pigment stability in blood orange (Citrus sinensis) juice during refrigerated storage. Food Research International, 35: 753-759.
4) Hanan Yassin M. Qudsieh, Salmah Yusof, Azizah Osman and Russly Abdul Rahman (2002). Effect of maturity on chlorophyll, tannin, color, and polyphenol oxidase (PPO) activity of sugarcane juice (Saccharum officinarum var. Yellow Cane). J. Agric. Food Chem., 50(6): 1615-1618.
5) El-Abasy, M., Motobu, M., Na, K. J., Sameshina, T., Koge, K. and Onodera, T. (2002). Immunostimulating and growth promoting effects of sugarcane extracts (SCE) in chickens. Journal of Veterinary Medical Science, 64: 1061-1063.
6) Frazier, C. W. and Westhoff, C. D. (1995). Food Microbiology. Tata McGraw-Hill Publishing Company Limited, New Delhi, pp. 187-195.
7) Karthikeyan, J. and Samipillai, S. S. (2010). Sugarcane in therapeutics. Journal of Herbal Medicine and Toxicology, 4(1): 9-14.
8) Karmakar, Richa, Ghosh, Amit Kumar and Gangopadhyay, Hiranmoy (2011). Effect of pre-treatment on physicochemical characteristics of sugarcane juice. Sugar Tech., 13(1): 47-50.
9) Kidmose, U. and Martens, H. J. (1999). Changes in texture, microstructure and nutritional quality of carrot slices during blanching and freezing. Journal of Science Food and Agriculture, 79: 1747-1753.
10) Krishnakumar, T., Thamilselvi, C. and Devadas, C. T. (2013). Effect of delayed extraction and storage on quality of sugarcane juice. African Journal of Agriculture Research, 8(10): 930-935.
11) Kumar Pankaj, Singh, S. K. and Singh, B. (2009). Standardization of methodology for the preparation of beverages and studies during preservation. Progressive Agriculture, 9(1): 98-103.
12) Laksameethanasan, P. (2011). Clarification of sugarcane juice for syrup production. Program of Food Science and Technology, Faculty of Science and Technology, Nakhon Pathom Rajabhat University, Nakhon Pathom 73000, Thailand.
13) Leistner, L. and Gorris, M. (1995). Food preservation by hurdle technology. Trends in Food Science and Technology, 6: 49.
14) Lin Chun Mao, Yong Quan Xu and Fei Que (2007). Maintaining the quality of sugarcane juice with blanching and ascorbic acid. Food Chem., 104(2): 740-745.
15) Lo, D. Y., Chen, T. H., Chien, M. S., Koge, K., Hosono, A. and Kaminogawa, S. (2005). Effects of sugarcane extract on modulation of immunity in pigs. Journal of Veterinary Medical Science, 67(6): 591-597.
16) Mishra, Bibhuti B., Gautam, Satyendra and Sharma, Arun (2011). Shelf life extension of sugarcane juice using preservatives and gamma radiation processing. J. Food Sci., 76(8): M573-M578.
17) Margherita, R. and Giussani, E. (2003). Effect of fruit blanching on phenolics and radical scavenging activity of highbush blueberry juice. Food Research International, 36: 999-1005.
18) Mathur, R. B. L. (1999). Handbook of Cane Sugar Technology, Second Revised and Enlarged Edition. Oxford and IBH Publishing Co. Pvt. Ltd.
19) Ozoglu, H. and Bayindirli, A. (2002). Inhibition of enzymatic browning in cloudy apple juice with anti-browning agents. Food Control, 13: 213-221.
20) Prasad, K. and Nath, N. (2002). Effect of pre-treatments and clarificants on sugarcane juice characteristics. Asian J. Chem., 14(2): 723-731.
21) Qudsieh, H. Y. M., Yusof, S., Osman, A. and Rahman, R. A. (2002). Effect of maturity on chlorophyll, tannin, color and polyphenol oxidase (PPO) activity of sugarcane juice (Saccharum officinarum var. Yellow Cane). J. Agric. Food Chem., 50: 1615-1618.
22) Ram Kumar, Jha, Alok, Singh, Chandan Kumar and Singh, Kanchan (2012). Optimization of process and physicochemical properties of ready-to-serve (RTS) beverage of cane juice with curd. Sugar Tech., 14(4): 405-411.
23) Richa Karmakar, Amit Kumar Ghosh and Hiranmoy Gangopadhyay (2011). Effect of pretreatments on physico-chemical characteristics of sugarcane juice. Sugar Tech., 13(1): 47-50.
24) Weerachet Jittanit, Somsak Wiriyaputtipong, Hathainid Charoenpornworanam and Sirichai Songsermpong (2009). Effects of varieties, heat pretreatment and UHT conditions on the sugarcane juice quality. Chiang Mai Journal of Science, 38(1): 116-125.
25) Yusof, S., Shian, L. S. and Osman, A. (1999). Changes in quality of sugar cane juice upon delayed extraction and storage. Food Chem., 68: 395-401.
26) Yusof, S., Shian, L. S. and Osman, A. (2000). Changes in quality of sugar-cane juice upon delayed extraction and storage. Food Chem., 68(4): 395-401.
27) Siswoyoa, T. A., Ika Oktavianawatia, Murdiyantob, D. U. and Sugihartoa, B. (2007). Changes of sucrose content and invertase activity during sugarcane stem storage. Indonesian Journal of Agricultural Science, 8: 75-81.
28) Sneh, Sankhla, Chaturvedi, Anurag, Kuna, Aparna and Dhanlakshmi, K. (2012). Preservation of sugarcane juice using hurdle technology. Sugar Tech., 14(1): 263.
29) Singh, I., Solomon, S., Shrivastava, A. K., Singh, R. K. and Singh, J. (2006). Post-harvest quality deterioration of cane juice: physio-biochemical indicators. Sugarcane Technol., 8(2&3): 128-13.


Active and Reactive Power control using UPFC


Vaishali Kuralkar
Asst. Professor, vaishali.kuralkar@gmail.com, 9850124737

Abstract - FACTS devices have the capability to control voltage, impedance and the phase angle in a transmission circuit and hence
control the power flow. Among the converter-based FACTS devices, the Unified Power Flow Controller (UPFC) is considered in this
paper. Static and dynamic analysis of the standard 5-bus system is done in MATLAB. The results for the network with and without
the UPFC are compared in terms of active and reactive power flows in the line and at the bus to analyse the performance of the device.

Keywords - FACTS, deregulation, Newton-Raphson, Power flow, Reactive power, Active power, UPFC, MATLAB
INTRODUCTION

The deregulation of power utilities and the power oscillations in the interconnected grids have created many obstructions for
generation and also for the installation of new transmission lines. As power transfer grows, the system becomes more complex and
less secure, and large power flows with inadequate control, excessive reactive power and large dynamic swings lead to inefficient
utilization of the interconnected grid. The ability of the transmission system to transmit power becomes impaired by one or more of the
following steady state and dynamic limitations: (a) angular stability, (b) voltage magnitude, (c) thermal limits, (d) transient stability,
and (e) dynamic stability [3].
Technology such as the Flexible AC Transmission System (FACTS) can help to find the solution. The need for new power flow
controllers capable of increasing transmission capability and controlling power flows will certainly increase [1].
The most universal and flexible FACTS device is the Unified Power Flow Controller (UPFC). The UPFC combines three compensation
characteristics, i.e. impedance, voltage magnitude and phase angle, and is therefore able to produce a more complete
compensation [2]. This device is actually a combination of two FACTS devices, the STATCOM (Static Synchronous Compensator) and
the SSSC (Static Series Synchronous Compensator). The SSSC is used to add a voltage of controlled magnitude and phase angle in
series with the line, while the shunt converter (STATCOM) is used to provide reactive power to the AC system; besides that, it
provides the DC power required for both inverters. The reactive power can be compensated either by improving the receiving-end
voltage or by reducing the line reactance.
The UPFC should be installed to control the voltage, as well as to control the active and reactive power flow through the transmission
line. However, the right transmission line to be compensated by the UPFC and the effect of the injection will only be known by doing
the analysis using MATLAB and PSCAD software. This paper presents the power flow control for the standard five bus and fourteen bus
systems with and without the FACTS devices.
2. POWER FLOW CONTROL
The power transmission line can be represented by a two-bus system with nodes k and m in the ordinary form. The active power
transmitted between bus nodes k and m is given by:

    P = (Vk Vm / X) sin(δk - δm)

where Vk and Vm are the voltages at the nodes, (δk - δm) is the angle between the voltages and X is the line reactance. The power
flow can be controlled by altering the voltage at a node, the impedance between the nodes and the angle between the end voltages [5].
The reactive power is given by:

    Q = (Vk^2 - Vk Vm cos(δk - δm)) / X
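As a quick numerical illustration of the two expressions above, the sketch below evaluates P and Q for a lossless line; the
function line_flows() and the per-unit voltages, angle and reactance are assumed example values, not data from the test system.

    import math

    def line_flows(Vk, Vm, delta_k, delta_m, X):
        # Active and reactive power transferred from bus k towards bus m over a
        # lossless line of reactance X (all quantities in per unit).
        d = delta_k - delta_m
        P = Vk * Vm * math.sin(d) / X
        Q = (Vk**2 - Vk * Vm * math.cos(d)) / X
        return P, Q

    # Assumed example: equal 1.0 p.u. voltages, 5 degree angle difference, X = 0.1 p.u.
    P, Q = line_flows(1.0, 1.0, math.radians(5.0), 0.0, 0.1)
    print("P = %.3f p.u., Q = %.3f p.u." % (P, Q))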

3. POWER FLOW MODEL OF UPFC

An equivalent circuit consisting of two coordinated synchronous voltage sources represents the UPFC adequately for the purpose
of fundamental-frequency steady state analysis. Such an equivalent circuit is shown in the figure below.

Equivalent circuit of UPFC


The UPFC voltage sources, written in polar form, are:

    EvR = VvR (cos δvR + j sin δvR)        (shunt converter)
    EcR = VcR (cos δcR + j sin δcR)        (series converter)

where VvR and δvR are the controllable magnitude (VvRmin <= VvR <= VvRmax) and phase angle (0 <= δvR <= 2π) of the voltage source
representing the shunt converter. The magnitude VcR and phase angle δcR of the voltage source representing the series converter are
controlled between limits (VcRmin <= VcR <= VcRmax) and (0 <= δcR <= 2π), respectively. The phase angle of the series injected voltage
determines the mode of power flow control [5], [12]. If δcR is in phase with the nodal voltage angle θk, the UPFC regulates the
terminal voltage. If δcR is in quadrature with θk, it controls active power flow, acting as a phase shifter. If δcR is in quadrature
with the line current angle, then it controls active power flow, acting as a variable series compensator. At any other value of δcR,
the UPFC operates as a combination of voltage regulator, variable series compensator and phase shifter. The magnitude of the series
injected voltage determines the amount of power flow to be controlled. Assuming lossless converters, the active power supplied to the
shunt converter, PvR, equals the active power demanded by the series converter, PcR; i.e. PvR + PcR = 0. Furthermore, if the
coupling transformers are assumed to contain no resistance, then the active power at bus k matches the active power at bus m.
Accordingly, PvR + PcR = Pk + Pm = 0. The UPFC power equations are combined with those of the AC network.
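The sketch below illustrates, under simplifying assumptions (a purely reactive line, with the series source treated as the only
UPFC action), how a series injected voltage of this kind shifts the line flow; it is not the MATLAB model used in the paper. The
0.04 p.u. magnitude and 87.3 degree angle are simply the series-converter starting values quoted later for the test system, and
the function name flow_with_series_injection() is an assumption.

    import cmath, math

    def flow_with_series_injection(Vk, thk, Vm, thm, X, VcR=0.0, dcR=0.0):
        # Complex power leaving bus k over a purely reactive line of reactance X
        # when a series voltage of magnitude VcR and angle dcR is inserted
        # between bus k and the line.
        Ek = cmath.rect(Vk, thk)
        Em = cmath.rect(Vm, thm)
        EcR = cmath.rect(VcR, dcR)
        I = (Ek + EcR - Em) / complex(0.0, X)   # line current phasor
        S = Ek * I.conjugate()                  # S = P + jQ at bus k
        return S.real, S.imag

    base = flow_with_series_injection(1.0, math.radians(5.0), 1.0, 0.0, 0.1)
    upfc = flow_with_series_injection(1.0, math.radians(5.0), 1.0, 0.0, 0.1,
                                      VcR=0.04, dcR=math.radians(87.3))
    print("P, Q without series injection:", base)
    print("P, Q with 0.04 p.u. series voltage:", upfc)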

4. TEST SYSTEM

The standard five bus system is taken for the analysis.


A.CASE I
Initially the system is analysed without any FACTS devices.
I.BUS RESULTS WITHOUT FACTS DEVICES


BUS NO:        01          02         03         04         05
V_MAG (p.u):   1.06        0.9871     0.9836     1.01       0.9721
V_ANG (deg):   0 (slack)   -0.464     -0.962     -2.061     -5.773

II. LINE RESULTS:

LINE NO    P (p.u)      Q (p.u)
01          0.1340       1.2118
02          0.1522       0.2122
03         -2.2139       0.5220
04         -2.1820       0.3555
05         -6.2785       2.9302
06        -22.9455      10.9021
07         -3.5902       5.5066

B. CASE II
The five bus system is modified to include one UPFC to compensate the transmission line linking bus 2 and bus 3. The UPFC shunt
converter is set to regulate the nodal voltage magnitude at bus 2 at 1 p.u.
UPFC DATA:
The starting values for the UPFC shunt converter are: voltage magnitude 1 p.u., phase angle 0 degrees.
For the series converter: voltage magnitude 0.04 p.u., phase angle 87.3 degrees.

I. BUS RESULTS WITH UPFC BETWEEN BUS 2 AND 3


BUS NO:        01          02         03         04         05
V_MAG (p.u):   1.06        0.9998     0.9901     1.0037     0.9746
V_ANG (deg):   0 (slack)   -0.92985   -1.9276    -4.1288    -11.564

II. LINE RESULTS

LINE NO    P (p.u)      Q (p.u)
01          0.2719       1.0112
02          0.3028       0.2468
03         -4.0745       2.0098
04         -4.0005       1.8088
05         -7.5400       9.9569
06        -30.1488      35.7096
07          2.7342       6.7163


The UPFC increases the amount of reactive power supplied at bus 2. There is an increase in the active power also, due to the demand of
the UPFC series converter.
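For convenience, the tabulated line flows before and after placing the UPFC can be compared directly; the sketch below simply
re-types the two line-result tables above and prints the change in P and Q for each line, so no new results are computed.

    # Line flows (p.u.) re-typed from the base-case and UPFC tables above.
    base = {1: (0.1340, 1.2118), 2: (0.1522, 0.2122), 3: (-2.2139, 0.5220),
            4: (-2.1820, 0.3555), 5: (-6.2785, 2.9302), 6: (-22.9455, 10.9021),
            7: (-3.5902, 5.5066)}
    upfc = {1: (0.2719, 1.0112), 2: (0.3028, 0.2468), 3: (-4.0745, 2.0098),
            4: (-4.0005, 1.8088), 5: (-7.5400, 9.9569), 6: (-30.1488, 35.7096),
            7: (2.7342, 6.7163)}

    for line in sorted(base):
        dP = upfc[line][0] - base[line][0]
        dQ = upfc[line][1] - base[line][1]
        print(f"line {line:02d}: dP = {dP:+.4f} p.u., dQ = {dQ:+.4f} p.u.")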

5. CONCLUSION
This paper presented the simulation methods required for the study of the steady state operation of electrical systems with the FACTS
device UPFC.
The power flow for the five bus system was analysed with and without FACTS devices.
The sample 5 bus network is modified to include one UPFC to compensate transmission line no. 6 linking bus 2 and bus 3. The
UPFC shunt controller is set to regulate the nodal voltage magnitude at bus 2 at 1 p.u. There is a large increase in the active
power as well as the reactive power. The steady state model of the UPFC is analysed and evaluated in the Newton-Raphson algorithm. The
static analysis shows that the UPFC is able to control not only the voltage but also the impedance and phase angle, which affect the
power flow in the transmission line.
REFERENCES:
1. Edvina Uzunovic, Claudio A. Cañizares and John Reeve, "EMTP Studies of UPFC Power Oscillation Damping".
2. North American Power Symposium (NAPS), San Luis Obispo, California, October 1999.
3. Nashiren F. Mailah and Senan M. Bashi, "Single Phase Unified Power Flow Controller (UPFC): Simulation and Construction", European Journal of Scientific Research, ISSN 1450-216X, Vol. 30, No. 4 (2009), pp. 677-684.
4. Pavlos S. Georgilakis and Peter G. Vernados, "Flexible AC Transmission System Controllers: An Evaluation".
5. Materials Science Forum, Vol. 670 (2011), pp. 399-406.
6. Alireza Seifi, Sasan Gholami and Amin Shabanpour, "Power Flow Study and Comparison of FACTS: Series (SSSC), Shunt (STATCOM), and Shunt-Series (UPFC)".
7. The Pacific Journal of Science and Technology.
8. Narain G. Hingorani and Laszlo Gyugyi, Understanding FACTS, Wiley India publication.
9. K. R. Padiyar, FACTS Controllers in Power Transmission and Distribution.
10. Lei, D. Jiang and D. Retzmann, "Stability improvement in power systems with non-linear TCSC control strategies", ETEP, Vol. 10, No. 6, November/December 2000, pp. 339-345.
11. Samina Elyas Mubeen, R. K. Nema and Gayatri Agnihotri, "Power Flow Control with UPFC in Power Transmission System".
12. Adepoju, G. A. and Komolafe, O. A., "Analysis and Modelling of Static Synchronous Compensator (STATCOM): A Comparison of Power Injection and Current Injection Models in Power Flow Study".
13. International Journal of Advanced Science and Technology, Vol. 36, November 2011.
14. Ashwin Kumar Sahoo, S. S. Dash and T. Thyagarajan, "Modeling of STATCOM and UPFC for Power System Steady State Operation and Control", IET-UK International Conference on Information and Communication Technology in Electrical Sciences (ICTES 2007), Dec. 20-22, 2007, pp. 458-463.
15. Y. Guo, D. J. Hill and Y. Wang, "Global transient stability and voltage regulation for power systems", IEEE Transactions on Power Systems, Vol. 16, No. 4, November 2001.


Congestion Management in Deregulated Power System using FACTS Controller
Ms. Namrata Rao
Assistant Professor, m.namrata2751@gmail.com, 9604301436

Abstract - Congestion in the transmission lines is one of the technical problems that appear particularly in the deregulated
environment. There are two types of congestion management methodologies to relieve it: non-cost-free methods and cost-free
methods; the latter relieve the congestion technically whereas the former are related to economics. In this paper congestion is
relieved using cost-free methods. Using FACTS devices, congestion can be reduced without disturbing the economics. STATCOM and
UPFC are two mainly emerging FACTS devices and they are used in this paper to reduce the congestion. The above method is tested on
a 5-bus system and it can be extended to any practical system. FACTS devices can be an alternative to reduce the flows in heavily
loaded lines, resulting in increased power capability, low system loss and improved stability of the network, by controlling the
power flows in the network. Modeling, simulation and analysis of the 5 bus system in the MATLAB environment is presented in this
paper. A comparison with and without FACTS devices is done to control the power flow and obtain the power system steady state
operation. The same system is again analysed under dynamic conditions and the performance of these devices is observed.

Keywords - FACTS, Congestion, UPFC, STATCOM, Power Flow, Dynamic analysis, Steady state Analysis
INTRODUCTION

Growth in load demand and the push to change the generation sources to smaller plants utilizing renewable energy sources, along with
the uncertainty of transactions, is likely to strain the existing power system. This will lead to the transmission system functioning
closer to its operating limits and cause increased congestion. Therefore, ensuring that the transmission system is flexible enough to
meet new and less predictable power supply and demand conditions in a competitive electricity market will be a real challenge. In
India the power sector was mainly under government ownership (>95% distribution and ~98% generation) under various state and central
government utilities till 1991. The remarkable growth of physical infrastructure was facilitated by main policies such as: 1)
centralized supply and grid expansion, 2) large support from government budgets, and 3) development of the sector based on indigenous
resources.
In the mid 1990s Orissa began a process of fundamental restructuring and deregulation of the state power sector. Thereby an effective
means for congestion management has become an increasingly important issue, especially for deregulated systems. New enabling
technologies that can maintain the stability and reliability of the power system while handling large volumes of transmission are
able to provide a solution. One example of such technology is the Flexible AC Transmission System. The ability of FACTS controllers
to support and control power flows in system networks is well known [1-3], and it is anticipated that the application of FACTS
controllers will grow in future power systems. The UPFC and STATCOM are examples of the second and third generation type of FACTS
controllers, based on power electronic switches. The UPFC has the advantage over the STATCOM of controlling both active and reactive
power flow simultaneously. The first aspect is the flexible power system operation according to the power flow control capability of
FACTS devices. The other aspect is the improvement of transient and steady-state stability of power systems. FACTS devices are the
right equipment to meet these challenges [7].


I. FLEXIBLE AC TRANSMISSION SYSTEM

Flexible AC Transmission System (FACTS) is a new integrated concept based on power electronic switching converters and
dynamic controllers to enhance the system utilization and power transfer capacity as well as the stability, security,
reliability and power quality of AC system interconnections.

II. UPFC STRUCTURE, OPERATION AND CONTROL

The two main blocks of the UPFC are the shunt inverter and the series inverter.

Fig 1: Block diagram of UPFC


A) Shunt Inverter
The shunt inverter is operated in such a way as to draw a controlled current from the line. One component of this current
is automatically determined by the requirement to balance the real power of the series inverter. The remaining current
component is reactive and can be set to any desired reference level (inductive or capacitive) within the capability of the
inverter.[1]

B) Series Inverter
The series inverter controls the magnitude and angle of the voltage injected in series with the line. This voltage injection is
always intended to influence the flow of power on the line; its working is similar to that of SSSC.


III. STATCOM: STRUCTURE, OPERATION AND CONTROL

Fig 3: Working of STATCOM

Fig 2: Block diagram of STATCOM

IV. Power Flow Model of UPFC:

Fig 4: Equivalent circuit of UPFC

The equivalent circuit consists of two coordinated synchronous voltage sources, which represent the UPFC adequately for
the purpose of fundamental frequency steady state analysis [1]. Such an equivalent circuit is shown in Fig 4.
The UPFC voltage sources, written in polar form, are:

    EvR = VvR (cos δvR + j sin δvR)    ...(1)
    EcR = VcR (cos δcR + j sin δcR)    ...(2)

where VvR and δvR are the controllable magnitude (VvRmin <= VvR <= VvRmax) and phase angle (0 <= δvR <= 2π) of the voltage source
representing the shunt converter. The magnitude VcR and phase angle δcR of the voltage source representing the series converter are
controlled between limits (VcRmin <= VcR <= VcRmax) and (0 <= δcR <= 2π), respectively. The phase angle of the series injected
voltage determines the mode of power flow control [1], [4]. If δcR is in phase with the nodal voltage angle θk, the UPFC regulates
the terminal voltage. If δcR is in quadrature with θk, it controls active power flow, acting as a phase shifter. If δcR is in
quadrature with the line current angle, then it controls active power flow, acting as a variable series compensator. At any other
value of δcR, the UPFC operates as a combination of voltage regulator, variable series compensator and phase shifter. The magnitude
of the series injected voltage determines the amount of power flow to be controlled. Based on the equivalent circuit shown in Fig 4,
the active and reactive power equations are:

Equations for series converter:


Equations for shunt converter:

Assuming lossless converters, the active power supplied to the shunt converter, PvR, equals the active power
demanded by the series converter, PcR; i.e. PvR + PcR = 0. Furthermore, if the coupling transformers are
assumed to contain no resistance, then the active power at bus k matches the active power at bus m.
Accordingly, PvR + PcR = Pk + Pm = 0. The UPFC power equations are combined with those of the AC network.
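The static analysis in this paper is evaluated with the Newton-Raphson power flow algorithm. The sketch below shows the bare
algorithm for a single slack bus feeding one PQ load bus, with a numerically evaluated Jacobian; the line impedance, the load
and the function name mismatch() are assumptions chosen for illustration, and no UPFC or STATCOM injection is included.

    import numpy as np

    V1 = 1.06 + 0j                       # slack bus voltage (p.u.)
    y = 1.0 / complex(0.02, 0.06)        # series admittance of the single line (assumed impedance)
    Y = np.array([[y, -y], [-y, y]])     # 2x2 bus admittance matrix
    P_load, Q_load = 0.4, 0.2            # assumed PQ demand at bus 2 (p.u.)

    def mismatch(x):
        # Power mismatch at the PQ bus for the state x = [V2, theta2].
        V2, th2 = x
        V = np.array([V1, V2 * np.exp(1j * th2)])
        S = V * np.conj(Y @ V)           # complex bus injections S = V (Y V)*
        return np.array([S[1].real + P_load, S[1].imag + Q_load])

    x = np.array([1.0, 0.0])             # flat start
    for _ in range(10):                  # Newton-Raphson iterations
        f = mismatch(x)
        if max(abs(f)) < 1e-8:
            break
        J = np.zeros((2, 2))             # Jacobian by finite differences
        for j in range(2):
            dx = np.zeros(2); dx[j] = 1e-6
            J[:, j] = (mismatch(x + dx) - f) / 1e-6
        x = x - np.linalg.solve(J, f)

    print("V2 = %.4f p.u., angle = %.3f deg" % (x[0], np.degrees(x[1])))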
V. Power flow model of STATCOM:

Fig 5:Equivalent circuit of STATCOM


The Static Synchronous Compensator (STATCOM) is represented by a synchronous voltage source with minimum and
maximum voltage magnitude limits [12]. The bus at which the STATCOM is connected is represented as a PV bus, which
may change to a PQ bus in the event of these limits being violated. In such a case, the generated or absorbed reactive power
would correspond to the violated limit. The power flow equations for the STATCOM are derived from first principles,
assuming the voltage source representation of Fig 5 [2]; this yields the active and reactive power equations for the
converter at bus k.
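A minimal sketch of the PV-to-PQ switching logic described above is given below; the function statcom_bus_type() and the
reactive power limits are illustrative assumptions, not values from the study.

    def statcom_bus_type(q_required, q_min=-1.0, q_max=1.0):
        # Return (bus_type, q_output) for a STATCOM asked to supply q_required (p.u.).
        if q_required > q_max:
            return "PQ", q_max          # upper limit violated: output held at q_max
        if q_required < q_min:
            return "PQ", q_min          # lower limit violated: output held at q_min
        return "PV", q_required         # within limits: bus keeps regulating its voltage

    for q in (0.35, 1.4, -1.2):
        print(q, "->", statcom_bus_type(q))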


Based on the power flow models given above for the STATCOM and UPFC, the analysis, simulation and modeling of the system is done.

VI. STATIC ANALYSIS OF THE SYSTEM

The objectives of this paper are to:

i) Simulate the 5 bus power system network using MATLAB software.
ii) Model the UPFC and STATCOM in the 5 bus power system network and determine the power flow.
iii) Perform the steady-state analysis of the 5 bus power system network before and after the UPFC and STATCOM are applied.

Table 1: Bus results with and without FACTS devices

BUS NO.    WITHOUT FACTS DEVICES (p.u)    WITH STATCOM (p.u)    WITH UPFC (p.u)
1          1.06                           1.06                  1.06
2          0.9871                         1.013                 0.9998
3          0.9836                         0.9946                0.9901
4          1.01                           1.002                 1.0037
5          0.9721                         0.9753                0.9746


Table 2: Line result without FACTS devices

LINE NO    P (p.u)      Q (p.u)
01          0.1340       1.2118
02          0.1522       0.2122
03         -2.2139       0.5220
04         -2.1820       0.3555
05         -6.2785       2.9302
06         -2.29455      1.09021
07         -3.5902       5.5066

The simulation yields the power flows in the lines and the bus active and reactive powers, which are tabulated above. From the power
flow results for the 5-bus system, it can be observed that the voltage magnitudes at bus 2, bus 3 and bus 5 are lower than 1.0 p.u.
So these are the potential buses where FACTS devices can be included. The active power in line 6 is 22.9455 p.u and the reactive power
is 10.9021 p.u.
Table 3: Line result with STATCOM at bus 2

LINE NO    P (p.u)      Q (p.u)
01          1.4068       0.8461
02          0.1492       0.2457
03          5.3052       5.6402
04          5.3496       5.3818
05          7.8564       7.2254
06          3.06750      3.09922
07         -3.6099       5.3440

It is very clear from the comparison of Table 2 and Table 3 that the nodal voltage at bus 2 is maintained at 1.013 p.u by the STATCOM
and the phase angle is also improved to -4.7529 degrees from -0.464 degrees. The active power is also increased from 22.9455 p.u to
30.6750 p.u.
The installation of the STATCOM resulted in an improved network voltage profile. The slack generator reduces its reactive power
generation by 5.9% compared with the base case, while the reactive power absorbed by the bus 4 generator increased by 25% over the
base case. In general, more reactive power is available in the network when compared with the base case due to the installation of
the STATCOM.
Table 4: Line result with UPFC at bus 2

LINE NO    P (p.u)      Q (p.u)
01          0.2719       1.0112
02          0.3028       0.2468
03         -4.0745       2.0098
04         -4.0005       1.8088
05         -7.5400       9.9569
06         -3.01488      3.57096
07          2.7342       6.7163

The UPFC increases the amount of reactive power supplied at bus 2 to 35.7096 p.u, which is very high as compared to 30.9922 p.u with
the STATCOM and 10.9021 p.u without any FACTS devices. There is an increase in the active power also, due to the demand of the UPFC
series converter.
Conclusion:
This paper has proposed cost-free congestion management methods required for the smooth operation of a deregulated power system. It
gives a remedy for congestion by enhancing the active power flow capability of the transmission line. Simulation methods required for
the study of the steady state as well as dynamic operation of electrical systems with the FACTS devices UPFC and STATCOM are analysed
in the paper. The power flow for the five bus system was analysed with and without FACTS devices. The power flow indicates that there
is nearly a 5.9 % increase in the reactive power absorption compared with the base case when the STATCOM is included at bus 2. The
largest reactive power flow takes place in the transmission line connecting bus 2 to bus 3, which is 3.09922 p.u. The direction of
reactive power flow remains unchanged.
The sample 5 bus network is modified to include one UPFC to compensate transmission line no. 6 linking bus 2 and bus 3. The UPFC shunt
controller is set to regulate the nodal voltage magnitude at bus 2 at 1 p.u. There is a large increase in the active power as well as
the reactive power. The steady state models of the STATCOM and UPFC are analysed and evaluated in the Newton-Raphson algorithm.

REFERENCES:

[1] Narain G. Hingorani and Laszlo Gyugyi, Understanding FACTS, Wiley India publication.
[2] Alireza Seifi, Sasan Gholami and Amin Shabanpour, "Power Flow Study and Comparison of FACTS: Series (SSSC), Shunt (STATCOM), and Shunt-Series (UPFC)".
[3] Y. H. Song and A. T. Johns, Flexible AC Transmission Systems (FACTS), IEE Press, London, 1999. ISBN 0-85296-771-3.
[4] Y. Guo, D. J. Hill and Y. Wang, "Global transient stability and voltage regulation for power systems", IEEE Transactions on Power Systems, vol. 16, no. 4, November 2001.
[5] J. Y. Liu, Y. H. Song and P. A. Mehta, "Strategies for handling UPFC constraints in steady-state power flow and voltage control", IEEE Transactions on Power Systems, vol. 15, May 2000, pp. 566-571.
[6] S. Gerbex, R. Cherkaoui and A. J. Germond, "Optimal location of multi-type FACTS devices in a power system by means of genetic algorithms", IEEE Trans. Power Systems, vol. 16, August 2001, pp. 537-544.
[7] D. Povh and D. Retzmann, Siemens, Erlangen, Germany, "Development of FACTS for transmission systems".
[8] E. Acha, V. G. Agelidis, O. Anaya-Lara and T. J. E. Miller, Power Electronic Control in Electrical Systems, Elsevier, 2002.
[9] K. R. Padiyar, FACTS Controllers in Power Transmission and Distribution, New Age International Publishers.
[10] Ashwin Kumar Sahoo, S. S. Dash and T. Thyagarajan, "Modeling of STATCOM and UPFC for Power System Steady State Operation and Control", IET-UK International Conference on Information and Communication Technology in Electrical Sciences (ICTES 2007), Dec. 20-22, 2007, pp. 458-463.


Static and Dynamic Analysis of Railway Track Sleeper


D. Kishore Kumar (1), K. Sambasivarao (2)
(1) M.Tech Scholar (Machine Dynamics), Swami Vivekananda Engineering College, Bobbili
(2) Assistant Professor, Mechanical Engineering Dept., Swami Vivekananda Engineering College, Bobbili

Abstract The increased speed and weight of modern trains puts the components of a railway track in a highly dynamic load
situation. Irregularities of the wheels and rails induce substantial dynamic loads that have to be accounted for in the procedure of
designing the railway track. The concrete sleeper is an essential part of the track structure and there is a growing demand for
improved analytical tools for concrete sleepers under dynamic loading. A typical finite element model of the concrete sleeper is
established that focuses on analyzing the behavior of a concrete sleeper during a train passage.
A finite element model of a pre-stressed concrete sleeper is established for static load conditions. The studies done with the sleeper
model showed that the train load has great impact at the rail seat positions at the two ends of the sleeper maintained at broad gauge
distance i.e. 1.76m. The natural frequencies obtained for a sleeper in free-free condition are lesser compared to the natural
frequencies obtained for sleeper in-situ condition. The analysis also showed that the interaction between the sleeper and the
underlying ballast is of great importance for the dynamic behavior of the sleeper.

Keywords dynamic load, concrete sleeper, prestressed, finite element model, natural frequency, ballast, static load
INTRODUCTION

The railway track was originally developed in order to exceed the load-carrying capacity of the roads. In the
development of the railway track a component acting and looking like the sleeper of today was invented. Bonnett (1996) specifies
the functions that are required of a modern sleeper used in railway tracks today [4]:

- Spread wheel loads to ballast
- Hold rails to gauge and inclination
- Transmit lateral and longitudinal forces
- Insulate rails electrically
- Provide a base for rail seats and fastenings
The functions described above show that the sleeper has an important role in the track system. It is thereby
obvious that the sleeper has to be analyzed in an accurate way.
Several physical phenomena occur when dynamic (impact) loads are applied to concrete sleepers that do not
occur under static conditions. For instance, the dynamic loads introduce stress waves in the sleeper, the acceleration of the sleeper
introduces inertial forces, and the high strain rate changes the material properties of the concrete. Furthermore, the ballast
pressure supporting the sleeper in the track will change with time in a dynamic load situation [24].
Future railway traffic will certainly be even faster than the trains of today, and at the same time the demanded
load capacity of the trains will probably increase. This implies that the demands on the concrete sleeper will increase and the need
for detailed, reliable analytical tools will be of greater importance in the near future.

OBJECTIVE OF THE WORK


The objective of this work is to conduct detailed analysis on the concrete sleepers for both static and dynamic
load situations. For dynamic load situations, the dynamic effects caused by the impact loads need to be considered. The analysis
results should be well documented and available to the industry; hence, a commercial program is preferable.
A finite element model of a pre-stressed concrete sleeper should be established by use of existing element types
and material models. The finite element model of the concrete sleeper should also be extended to include the sleeper's
surroundings. A comparison of the FE-analysis results with ballast and without ballast provides insight into the dynamic effects on
the sleeper.

LITERATURE REVIEW
The combination of rails, fitted on sleepers with a suitable fastening system and resting on ballast and subgrade is
called the railway track or permanent way [26]. Sometimes temporary tracks are also laid for conveyance of earth and materials
during construction works[7]. The name permanent way is given to distinguish the final layout of track from these temporary
tracks. Fig. 1 below shows a typical cross section of a permanent way on an embankment.

Fig.1. Typical cross-section of a Permanent way on embankment


In a permanent way the rails are joined in series by fish plates and bolts and then they are fixed to sleepers by
different types of fastenings. The sleepers properly spaced, resting on the ballast are suitably packed with ballast. The layer of
ballast rests on the prepared subgrade called the formation.
The rails act as girders to transmit the wheel load to the sleepers. The sleepers hold the rails in proper position
with respect to proper tilt, gauge and level, and transmit the load from rails to the ballast. The ballast distributes the load over the
formation and holds the sleepers in position. On curved tracks, super elevation is maintained by ballast and the formation is
leveled. Minimum ballast cushion is maintained at the inner rail, while the outer rail gets more ballast cushion. Additional
quantity of ballast is provided on the outer cess of each track for which the base width of the ballast is kept more than for a
straight track [26].
Rails:
The rails on the track act as steel girders for the purpose of carrying axle loads. They are made of high carbon
steel to withstand wear and tear. Flat-footed rails are mostly used in railway track. The functions of rails are as given below:

- Rails provide a hard, smooth and unchanging surface for passage of heavy moving loads with a minimum friction between the steel rails and steel wheels.
- Rails bear the stresses developed due to heavy vertical loads, lateral and braking forces and thermal stresses.
- The rail material used is such that it gives minimum wear to avoid replacement charges and failures of rails due to wear.
- Rails transmit the loads to sleepers and consequently reduce pressure on ballast and formation below.
The rails of larger length are preferred to smaller length of rails, because they give more strength and economy for a railway
track. The weakest point of a track is the joint between two rails. Lesser the number of joints, lesser would be the number of fish
plates and this would lead to lesser maintenance cost, smoother running of trains and more comfort to the passengers. Moreover
the more number of joints increase wear and tear of the vehicle components, including wheels [26].

Rail Fasteners:
Probably one of the most spectacular developments that permanent ways went through since the beginning of the
railways can be seen in the improvement of fastening systems. This development became more rapid in the last decades,
especially in the last few years. It can be explained by the increasing demands of railway transport due to the competitive
situation between transport
means. The demands railways have to satisfy are the reduction of travel time, punctuality and
comfort. These improvements are realized in practice by high-speed tracks and generally by continuously increasing speed. But
these demands of railway traffic increase the severity of requirements a permanent way has to meet [11].
One of the most important aspects by which means of transport are judged is protection of the environment. Basically it can
be stated that railway is an environment-friendly means of transport. In spite of this fact, the damaging influences, which are mostly
noise and vibration nuisance, have to be decreased [23]. These considerations also justify the development of special solutions for
permanent way elements.
All the requirements mentioned above revive the need of development of new fastening systems that basically
differ from the traditional types. The most important function of fastening systems is to provide strong and flexible connection
between rail and its supporting structure that can be sleeper or slab. In addition to this main function fastenings have to meet other
requirements that are in some cases contradictory [11].
Operations and maintenance requirements can be as important as the track strength considerations, because they
address real issues of concern for maintenance personnel who have ultimate responsibility in using rail fastening components in
the field. Among the intangible considerations are: fastener life, maintainability, and, where needed, electrical isolation [31, 30].
The final category of fastener characteristics is one that addresses the overall cost of the track system. It is a
matter of particular importance to private freight railroads. Any criterion, then, developed for both tangible and intangible
performance characteristic s must be evaluated in light of total system costs. Further, these must be life cycle costs taken within
the railway operating environment as against simple first costs [23].
Sleepers:
Sleepers are members generally laid transverse to the rails on which the rails are supported and fixed, to transfer
the loads from rails to the ballast and subgrade below [26]. Sleepers perform the following functions [26]:
i) To hold the rails to correct gauge (exact in straight and flat curves, loose in sharp curves and tight in diamond crossings).
ii) To hold the rails in proper level transverse tilt i.e., level in turnouts, crossovers, etc., and at 1 in 20 tilt in straight tracks,
so as to provide a firm and even support to rails.
iii) To act as elastic medium in between the ballast and rails.
iv) To distribute the load from the rails to the index area of ballast underlying it or to the girders in case of bridges.
v) To support the rails at a proper level in straight tracks and at proper super elevation on curves.
vi) Sleepers also add to the longitudinal and lateral stability of the permanent track on the whole.
vii) They also provide means to rectify track geometry during service life.
For good performance of sleepers to fulfill the above functions or objectives an ideal sleeper should possess the following
characteristics [26]:
i) The sleepers to be used should be economical, i.e., they should have minimum possible initial and maintenance costs.
ii) The fittings of the sleepers should be such that they can be easily adjusted during maintenance operations such as easy
lifting, packing, removal and replacement.
iii) The weight of sleepers should not be too heavy or excessively light, i.e., they should have moderate weight, for ease of
handling.
iv) The design of sleepers should be such that the gauge, alignment of track and levels of the rails can be easily adjusted and
maintained.


v) The bearing area of sleepers below the rail seat and over the ballast should be enough to resist the crushing due to rail
seat and crushing of the ballast underneath the sleeper.
vi) The sleeper design should be such as to facilitate easy removal and replacement of ballast.
vii) The sleepers should be capable of resisting shocks and vibrations due to passage of heavy loads of high-speed limits.
viii)The design of the sleepers should be such that they are not damaged during packing processes.
ix) The insulation of rails should be possible for track circuiting, if required, through sleepers.
Classifications of sleepers:
Sleepers can be classified according to the material used in their construction in the following way [26]:
1) Wooden sleepers
2) Metal sleepers
a) Cast iron sleepers
b) Steel sleepers
3) Concrete sleepers
a) Reinforced concrete sleepers
b) Prestressed concrete sleepers
A detailed note based on concrete sleepers is presented in the preceding sections.
Design Considerations for Concrete Sleepers:
As noted above, the railway was originally developed to have higher load-carrying capacities than the roads. One
of the pioneers in the history of railways was G.Stephenson, who built his first steam locomotive in 1813. In the development of
the railway a component acting and looking like the sleeper of today was invented quite soon. The demands on the sleepers have
increased with the improvement of the railway [25]. In the early railways, the natural choice of material for the sleeper was
often wood. One reason for starting to use reinforced concrete sleepers was to get a great reduction in the overall cost of track
maintenance [1].
A reference body of experiments with reinforced concrete sleepers arose as far back as 1880, and reinforced
concrete sleepers were used quite extensively in the 1920s and 1930s in countries such as Italy and India [5].The use of
reinforced concrete sleepers increased the structural stiffness and developed unique problems that are not associated with wooden
sleepers, such as flexural cracks which could lead to deterioration of the sleepers [19]. This fact, together with the shortage of good-quality timber during the World War II blockades, forced the development and use of prestressed concrete sleepers. Since then, the
use of prestressed concrete sleepers has increased and become standard in the railway tracks of several countries. The
development of prestressed concrete sleepers solved the early problems associated with the ordinary reinforced concrete sleepers
[27].
The study of loads on sleepers in tracks had low priority from the 1940s to the 1980s. Early work in the UK
showed that very high loads occurred at rail joints, and also due to the balance weight on steam locomotive wheels. In the 1980s
rail seat cracking of sleepers, i.e. cracks at the base of the sleeper in the position of the rail, was found in the Northeast Corridor,
USA. This led to new studies of loads on sleepers [7].
The source of the overloading was found to be out-of-round wheels generating dynamic loads, and further
research revealed additional factors involved in the dynamic load generation [30, 31]. Different influencing parameters regarding
the generation and magnitude of the dynamic loads have been identified during the years, as per Ahlbeck (1980) [2], Grassie and
Cox (1984) [12], Buekett et al. (1987) [6], Igwemezie and Saeed Mirza (1989) [17, 24], Dahlberg and Nielsen (1991) [8], Wakui
and Okuda (1991) [28]. The reported parameters are vehicle speed, wheel imperfections, random differences of levels between
rails, type and condition of the suspension system, rail joints, and corrugation of the railhead [14, 15, 16, 17]. A literature survey
to describe the state of the art in research concerning out-of-round wheels has been written by Johansson and Nielsen (1998) [18].
According to Wakui and Okuda (1991) the largest dynamic load induced to the track may be one caused by
irregularities of the wheel such as wheelflats. Wheelflats are caused by the wheels of a vehicle becoming locked during braking,

and sliding along the track. The friction created by this grinds a flat spot on the wheel, see Newton and Clark (1979) [20]. The
wheelflats cause impact loads, i.e. loads generated by a stroke of short duration. The duration of the impact load caused by
wheelflats is in the range of 1-3 microseconds, according to Xiang et al. (1994) [29], and its magnitude can be several times
larger than the static load caused by the weight of the train. Research carried out by Harrison and Moody (1982) [16] and Dean et
al. (1982) showed that cracks found in the prestressed concrete sleepers were strongly connected with the presence of wheelflats.
Design considerations for prestressed concrete sleepers:
Results from many research projects show that the design philosophies used in different countries fail to capture
all necessary considerations [5]. The traditional philosophy is to use a static design system, where the static nominal axle load,
FStat, is increased by a dynamic factor, Dyn, as shown in Figure 2. Railway authorities often assume a uniform distribution of
the ballast pressure, p, beneath the sleeper, or other simplified pressure distributions [9, 10]. However, this approach does not
consider the dynamic effects in the sleeper, caused by the impact loads. The impact loads introduce stress waves and rebound
reactions that are essential to account for, if the dynamic behavior of a sleeper is to be analyzed correctly [24].

Fig.2. Static design system of a sleeper

Fig.3.Sleeper cross section as per RDSO/T-2496[32]

DESIGN AND MODELLING OF CONCRETE SLEEPERS


Sleeper Section:
The sleeper considered in the present work is of the RDSO (Research Designs and Standards Organisation) T-2496
type pre-stressed concrete sleeper. The cross section of the sleeper varies in shape and size from the rail seat end to the centre of
the sleeper [32]. The sleeper is symmetric in shape and size about its centre portion. The total length of the sleeper as per RDSO
is 2600 mm. At the rail seat the top width of the sleeper end is 150 mm, whereas the bottom width is 249.8 mm. The bottom width
at the rail seat varies from 249.8 mm to 211.8 mm till a height of 98.4 mm from the seat bottom. At the centre of the sleeper the
cross sectional width varies from 220 mm at bottom to 150 mm at the top with a height of 180 mm. The central distance between
the rail seats, i.e. broad gauge distance between the two rails laterally is 1.761 m. The cross sectional area of sleeper as per RDSO
specifications is shown in Figure 3.
Material Properties:
Prestressed concrete is basically concrete in which internal stresses of a suitable magnitude and distribution are
introduced so that the stresses resulting from the external loads are counteracted to a desired degree. Prestressing is a
method of applying a pre-compression to control the tensile stresses that develop below the neutral axis of the beam under
external load, which would otherwise exceed the permissible limits of plain concrete. The pre-compression applied (axial or
eccentric) induces a compressive stress below the neutral axis, or over the whole of the beam cross section, resulting in
either no tension or a net compression.
For the present work it is assumed that the prestressed concrete sleeper acts as an elastic beam member. As per
IS 1343:1980, its density is taken as 2400 kg/m3, Young's modulus is taken as 30 GPa, Poisson's ratio is taken as 0.3 and its
ultimate strength is taken as 40 MPa.

Modeling of the concrete sleeper:


To model the sleeper, initially the geometric model of the sleeper is built in ANSYS12.1 and then the finite
element model is built. In order to model the rather complicated geometry of the sleeper, key points are input in absolute
coordinate form and these key points are created at one end, i.e. rail seat end of the concrete sleeper and also at the centre of the
sleeper where the cross section varies. Further areas have been created by using key points at rails seat end cross section and also
at the centre of the sleeper. Adjoining areas have also been created between the two cross sections. These areas have been glued
using Boolean operations and then a volume is created using all the six areas. At this stage a half sleeper model is created. The
half sleeper volumetric model is then reflected about the centre cross sectional plane to create the other half of the sleeper and
both the volumes are added using Boolean operations. In this way, full sleeper volumetric model is built in ANSYS 12.1. The
Line model describing the cross section areas and also the sleeper volumetric model built are shown in Fig.4. and Fig.5.
respectively.

Fig.4. Line Model of the Sleeper


FE Model of the Sleeper:

Fig.5. Volumetric Model of the Sleeper

The volumetric model of the mono block sleeper built in ANSYS is then discretized with 20 node brick elements
of SOLID 186 element type to build the Finite element model of the sleeper for further analysis. A total of 2940 elements and
1647 nodes have been used in the FE mesh of the sleeper model. The FE model has been utilized for studies of influential
parameters and thereby a better knowledge concerning the structural behavior of a concrete sleeper is gained. The obtained FE
meshed model of the sleeper is shown in Fig.6.

Fig.6. FE Model of the Sleeper

Fig.7. Constrained Sleeper Model

STATIC ANALYSIS OF CONCRETE SLEEPERS


Boundary conditions:

In general, the sleeper is supported by ballast which acts as elastic cushion for the sleeper between the rails and
the embankment. Hence, for static stress analysis we consider that the sleeper is constrained at its bottom and also at its tapered
cross section ends. The surface areas of the sleeper which are covered in and around by the ballast are hence constrained in the
solution module of ANSYS.
Loading conditions:
The sleepers are rigidly fastened to the rails at a distance of 1.761 m which is called as broad gauge as per the
RDSO specifications for the Indian Railways. The locations on the top surface of the sleeper where the I- sectioned rails are
positioned are called as rail seats. As per RDSO specifications when the moving train travels, the operating conditions consider
that a single sleeper takes a load of 15000 kg, which is transferred by the rails on to the rail seat locations of the sleeper. Hence,
considering a hypothetical load case of a 15000 kg vertical load being transferred to a single sleeper, the identified nodes 823 and
824, which are the rail seat locations, are each given a load of 73575 N (half of 15000 kg x 9.81 m/s2). The constrained FE model of the
sleeper along with the loads applied is shown in Fig.7.
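The rail seat force quoted above follows from simple statics; the sketch below repeats that arithmetic, with the 9.81 m/s2
gravitational acceleration as the only assumed value.

    # Convert the 15000 kg load taken per sleeper into the vertical force
    # applied at each of the two rail seat nodes.
    sleeper_load_kg = 15000.0
    g = 9.81                                # m/s^2, assumed gravitational acceleration
    total_force_N = sleeper_load_kg * g     # 147150 N on the whole sleeper
    per_rail_seat_N = total_force_N / 2.0   # shared equally by the two rail seats
    print(f"total = {total_force_N:.0f} N, per rail seat = {per_rail_seat_N:.0f} N")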
Static Stress Analysis results:
On carrying out the static analysis, deflections and stresses are obtained for the sleeper model. It can be observed
that the vertical displacement is maximum at the rail seat with a value of 21.165 mm. The maximum equivalent stress value is
4839 N/mm2, which can be observed at the rail seat. Since the elastomeric pad is placed between the rail and the sleeper along with
the fasteners, the behaviour at the rail seat is tolerable. The stress and displacement plots are shown from Fig.8 to Fig.11.

Fig.8. Maximum Deflection at rail seat

Fig.9. Vertical Deflection of sleeper

Fig.10. Von-mises Stress

Fig.11. Maximum Von-mises Stress

DYNAMIC RESPONSE OF SLEEPER


For analyzing and understanding the dynamic response of the sleeper two different criteria are considered in the
present work. In the first criteria, the sleeper's response under its self weight without considering any constraints is analyzed and
its fundamental frequencies and mode shapes are obtained. This condition of sleeper is called free-free condition. In the second
criteria, the condition of the ballast supporting the sleeper at its bottom surface and adjoining side surfaces at the rail seat end is
considered similar to the boundary conditions considered in static analysis. This condition is called In-situ condition.
Modal Analysis of Sleeper in free-free condition:

Eigenfrequency modal analysis is performed on the sleeper in the free-free condition without considering any constraints on the sleeper. In ANSYS, the Block Lanczos method is used in the modal analysis for obtaining the first twenty natural frequencies and mode shapes of the sleeper model. The Block Lanczos method gives reliable results for symmetric structures. The lowest fundamental frequency obtained in this condition is 3.3871 Hz, with a maximum displacement of 0.122344 mm at the rail seat ends. The mode shapes obtained in the free-free condition of the sleeper are shown in Fig.12 and Fig.13.

Fig.12.Mode 1 of Sleeper in Free-Free criteria

Fig.13.Mode 2 of Sleeper in Free-Free criteria

Modal Analysis of Sleeper in In-situ condition:


Eigenfrequency modal analysis is performed on the sleeper in the in-situ condition by constraining the bottom surface and the adjacent surfaces of the sleeper. In ANSYS, the Block Lanczos method is used in the modal analysis for obtaining the first twenty natural frequencies and mode shapes. The lowest fundamental frequency obtained in this condition is 261.65 Hz, with a maximum displacement of 0.308046 mm near the centre of gravity of the top surface of the sleeper. The mode shapes obtained in the in-situ condition of the sleeper are shown in Fig.14 and Fig.15.
The natural frequencies obtained for the sleeper model in the free-free and in-situ conditions are presented in Table 1. It can be observed that in the in-situ condition, which is the practical condition for a sleeper in the railway track, the frequencies are substantially higher, which signifies that the ballast supports the sleeper effectively.

Fig.14. Mode 1 of Sleeper in In-Situ criteria


Fig.15. Mode 2 of Sleeper in In-Situ criteria
Table 1. Natural frequencies of sleeper in Free-Free and In-Situ Conditions

Mode No.   Free-Free Frequency (Hz)   In-Situ Frequency (Hz)
1          3.3871                     261.65
2          3.4861                     265.16
3          10.011                     266.90
4          10.092                     278.34
5          14.753                     283.68
6          18.676                     288.44
7          18.827                     288.65
8          20.418                     290.10
9          29.070                     299.36
10         29.554                     299.80
11         34.631                     309.17
12         41.046                     310.37
13         41.402                     311.96
14         43.078                     314.70
15         48.503                     322.85
16         54.384                     328.35
17         54.923                     330.52
18         64.194                     335.80
19         65.819                     344.71
20         68.121                     346.83
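To illustrate the eigenvalue problem behind the modal results in Table 1, the following sketch sets up a toy one-dimensional lumped finite element model and solves the generalized eigenproblem K*phi = omega^2*M*phi with and without elastic support. It is only an illustration of the procedure: the rod geometry, material values and spring stiffness are assumed for the example and do not reproduce the SOLID186 sleeper model or the frequencies reported above.

```python
import numpy as np
from scipy.linalg import eigh

# Toy axial rod discretised into n two-node elements (NOT the SOLID186 sleeper model);
# it only illustrates the generalized eigenproblem K*phi = omega^2 * M*phi behind a
# modal analysis and the upward frequency shift caused by elastic support.
n = 10                                     # number of elements (assumed)
E, A, rho, L = 30e9, 0.05, 2400.0, 2.6     # assumed concrete-like properties, SI units
le = L / n
k_e = (E * A / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])        # element stiffness
m_e = (rho * A * le / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])  # consistent element mass

K = np.zeros((n + 1, n + 1))
M = np.zeros((n + 1, n + 1))
for e in range(n):                         # assemble global stiffness and mass matrices
    K[e:e + 2, e:e + 2] += k_e
    M[e:e + 2, e:e + 2] += m_e

def natural_frequencies(K, M):
    """Solve K*phi = omega^2 * M*phi and return the frequencies in Hz."""
    eigvals, _ = eigh(K, M)
    eigvals = np.clip(eigvals, 0.0, None)  # suppress tiny negative round-off values
    return np.sqrt(eigvals) / (2.0 * np.pi)

print("free-free:", natural_frequencies(K, M)[:3])   # first mode ~0 Hz (rigid body mode)

K_supported = K + 1e8 * np.eye(n + 1)      # crude elastic 'ballast' spring at every node
print("supported:", natural_frequencies(K_supported, M)[:3])  # frequencies shift upward
```

As in Table 1, adding support stiffness to the free structure moves all natural frequencies upward.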

CONCLUSIONS
Analytical results from the established finite element model of a concrete sleeper showed the expected behavior under static load conditions.
The structural response up to the maximum load capacity of the sleeper, with yielding of the reinforcement, is well captured in the finite
element analysis. This analysis showed that the finite element program was well suited for modeling concrete sleepers. The hypothetical
load case considered in the present work shows that the rail seat end locations of the sleeper are affected. The finite element model of the
sleeper and the constraints in the form of underlying ballast included showed fairly good agreement with the global track model for
downward displacement of the rail from its original position. The sleeper model revealed a complex interaction between the sleeper and
the underlying ballast, and showed that impacts occur between the sleeper and the underlying ballast as the wheel of the train passes the
sleeper.
Vibration characteristics of railway concrete sleepers are crucial for the development of a realistic dynamic model of railway track
as well as the concrete sleeper itself, which are capable of predicting its dynamic responses. The results of the simulated modal analysis
for prestressed concrete sleepers under different boundary conditions are presented in this work. Two types of conditions are considered,
and it is observed that the natural frequencies in the free-free condition are much lower than those in the in-situ condition, which includes the ballast support. It is found that the resonant frequencies associated with the lower modes of vibration of prestressed concrete sleepers are considerably affected by the support boundary conditions. The dominant effect of the ballast support is on
the modal damping in the ballast-sleeper interaction. In addition, the mode shapes, which can indicate the deteriorated state of concrete
sleepers, were affected by the ballast conditions. In summary, the in-situ condition had a remarkable influence on the natural frequency,
modal damping, and vibration mode shape of prestressed concrete sleepers, especially in the low frequency range and flexural
deformation.


A Survey on Data Mining Methods for Malware Detection


Ms. Shital Balkrishna Kuber
ME (II), Computer,
Vidya Pratishthans College Of Engg,
Baramati, Maharashtra, India
kuber182@gmail.com

Abstract: Malware is any type of malicious software that has the capability to enter a system without the authorization of the user. Thus malware detection is an important issue in computer security. Signature-based detection is the most popular method to detect malware attacks, but the main drawback of this method is that it cannot detect zero-day attacks. The database needs to be updated regularly, and human experts are needed to create new signatures. The drawbacks of signature-based malware detection are minimized by using the heuristic method, which is able to detect zero-day attacks. There are various methods used to detect malware, like the n-gram method, the finite state automaton method, the control flow graph method, n-gram analysis at byte level, etc. These methods have various advantages and disadvantages. This study describes the various methods used to detect malware.

Keywords: malware, n-gram, opcode, signature-based detection, anomaly-based detection, specification-based detection, disassembly process.

INTRODUCTION

Malware is a common term for any malicious software which enters a system without the authorization of the user. Modern communication infrastructures are highly susceptible to many types of malware attacks. Malicious attacks cause severe damage to private users, governmental organizations and commercial companies. The explosion in high-speed internet connections facilitates malware propagating and infecting computer systems very rapidly. Once malware finds its way into the system, it scans the system and finds out the vulnerabilities of the operating system. It then performs unwanted actions on the system, finally slowing down the overall performance of the system. Every year the number of malware samples increases at an alarming rate. Therefore malware detection has become a vital issue in today's computer systems.
Malware detection techniques
A malware detection technique is a technique used to detect or identify malware. Generally, malware detection techniques can be categorized into three types: signature-based, anomaly-based and specification-based detection.
Signature-based malware detection
It maintains a database of signatures and detects malware by comparing patterns against the database. It requires only a modest amount of system resources to detect malware, and this technique can detect known malware accurately. The disadvantage of this technique is that it is not effective against zero-day attacks, so it cannot detect new, unknown malware for which no signature is available. Data mining and machine learning techniques are used to overcome this limitation of signature-based detection.
Most of the antivirus tools are based on signature based detection techniques. These signatures are created by examining the
disassembled code of malware binary files. Various disassemblers and debuggers like IDA Pro, ollydbg, WinDb32 are available which
help in disassembling the executables. The disassembled code is analyzed and features are extracted. These features are used in constructing the signature of a particular malware family.
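As a rough illustration of how such signatures can be matched, the sketch below scans a file's raw bytes against a small dictionary of byte patterns. The signature values and file name are made up for the example; real antivirus engines use far more elaborate signature formats and scanning engines.

```python
# Illustrative byte-signature scanner; the signature byte patterns and file name are
# made up for the example and do not correspond to any real malware family.
SIGNATURES = {
    "Example.FamilyA": bytes.fromhex("deadbeef90909090"),
    "Example.FamilyB": bytes.fromhex("6a00e8aabbccdd"),
}

def scan_file(path):
    """Return the names of all signatures whose byte pattern occurs in the file."""
    with open(path, "rb") as fh:
        data = fh.read()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

# matches = scan_file("sample.exe")   # hypothetical executable to be scanned
```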
Heuristic-based malware detection
It is also called anomaly-based detection. Here the main goal is to analyze the behavior of known or unknown malware. Behavioral parameters include various factors such as the source/destination addresses of malware, different types of attachments and other measurable statistical features. It usually operates in two phases:
1. Training (learning) phase
2. Detection (monitoring) phase.
During the training phase, the behavior of the system is observed in the absence of attacks and a machine learning technique is used to create a profile of such normal behavior. In the detection phase, this profile is compared against the current behavior, and deviations are
flagged as potential attacks. A key advantage of anomaly-based detection is its ability to detect zero-day attacks. Zero-day attacks are attacks that are previously unknown to the malware detector.
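The two-phase idea can be sketched as follows, where a profile of normal behaviour is learned from attack-free observations and later observations are flagged when they deviate from it. The chosen feature (system calls per second) and the three-sigma rule are illustrative assumptions, not the specific techniques surveyed here.

```python
import numpy as np

# Illustration of the two phases: learn a profile of normal behaviour, then flag
# deviations. The feature (system calls per second) and the 3-sigma rule are assumptions.

def train_profile(normal_observations):
    """Training (learning) phase: summarise attack-free behaviour."""
    x = np.asarray(normal_observations, dtype=float)
    return {"mean": x.mean(), "std": x.std() + 1e-9}

def is_anomalous(observation, profile, k=3.0):
    """Detection (monitoring) phase: flag large deviations from the profile."""
    return abs(observation - profile["mean"]) > k * profile["std"]

profile = train_profile([120, 115, 130, 118, 125, 122])   # observed without attacks
print(is_anomalous(123, profile))   # False: close to the learned normal behaviour
print(is_anomalous(640, profile))   # True: large deviation, flagged as a potential attack
```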
Specification based detection
Specification-based detection is a derivative of anomaly-based detection that tries to defeat the typically high false alarm rate associated with anomaly-based detection. Specification-based detection relies on program specifications that describe the intended behavior of security-critical programs. It monitors program executions and detects deviations of their behavior from the specification, rather than detecting the occurrence of specific attack patterns.
This technique is similar to anomaly detection in that attacks are detected as deviations from normal behavior. The difference is that instead of relying on machine learning techniques, it is based on manually developed specifications that capture legitimate system behavior. It can be used to monitor network components or network services that are relevant to security, such as the Domain Name Service, network file sharing and routers.

LITERATURE SURVEY
D. Bilar [1] investigated opcode frequency distributions as a means to identify and differentiate malware. He discusses a malware detection mechanism based on statistical analysis of the opcode distribution. His results show that the most frequently occurring opcodes, like mov, push, call, etc., are not a good indicator of malware, while less frequently occurring opcodes, like add, sub, ja, adc, etc., are a good indicator of malware.
Advantages
1. His Technique gives a preliminary assessment of its usefulness for malware detection.
2. This Technique gives better accuracy for differentiation of modern (polymorphic and metamorphic) malware.
Disadvantage
In this technique, the dynamic approach is not taken into consideration.
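The kind of opcode frequency comparison described above can be sketched as follows; the opcode lists stand in for real disassembler output and the set of "rare" opcodes follows the examples mentioned in the text.

```python
from collections import Counter

# Compare the share of rarely used opcodes in two opcode streams. The opcode lists are
# made-up stand-ins for disassembler output; the 'rare' set follows the examples above.
benign_ops  = ["mov", "push", "call", "mov", "add", "pop", "ret", "mov", "push"]
malware_ops = ["mov", "push", "adc", "ja", "sub", "call", "adc", "mov", "ja"]

def opcode_frequencies(opcodes):
    total = len(opcodes)
    return {op: count / total for op, count in Counter(opcodes).items()}

rare_ops = {"add", "sub", "ja", "adc"}          # less recurrently occurring opcodes
for name, ops in (("benign", benign_ops), ("malware", malware_ops)):
    freqs = opcode_frequencies(ops)
    rare_share = sum(freqs.get(op, 0.0) for op in rare_ops)
    print(f"{name}: share of rare opcodes = {rare_share:.2f}")
```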
D. Bilar [2] analyzes the call graph structure of 120 malware and 200 non-malicious executable files. He treats each executable file as a graph of graphs. This follows the intuition that in any procedural language the source code is structured into functions; each function can be viewed as a flowchart, i.e. a directed graph. These functions call each other, thus creating a larger graph where each node is a function and the edges are calls-to relations between the functions. This larger graph is called the callgraph. The structure of the callgraph is recovered by disassembling the executable into individual instructions. He distinguishes
between short and far branch instructions: Short branches do not save a return address while far branches do. Intuitively, short
branches are generally used to pass control around within one function of the program, while far branches are used to call other
functions. He statically generates the CFG of benign and malicious code.
He compared the basic block count for benign and malicious code. Bilar concluded that malware tends to have a lower basic block count. The CFGs of malicious files have less interaction, fewer branches and limited functionality. On the other hand, benign files tend to have a higher block count with complex interaction.
Advantage
He proposed the new approach i.e. CFG construction for detecting the malware.
Disadvantage
There is a space overhead for storing the information of CFG.
R. Sekar et al. [3] implemented a Finite State Automaton (FSA) approach. FSA learning can be computationally costly, and the space usage of the FSA may be excessive. They present a new approach in this paper that overcomes these difficulties. Their approach builds a compact FSA in a fully automatic and efficient manner, without requiring access to the source code of programs. They compared the FSA approach with the n-gram analysis method.
Advantages
1. They found that the false positive rate of the FSA algorithm is lower than that of the n-gram approach.
2. The space and runtime overhead of FSA learning is minimal.
3. FSA approach can detect a wide range of malware attacks.
4. The training periods needed for FSA based approach are shorter.
5. The FSA technique can capture both short-term and long-term temporal relationships among system calls, and thus perform more precise detection.
6. The FSA uses only a constant time per system call during the learning as well as detection period. This factor leads to low
overheads for intrusion detection.


Disadvantage
FSA algorithm does not preserve the order of system calls made from libraries.
Wei-jen Li et al. [4] describe N-gram (N=1) analysis, at byte level, to compose models derived from learning the file types
that the system intends to handle. Li et al. perform an N-gram analysis at byte level (N=1) on PDF files with embedded malware.
Advantages
1. This technique proved an effective technique for detecting malicious PDF files.
2. This technique detects the malware embedded at the beginning or end of a file.

Disadvantages
1. This technique fails to detect malware embedded in the middle of the file.
2. Li et al. focused on n-gram analysis only with n=1, but did not perform analysis for n=2, n=3 and so on.
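A byte-level n-gram feature extractor of this kind can be sketched in a few lines; here n is kept as a parameter, whereas Li et al. used n=1, and the sample bytes are a made-up stand-in for a real PDF file.

```python
from collections import Counter

# Byte-level n-gram extraction; n is kept as a parameter (Li et al. used n=1) and the
# sample bytes below are a made-up stand-in for a real PDF file.

def byte_ngrams(data: bytes, n: int = 1) -> Counter:
    """Count overlapping n-grams of bytes in the given data."""
    return Counter(data[i:i + n] for i in range(len(data) - n + 1))

sample = b"%PDF-1.4 ... embedded content ..."
print(byte_ngrams(sample, n=1).most_common(3))   # 1-gram (byte value) distribution
print(byte_ngrams(sample, n=2).most_common(3))   # 2-gram distribution
```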
Santos et al. [5] demonstrated an n-gram-signature-based approach to detecting unknown malware. They found that for n=2 the detection rate is low, while for n=4 the detection rate is maximum. In this paper they use a new methodology for malware detection based on the use of n-grams for file signature creation. They tackle the issue of false positives using a parameter named d to control how strictly the method classifies an instance as malware or benign software in order to avoid false positives.
Santos et. al [6] proposed the use of a single-class learning method for unknown malware detection based on opcode
sequences. This method is based on examining the frequencies of the appearance of opcode sequences to build a machine-learning
classifier using only one set of labeled instances within a specific class of either malware or genuine software. They performed an
empirical study that shows that this method can reduce the effort of labeling software while maintaining high accuracy.
Advantage
Single-class learning needs only instances that belong to a specific class to be labeled. Therefore, the single-class learning method can reduce the cost of unknown malware detection.
Disadvantage
This method is not able to detect packed executables.
Shabtai et al. [7] used static analysis to study the effectiveness of malware detection. For this purpose they used different n-gram sizes (N=1 to 6) with various classifiers. Shabtai's findings showed that N=2 performed best. To detect unknown malicious code they used opcode n-gram patterns together with a feature extraction technique, a feature selection method and a learning algorithm. They use OpCode n-gram patterns generated by disassembling the executable files of both benign and malware samples.
Advantages
1. They found that the malware detection rate is high for n=2, while the previous study by Santos et al. found that for n=4 the detection rate is high. This is a new investigation regarding the n-gram size.
2. The class imbalance problem is taken into consideration.
Disadvantages
1. Generally, in the textual domain, TFIDF is a more successful representation for retrieval and categorization purposes, but they found that the TFIDF representation introduces additional computational challenges in the maintenance of the collection.
2. The TFIDF representation has no added value over the TF representation.

CONCLUSION
Malware has a long history of evolutionary development as the war between anti-malware researchers and malware writers has progressed. This study presents the different malware detection methods in data mining. The survey focuses on various approaches used in current antivirus systems, like signature-based malware detection and heuristic-based malware detection. This study also describes the different methods used to detect malware, like n-gram analysis at byte level, the single-class learning method and the Finite State Automaton (FSA) method.

REFERENCES:
[1] D. Bilar, "Opcodes as predictor for malware," Int. J. Electron. Security Digital Forensics, vol. 1, no. 2, pp. 156-168, 2007.

[2] D. Bilar, Callgraph properties of executables and generative mechanisms, AI Commun., Special Issue on Network Anal. in
Natural Sci.and Eng., vol. 20, no. 4, pp. 231-243, 2007.
[3] R. Sekar, M. Bendre, D. Bollineni, and Bollineni, R. Needham and M.Abadi, Eds., A fast automaton-based method for detecting
anomalous program behaviors, in Proc. 2001 IEEE Symp. Security and Privacy, IEEE Comput. Soc., Los Alamitos, CA, USA, 2001,
pp. 144-155.
[4] Wei-Jen Li, W. L. K. Wang, S. Stolfo, and B. Herzog, Fileprints: Identifying file types by n-gram analysis, in Proc. 6th IEEE
Inform. Assurance Workshop, Jun. 2005, pp. 64-71.
[5] I. Santos, F. Brezo, J. Nieves, Y. K. Penya, B. Sanz, C. Laorden, and P. G. Bringas Opcode-sequence-based malware detection,
in Proc.2nd Int. Symp. Eng. Secure Software and Syst. (ESSoS), Pisa, Italy, Feb.3-4, 2010, vol. LNCS 5965, pp. 35-43.
[6] I. Santos, F. Brezo, B. Sanz, C. Laorden, and Y. P. G. Bringas, Using opcode sequences in single-class learning to detect
unknown malware," IET Inform. Security, vol. 5, no. 4, pp. 220-227, 2011.
[7] A. Shabtai, R. Moskovitch, C. Feher, S. Dolev, and Y. Elovici, Detecting unknown malicious code by applying classification
techniques on opcode patterns, Security Informatics, vol. 1, pp. 1-22, 2012.
[8] Vinod P., V. Laxmi, M. S. Gaur , Survey on Malware Detection Methods.
[9] R. Moskovitch, C. Feher, N. Tzachar, E. Berger,M.Gitelman, S.Dolev,and Y. Elovici, Unknown malcode detection using opcode
representation, in Proc. 1st Eur Conf. Intell. and Security Informatics (EuroISI08), 2008, pp. 204-215


Study of Infrequent Itemset Mining Techniques


Ms. Kalyani Tukaram Bhandwalkar, Ms. Mansi Bhonsle
ME (II), Computer,
G. H. Raisoni College Of Engg,
Ahmednagar, Maharashtra, India
kalyanibhandwalkar@gmail.com

Abstract: Pattern mining has become an important task in data mining. Frequent itemsets find application in a number of real-life contexts. Most of the past work has been on finding frequent itemsets, but infrequent itemset mining has also demonstrated its utility in web mining, bioinformatics, fraud detection and other fields. The infrequent itemset mining problem is that of discovering itemsets whose frequency of occurrence in the analyzed data is less than or equal to a maximum threshold. This paper provides a general overview of the different papers related to infrequent patterns and presents the different algorithms proposed for mining infrequent patterns, which form a basis for future research in the field of pattern mining.

Keywords: Frequent Itemsets, Infrequent Itemsets, Patterns, Data Mining, FP-growth Algorithm, Apriori Algorithm, Pattern Mining
INTRODUCTION

Data Mining extracts knowledge from large databases to discover existing and newer patterns. Data mining is the technique of
automatic finding of hidden valuable patterns and relationships from huge volume of data stored in databases in order to help make
better business decisions. Discovering useful patterns hidden in a database plays an essential role in several data mining tasks.
Frequent itemsets find application in a number of real-life contexts. Buying a PC first, then a digital camera, and then a memory card, if it occurs frequently in a shopping history database, is a frequent pattern. Market basket analysis is one of the applications of frequent itemset mining. It analyses customer buying habits by finding associations between the different items that customers place in their shopping baskets. For instance, if customers are buying soap, how likely are they to also buy washing powder on the same trip to the supermarket? Such information can lead to increased sales by helping retailers do selective marketing and arrange their shelf space. Most of the past work has been on finding frequent itemsets, but infrequent itemset mining has also demonstrated its utility in different areas.
Patterns that are rarely found in a database are often considered to be uninteresting and are eliminated using the support measure. Such patterns are known as infrequent patterns. An infrequent pattern is an itemset or a rule whose support is less than the minsup threshold. Infrequent patterns are likely to be of great interest as they relate to rare but crucial cases. Examples of applications of mining rare itemsets include identifying relatively rare diseases, predicting equipment failure, and finding associations between infrequently purchased items. In the market basket domain, indirect associations can be used to find competing items, such as desktop computers and laptops; people who buy desktop computers usually will not buy laptops. Infrequent patterns can also be used to detect errors. For example, if {Fire = Yes} is frequent but {Fire = Yes, Alarm = On} is infrequent, then the alarm system is probably faulty. Also, in the study of finding a better treatment approach for a particular disease, researchers would like to spend more time on studying an abnormal case rather than reading the millions of records of healthy people. To detect such unusual situations, the expected support of a pattern must be determined, so that, if a pattern turns out to have a considerably lower support than expected, it is declared an interesting infrequent pattern.
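The support computation that underlies these definitions can be sketched as follows; the transactions and the maximum support threshold are invented for the example, and itemsets whose support does not exceed the threshold are reported as infrequent.

```python
from itertools import combinations

# Compute the support of small itemsets and report those whose support does not exceed
# a maximum threshold (infrequent itemsets). Transactions and threshold are invented.
transactions = [
    {"soap", "washing powder", "bread"},
    {"soap", "bread"},
    {"soap", "washing powder"},
    {"bread", "milk"},
]

def support(itemset, transactions):
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

items = sorted(set().union(*transactions))
max_support = 0.25                      # maximum threshold for calling an itemset infrequent
for size in (1, 2):
    for combo in combinations(items, size):
        s = support(set(combo), transactions)
        if s <= max_support:
            print(combo, s)             # e.g. ('milk',) 0.25
```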

LITERATURE SURVEY
R. Agrawal et al. in [28] proposed the Apriori algorithm, which is used to obtain frequent itemsets from the database. The itemsets which appear frequently in the transactions are called frequent itemsets. MINIT (MINimal Infrequent Itemsets) is the first algorithm designed specifically for mining minimal infrequent itemsets (MIIs) [2]. A minimal infrequent itemset is an infrequent itemset that does not have a subset of items which forms an infrequent itemset. MINIT supports both minimal and non-minimal (unweighted) infrequent itemset mining from unweighted data. It is based on the SUDA2 algorithm. The authors also showed that the minimal infrequent itemset problem is NP-complete.

The authors in [10] proposed a way to find the weights of items and the weights of transactions without pre-assigned weights in the database. The HITS model, which is based on link ranking analysis, is used to derive the weights of transactions from a database with only binary attributes. Based on these weights, w-support is defined to give the significance of itemsets; it differs from the traditional support in taking the quality of transactions into consideration. An Apriori-like algorithm is proposed to extract association rules whose w-support and w-confidence are above given thresholds.
Mehdi Adda et al. described the ARANIM algorithm (Apriori Rare and Non-present Itemset Mining) to mine rare and non-present itemsets in [7]. Non-present items are used to detect what is missing in a defective process. The proposed approach is Apriori-like, and the mining idea behind it is that if the itemset lattice representing the itemset space in classical Apriori approaches is traversed in a bottom-up manner, properties equivalent to the Apriori exploration of frequent itemsets are available to discover rare itemsets [7]. The authors also proposed an approach based on rare patterns to detect suspicious uses and behaviors in the context of a Web application.
[1] is used to find infrequent patterns and non-present patterns, but it does not consider any pruning strategy. It would be better to implement a pruning strategy to improve the complexity of the proposed method. Paper [22] proposed the Talky-G and Walky-G algorithms, which use a depth-first strategy for traversal. The IFP_min algorithm described in [3] uses the concept of a residual tree to reduce computation time. The IFP_min algorithm recursively mines the minimally infrequent itemsets (MIIs) by dividing the IFP-tree into two sub-trees: the projected tree and the residual tree [3]. Pattern-growth based algorithms are computationally faster on dense datasets.
Apriori-rare is a modification of the Apriori algorithm, which is used to mine frequent itemsets. To retrieve all rare itemsets from the minimal rare itemsets (mRIs), a prototype algorithm called A Rare Itemset Miner Algorithm (Arima) was proposed in [8]. Arima generates the set of all rare itemsets, split into two sets: the set of rare itemsets having zero support and the set of rare itemsets with non-zero support. If an itemset is rare, then any extension of that itemset will result in a rare itemset [8].
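The notion of a minimal rare itemset used by these algorithms can be illustrated with a naive level-wise search: an itemset is reported when it is infrequent while all of its proper subsets are frequent. This is only a sketch of the definition on a toy dataset, not the Arima or MINIT implementations.

```python
from itertools import combinations

# Level-wise illustration of minimal rare itemsets: an itemset is reported when its
# support is below minsup while every proper subset is frequent. Toy data, not Arima/MINIT.
transactions = [
    {"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c", "d"},
]
minsup = 2                                  # absolute support threshold (assumed)

def support(itemset):
    return sum(1 for t in transactions if itemset <= t)

items = sorted(set().union(*transactions))
minimal_rare = []
for size in range(1, len(items) + 1):
    for combo in combinations(items, size):
        itemset = set(combo)
        if support(itemset) < minsup and all(
            support(itemset - {x}) >= minsup for x in itemset if size > 1
        ):
            minimal_rare.append(combo)
print(minimal_rare)                          # [('d',)] for this toy dataset
```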
Paper [24] presented the WSFI (Weighted Support Frequent Itemsets) algorithm, where the user can specify the weight for each item. A WSFP-Tree, which is an extended FP-tree, stores compressed important information about frequent patterns. It mines the frequent itemsets in only one scan of the data stream.
In [4], FP-Growth-like algorithms, i.e. IWI and MIWI, are proposed for discovering infrequent itemsets by using weights to differentiate between relevant and irrelevant items within each transaction. The IWI-support measure is defined as a weighted frequency of occurrence of an itemset in the analyzed data. Occurrence weights are derived from the weights associated with the items in each transaction by applying a given cost function. The work mainly focuses on the IWI-support-min measure and the IWI-support-max measure, which are described as: (i) the IWI-support-min measure, which relies on a minimum cost function, i.e., the occurrence of an itemset in a given transaction is weighted by the weight of its least interesting item; (ii) the IWI-support-max measure, which relies on a maximum cost function, i.e., the occurrence of an itemset in a given transaction is weighted by the weight of its most interesting item. As per the analysis in [5], MIWI is the most effective among the existing algorithms: it requires very little computing time, improves performance when the database is large, and handles weighted transactions.
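The IWI-support-min and IWI-support-max measures can be sketched directly from their definitions: each occurrence of an itemset in a transaction contributes the minimum (or maximum) weight of its items in that transaction. The transaction weights below are invented for the example.

```python
# Sketch of the IWI-support-min and IWI-support-max measures: each occurrence of an
# itemset in a transaction contributes the minimum (or maximum) weight of its items.
# The transactions and item weights below are invented for the example.
transactions = [
    {"a": 0.9, "b": 0.4, "c": 0.1},
    {"a": 0.8, "b": 0.7},
    {"b": 0.3, "c": 0.6},
]

def iwi_support(itemset, transactions, cost=min):
    total = 0.0
    for t in transactions:
        if all(item in t for item in itemset):
            total += cost(t[item] for item in itemset)
    return total

print(iwi_support({"a", "b"}, transactions, cost=min))   # IWI-support-min = 0.4 + 0.7
print(iwi_support({"a", "b"}, transactions, cost=max))   # IWI-support-max = 0.9 + 0.8
```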

CONCLUSION
Frequent itemset mining has attracted plenty of attention, but much less attention has been given to mining infrequent itemsets. This paper surveys different research papers that proposed various algorithms which form a basis for future research in the field
of pattern mining. This paper explains different application areas where the infrequent patterns are used. Identifying infrequent
patterns efficiently from large datasets and the interesting patterns from the discovered patterns are the challenging tasks in the field of
infrequent itemset mining.

REFERENCES:

[1] SujathaKamepalli,RajasekharaRaoKurra and Sundara Krishna Y. K.,Apriori Based: Mining Infrequent and Non-Present
Item Sets from Transactional Data Bases, International Journal of Electrical & Computer Science IJECS-IJENS Vol:14
No:03

[2] Haglin D. and Manning A. (2007). On Minimal Infrequent Itemset Mining. In 2007 International Conference on Data
Mining (DMIN'07), Las Vegas, June 25-28, 2007, pp. 141- 147.

[3] A. Gupta, A. Mittal, and A. Bhattacharya, Minimally Infrequent Itemset Mining Using Pattern-Growth Paradigm and
Residual Trees, Proc. Intl Conf. Management of Data (COMAD), pp. 57-68, 2011.

[4] Luca Cagliero and Paolo Garza Infrequent Weighted Itemset Mining using Frequent Pattern Growth, IEEE Transactions
on Knowledge and Data Engineering, pp. 1- 14, 2013.

[5] SakthiNathiarasan, Kalaiyarasi, Manikandan , Literature Review on Infrequent Itemset Mining Algorithms International
Journal of Advanced Research in Computer and Communication Engineering Vol. 3, Issue 8, August 2014

[6] INFREQUENT PATTERNS, CHAPTER- 5 Introduction to data mining, Pang-Ning Tan, Michael Steinbach, Vipin Kumar
Pearson Education, book.

[7] Mehdi Adda, Lei Wu, Sharon White(2012), Yi Feng Pattern detection with rare item-set mining International Journal on
Soft Computing, Artificial Intelligence and Applications (IJSCAI), Vol.1, No.1, August 2012.

[8] L.Szathmary,A.Napoli, P.Valtchev, Towards rare itemset mining ,in: Proceedings of the 19th IEEE Interational Conference
on Tools with Artificial Intelligence , 2007, Volume-1, pp.305-312
[9] JyothiPillai, O.P.Vyas, Overview of Itemset Utility Mining and its Applications International Journal of Computer
Applications (0975-8887), Volume 5, No. 11, August 2010.
[10] K.Sun and F.Bai,"Mining Weighted Association Rules Without Preassigned Weights", IEEE Transactions on Knowledge
and Data Engineering, Vol. 20, No. 4, April 2008, pages 489-495
[11] B.Barber, H.J Hamilton , Extracting share frequency itemsets with infrequent subsets, Data Mining and Knowledge
Discovery, 7(2) (2003), pp. 153-185.
[12] V. Podpecan, N. Lavrac, I. Kononenko, "A fast algorithm for mining utility-frequent itemsets," in Workshop on Constraint-Based Mining and Learning at ECML/PKDD, 2007, pp. 9-20.
[13] G. Cong, A.K.H. Tung, X. Xu, F. Pan, and J. Yang, Farmer: Finding Interesting Rule Groups in Microarray Datasets,
Proc. ACM SIGMOD Intl Conf. Management of Data (SIGMOD 04), 2004.
[14] W. Wang, J. Yang, and P.S. Yu, Efficient Mining of Weighted Association Rules (WAR), Proc. Sixth ACM SIGKDD
Intl Conf. Knowledge Discovery and data Mining (KDD 00), pp. 270-274, 2000.
[15] F. Tao, F. Murtagh, and M. Farid, Weighted Association Rule Mining Using Weighted Support and Significance
Framework," Proc. Ninth ACM SIGKDD Int'l Conf. Knowledge Discovery and Data Mining (KDD '03), pp. 661-666,
2003.
[16] JyothiPillai and O.P.Vyas ,TRANSACTION PROFITABILITY USING HURI ALGORITHM [TPHURI], International
Journal of Business Information Systems Strategies (IJBISS) Volume 2, Number 1,November 2013
[17] Barber B., Hamilton, H. J. Extracting share frequent itemsets with infrequent subsets, Data Mining and Knowledge
Discovery, 7(2) (2003), pp 153-185.
[18] J. Yang and J. Logan. A data mining and survey study on diseases associated with paraesophageal hernia. In AMIA Annual
Symposium Proceedings, pages 829833, 2006.
[19] Pang-Ning Tan, Michael Steinbach, Vipin Kumar Introduction to data mining, Pearson Education, book.
[20] Jiawei Han Hong Cheng Dong Xin Xifeng Yan. Frequent pattern mining: current status and future directions.
[21] Ling Zhou, Stephen Yau, Efficient association rule mining among both frequent and infrequent items Computers and
Mathematics with Applications, 54 (2007), pp. 737-749.
[22] Laszlo Szathmary, Petko Valtchev, Amedeo Napoli, and Robert Godin, Efficient Vertical Mining of Minimal Rare Itemsets, ISBN 978-84-695-5252-0, pp. 269-280, 2012.
[23] Diti Gupta and Abhishek Singh Chauhan Mining Association Rules from Infrequent Itemsets: A Survey, International
Journal of Innovative Research in Science, Engineering and Technology, ISSN: 2319-8753, Vol. 2, Issue 10, pp. 5801-5808, 2013.
[24] Younghee Kim, Wonyoung Kim and Ungmo Kim Mining Frequent Itemsets with Normalized Weight in Continuous Data
Streams, Journal of Information Processing Systems, Vol.6, No.1, March 2010
[25] Laszlo Szathmary, PetkoValtchev, Amedeo Napoli, Finding Minimal Rare Itemsets and Rare Association Rules,
Proceedings of the 4th International Conference on Knowledge Science, Engineering and Management (KSEM 2010), 6291 (2010), pp. 16-27.
[26] JyothiPillai, O.P.Vyas, CSHURI Modified HURI algorithm for Customer Segmentation and Transaction Profitability,
International Journal of Computer Science, Engineering and Information Technology (IJCSEIT), Vol.2, No.2, April 2012, pp
79-89.
[27] J. Han, J. Pei, and Y. Yin, Mining frequent patterns without candidate generation, Procedings of ACM SIGMOD
International Conference on Management of Data, ACM Press, Dallas, Texas, pp. 1-12, May 2000.

[28] R. Agrawal, T. Imielinski, and A. Swami, Mining association rules between sets of items in large databases, in
Proceedings of the 1993 International Conference on Management of Data (SIGMOD '93), May 1993, pp. 207-216.

[29] en.wikipedia.org/wiki/Data_mining
[30] en.wikipedia.org/wiki/Association_rule_learning


Autoconfiguration of OCTANE Nodes in MANET to Reduce flooding


Dipika M. Pimpre
Department of Computer Science
University, Amravati
Prof. S.P. Mankar
Department of Information Technology., University
Amravati

Abstract: MANET is an infrastructure-less network, and routing protocols in MANET are not designed specifically with the dynamic, self-starting behavior required for wireless networks. Each node in a MANET acts as both a forwarding and a receiving node. The performance of most protocols is not encouraging in a highly dynamic interconnection topology. Most routing protocols in MANETs rely on a flooding mechanism to broadcast data and control packets over the entire network for establishing routes between source-destination pairs. However, the basic nature and characteristics of the flooding mechanism cause a large number of packets to propagate in MANETs. This eventually overloads the network and congests traffic, affecting the overall performance of the network. So, there is a need for a protocol which not only reduces the flooding but also reduces the routing overhead. In this paper, a reliable broadcast approach for MANET is proposed, which improves the transmission rate. This work proposes a methodology where certain nodes are assumed to be high energy transmission nodes, known as Octane Routing Nodes (ORN) / High Power Routing (HPR) nodes, or Tower Nodes (TN), which are utilized for routing. The route is established only through these nodes, which are capable of communicating over long distances. Since the proposed approach reduces the flooding, we have considered the functionality of the proposed approach with AODV variants. The effect of network density on the overhead and collision rate is considered for performance evaluation. The performance is compared with the AODV variants, and it is found that the proposed approach outperforms all the variants.

Keywords:

Ad hoc networks, computer network management, rebroadcast protocol, broadcast storm problem, Network Simulator (Version 2), cluster, network connectivity

I. INTRODUCTION
In recent trends, with the high mobility of nodes in mobile ad hoc networks (MANETs), there exist link breakages which lead to frequent path failures and route discoveries. This type of serious failure cannot be neglected. In route discovery, broadcasting is a fundamental and effective data dissemination mechanism, where a mobile node blindly rebroadcasts the first received route request packet unless it has a route to the destination, and thus it causes the broadcast storm problem. In this work we propose a neighbor coverage-based probabilistic rebroadcast protocol for reducing routing overhead in MANETs. In order to effectively exploit the neighbor coverage knowledge, we propose a novel rebroadcast delay to determine the rebroadcast order, and then we can obtain a more accurate additional coverage ratio by sensing the neighbor coverage knowledge. We also define a connectivity factor to provide node density adaptation. By combining the additional coverage ratio and the connectivity factor, we set a reasonable rebroadcast probability. Our approach combines the advantages of the neighbor coverage knowledge and the probabilistic mechanism, which can significantly decrease the number of retransmissions so as to reduce the routing overhead, and can also improve the routing performance. Autonomous addressing protocols require a distributed and self-managed mechanism to avoid address collisions in a dynamic network with fading channels, frequent partitions,
and joining/leaving nodes. The software used is Network Simulator (Version 2), widely known as NS2, an event-driven simulation tool that has proved useful in studying the dynamic nature of communication networks. Simulation of wired as well as wireless network functions and protocols (e.g., routing algorithms, TCP, UDP) can be done using NS2. In general, NS2 provides users with a way of specifying such network protocols and simulating their corresponding behaviours. Due to its flexibility and modular nature, NS2 has gained constant popularity in the networking research community since its birth in 1989. Ever since, several revolutions and revisions have marked the growing maturity of the tool, thanks to substantial contributions from the players in the field.
Since 1995 the Defence Advanced Research Projects Agency (DARPA) supported development of NS through the Virtual Inter Network Testbed (VINT) project [1][2]. We use NS2 for this project. NS2 is a discrete event simulator
written in C++, with an OTcl interpreter shell as the user interface that allows the input model files (Tcl scripts) to be
executed. Most network elements in NS2 simulator are developed as classes, in object-oriented fashion. The simulator
supports a class hierarchy in C++, and a very similar class hierarchy in OTcl. The root of this class hierarchy is the
TclObject in OTcl. Users create new simulator objects through the OTcl interpreter, and then these objects are mirrored
by corresponding objects in the class hierarchy in C++. NS2 provides substantial support for simulation of TCP, routing
algorithms, queuing algorithms, and multicast protocols over wired and wireless (local and satellite) networks, etc. It is
freely distributed, and all source code is available. In clustering procedure, a representative of each sub domain(cluster) is
elected as a cluster head (CH) and a node which serves as intermediate for inter-cluster communication is called
gateway. Remaining members are called ordinary nodes. The boundaries of a cluster are defined by the transmission area
of its CH.
II. BACKGROUND

In recent years, wireless technologies and applications have received a lot of attention [Sheu2002]. Owing to rapidly
emerging wireless computing, users can communicate with each other through network connectivity without being
tethered off of a wired network [Crow1997]. An ad hoc wireless local area network (WLAN) consists of a group of
mobile hosts creating a temporary network without the help of any pre-existing infrastructure or centralized administration
[Sheu2002, Johnson1994, Frodigh2000, Perkins2000]. Since the nodes in an ad hoc network are able to serve as routers
and hosts, they can forward packets on behalf of other nodes and run user applications [Frodigh2000]. That is, when two
nodes are within the wireless transmission range of each other, they can communicate with each other directly. However,
if they are out of their transmission range to each other, intermediate nodes can forward a packet for the two nodes to
communicate with. Since ad hoc networks do not need any help of centralized infrastructure, wireless applications are
becoming more and more popular where wired networking is not available or not economically feasible [Sheu2002]. In
general, a wireless ad hoc network that consists of mobile nodes and communicates over radio without the aid of any
infrastructure is popularly called MANET (Mobile Ad-hoc NETwork) [Gunes2002].
Geographic ad hoc networks using position-based routing are targeted to handle large networks containing many nodes.
Position-based routing algorithms can employ single path, multipath, or flooding. Flooding protocols are usually restricted (directional), such as DREAM [3] and LAR [4]; the flooding is done only in a section of the network, which is selected based on the source and destination node locations. Multipath protocols such as c-GEDIR [5] attempt to forward the message along several routes toward its destination in order to increase the probability of finding a feasible path. Single path protocols, on the other hand, aim for a good resource consumption to throughput ratio. Most common among the single path protocols are those based on greedy algorithms. The greediness criteria can be distance, minimum number of hops, power (best usage of battery resources), etc. The greedy routing algorithm [6] is a memoryless algorithm (it only requires information about the destination). When using greedy forwarding, a node selects as the next hop the node that is closest to the destination (including itself). It is easy to come up with examples where this algorithm does not converge, due to local minima that occur in regions void of neighbors.

Position-based routing eliminates the limitations of topology-based routing. It requires information about the physical positions of the participating nodes. Each node must be aware of its own location and the locations of the participating nodes. One distinct advantage is that no establishment or maintenance of paths is required, and it is suitable for highly dynamic large networks. A major issue in greedy routing algorithms is how to proceed when a concave node is reached; a concave node is a node that has no neighbor that can make greedy progress toward some destination (for the greedy
routing algorithm in use). The simplest solution is to allow the routing algorithm to forward the packet to the best
matching neighbor, excluding the sender itself. Such a solution can guarantee the packet delivery but can result in routing
loops in algorithms that are otherwise loop free. Other solutions require switching to a recovery algorithm that guarantees
packet delivery. Since position-based routing uses local information for forwarding decisions, a concave node cannot be
predicted in advance, based on the position of its neighbor nodes. Even using the information of the neighborhood cannot
prevent reaching concave nodes, though can improve decisions made during the algorithm.

III. RELATED WORK

The lack of servers hinders the use of centralized addressing schemes in ad hoc networks. In simple distributed
addressing schemes, however, it is hard to avoid duplicated addresses because a random choice of an address by each
node would result in a high collision probability, as demonstrated by the birthday paradox [7]. The IETF Zeroconf working group proposes a hardware-based addressing scheme [8], which assigns an IPv6 network address to a node based on the device MAC address. Nevertheless, if the number of bits in the address suffix is smaller than the number of bits in the MAC address, which is always true for IPv4 addresses, this solution must be adapted by hashing the MAC address to fit in the address suffix. Hashing the MAC address, however, is similar to a random address choice and does not guarantee a collision-free address allocation. A few extensions to the Duplicate Address Detection (DAD) protocol use Hello messages and partition identifiers to handle network partitions [5], [9]. These identifiers are random numbers that identify each network partition. A group of nodes changes its partition identifier whenever it identifies a partition or when partitions merge. Fan and Subramanian propose a protocol based on DAD to solve address collisions in the presence of network merging events. This protocol considers that two partitions are merging when a node receives a Hello message with a partition identifier different from its own identifier or when the neighbour set of any node changes [5]. Other proposals use routing information to work around the addressing problem. Weak DAD [10], for instance, routes packets correctly even if there is an address collision. Other more complex protocols were proposed to improve the performance of network merging detection and address reallocation [6], [11]. By exploiting neighbor coverage knowledge, which includes the additional coverage ratio and the connectivity factor, and by producing fewer redundant rebroadcasts, the proposed protocol mitigates network collision and contention, so as to increase the packet delivery ratio and decrease the average end-to-end delay. Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications.

In this we look at communication protocols, which can have a significant impact on the overall energy dissipation of these networks. Our findings show that the conventional protocols of direct transmission, minimum-transmission-energy, multihop routing and static clustering may not be optimal for sensor networks [3]. Clustering protocols have been investigated in the context of routing protocols [23], [14], [22], [15], [24], or independent of routing [16], [17], [13], [18],
[19], [20]. In this work, we present a general distributed clustering approach that considers a hybrid of energy and
communication cost. Based on this approach, we present the HEED (Hybrid, Energy-Efficient, Distributed) clustering
protocol. HEED has four primary objectives [21]: (i) prolonging network lifetime by distributing energy consumption, (ii)
terminating the clustering process within a constant number of iterations, (iii) minimizing control overhead (to be linear in the number of nodes), and (iv) producing well-distributed cluster heads. Our clustering approach does not make assumptions about the distribution of nodes, or about node capabilities such as location awareness. The approach only assumes that sensor nodes can control their transmission power level.

IV. Flooding
Broadcasting has been used widely in wired and wireless networks to disseminate data and topology information. Various routing protocols in MANETs rely on a flooding mechanism to broadcast data and control packets over the entire network for establishing routes between source-destination pairs. The simplest way of broadcasting a packet to all nodes in the network is basic flooding or blind flooding, which allows each node to retransmit a packet to its neighbors if it has not already received the broadcast packet during an earlier transmission. The rebroadcasting process continues until all nodes in the network have received a copy of the packet. Since the packets pass through every possible path in parallel, it is assured that flooding can always find the shortest path between various source and destination combinations.
However, the basic nature and characteristics flooding mechanism causes a large number of packets propagation in
MANETs. This will eventually overload the network and traffic is congested, which is depicted in Figure 1.

Figure 1- Sample Flooding Scenario

In Figure 1, the centre node S is the source node; nodes in the first inner circle are one-hop neighbours and the nodes in the outer circle are two-hop neighbours. When S transmits the packet, all the one-hop neighbours broadcast copies of the packet to all the two-hop neighbours of S at the same time. As a result, there is heavy redundant rebroadcasting, which means the same packet is received more than once by some nodes, along with contention and collisions; these are referred to as the broadcast storm problem. Various methods have been proposed for achieving efficient broadcasting to solve the broadcast storm problem. In general, these broadcast protocols can be categorized into three classes: probability-based methods, area-based methods and neighbour knowledge methods. The probability-based methods are similar to
basic flooding, except that each node rebroadcasts packets with a predetermined probability. This mechanism is suitable in dense networks where multiple nodes have similar neighbour coverage. However, the effect of this approach is not encouraging in sparse networks. In area-based methods, the rebroadcast decision depends on the distance between the node itself and the source node. If the distance between them is longer than a predefined threshold, the packet is rebroadcast, so that a larger additional area can be reached. However, area-based methods do not consider whether some nodes actually exist within that additional area, which leads to inefficient broadcasting. The neighbor knowledge methods are further classified as neighbor-designated methods and self-pruning methods. A node in the neighbor-designated methods transmits a packet with a specification that denotes which of its one-hop neighbors should forward the packet, while in self-pruning methods the receiving node decides by itself whether or not to retransmit the packet.
V. Simulation Environment
In order to evaluate the performance of the proposed NCPR protocol, we compare it with some other protocols using the NS-2 simulator. Broadcasting is a fundamental and effective data dissemination mechanism for many applications in MANETs. In this paper, we study just one of the applications: route request in route discovery. In order to compare the routing performance of the proposed NCPR protocol, we choose the Dynamic Probabilistic Route Discovery (DPR) [24][26] protocol, which is a recent optimization scheme for reducing the overhead of RREQ packets incurred in route discovery, and the conventional AODV protocol.
We evaluate the performance of the routing protocols using the following performance metrics:
MAC collision rate: the average number of packets (including RREQ, route reply (RREP), RERR and CBR data packets) dropped per second as a result of collisions at the MAC layer.
Normalized routing overhead: the ratio of the total size of control packets (RREQ, RREP, RERR and Hello) to the total size of data packets delivered to the destinations. For control packets sent over multiple hops, each single hop is counted as one transmission. To preserve fairness, we use the size of RREQ packets instead of their number, because the DPR and NCPR protocols include a neighbour list in the RREQ packet, making it bigger than the original AODV RREQ.
Packet delivery ratio: the ratio of the number of data packets successfully received by the CBR destinations to the number of data packets generated by the CBR sources.
Average end-to-end delay: the average delay of successfully delivered CBR packets from source to destination node. It includes all possible delays from the CBR sources to the destinations.
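To make these metrics concrete, the short Python sketch below computes them from simple aggregate counters that could be extracted from an NS-2 trace file. The counter names and the trace-parsing step are assumptions for illustration, not part of the original evaluation setup.

```python
def compute_metrics(stats):
    """Compute the four evaluation metrics from aggregate trace counters.

    'stats' is a hypothetical dict with keys such as:
      mac_drops, sim_time_s, ctrl_bytes, data_bytes_delivered,
      cbr_received, cbr_sent, delays (list of per-packet delays in seconds).
    """
    mac_collision_rate = stats["mac_drops"] / stats["sim_time_s"]
    norm_routing_overhead = stats["ctrl_bytes"] / stats["data_bytes_delivered"]
    packet_delivery_ratio = stats["cbr_received"] / stats["cbr_sent"]
    avg_end_to_end_delay = sum(stats["delays"]) / len(stats["delays"])
    return {
        "MAC collision rate (pkt/s)": mac_collision_rate,
        "Normalized routing overhead": norm_routing_overhead,
        "Packet delivery ratio": packet_delivery_ratio,
        "Average end-to-end delay (s)": avg_end_to_end_delay,
    }
```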

What Problem Remained Unsolved


Consider a normal AODV route discovery process. For example, if node S starts a route discovery by
broadcasting a RREQ message, all of S's neighbours receive and process the request. If a
neighbouring node knows the route, it sends a reply; otherwise it forwards the RREQ message by rebroadcasting it again. In effect, all the nodes in the network receive that RREQ message. When the message reaches the
destination D, then D sends back a RREP message.


Based on the above discussion, it is noticed that most of the above-mentioned protocols are applicable in multipoint MANETs
and all of them try to minimize the number of messages. However, it is observed that achieving this either consumes a lot of energy
or requires special hardware. Thus, a protocol is required that reduces the number of
messages during broadcasting and so avoids flooding.

1. AIM
To design a method that reduces the routing overhead in both route discovery and route maintenance, by reducing
the flooding in the route discovery process and avoiding the broadcast storm problem.

2. OBJECTIVE
The objective is to implement a method for improving the performance of Mobile Ad hoc Network (MANET) routing
protocols in highly mobile, time-sensitive communication scenarios. The performance is improved by identifying
certain nodes as HPR nodes, which take part in routing, while the remaining normal nodes that receive the routing packets
are not allowed to process those requests. HPR nodes can be assumed to be higher-capability nodes with
sufficient battery power; they may be deployed as HPR nodes and behave as HPR nodes during the entire life of the
network. Only HPR nodes are used for routing or route discovery when the destination is not in the source's neighbour list.
The ultimate objectives here are therefore the implementation of a method to reduce routing overhead by using HPR nodes and the design
of the HPR node header.

3. SCOPE
Since there is no routing overhead for the normal nodes in the network, the end-to-end delay is greatly reduced.
A route cannot be established through an arbitrary node in the network; hence the security of the communication increases.
Since the route is established only through HPR nodes, the other nearby normal nodes that receive the routing
packets do not process those requests, which reduces the message overhead of a typical on-demand routing protocol.

4. LIMITATIONS

1. Dividing the normal nodes among the HPR nodes is necessary, and how to group these nodes is an open issue.
2. How many HPR nodes should be used in one group is unclear; using more HPR nodes increases the network cost.
3. A large change in the HPR node header may require changes in the normal nodes too.
4. If an HPR node fails, the whole group associated with that HPR node suffers failure of its communication path.

VI. Plan for development & Implementation process:

We propose a probabilistic rebroadcast protocol based on neighbour coverage to reduce the routing overhead in MANETs.
This neighbour coverage knowledge includes an additional coverage ratio and a connectivity factor. We also propose a new scheme to
dynamically find the path through energy-efficient powered nodes, which is used to determine the coverage knowledge. Simulation results show that the proposed protocol generates less rebroadcast traffic than flooding and some
other optimized schemes in the literature. Because of less redundant rebroadcast, the proposed protocol mitigates network collision and contention, so as to increase the packet delivery ratio and decrease the average end-to-end delay. The simulation results
also show that the proposed protocol performs well when the network density is high or the traffic load is heavy.
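As a rough sketch of how such neighbour-coverage knowledge could be derived, the Python fragment below estimates an additional-coverage-style ratio from one-hop neighbour lists learned via Hello messages. The exact definitions used by NCPR differ in detail, so this is only an illustration under the stated assumptions, not the protocol's formula.

```python
def additional_coverage_ratio(receiver_neighbors, sender_neighbors):
    """Fraction of the receiver's neighbours NOT already covered by the sender.

    Both arguments are assumed to be sets of node identifiers taken from
    periodic Hello messages; a value near 1 means rebroadcasting would reach
    many additional nodes, a value near 0 means it is largely redundant.
    """
    if not receiver_neighbors:
        return 0.0
    uncovered = receiver_neighbors - sender_neighbors
    return len(uncovered) / len(receiver_neighbors)

# Example: the sender already covers B and C, so only D and E are additional.
ratio = additional_coverage_ratio({"B", "C", "D", "E"}, {"A", "B", "C"})
print(ratio)  # 0.5
```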

Wireless distributed micro-sensor systems will enable the reliable monitoring of a variety of environments for both civil and military
applications. We look at communication protocols, which can have a significant impact on the overall energy dissipation of these
networks, based on our finding that the conventional protocols of direct transmission, minimum-transmission-energy, multihop
routing and static clustering may not be optimal for sensor networks.

VII. Proposed scheme:


In the proposed routing scheme, as shown in Figure 4, only the HPR nodes are allowed to forward the RREP and RREQ
messages. In other words, between S and D a route can be established only through HPR nodes. Since the normal nodes
do not rebroadcast RREQ or forward RREP messages, this removes a lot of overhead as well as transmission power.
Since the HPR nodes are capable of passing messages over longer distances, the overall path length is reduced, and the
reduction in path length reduces the end-to-end delay. Further, a normal node only needs to transmit up to the
next nearest HPR node, so its transmission (tx) power can be reduced according to that distance, which is reflected in the
overall power consumption and in the reduced routing overhead.

Figure 4. The proposed AODV-HPR method



The tx power needed to transmit packets from one HPR node to another HPR node is constant, and the established link is not
affected by a small amount of mobility.

A. HPR Node Selection


HPR nodes can be assumed to be higher-capability nodes with sufficient battery power; they may be
deployed as HPR nodes and behave as HPR nodes during the entire life of the network. Alternatively, in a normal network of nodes
with similar capabilities, the status of a node can be switched between HPR node and normal node in a random, dynamic fashion
to balance the power consumption across all the nodes. In any case, an HPR node can transmit, or is allowed to transmit, over a
greater distance than normal nodes. HPR nodes can also act as source or destination nodes, but a route can be
established only through HPR nodes.
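A minimal sketch of the resulting forwarding rule is given below; the node attributes and message fields are hypothetical and only illustrate the idea that route-discovery traffic is processed exclusively by HPR nodes (or by the destination itself).

```python
def should_process_route_packet(node, packet):
    """Decide whether a node processes an incoming RREQ/RREP.

    'node.is_hpr', 'node.node_id' and 'packet.destination' are assumed
    attributes: normal nodes silently ignore routing packets that are not
    addressed to them, so only HPR nodes rebroadcast RREQs or relay RREPs.
    """
    if node.node_id == packet.destination:
        return True           # the destination always answers
    return node.is_hpr        # otherwise only HPR nodes participate
```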
B. Advantages of AODV-HPR
Since there is no routing overhead for the normal nodes in the network, the end-to-end delay is greatly reduced.
A route cannot be established through an arbitrary node in the network; hence the security of the communication increases.
In a typical MANET, mobility causes link failures and results in increased overhead and reduced performance. In the
proposed AODV-HPR, the HPR nodes use slightly more energy, which makes the scheme resistant to mobility to some extent:
since the HPR nodes are capable of communicating over longer distances, a little mobility of individual nodes does not
cause frequent link failures. Since the route is established only through HPR nodes, the other nearby normal nodes that
receive the routing packets do not process those requests, which reduces the message overhead of a typical on-demand
routing protocol.

POSSIBLE OUTCOME:
It is predicted that AODV-HPR will give maximum throughput with a smaller number of dropped packets. AODV-HPR will generate the minimum number of routing/control messages in the network while transmitting data packets from the source
node to the destination, indirectly reducing routing overhead and end-to-end delay.

VIII. CONCLUSION:
The performance of proactive and reactive protocols is always questionable when they are used in a highly mobile, short-duration
communication scenario, because these routing protocols require a certain time to achieve stable performance due to the
periodic route discovery and maintenance mechanisms inherent in their design. In this paper, the performance of AODV is
improved by identifying certain nodes as HPR nodes that take part in routing, while the remaining normal nodes that
receive the routing packets are not allowed to process those requests and act only as simple neighbouring nodes. HPR
nodes can be assumed to be higher-capability nodes with sufficient battery power; they may be deployed as
HPR nodes and behave as HPR nodes during the entire life of the network. An HPR node can transmit, or is allowed to
transmit, over a greater distance than normal nodes. HPR nodes can also act as source or destination nodes but a route can be
established only through HPR nodes. The modified AODV is termed AODV-HPR.


IX. FUTURE SCOPE:


In the future, combining the additional coverage ratio and the connectivity factor to set a reasonable rebroadcast probability
within AODV-HPR may reduce the routing overhead even more efficiently. Moreover, AODV-HPR will be kept regularly
updated and instantly available to all learners [29]. It also provides collaborative learning, which promotes a more engaging and richer
learning experience, and scalability, where content can be delivered to a small or large number of learners with little effort [30].
X. ACKNOWLEDGMENT

The authors would like to take this opportunity to show their appreciation to all their colleagues for their support
throughout the period of this research. Without such help, this work would not have been completed or published in one of the
international journals around the world.

REFERENCES:
[1] The Network Simulator Wiki. [Online]. Available: http://nsnam.isi.edu/nsnam/index.php.
[2] The Network Simulator ns-2. [Online]. Available: http://www.isi.edu/nsnam/ns.
[3] Energy-Efficient Communication Protocol for Wireless Micro sensor Networks
[4] SORT: A Self-ORganizing Trust Model for Peer-to-Peer Systems.
[5] Z. Fan and S. Subramani, An address autoconfiguration protocol for IPv6 hosts in a mobile ad hoc network, Comput. Commun., vol. 28, no. 4, pp. 339-350, Mar. 2005.
[6] S. Nesargi and R. Prakash, MANETconf: Configuration of hosts in a mobile ad hoc network, in Proc. 21st Annu. IEEE INFOCOM, Jun. 2002, vol. 2, pp. 1059-1068.
[7] B. Parno, A. Perrig, and V. Gligor, Distributed detection of node replication attacks in sensor networks, in Proc. IEEE Symp. Security Privacy, May 2005, pp. 49-63.
[8] S. Thomson and T. Narten, IPv6 stateless address autoconfiguration, RFC 2462, 1998.
[9] M. Fazio, M. Villari, and A. Puliafito, IP address autoconfiguration in ad hoc networks: Design, implementation and measurements, Comput. Netw., vol. 50, no. 7, pp. 898-920, 2006.
[10] N. H. Vaidya, Weak duplicate address detection in mobile ad hoc networks, in Proc. 3rd ACM MobiHoc, 2002, pp. 206-216.
[11] H. Kim, S. C. Kim, M. Yu, J. K. Song, and P. Mah, DAP: Dynamic address assignment protocol in mobile ad-hoc networks, in Proc. IEEE ISCE, Jun. 2007, pp. 1-6.
[12]An Efficient and Robust Addressing Protocol for Node Autoconfiguration in Ad Hoc Networks.
[13] S. Banerjee and S. Khuller, A Clustering Scheme for Hierarchical Control in Multi-hop Wireless Networks, in
Proceedings of IEEE INFOCOM, April 2001.

[14] B. McDonald and T. Znati, Design and Performance of a Distributed Dynamic Clustering Algorithm for Ad-Hoc Networks, in Annual Simulation Symposium, 2001.
[15] M. Gerla, T. J. Kwon, and G. Pei, On Demand Routing in Large Ad Hoc Wireless Networks with Passive
Clustering, in Proceeding of WCNC, 2000.
[16] S. Basagni, Distributed Clustering Algorithm for Ad-hoc Networks, in International Symposium on Parallel
Architectures, Algorithms, and Networks (I-SPAN), 1999.
[17] M. Chatterjee, S. K. Das, and D. Turgut, WCA: A Weighted Clustering Algorithm for Mobile Ad Hoc Networks, Cluster Computing, pp. 193-204, 2002.
[18] T. J. Kwon and M. Gerla, Clustering with Power Control, in Proceeding of MilCOM99, 1999.
[19] S. Bandyopadhyay and E. Coyle, An Energy-Efficient Hierarchical Clustering Algorithm for Wireless Sensor
Networks, in Proceedings of IEEE INFOCOM, April 2003.
[20] A. D. Amis, R. Prakash, T. H. P. Vuong, and D. T. Huynh, Max-Min D-Cluster Formation in Wireless Ad Hoc
Networks, in Proceedings of IEEE INFOCOM, March 2000.
[21] O. Younis and S. Fahmy, Distributed Clustering in Ad-hoc Sensor Networks: A Hybrid, Energy-Efficient
Approach, in Proceedings of IEEE INFOCOM, March 2004, an extended version appears in IEEE Transactions on
Mobile Computing, vol. 3, issue 4, Oct-Dec, 2004.
[22] C. R. Lin and M. Gerla, Adaptive Clustering for Mobile Wireless Networks, in IEEE J. Select. Areas Commun,
September 1997.
[23]V. Kawadia and P. R. Kumar, Power Control and Clustering in Ad Hoc Networks, in Proceedings of IEEE
INFOCOM, April 2003.
[24] W. Heinzelman, A. Chandrakasan, and H. Balakrishnan, An Application-Specific Protocol Architecture for Wireless Microsensor Networks, IEEE Transactions on Wireless Communications, vol. 1, no. 4, pp. 660-670, October 2002.
[25] C. Perkins, E. Belding-Royer, and S. Das, Ad Hoc On-Demand Distance Vector (AODV) Routing, IETF
RFC 3561, 2003.
[26] D. Johnson Y. Hu, and D. Maltz, The Dynamic Source Routing Protocol for Mobile Ad Hoc Networks (DSR) for
IPv4,IETF RFC 4728, vol. 15, pp. 153-181, 2007.
[27] H. AlAamri, M. Abolhasan, and T. Wysocki, On Optimizing Route Discovery in the Absence of Previous Route Information in MANETs, Proc. IEEE Vehicular Technology Conf. (VTC), pp. 1-5, 2009.
[28] X. Wu, H.R. Sadjadpour, and J.J. Garcia-Luna-Aceves, Routing Overhead as a Function of Node Mobility: Modeling Framework and Implications on Proactive Routing, Proc. IEEE Int'l Conf. Mobile Ad Hoc and Sensor Systems (MASS 07), pp. 1-9, 2007.
[29] S.Y. Ni, Y.C. Tseng, Y.S. Chen, and J.P. Sheu, The Broadcast Storm Problem in a Mobile Ad Hoc
Network, Proc. ACM/IEEE MobiCom, pp. 151-162, 1999.

[30] A. Mohammed, M. Ould-Khaoua, L.M. Mackenzie, C. Perkins, and J.D. Abdulai, Probabilistic
Counter-Based Route Discovery for Mobile Ad Hoc Networks, Proc. Intl Conf. Wireless Comm. and Mobile
Computing: Connecting the World Wirelessly (IWCMC 09), pp. 1335-1339, 2009.
[31] B. Williams and T. Camp, Comparison of Broadcasting Techniques for Mobile Ad Hoc Networks, Proc. ACM MobiHoc, pp. 194-205, 2002.
[32]J. Kim, Q.Zhang, and D.P. Agrawal, Probabilistic Broadcasting Based on Coverage Area and Neighbor
Confirmation in Mobile Ad Hoc Networks, Proc. IEEE.
[33] J.D. Abdulai, M. Ould-Khaoua, and L.M. Mackenzie, Improving Probabilistic Route Discovery in Mobile Ad Hoc Networks, Proc. IEEE Conf. Local Computer Networks, pp. 739-746, 2007.
[34] Z. Haas, J.Y. Halpern, and L. Li, Gossip-Based Ad Hoc Routing, Proc. IEEE INFOCOM, vol. 21, pp.
1707-1716, 2002.
[35] W. Peng and X. Lu, On the Reduction of Broadcast Redundancy in Mobile Ad Hoc Networks, Proc.
ACM MobiHoc, pp. 129-130,2000.


Effect of Turning Parameters on Power Consumption in EN 24 Alloy Steel using Different Cutting Tools

Richard Geo*, Jose Sheril Dcotha**

* M.Tech Research Scholar, Department of Mechanical Engineering, SCMS School of Engineering and Technology, India, richardgeo121@gmail.com
** Assistant Professor, Department of Mechanical Engineering, SCMS School of Engineering and Technology, India

Abstract - In this paper the effect of machining parameters (cutting speed, feed rate, depth of cut) on the power consumption of the tool
during turning of EN-24 alloy steel was studied. The tools considered in this experimental work are an HSS tool and a tungsten carbide tool,
and the power consumed by the two tools was compared. Mathematical models for the power consumption of the tools were created
using SPSS software from the experimentally measured power readings. The R2 value obtained from the regression is around 95
percent for the carbide tool and 93 percent for the HSS tool, which indicates that the models developed are a good fit. The power consumed
by both tools was obtained by measuring the forces acting on the cutting tool using a lathe tool dynamometer with a digital display that
measures the forces along three axes. From the models it was found that cutting speed is the most important factor that influences the
power consumed by the tool, while feed rate has the least influence.

Keywords - Force measurement, Tool power prediction model, Comparison of tools.

INTRODUCTION

The power consumed by a single-point cutting tool is an important factor to be considered in a turning operation. The study of
the power consumed by the tool helps to determine the life of the tool for maximum productivity, to select the capacity of the motor
required for the machine, and to design machine components. The power consumed by the tool can be measured in
two ways. The first method is to connect a watt meter to the motor of the lathe;
during the machining operation the watt meter then shows the power consumed by the tool at different cutting conditions. This
method has the drawback that a certain amount of the work done by the motor is wasted in the form of mechanical losses in the
transmission system, so a universal model for the power consumption of the tool cannot be created from such measurements.
The second method is to measure the cutting forces acting on the tool during the turning operation. For measuring the forces a lathe tool
dynamometer is used; this is a device that can measure the forces acting on the cutting tool along three axes (Fx, Fy and Fz).
Among these forces, the component with the highest value is used to calculate the power consumption of the tool.
The power consumed by the tool is a function of the cutting force and the cutting velocity and is given by P = F * V, where P is
the power, F is the cutting force in newtons and V is the cutting speed in metres per minute (the tabulated power values are reported as F * V/1000). Experiments were conducted using a Box-Behnken
design, and the experimentally obtained data were used to create mathematical models of the power consumption for both tools.
EXPERIMENTATION

In this experimental work the power consumed by the tool was measured during turning of EN 24 steel alloy with an HSS tool and
with tungsten carbide inserts, by measuring the force acting on the tool using a lathe tool dynamometer. Turning was performed on a
precision lathe (NAGMATI-175) in the Mechanical Engineering Department.
A. PROCESS VARIABLES AND THEIR LEVELS
A turning operation was conducted on a sample EN 24 work piece of 60 mm diameter and 40 mm length using the precision lathe in order
to find out the maximum allowable range of the cutting parameters (cutting speed, feed rate and depth of cut) that can be used. The cutting
parameters were classified into three levels.


Table: 1 Cutting parameters and their levels

NO | PARAMETERS          | SYMBOL | LEVEL -1 | LEVEL 0 | LEVEL 1
1  | Cutting speed (rpm) | V      | 54       | 135     | 215
2  | Feed rate (mm/rev)  | F      | 1        | 1.5     | 2
3  | Depth of cut (mm)   | D      | 0.5      | 0.75    | 1

DESIGN OF EXPERIMENT

The experiments were carried out using the Box-Behnken design devised by George E. P. Box and Donald
Behnken. The Box-Behnken design does not contain an embedded factorial design; it is an independent quadratic design in which
the treatment combinations lie at the midpoints of the edges of the process space and at the body centre. These designs require 3 levels of
each factor and are rotatable (or nearly rotatable). Compared to the central composite designs, these designs have limited capability for
orthogonal blocking [3].
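For reference, the sketch below enumerates the textbook three-factor Box-Behnken layout in coded units (twelve edge-midpoint runs plus centre points). It only illustrates the design type named above; the actual factor combinations used in this study are listed in Table 2 and Table 3.

```python
from itertools import combinations

def box_behnken_3_factor(center_points=3):
    """Return the coded (-1, 0, +1) runs of a 3-factor Box-Behnken design.

    For every pair of factors, all four (+/-1, +/-1) combinations are run
    while the remaining factor is held at its centre level, followed by
    'center_points' replicates of the all-centre run.
    """
    runs = []
    for i, j in combinations(range(3), 2):     # factor pairs (0,1), (0,2), (1,2)
        for a in (-1, 1):
            for b in (-1, 1):
                run = [0, 0, 0]
                run[i], run[j] = a, b
                runs.append(tuple(run))
    runs.extend([(0, 0, 0)] * center_points)   # centre replicates
    return runs

print(len(box_behnken_3_factor()))  # 15 runs when 3 centre points are used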
Table: 2 Factorial combinations: the 15 experimental runs, indexed by SL. NO, with the coded levels (-1, 0, +1) of cutting speed (V), feed rate (F) and depth of cut (D) used in each run.


TOOL FORCE AND POWER CONSUMPTION MEASUREMENTS

The forces acting on the tool were measured during turning of EN 24 steel alloy with the HSS tool and with tungsten carbide inserts
using a lathe tool dynamometer with a digital display unit. Among all the forces, the main cutting force is identified and used to calculate
the power required to perform the machining operation. Power is a function of the main cutting force and the cutting velocity; the
equation for the power is P = F * V, where P is the power, V is the cutting speed in m/min and F is the main cutting force in N.
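As a quick numeric check of this relation, the sketch below computes the power for the first carbide-tool run. Note that the values tabulated in Tables 3 and 5 appear to be reported as F * V/1000, and a strict conversion to kilowatts from newtons and metres per minute would require a further division by 60, so the helper below follows the tables' convention rather than asserting SI kilowatts.

```python
def cutting_power(force_n, speed_m_per_min):
    """Power as reported in Tables 3 and 5: F (N) x V (m/min) / 1000."""
    return force_n * speed_m_per_min / 1000.0

# First carbide run of Table 3: Fz = 421.4 N at V = 10.1736 m/min
print(round(cutting_power(421.4, 10.1736), 3))  # ~4.287, matching the table
```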
D. CARBIDE TOOL FORCE AND POWER CONSUMPTION READINGS

Table 3 Carbide tool force and power consumption readings

Exp NO | Cutting speed (rpm) | Feed rate (mm/rev) | Depth of cut (mm) | Velocity (m/min) | MRR (mm3/sec) | Force Z (N) | Power (KW) | Model power (KW) | % Error
1  | 54  | 2   | 1    | 10.1736 | 339.12   | 421.4 | 4.28715504 | 4.352   | 1.490004
2  | 54  | 2   | 0.5  | 10.1736 | 169.56   | 333.2 | 3.38984352 | 3.18    | -6.59885
3  | 215 | 2   | 0.5  | 40.506  | 675.1    | 539   | 21.832734  | 22.661  | 3.655028
4  | 215 | 2   | 1    | 40.506  | 1350.2   | 735   | 29.77191   | 23.833  | -24.9189
5  | 54  | 1   | 1    | 10.1736 | 169.56   | 343   | 3.4895448  | 3.138   | -11.2028
6  | 54  | 1   | 0.5  | 10.1736 | 84.78    | 264.6 | 2.69193456 | 2.1936  | -22.7177
7  | 215 | 1   | 0.5  | 40.506  | 337.55   | 411.6 | 16.6722696 | 20.698  | 19.44985
8  | 215 | 1   | 1    | 40.506  | 675.1    | 558.6 | 22.6266516 | 21.87   | -3.45977
9  | 54  | 1.5 | 0.75 | 10.1736 | 190.755  | 372.4 | 3.78864864 | 2.7845  | -36.0621
10 | 135 | 2   | 0.75 | 25.434  | 635.85   | 480.2 | 12.2134068 | 13.567  | 9.9771
11 | 135 | 1.5 | 0.5  | 25.434  | 317.925  | 392   | 9.970128   | 11.9995 | 16.91214
12 | 215 | 1.5 | 0.75 | 40.506  | 759.4875 | 588   | 23.817528  | 22.2655 | -6.97055
13 | 135 | 1   | 0.75 | 25.434  | 317.925  | 401.8 | 10.2193812 | 11.604  | 11.93225
14 | 135 | 1.5 | 1    | 25.434  | 635.85   | 499.8 | 12.7119132 | 13.1715 | 3.489252
15 | 135 | 1.5 | 0.75 | 25.434  | 476.8875 | 460.6 | 11.7149004 | 12.5855 | 6.917481

Figure No: 1 MRR (mm3/sec) vs. power consumption (kW) of carbide tool, plotted for spindle speeds of 54, 135 and 215 rpm
The experimentally measured power consumption readings were used to plot the graph of power consumed by the tool against material
removal rate. From the graph it was found that as the MRR increases, the power consumption also increases. Thus it is
noticed that the power consumed is a function of MRR, and the value of MRR can be used to predict the value of power
consumed.
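For completeness, the MRR values listed in the tables can be recovered from the cutting parameters with the usual turning relation, as in the sketch below (assuming cutting speed in m/min, feed in mm/rev and depth of cut in mm).

```python
def mrr_mm3_per_s(speed_m_per_min, feed_mm_per_rev, depth_mm):
    """Material removal rate: v (mm/min) x f (mm/rev) x d (mm) / 60."""
    return speed_m_per_min * 1000.0 * feed_mm_per_rev * depth_mm / 60.0

# Run 9 of Table 3: 10.1736 m/min, 1.5 mm/rev, 0.75 mm
print(round(mrr_mm3_per_s(10.1736, 1.5, 0.75), 3))  # 190.755 mm^3/s
```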
Figure No: 2 RPM vs. power consumption (kW) of carbide tool, plotted for feed rates of 1, 1.5 and 2 mm/rev


The experimentally measured power readings were used to plot power consumption against rpm at three separate feed levels.
Comparing the slopes of the lines for the various feed values, it was found that the power consumed by the tool increases with
increasing rpm. It was also found that, at constant rpm, the highest power consumption was observed at the highest feed rate.

Figure No: 3 Feed (mm/rev) vs. power consumption (kW) of carbide tool, plotted for spindle speeds of 54, 135 and 215 rpm


The experimentally measured power readings were used to plot power consumption against feed at three separate rpm levels.
Comparing the slopes of the lines for the various rpm values, it was found that the power consumed by the tool increases with
increasing feed rate. It was also found that, at constant feed rate, the highest power consumption was observed at the highest rpm.

Figure No: 4 Depth of cut (mm) vs. power consumption (kW) of carbide tool, plotted for spindle speeds of 54, 135 and 215 rpm


The experimentally measured power consumed readings is used to plot power consumed vs depth of cut graph at three
separate rpm levels. Comparing the slop of lines of various rpm parameters it was found that power consumed by tool increases with
695

www.ijergs.org

International Journal of Engineering Research and General Science Volume 2, Issue 6, October-November, 2014
ISSN 2091-2730

the increase in depth of cut. It was also found that at constant depth of cut rate highest power consumption was observed for highest
value of rpm.
Multiple regression analysis was conducted on the experimentally measured power values using SPSS software, and mathematical
models were developed in terms of the machining parameters. The values of the cutting parameters were substituted into the mathematical model
and the corresponding power values were noted. The percentage error was calculated from the experimental and model values in order to
quantify the variation.
Table: 4 Regression analysis of Carbide tool power consumption

Model Summary
Model | R    | R Square | Adjusted R Square | Std. Error of the Estimate
1     | .976 | .953     | .940              | 2.11518922

ANOVA
Model 1    | Sum of Squares | df | Mean Square | F      | Sig.
Regression | 1000.120       | 3  | 333.373     | 74.513 | .000
Residual   | 49.214         | 11 | 4.474       |        |
Total      | 1049.335       | 14 |             |        |

Coefficients
Model 1              | Unstandardized B | Std. Error | Standardized Beta | t      | Sig.
(Constant)           | -13.857          | 3.099      |                   | -4.472 | .001
Cutting speed (rpm)  | .121             | .008       | .947              | 14.507 | .000
Feed rate (mm/rev)   | 3.159            | 1.338      | .154              | 2.361  | .038
Depth of cut (mm)    | 7.332            | 2.676      | .179              | 2.740  | .019
The multiple regression coefficient of the first-order power prediction model is approximately 0.95 (R2 = 95%), indicating a good
model fit. ANOVA was performed to find the statistical significance of the process; the ANOVA table also gives the values of the sums of squares,
mean squares, degrees of freedom and F value. Examination of the t values in this table indicates that the variables cutting speed, feed
rate and depth of cut are significant at the 95% confidence level. From the results it was found that the power consumed by the carbide tool increases
with increasing RPM, feed rate and depth of cut. The most important factor affecting the power consumed is the cutting speed,
the second most important factor is the depth of cut, followed by the feed rate. The experimental results were used to develop the mathematical
model.
Mathematical model of power consumed by carbide tool,
P_CARBIDE = -13.857 + A*0.121 + B*3.159 + C*7.332

where A = RPM, B = feed rate (mm/rev) and C = depth of cut (mm).
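A small sketch of how this fitted model can be used for prediction is given below; the helper function and the comparison run are illustrative only, and the coefficients are taken directly from the regression table above.

```python
def predict_power_carbide(rpm, feed_mm_per_rev, depth_mm):
    """First-order regression model for the carbide tool (coefficients from Table 4)."""
    return -13.857 + 0.121 * rpm + 3.159 * feed_mm_per_rev + 7.332 * depth_mm

# Run 12 of Table 3: 215 rpm, 1.5 mm/rev, 0.75 mm -> roughly 22.4,
# close to the tabulated model value of about 22.27.
print(round(predict_power_carbide(215, 1.5, 0.75), 3))
```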

HSS TOOL FORCE AND POWER CONSUMPTION READINGS


Table No: 5 HSS tool force and power consumption readings

Exp NO | Cutting speed (rpm) | Feed rate (mm/rev) | Depth of cut (mm) | Force Z (N) | Velocity (m/min) | Power (KW) | Model power (KW) | MRR (mm3/sec) | % Error
1  | 54  | 2   | 1    | 352.8 | 10.1736 | 3.589246 | 4.507   | 339.12   | 20.36286
2  | 54  | 2   | 0.5  | 284.2 | 10.1736 | 2.891337 | 2.615   | 169.56   | -10.5674
3  | 215 | 2   | 0.5  | 588   | 40.506  | 23.81753 | 20.647  | 675.1    | -15.3559
4  | 215 | 2   | 1    | 803.6 | 40.506  | 32.55062 | 29.34   | 1350.2   | -10.9428
5  | 54  | 1   | 1    | 303.8 | 10.1736 | 3.09074  | 2.856   | 169.56   | -8.21918
6  | 54  | 1   | 0.5  | 176.4 | 10.1736 | 1.794623 | 1.9648  | 84.78    | 8.661287
7  | 215 | 1   | 0.5  | 352.8 | 40.506  | 14.29052 | 18.996  | 337.55   | 24.77092
8  | 215 | 1   | 1    | 637   | 40.506  | 25.80232 | 20.888  | 675.1    | -23.527
9  | 54  | 1.5 | 0.75 | 323.4 | 10.1736 | 3.290142 | 2.7355  | 190.755  | -20.2757
10 | 135 | 2   | 0.75 | 509.6 | 25.434  | 12.96117 | 12.633  | 635.85   | -2.59769
11 | 135 | 1.5 | 0.5  | 431.2 | 25.434  | 10.96714 | 10.8615 | 317.925  | -0.97262
12 | 215 | 1.5 | 0.75 | 588   | 40.506  | 23.81753 | 20.7675 | 759.4875 | -14.6865
13 | 135 | 1   | 0.75 | 460.6 | 25.434  | 11.7149  | 10.982  | 317.925  | -6.67365
14 | 135 | 1.5 | 1    | 529.2 | 25.434  | 13.45967 | 12.7535 | 635.85   | -5.53709
15 | 135 | 1.5 | 0.75 | 441   | 25.434  | 11.21639 | 11.8075 | 476.8875 | 5.006191

Figure No: 5 MRR (mm3/sec) vs. power consumption (kW) of HSS tool, plotted for spindle speeds of 54, 135 and 215 rpm


The experimentally measured power consumption readings were used to plot the graph of power consumed by the tool against material
removal rate. From the graph it was found that as the MRR increases, the power consumption also increases. Thus it is
noticed that the power consumed is a function of MRR, and the value of MRR can be used to predict the value of power
consumed.
Figure No: 6 RPM vs. power consumption (kW) of HSS tool, plotted for feed rates of 1, 1.5 and 2 mm/rev



The experimentally measured power readings were used to plot power consumption against rpm at three separate feed levels.
Comparing the slopes of the lines for the various feed values, it was found that the power consumed by the tool increases with
increasing rpm. It was also found that, at constant rpm, the highest power consumption was observed at the highest feed rate.
Figure No: 7 Feed (mm/rev) vs. power consumption (kW) of HSS tool, plotted for spindle speeds of 54, 135 and 215 rpm


The experimentally measured power readings were used to plot power consumption against feed at three separate rpm levels.
Comparing the slopes of the lines for the various rpm values, it was found that the power consumed by the tool increases with
increasing feed rate. It was also found that, at constant feed rate, the highest power consumption was observed at the highest rpm.
Figure No: 8 Depth of cut (mm) vs. power consumption (kW) of HSS tool, plotted for spindle speeds of 54, 135 and 215 rpm


The experimentally measured power readings were used to plot power consumption against depth of cut at three separate rpm levels.
Comparing the slopes of the lines for the various rpm values, it was found that the power consumed by the tool increases with
increasing depth of cut. It was also found that, at constant depth of cut, the highest power consumption was observed at the highest rpm.
Multiple regression analysis was conducted on the experimentally measured power values using SPSS software, and mathematical
models were developed in terms of the machining parameters. The values of the cutting parameters were substituted into the mathematical model
and the corresponding power values were noted. The percentage error was calculated from the experimental and model values in order to
quantify the variation.
Table No: 6 Regression analysis of HSS tool power consumption

Model Summary
Model | R    | R Square | Adjusted R Square | Std. Error of the Estimate
1     | .966 | .933     | .915              | 2.8140226

ANOVA
Model 1    | Sum of Squares | df | Mean Square | F      | Sig.
Regression | 1212.680       | 3  | 404.227     | 51.047 | .000
Residual   | 87.106         | 11 | 7.919       |        |
Total      | 1299.786       | 14 |             |        |

Coefficients
Model 1              | Unstandardized B | Std. Error | Standardized Beta | t      | Sig.
(Constant)           | -17.802          | 4.123      |                   | -4.318 | .001
Cutting speed (rpm)  | .131             | .011       | .926              | 11.866 | .000
Feed rate (mm/rev)   | 3.823            | 1.780      | .168              | 2.148  | .055
Depth of cut (mm)    | 9.893            | 3.559      | .217              | 2.779  | .018

The multiple regression coefficient of the first-order power prediction model is approximately 0.93 (R2 = 93%), indicating a good
model fit. ANOVA was performed to find the statistical significance of the process; the ANOVA table also gives the values of the sums of squares,
mean squares, degrees of freedom and F value. Examination of the t values in this table indicates that the variables cutting speed, feed
rate and depth of cut are significant at the 95% confidence level. From the results it was found that the power consumed by the HSS tool increases
with increasing RPM, feed rate and depth of cut. The most important factor affecting the power consumed is the cutting speed,
the second most important factor is the depth of cut, followed by the feed rate. The experimental results were used to develop the mathematical
model.
Mathematical model of power consumed by HSS tool,
P_HSS = -17.802 + A*0.131 + B*3.823 + C*9.893

where A = RPM, B = feed rate (mm/rev) and C = depth of cut (mm).

COMPARISON OF POWER CONSUMED BY TOOLS

Figure No: 9 Average power consumed (kW) by the HSS and carbide tools with varying (a) cutting speed (rpm), (b) feed (mm/rev) and (c) depth of cut (mm)
From the graphs it can be seen that the average power consumed is lower for the carbide tool than for the HSS tool during turning of
EN-24 alloy steel. It can also be seen that the average power consumed is affected mostly by the cutting speed, followed by the depth of cut.
ACKNOWLEDGMENT
The authors gratefully acknowledge the support provided by SCMS College, which helped us bring this work to a successful completion.
CONCLUSION
In this experimental work the power consumed by the HSS tool and the tungsten carbide tool during turning of EN-24 alloy steel
was studied. Based on the experimental data, mathematical models were developed by multiple regression using SPSS software.
The models developed for power prediction produce small errors and show good results, since the multiple regression coefficient of
the first-order power prediction model of the carbide tool is approximately 0.95 (R2 = 95%) and that of the HSS
tool is approximately 0.93 (R2 = 93%). Therefore the proposed models can be utilized to predict the corresponding power consumed by
the HSS and carbide tools while machining an EN-24 steel rod at different turning parameters. The established equations clearly reveal
that the rpm is the main factor influencing the power consumption of the tool, while the feed rate has the lowest influence. From the
comparison of the tools it was found that, during turning of the EN-24 steel rod, the HSS tool consumes more power than
the carbide tool.

REFERENCES:
[1] D.V.V. Krishan Prasad Influence of Cutting Parameters on Turning Process Using Anova Analysis Research Journal of
Engineering Sciences Vol. 2(9), 1-6, September 2013.
[2] G. Barrow, ANN, CIRP 22,203-211.1973
[3] George Box, Donald Behnken, "Some new three level designs for the study of quantitative variables", Technometrics, Volume 2, pages 455-475, 1960.
[4] Box-Behnken designs, from a handbook on engineering statistics at NIST.
[5] Harsh Y Valera, Sanket N Bhavsar Experimental Investigation of Surface Roughness and Power Consumption in Turning
Operation of EN 31 Alloy Steel Procedia Technology Volume 14, 2014.
[6] Hari Singh and Pradeep Kumar Mathematical models of tool life and surface roughness for turning operation through response
surface methodology Journal of Scientific and Industrial Research Volume 66, March 2007.
[7] L. B. Abhang and M. Hameedullah Power Prediction Model for Turning EN-31 Steel Using Response Surface Methodology
Journal of Engineering Science and Technology Review 3 (1) (2010) 116-122.
[8] L.B.Abhang and M. Hameedullah Chip-Tool Interface Temperature Prediction Model for Turning Process International Journal
of Engineering Science and Technology Vol. 2(4), 2010, 382-393.
[9] M. Adinarayana, G. Prasanthi, G. Krishnaiah Parametric analysis and multi objective optimization of cutting parameters in
turning operation of AISI 4340 alloy steel with CVD cutting tool International Journal of Research in Engineering and
Technology Volume 03, Issue 02 , Feb-2014.
[10] M Adinarayana , G Prasanthi and G Krishnaiah Optimization for surface roughness, MRR, power consumption in turning of
EN24 alloy steel using genetic algorithm International Journal of Mechanical Engineering and Robotics Research Volume. 3,
No. 1, January 2014.
[11] Milan Kumar Das, Kaushik Kumar, Tapan Kr. Barman and Prasanta Sahoo, Optimization of Surface Roughness and MRR in
Electrochemical Machining of EN31 Tool Steel using Grey-Taguchi Approach Procedia Materials Science Volume 6 ,2014.
[12] Raman Kumar, Jaspreet Singh Rai and Navneet Singh Virk, Analysis of the effects of process parameters in EN24 alloy steel
during CNC turning by using MADM International Journal of Innovative Research in Science, Engineering and Technology
Volume. 2, Issue 7, July 2013.
[13] Rahul Davis, Jitendra Singh Madhukar, Vikash Singh Rana, Prince Singh Optimization of Cutting Parameters in Dry Turning
Operation of EN24 Steel International Journal of Emerging Technology and Advanced Engineering Volume 2, Issue 10, October
2012.
[14] S. H. Rathod, Mohd. Razik Finite Element Analysis of Single Point Cutting Tool International OPEN ACCESS Journal of
Modern Engineering Research Vol. 4, Iss.3, Mar. 2014.
[15] Satish Chinchanikar, S.K. Choudhury Effect of Tool Coating and Cutting Parameters during Turning Hardened AISI 4340 Steel
Procedia Materials Science Volume 6, 2014


Military Technologies in the Korean War: A Historical Overview


LubnaAhsan, Assistant Professor, Hamdard University, Karachi-PAKISTAN. (lubnaahsan12@gmail.com)
Dr. Syed Shahab Uddin, Assistant Professor, FUUAST, Karachi-PAKISTAN. (shahabhashmi2012@gmail.com)
Faisal Javaid, Lecturer, FUUAST, Karachi-PAKISTAN (faisaljavaid2008@gmail.com)

Abstract: This paper aims to present an overview of the military technology used during the Korean War. The paper begins with an
overview and background of the Korean War, especially the formation of both sides' militaries, the casualty statistics and the war strategy of the air and
ground forces. The paper also examines the role of the UN forces in the Korean War and the military technologies they used in the Korean War.
Keywords: Korean People, Soviet Union, United States, military, China

INTRODUCTION:
The Korean War is also known as the Chosun War and the inter-Korean War. It broke out on 25 June 1950 between the two
opposing ideologies of the Korean peninsula, with a number of countries supporting the two Koreas with varying degrees of involvement in the
war. On July 27, 1953 the two sides signed the Korean Armistice Agreement; because an armistice rather
than a peace agreement was signed, technically speaking the war is not over yet, and the UN and North Korea remain in a state of war.
On May 27, 2009, North Korea unilaterally declared that it would no longer abide by the armistice agreement laid down
in 19531.
The war broke out on June 25, 1950. The Korean People's Army captured the South Korean capital, Seoul, on June 28, 1950
and moved south to attack, compressing the UN forces into the Pusan perimeter defence. The United Nations forces' landing at Inchon
on September 15, 1950 reversed the situation of the war, forcing the Korean People's Army to move northwards. On September 28, 1950
the United Nations forces recaptured Seoul and crossed the military demarcation line, and a new phase of the Korean War began. On October 19, 1950
United Nations troops occupied the North Korean capital, Pyongyang.
After the U.S. landing at Inchon, Zhou Enlai held an emergency meeting with the Indian
ambassador Panikkar on October 3, 1950 and said that if U.S. forces, rather than South Korean troops, crossed the 38th parallel,
China would be forced to intervene in the Korean War. By the end of October 1950, part of the UN military forces
had advanced to the Yalu River. A few months after the Taiwan issue was shelved, the North Korean government requested help from the
People's Republic of China which, under the Soviet Union's promise to assist, sent the Chinese People's Volunteers into the Korean War
on October 19, 1950. In the second and third campaigns the Volunteers retook Pyongyang and Seoul and forced the UN forces
to retreat south of the 38th parallel2.
Korean War: The Formation of Both Militaries:
The South Korean army lacked weapons, tanks, an air force and heavy artillery, and most of its military officers had, during
World War II, a background in Japanese or Manchukuo military schools; examples include the later president Park Chung-hee and the
Chief of Staff Paik Sun-yup. According to U.S. Secretary of State Dean
Acheson, the Korean Peninsula was excluded from the scope of the U.S. defence perimeter.

From 1946 onwards the Soviet Union trained thousands of North Korean military officers. Each division
was also equipped with about fifteen Soviet advisers, plus a large number of Koreans who
had participated in the Anti-Japanese War and the Chinese Civil War and therefore had a wealth of practical combat experience.
With its soldiers of Korean nationality, the strength of the Korean People's Army was truly unmatched on the
peninsula. Before the outbreak of war, the military balance between the DPRK and the ROK was roughly 2:1 in troops, 2:1 in artillery,
7:1 in machine guns, 13:1 in semi-automatic rifles, 6.5:1 in tanks and 6:1 in aircraft. The Korean People's Army therefore held a
position of absolute dominance3.
(a) FIRST BATTLE
On the evening of 19 October 1950, under the command of Peng Dehuai, the Chinese People's Volunteers secretly crossed the
Sino-Korean border over the Yalu River at Andong (now called Dandong), Hekou (in Kuandian County), Ji'an and other locations,
and launched a surprise attack on October 25. The coalition forces had not expected China to enter the war in response to the crossing
of the military demarcation line into North Korea, and they had also not received any military intelligence that the Volunteers had crossed
the Yalu River.
(b) SECOND BATTLE
Although the first campaign had been a Waterloo for the coalition, MacArthur insisted that the troops China had sent were only symbolic, while he also
admitted that the United Nations forces were in danger of being totally destroyed. It was suggested that the
northeast region of China be massively bombed. However, it was clear to the Truman administration, with the Second World War
only just ended, that striking the People's Republic would likely trigger a third world war4.
(c) THIRD BATTLE
On December 31, 1950 the Chinese and Korean forces launched the third campaign, pushing the front about 50 miles south of the military demarcation
line; Seoul was occupied by the 50th Army of the People's Volunteers and the First Corps of the Korean People's Army.
Truman came into serious conflict with MacArthur, the commander of the coalition forces at the front.
Truman wanted to avoid a direct conflict with China or the Soviet Union and did not want to
trigger a third world war, whereas MacArthur placed priority on military victory; many of his actions on the Korean peninsula were
taken without Washington's approval, and some were even contrary to Washington's decisions. MacArthur's stance was very
dangerous in the nuclear era.
(d) FOURTH BATTLE
The fourth campaign of the Chinese People's Volunteers was launched too hastily, and the Volunteers therefore suffered
their first setback since entering the war. They had to abandon Incheon and Seoul and were
forced to retreat more than 100 kilometres across the whole front, withdrawing to the north of the military demarcation line. The
fourth campaign ended in failure5.
(e) FIFTH CAMPAIGN
By April the advantage on the Korean battlefield had shifted back towards the UN troops. On April 22, 1951 the Chinese People's
Volunteers launched the fifth campaign. After this offensive ended on the 29th, the United Nations forces began to
launch a second spring offensive, pressing towards Cheorwon and Yeoncheon, while the 63rd Army of the Volunteers clung to the
mountains by the water to hold them back. The United Nations forces crossed the military demarcation line for a second time; the
Volunteers were forced to retreat about 40 kilometres across the whole front before they barely managed to halt the attack by the
U.S.-led coalition.
Korean War: After the War:
At 10:00 on July 27, 1953 the two sides signed at Panmunjom the Korean Armistice Agreement and a
supplementary agreement on a temporary truce, the cease-fire agreement. The end result of the negotiations was that, at 22:00 on
July 27, 1953, a demilitarized zone two kilometres wide was established on each side of the actual line of control near the military
demarcation line, across the entire front. In 1954, Soviet officials and representatives of
the states that had fought on the Korean peninsula held talks in Geneva, Switzerland, but the negotiations did not reach
a permanent peace plan and failed to resolve the issue of Korean reunification. Until today, more than fifty years later, the
Korean peninsula is still divided into two countries: the Democratic People's Republic of Korea and the Republic of Korea6.
Casualty Statistics:
On March 23, 1953 the first group of remains of U.S. servicemen killed in the war was repatriated to the United States. According to
Chinese statistics, China's Volunteers suffered about 500,000 casualties during the Korean War: 171,687
killed or died and about 220,000 wounded (net of repeated woundings), for a total of about 390,000 battle casualties, while the total
number of casualties of the Korean People's Army was about 630,000 (see the Korean War Memorial). For the UN forces,
U.S. military casualties of about 140,000 were reported: 36,570 U.S. soldiers killed and roughly 100,000
wounded, after deducting an unknown number of repeatedly wounded. According to
South Korean statistics, 415,004 South Korean soldiers and civilians died, including 137,899 soldiers;
450,742 people were injured, 24,495 went missing and 8,343 were captured, totalling 621,479 people; the South Korean
prisoner-of-war and missing casualties corresponding to the figures missing from the U.S. records total 154,881 (South
Korean Defense Ministry website). The military history of the Chinese People's Volunteers in the Korean War published in 1988
states that about 1,090,000 enemy troops were put out of action (including 136,000 annihilated by the Korean People's
Army fighting independently), of which about 390,000 were U.S. troops, about 660,000 were South Korean troops
and more than 20,000 belonged to the other (UN) contingents7.
Technology in War:
From May 27 to June 23 the Volunteers launched a second offensive; the original plan of attacking mainly U.S. troops was changed to
one of attacking mainly South Korean troops. The 19th Corps, the 9th Corps and the 20th
Corps of the Volunteers, together with the Korean People's Army corps, attacked South Korea's defensive strong points 51 times and
supporting positions 65 times, putting a total of 41,203 UN troops out of action for 19,354 Volunteer
casualties. From July 13 to 27 the third attack, the Jincheng campaign, was launched, again directed mainly at
South Korean troops. In this campaign the 9th, 19th and 20th Corps of the Volunteers and the Korean People's Army
fought the South Korean army 45 times. From July 13 to 16, in just three days after switching from the offensive to the defensive,
they pushed the front forward along a line of 192.6 km. The UN troops counter-attacked more than 1,000 times with eight redeployed
divisions, but paid a huge price; before the armistice was signed on the 27th they had reoccupied only the Kitayama
position near Jincheng8.
COMBAT:
At the outbreak of the Korean War in 1950, the U.S. Far East Air Forces had 44 squadrons with 657 war planes. The KPA had only
about 20 fighters and soon lost its air combat capability. In August, Stalin sent 138 Soviet Air Force aircraft to be stationed in
Shenyang as a secret Soviet intervention.

On November 1, 1950 the Soviet Air Force MiG-15s stationed in advance in Shenyang fought the U.S. Air Force for the first time over the Yalu
River in North Korea. That same month, the Soviet Union decided to send fighters to the war, establishing a separate fighter corps
around the 64th Air Division; its MiG-15 jet fighters undertook the defence of the Yalu River and of strategic targets and lines of
communication within 75 kilometres south of the border. At this stage, the U.S. had about 14 groups with more than 1,100 war planes in
theatre (including combat aircraft and non-combat aircraft such as transports and liaison aircraft); they were numerous, but most were
propeller aircraft, and the only jet, the straight-wing F-80, performed far worse than the Soviet MiG-15. On November 8,
Air Force commander Vandenberg ordered one wing each of F-84Es and F-86As dispatched to the Korean Peninsula. The F-86s of the
4th Fighter Wing were transported by sea; only one squadron was stationed near Gimpo Airport in Seoul, and it flew its first missions
on December 15.
The Soviet Air Force MiG-15s were considered such a threat to U.S. bombers that the U.S. Air Force's November bombing of strategic
targets, the six Yalu bridges and ten cities in North Korea, failed to achieve its aims. The U.S. Air Force also suffered from geopolitical
constraints: its bombers could not fly into Chinese territory, while anti-aircraft fire came from that territory, which made the missions
difficult to perform, although most were still completed successfully.
In December, the newly formed People's Liberation Army Air Force was stationed in Dandong, and in
January 1951 it took part in combat for the first time. In late 1950 and early 1951, in the airspace of the northwestern
peninsula between the Yalu River and the Chongchon River to the south, the Soviet Air Force MiGs posed a
considerable threat to coalition aircraft. American pilots began to call this region "MiG Alley" (translated by the Chinese
media as "MiG Corridor"), an area where air battles would expand like brawls in a back alley9.
The most mysterious force in MiG Alley was the Soviet pilots. Stalin ordered Defence Minister
Marshal Vasilevsky to be responsible for sending the aviation divisions to China, and all Soviet combatants wore the uniforms of the
Chinese People's Volunteers. Although Stalin demanded strict confidentiality, in fact the coalition soon knew of the Soviet
intervention from listening to radio communications once the Soviet Union joined the combat ranks; however, throughout the Korean
War the coalition also chose an attitude of silence in order to avoid expanding the war. The U.S. Air Force Chief of Staff at that time,
Hoyt Vandenberg, publicly declared after returning from an inspection of the Far East that "they have become one of the world's major
air forces", deliberately exaggerating the strength of the Chinese air force as a case of mistaken identity in order to avoid exposing the
truth that large numbers of Soviet pilots were taking part in the war10.
Although the Soviet Air Force was involved, the declared record of the Volunteers' Air Force shows that it played an effective role
in the Korean War. According to the statistics of the People's Republic of China, the Volunteers' Air Force fought 2,457 air battles,
flew 26,491 combat sorties approved by the CPC, and 212 of its pilots shot down or damaged
UN aircraft. The Volunteers' Air Force claimed that 330 UN aircraft were shot down in total, while 231 of its own aircraft were shot
down and 116 of its pilots were killed. The United States declared that a total of 647 F-86s were deployed to the Korean theatre, with a
total loss of 231, of which 73 were identified as combat losses and 34 were attributed to other causes, including unexplained losses and
failures.
CONCLUSION:
In a nutshell, although the United States did not reach its target of Korean unification, it did achieve its objectives of
defending Japan and fostering NATO cooperation. The Korean War encouraged the United States to adopt the Cold War
containment policy and paved the way for further expansion of its defensive perimeter in Asia.
These policies eventually led the United States, during the Cold War, to try to stop the fall of Vietnam into communist hands.
The United States had 54,260 people killed in connection with the Korean War. General of the Army Omar Bradley, the U.S.
Chairman of the Joint Chiefs of Staff during the Korean War, said that extending the war in Korea to the bombing of Chinese
Manchuria and a blockade of the coast, as MacArthur's strategic plan proposed, would involve the United States "in the wrong war,
at the wrong place, at the wrong time, and with the wrong enemy". After the Americans experienced the baptism of the Vietnam War,
this war was almost forgotten, and the Korean War has therefore been called the forgotten war.

REFERENCES:
1. Andrew J, N. and Andrew, S. (2014). 91 Foreign Affairs 2012: How China Sees America: The Sum of Beijing's Fears Essay. [online] Heinonline.org. Available at: http://heinonline.org/HOL/LandingPage?handle=hein.journals/fora91&div=92&id=&page= [Accessed 27 Jul. 2014].
2. Bennett, B. and Lind, J. (2011). The collapse of North Korea: military missions and requirements. International Security, 36(2), pp. 84-119.
3. Clarke, R. and Knake, R. (2010). Cyber War. 1st ed. New York: Ecco.
4. Hallion, R. (2011). The Naval Air War in Korea. 1st ed. Tuscaloosa: University of Alabama Press.
5. Hughes, C. (2013). Japan's Re-emergence as a Normal Military Power. 1st ed. Oxford: Oxford University Press for the International Institute of Strategic Studies.
6. Sandler, S. (2014). The Korean War. 1st ed. Hoboken: Taylor and Francis.
7. Schrofl, J., Rajaee, B. and Muhr, D. (2011). Hybrid and Cyber War as Consequences of the Asymmetry. 1st ed. Frankfurt am Main: Peter Lang.
8. Weaver, A. (2011). Tourism and the military: Pleasure and the War Economy. Annals of Tourism Research, 38(2), pp. 672-689.
9. Shin, J. (2013). The Economics of the Latecomers. 1st ed. London: Routledge.
10. Slantchev, B. (2013). Military Threats: The Costs of Coercion and the Price of Peace. London: Cambridge University Press.


RELIABLE INTEGRITY SECURITY AUDITING USING ISOLATION PRESERVING MODEL FOR MULTI CLOUD ENVIRONMENTS

Mrs. S. SABITHA, M.Sc., M.Phil., Assistant Professor, tibusabi@gmail.com
N. KARTHIKA, M.Phil Full-Time Research Scholar, karthika196cse@gmail.com, 9750206137
Department of Computer Science and Applications, Vivekanandha College of Arts and Sciences for Women, Tamilnadu, India
Abstract - Cloud computing provides dynamically scalable resources as a service over the web. The third-party,
on-demand, self-service, pay-per-use and seamlessly scalable computing resources and services offered by the cloud environment
promise to reduce capital as well as operational expenditure for hardware and software. Various distinct architectures are introduced
and discussed according to their security and privacy capabilities and prospects, giving four distinct models in the form of abstracted
multicloud architectures. These multicloud architectures allow the available schemes to be categorized and analysed
according to their security benefits. In particular, an assessment of the different methods, 1) replication of applications, 2) partition of the application
system into tiers, 3) partition of the application logic into fragments and 4) partition of the application data into fragments, is given.
In addition, enabling public auditability for cloud storage is of critical importance so that users can resort to a third-party
auditor (TPA) to check the integrity of outsourced data and be worry-free. This paper proposes a secure cloud storage system
supporting isolation-preserving public auditing, and further extends the result to enable the TPA to perform audits for multiple users
simultaneously and efficiently.

Keywords: Cloud Computing, Multicloud, Integrity, Isolation Preserving Auditing, TPA


I. INTRODUCTION

Cloud computing creates a large number of security issues and challenges. A list of security threats to cloud computing is presented in [8]. These issues range from the required trust in the cloud provider and attacks on cloud interfaces to misuse of cloud services for attacks on other systems. The main problem that the cloud computing paradigm implicitly contains is that of secure outsourcing of sensitive as well as critical data and processes. When considering the use of a cloud service, the user must be aware that all data given to the cloud provider leave the user's own control and protection sphere. Hence, a strong trust relationship between the cloud provider and the cloud user is considered a general prerequisite in cloud computing. Depending on the political context, this trust may touch legal obligations. An attacker that has access to the cloud storage component is able to take snapshots of or alter data in the storage. This might be done once, multiple times, or continuously. An attacker that also has access to the processing logic of the cloud can also modify the functions and their input and output data.


Replication of applications allows the user to receive multiple results from one operation performed in distinct clouds and to compare them within the user's own premises. This enables the user to obtain evidence of the integrity of the result.
Partition of the application system into tiers allows the logic to be separated from the data. This gives additional protection against data leakage due to flaws in the application logic.
Partition of application logic into fragments allows the application logic to be distributed to distinct clouds. This has two benefits. First, no cloud provider learns the complete application logic. Second, no cloud provider learns the overall calculated result of the application. Thus, this leads to data and application confidentiality.
Partition of application data into fragments allows fine-grained fragments of the data to be distributed to distinct clouds. None of the involved cloud providers gains access to all the data, which safeguards the data's confidentiality.

The existing system utilizes the technique of a public-key-based homomorphic linear authenticator (HLA for short), which enables the third-party auditor to perform the auditing without demanding the local copy of the data and thus drastically reduces the communication and computation overhead compared to straightforward data auditing approaches. By integrating the HLA with random masking, the protocol guarantees that the TPA cannot learn any knowledge about the data content stored in the cloud server during the auditing process. The aggregation and algebraic properties of the authenticator further benefit the design of batch auditing. Various prime numbers are assigned as tags for each segment of a file stored in the server. Each segment has two prime numbers, each of which belongs to a different prime order. The third-party auditor knows the prime numbers in a random manner. During verification, the third-party auditor sends the numbers as a random challenge, and if the numbers match the tags, the file integrity is said to be verified.
The rest of this paper is organized as follows. Section 2 presents the related work, followed by the main contribution, multi-cloud security, as well as the problem definition in Section 3. Section 4 gives a brief introduction to isolation-preserving auditing and explains the proposed approach. Finally, Section 5 presents the evaluation of the algorithm, followed by the conclusions and future work described in Section 6 and Section 7.

II. RELATED WORK
The cloud computing paradigm has been hailed for its promise of enormous cost-saving potential. In spite of this euphoria, the consequences of a migration to the cloud need to be thoroughly considered. Amongst the many obstacles present, the highest weight is assigned to the issues arising within security. Cloud security discussions to date mostly focus on the fact that customers must completely trust their cloud providers with respect to the confidentiality and integrity of their data, as well as computation faultlessness. However, another important area is often overlooked: if the cloud control interface is compromised, the attacker gains immense power over the customer's data. This attack vector is a novelty resulting from the control interface (along with virtualization techniques) being a new feature of the cloud computing paradigm, as NIST lists on-demand self-service and broad network access as essential characteristics of cloud computing systems [1]. The main goal of the work in [2] is the investigation and evaluation of security and privacy threats caused by the unawareness of users in the cloud. Although the methods and techniques described in that work are applicable to arbitrary IaaS providers, the authors focused on one of the major cloud providers. However, to actually agree on a specific SLA, a user first has to assess his organizational risks related to security and resilience [3]. Current solutions that restrict the provision of sensitive services to dedicated private, hybrid, or so-called national clouds do not go far enough, as they reduce the user's flexibility when scaling in or out and still force him to trust the cloud provider. Furthermore, private clouds intensify the vendor lock-in problem. Last but not least, there is no support for deciding which services and data could be safely migrated to which cloud. Instead, new methods and technical support are demanded to put the user in a position to benefit from the advantages of cloud computing without giving up sovereignty over his data and applications. In their current work, the authors followed a system-oriented approach focusing on technical means to achieve this goal.
They identified security as a major obstacle that prevents someone from transferring his resources into the cloud. In order to make sound business decisions and to maintain or obtain security certifications, cloud customers need assurance that providers are following sound security practices and behave according to agreed SLAs [4]. Thus, their overall goal is the development of a flexible open-source cloud platform that integrates all necessary components for the development of user-controlled and user-monitored secure cloud
environments [5].

III. MAIN CONTRIBUTIONS
The proposed system includes the entire existing system approach, which covers multiple cloud service provider environments. In addition, blocks of data of varying sizes are processed in different cloud locations that hold the same copy of the data. The data blocks are stored and retrieved in different cloud locations based on storage and computational capability. Thus, the proposed system explores this issue to provide support for variable-length block verification. Likewise, the privacy level of all cloud providers is analyzed by a trusted authority, and the security degree and performance are quantified for the encryption algorithms. The main objectives of the proposed system are the following.

To replicate applications, which allows the user to receive multiple results from one operation performed in distinct clouds and to compare them within the user's own premises. This enables the user to obtain evidence of the integrity of the result.
To partition the application system into tiers, which allows the logic to be separated from the data. This gives additional protection against data leakage due to flaws in the application logic.
To partition the application logic into fragments, which allows the application logic to be distributed to distinct clouds. This has two benefits. First, no cloud provider learns the complete application logic. Second, no cloud provider learns the overall calculated result of the application. Thus, this leads to data and application confidentiality.
To partition the application data into fragments, which allows fine-grained fragments of the data to be distributed to distinct clouds. None of the involved cloud providers gains access to all the data, which safeguards the data's confidentiality.

IV. PROPOSED PROTOCOL
The proposed system includes the entire existing system approach, which covers multiple cloud service provider environments. In addition, blocks of data of varying sizes are processed in different cloud locations that hold the same copy of the data. The data blocks are stored and retrieved in different cloud locations based on storage and computational capability. Thus, the proposed system explores this issue to provide support for variable-length block verification. Likewise, the privacy level of all cloud providers is analyzed by a trusted authority, and the security degree and performance are quantified for the encryption algorithms. The proposed system has the following advantages.

Partial data of files are taken from multiple mirror locations and sent to the selected client.
Suitable for very large files.
Blocks of data of different sizes are handled among the multiple cloud service providers based on their computational capabilities.
Different trust levels are set for different cloud providers, and encryption/decryption is varied based on the cloud's computational capability.

PROTOCOL PROCESS
1) MULTI-CLOUD SECURITY
In this first step of the process, the cloud node id and the cloud provider name are added. There can be several cloud nodes for a single cloud provider. From the trusted authority, the cloud node receives secret tags for file blocks so that the blocks can be processed and verified by the cloud nodes. In the next step, files are added to cloud nodes and executed based on a) replication of applications from a random cloud node, b) partition of the application system into tiers, such that even the web server does not know the location of a record in the database server, c) partition of application logic into fragments, such that half of the application logic is in one file stored on one cloud node and the other half is in another file stored on another cloud node, and d) partition of application data into fragments, such that partial records reside in one cloud database and the remaining records in another cloud database.
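As a rough illustration of the data-partitioning idea in step d), the following Python sketch distributes records across two clouds so that neither provider holds the complete data set. The CloudNode class and its put/get methods are illustrative stand-ins, not an actual provider API or the system's implementation.

# Minimal sketch: partition application data into fragments across clouds.
class CloudNode:
    def __init__(self, name):
        self.name = name
        self.store = {}          # stand-in for a cloud provider's storage

    def put(self, key, value):
        self.store[key] = value

    def get(self, key):
        return self.store.get(key)


def partition_records(records, clouds):
    """Distribute records round-robin so no provider sees all the data."""
    placement = {}
    for i, record in enumerate(records):
        node = clouds[i % len(clouds)]
        node.put(i, record)
        placement[i] = node.name
    return placement


def reassemble(record_ids, clouds, placement):
    """Fetch fragments from their clouds and rebuild the full record set."""
    by_name = {c.name: c for c in clouds}
    return [by_name[placement[i]].get(i) for i in record_ids]


if __name__ == "__main__":
    clouds = [CloudNode("cloud-A"), CloudNode("cloud-B")]
    records = [("record-%d" % i, "payload") for i in range(6)]
    placement = partition_records(records, clouds)
    print(reassemble(range(6), clouds, placement))

With this placement, compromising a single provider exposes only part of the records, which is the confidentiality benefit described above.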

2) PRIVACY PRESERVING AUDITING PROTOCOL


In this step, the file name is selected, the file content is split into various segments, and each segment is given two prime numbers, each belonging to a different prime order. One is given to the user, and the other is given to the third-party auditor. The combination of the two is kept on the server. During auditing, the third-party auditor randomly picks segment ids and sends the corresponding prime number vector to the cloud server. If the credentials match, the file integrity is said to be verified.
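The following toy Python sketch illustrates only the bookkeeping of this challenge and response exchange under simplifying assumptions: the server keeps the product of the two primes as the "combination", and matching is modeled by divisibility. It is not a cryptographically secure auditing scheme and is not the paper's implementation.

# Toy sketch of the prime-tag challenge/response bookkeeping described above.
import random

def assign_tags(num_segments, primes_user, primes_tpa):
    """Give each segment one prime for the user and one for the TPA;
    the server keeps the combination (here, the product)."""
    user_tags, tpa_tags, server_tags = {}, {}, {}
    for seg in range(num_segments):
        p, q = primes_user[seg], primes_tpa[seg]
        user_tags[seg] = p
        tpa_tags[seg] = q
        server_tags[seg] = p * q
    return user_tags, tpa_tags, server_tags

def tpa_audit(tpa_tags, server_tags, sample_size=2):
    """TPA randomly picks segment ids and challenges the server."""
    challenge = random.sample(sorted(tpa_tags), sample_size)
    for seg in challenge:
        if server_tags[seg] % tpa_tags[seg] != 0:   # credential mismatch
            return False
    return True

primes_user = [5, 11, 17, 23]
primes_tpa = [7, 13, 19, 29]
user, tpa, server = assign_tags(4, primes_user, primes_tpa)
print("integrity verified:", tpa_audit(tpa, server))

Batch auditing, as described in the next step, corresponds to running two such challenge sets from the same TPA concurrently.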

3) BATCH AUDITING PROTOCOL


In this step, during auditing, two processes of the same third-party auditor randomly pick two sets of segment ids and send the corresponding prime number vectors to the cloud server. If the credentials match, the file integrity is said to be verified.

4) STORAGE AND COMPUTATIONAL CAPABILITY BASED FILE STORAGE

A) FILE SELECTION
In this step, the file content is selected from client files. The file data is saved in cache.
B) ENCRYPTION
In this step, either DES (Data Encryption Standard) or AES (Advanced Encryption Standard) encryption is carried out, and the selected file is encrypted.
Speed: The requirement at this level assumes that there is no sensitive information in the data. A cloud location with low computational capability uses the weaker encryption scheme (DES), and a location with high computational capability uses the stronger scheme (AES), so as to obtain better performance when using cloud services.
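A minimal sketch of this capability-based cipher selection is given below; the des_encrypt and aes_encrypt helpers are placeholders rather than real cipher implementations, and the capability threshold is an assumption introduced for illustration.

# Sketch of capability-based cipher selection (illustrative only).
def des_encrypt(data, key):
    return ("DES", data)      # placeholder, not a real cipher

def aes_encrypt(data, key):
    return ("AES", data)      # placeholder, not a real cipher

def encrypt_for_node(data, key, node_capability, threshold=0.5):
    """Low-capability nodes use the lighter cipher (DES); high-capability
    nodes use the stronger one (AES), as described in the text."""
    if node_capability < threshold:
        return des_encrypt(data, key)
    return aes_encrypt(data, key)

print(encrypt_for_node(b"record", b"key", node_capability=0.8))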
C) DECRYPTION
In this step, the corresponding decryption work (DES and AES) is carried out.


V. EXPERIMENTAL RESULTS
Table 1.1 describes the theoretical analysis of the existing and proposed systems in terms of replication requirements and computation overhead.

METHOD                                      | REPLICATION  | COMPUTATION OVERHEAD
Resource Replication                        | Required     | In client only
PIR based segmentation                      | Not required | Low in client tier, more in database tier (stored procedure), negligible in web tier
Segmentation of application logic and data  | Not required | In client only
Third party auditing                        | Not required | High in client, low in third-party system, negligible in cloud node

FIG 1.1 TPA Computation Time Chart Comparison

Fig 1.2 Chart Comparisons for TPA Computation Time In %


Findings:


The proposed system provides a safe cloud storage methodology that supports privacy-preserving third-party auditing better than the existing system.
This work suggests that security can be increased if the architecture is changed from a single-cloud to a multi-cloud environment.
The security mechanisms involved during third-party auditing of outsourced data are discussed.
Methods are studied to perform the auditing without demanding the local copy of the data and thus drastically reduce the communication and computation overhead.
Four schemes are presented that can be applied in a multi-cloud environment to increase the security aspects.
Hiding the resource usage statistics of a single resource of a single cloud provider is achieved if the first method is applied.
The computation and data transfer size are very low if the second method is applied.
The third method provides security such that a single provider is not aware of the execution flow of the whole application, and no single cloud provider can know or access all the data.
The fourth method provides the benefit of auditing with very little credential data needed to verify the file content.
It is shown that the third-party auditing computation time is better than in the existing approach.

Future study should focus on the security proof and on enhancements to data retrieval in the proposed framework.

ACKNOWLEDGMENT
I express my deep gratitude and sincere thanks to my supervisor, Mrs. S. Sabitha, M.Sc., M.Phil., Assistant Professor, Department of Computer Science, Vivekanandha College of Arts and Sciences for Women, whose valuable suggestions, innovative ideas, constructive criticism, and inspiring guidance enabled me to complete the present work successfully.

VI. CONCLUSION
It is believed that almost all the system objectives planned at the commencement of the software development have been met and that the implementation of the work is complete. A trial run of the system has been made and is giving good results; the processing procedures are simple and follow a regular order. Any processing steps that have been missed out may be considered in further modifications of the application. The system effectively stores and retrieves records from the cloud database server. The records are encrypted and decrypted whenever necessary so that they remain secure.

VII. FUTURE ENHANCEMENTS


The following enhancements could be made in the future.

If the application is developed as web services, then many applications can make use of the records.
Data integrity in the cloud environment is not yet fully considered; error situations could be recovered from when a mismatch is detected.
The web site and database can be hosted in a real cloud environment during implementation.

REFERENCES

1. S. Bugiel, S. Nurnberger, T. Poppelmann, A.-R. Sadeghi, and T. Schneider, "AmazonIA: When Elasticity Snaps Back," Proc. 18th ACM Conf. Computer and Comm. Security (CCS '11), pp. 389-400, 2011.
2. Amazon Elastic Compute Cloud (Amazon EC2). http://aws.amazon.com/ec2/.
3. D. Catteddu (Ed.), "Security & Resilience in Governmental Clouds: Making an Informed Decision," ENISA Report, January 2011.
4. J. Somorovsky, M. Heiderich, M. Jensen, J. Schwenk, N. Gruschka, and L. Lo Iacono, "All Your Clouds Are Belong to Us: Security Analysis of Cloud Management Interfaces," Proc. Third ACM Workshop on Cloud Computing Security (CCSW '11), pp. 3-14, 2011.
5. P. Mell and T. Grance, "The NIST Definition of Cloud Computing (Draft)," Recommendations of the National Institute of Standards and Technology (NIST), Special Publication 800-145 (Draft), available at http://csrc.nist.gov/publications/drafts/800-145/Draft-SP-800-145_cloud-definition.pdf, January 2011.
6. G. Danezis and B. Livshits, "Towards Ensuring Client-Side Computational Integrity (Position Paper)," Proc. ACM Cloud Computing Security Workshop (CCSW '11), pp. 125-130, 2011.
7. S. Gross and A. Schill, "Towards User Centric Data Governance and Control in the Cloud," Proc. IFIP WG 11.4 Int'l Conf. Open Problems in Network Security (iNetSeC), pp. 132-144, 2011.
8. M. Burkhart, M. Strasser, D. Many, and X. Dimitropoulos, "SEPIA: Privacy-Preserving Aggregation of Multi-Domain Network Events and Statistics," Proc. USENIX Security Symp., pp. 223-240, 2010.


ANALYSIS OF EFFECTIVE MULTI USER DISTRIBUTION KEY MANAGEMENT SCHEME IN CLOUD DATABASE

Dr. P. SUMITRA, M.Sc., M.Phil., MCA., Ph.D., ASSISTANT PROFESSOR, sumitravaradharajan@gmail.com
M. KAVINNELA, M.PHIL FULL-TIME RESEARCH SCHOLAR, Kavi92.msc@gmail.com
DEPARTMENT OF COMPUTER SCIENCE AND APPLICATIONS,
VIVEKANANDHA COLLEGE OF ARTS AND SCIENCES FOR WOMEN, TAMILNADU, INDIA.

Abstract: The Database-as-a-Service paradigm poses several research challenges in terms of security and cost evaluation from a tenant's point of view. The cloud database as a service is a novel paradigm that can support several Internet-based applications, but its adoption requires the solution of information confidentiality problems. A novel architecture for adaptive encryption of public cloud databases offers an interesting alternative to the tradeoff between the required data confidentiality level and the flexibility of the cloud database structures at design time. This paper proposes a novel architecture for adaptive encryption of public cloud databases that offers a proxy-free alternative to existing systems. The work demonstrates the feasibility and performance of the proposed solution through a software prototype. The proposed architecture manages five types of information: plain data represent the tenant information; encrypted data are the encrypted version of the plain data and are stored in the cloud database; plain metadata represent the additional information that is necessary to execute SQL operations on encrypted data; encrypted metadata are the encrypted version of the plain metadata and are stored in the cloud database; the master key is the encryption key of the encrypted metadata and is known only by legitimate clients.
Keywords: Cloud Database, Adaptive Encryption, SQL Operations, Metadata, Distributed SQL Operation, Multi-key Distribution

I. INTRODUCTION
Managing and providing computational resources to client applications is one of the main challenges for the high-performance computing community. To monitor resources, existing solutions rely on a job abstraction for resource control, where users submit their applications as batch jobs to a resource management system responsible for job scheduling and resource allocation [1]. This usage model has served the requirements of a large number of users and the execution of numerous scientific applications. However, it requires the user to know the environment on which the application will execute very well. In addition, users can sometimes require administrative privileges over the resources to customize the execution environment of the job model. The increasing availability of virtual machine technologies has enabled another form of resource control based on the abstraction of containers. A virtual machine can be leased and used as a container for deploying applications [2]. Under this scenario, users lease a number of virtual machines with the operating system of their choice; these virtual machines are further customized to provide the software stack required to execute user applications. This form of resource control has allowed leasing abstractions that enable a number of usage models, including that of batch job scheduling [3].
We investigate whether an infrastructure operating its local cluster can benefit from using cloud providers to improve the performance of its users' requests. We evaluate scheduling strategies suitable for a distributed cloud that is managed by the proposed technology to improve its SQL operations over adaptively encrypted data values. These strategies aim to utilize remote resources from the cloud to augment the capacity of the SQL operation. However, as the use of cloud resources incurs a cost, the problem is to determine the price at which this performance improvement is achieved; the aim is to explore the trade-off between performance improvement and cost. With homomorphic encryption, the server does not need the decryption key. As an application, the authors suggested private data banks: a user can store its data on an untrusted server in encrypted form, yet still allow the server to process, and respond to, the user's data queries (with responses more concise than the trivial solution, in which the server just sends all of the encrypted data back to the user to process). Since then, cryptographers have accumulated a list of killer applications for fully homomorphic encryption. However, prior to this proposal, we did not have a
viable construction.
The rest of this paper is organized as follows. Section 2 presents the related work, followed by the main contribution, the distributed cloud, as well as the problem definition in Section 3. Section 4 gives a brief introduction to multi-key distribution and explains the proposed approach. Finally, Section 5 presents the evaluation of the algorithm, followed by the conclusions described in Section 6.

II. RELATED WORK


Effectively protecting the privacy of information stored in cloud databases represents an important objective for the adoption of the cloud. Our proposal is characterized by two main contributions to the state of the art: an architecture and a cost model. Although data encryption seems the most intuitive solution for privacy, its application to cloud database services is not trivial, because the cloud database must be able to execute SQL operations directly over encrypted data without accessing any decryption key [7] [8]. Encrypting the whole database through standard encryption algorithms does not allow any SQL operation to be executed directly on the cloud. As a consequence, the tenant has two alternatives: download the entire database, decrypt it, execute the query and, if the operation modifies the database, encrypt and upload the new data; or temporarily decrypt the cloud database, execute the query, and re-encrypt it. The former solution is affected by huge communication and computation overheads, and consequent costs that would make cloud database services quite inconvenient; the latter solution does not guarantee data confidentiality because the cloud provider obtains the decryption keys [6].
This paper focuses on database services and takes an opposite direction by analyzing the cloud service costs from a broader point of view. This approach is rather original because related work evaluates the process and cost of porting scientific applications to a distributed cloud platform, such as [4], focusing on specific astronomy software and a specific distributed cloud provider, and [5] [9], presenting a composable cost estimation model for some classes of scientific applications. Besides the focus on a different context (scientific versus database applications), the proposed model can be applied to any distributed cloud database service provider, and it takes into account that over a medium-term period the database workload and the distributed cloud prices may vary.

III. MAIN CONTRIBUTIONS


In the existing system, all data and metadata stored in the cloud database are encrypted, and an application running on a legitimate client can transparently issue SQL operations (e.g., SELECT, INSERT, UPDATE, and DELETE) to the encrypted cloud database through the encrypted database interface. Data transferred between the user application and the encryption engine is not encrypted, whereas information is always encrypted before being sent to the cloud database. When an application issues a new SQL operation, the encrypted database interface contacts the encryption engine, which retrieves the encrypted metadata and decrypts them with the master key. To improve performance, the plain metadata are cached locally by the client. After obtaining the metadata, the encryption engine is able to issue encrypted SQL statements to the cloud database and then to decrypt the results. The results are returned to the user application through the encrypted database interface. The limitations of the existing system are:

A multi-user key distribution scheme is not provided to give data access to the same group of users.
The encryption cost, and thereby the data transmission cost, is high.
The same kind of encryption is maintained for all the data saved in the cloud nodes.

IV. PROPOSED PROTOCOL


Like the existing system, the proposed system manages data on both the cloud server side and the client side. In addition, a user group is maintained so that a single key is distributed to multiple users in the same group, reducing the key preparation overhead for each user. This results in less computation overhead on both the client and the server side. Also, based on the security level, different data are encrypted with different encryption mechanisms, securing the data in an inexpensive manner.

A multi-user key distribution scheme is proposed to provide data to the same group of users.
The encryption cost, and thereby the data transmission cost, is reduced.
Different kinds of encryption are maintained for different data saved in the cloud nodes, based on the security level requirement.

ALGORITHM WORK MODEL


1. Records Collection
In this step, the records to be saved in the cloud database (for example, employees and their attendance details) are keyed in and saved. The Employees and Attendance tables are used to save the records. The plain metadata keyword(s) are also obtained for the given record and saved along with the employee data.
2. Records Encryption
In this step, using the Triple Data Encryption Standard and Advanced Encryption Standard algorithms, the records (employee details) are encrypted and saved in the cloud database. The EncEmployees and EncAttendance tables are used to save the records. Metadata keywords are also encrypted and saved in the database, so the cloud database contains both encrypted metadata and encrypted data.
3. User
In the first step, the user id, username, and password, along with the email id details, are added and saved into the Users table. In the second step, the user group id and user group name details are added and saved into the UserGroup table. In the third step, the user group id and user id details are fetched from the UserGroup and Users tables and saved into the UsersAssigned table.
4. Distribute Key to User Group
In this step, the user group ids are fetched from the UserGroup table, and the encryption master keys that are used to encrypt the employee and attendance details are given to that user group. Mails are sent to all the users in the user group with the encryption master keys.
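A minimal Python sketch of this group-key distribution step is shown below; the UserGroup class and notify function are hypothetical names introduced only for illustration, with the mail transport replaced by a print statement.

# Sketch: one master key per user group, handed to every member,
# instead of one key per user.
import secrets

class UserGroup:
    def __init__(self, group_id, members):
        self.group_id = group_id
        self.members = members                 # list of user email ids
        self.master_key = secrets.token_hex(16)

def distribute_group_keys(groups, notify):
    """Send the group's single master key to every member of the group."""
    for g in groups:
        for user in g.members:
            notify(user, g.group_id, g.master_key)

def notify(user, group_id, key):
    # stand-in for the mail that carries the encryption master key
    print("mail to %s: group %s key %s" % (user, group_id, key))

groups = [UserGroup("HR", ["a@example.com", "b@example.com"]),
          UserGroup("Payroll", ["c@example.com"])]
distribute_group_keys(groups, notify)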
5. Client Application
In this step, the application runs on a legitimate client. SQL operations (e.g., SELECT, INSERT, UPDATE, and DELETE) are issued to the encrypted cloud database through the encrypted database interface. Data transferred between the user application and the encryption engine is not encrypted, whereas information is always encrypted before being sent to the cloud database. When a user issues a new SQL operation, the encrypted database interface contacts the encryption engine, which retrieves the encrypted metadata and decrypts them with the given master key. After obtaining the metadata, the encryption engine is able to issue encrypted SQL statements to the cloud database and then to decrypt the results. The results are returned to the user application through the encrypted database interface.
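The client-side flow of step 5 can be sketched as follows. The encrypt/decrypt helpers are trivially reversible placeholders standing in for the adaptive encryption layer, and the column-name mapping is a simplified stand-in for the encrypted metadata.

# Sketch of the client flow: decrypt metadata with the group master key,
# run the query over encrypted values, then decrypt the results.
def encrypt(value, key):
    return "enc(" + str(value) + ")"          # placeholder transform

def decrypt(value, key):
    return value[4:-1] if value.startswith("enc(") else value

def load_metadata(encrypted_metadata, master_key):
    """Plain metadata maps logical column names to encrypted column names."""
    return {decrypt(k, master_key): v for k, v in encrypted_metadata.items()}

def issue_select(column, value, metadata, master_key, cloud_rows):
    enc_column = metadata[column]
    enc_value = encrypt(value, master_key)
    hits = [row for row in cloud_rows if row.get(enc_column) == enc_value]
    return [{column: decrypt(row[enc_column], master_key)} for row in hits]

master_key = "group-key"
encrypted_metadata = {encrypt("emp_name", master_key): "c1"}
cloud_rows = [{"c1": encrypt("alice", master_key)},
              {"c1": encrypt("bob", master_key)}]
metadata = load_metadata(encrypted_metadata, master_key)
print(issue_select("emp_name", "alice", metadata, master_key, cloud_rows))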

V. EXPERIMENTAL RESULTS
Figure 1.1 describes the data transmission cost analysis of the existing system. The figure shows the data size per SQL operation and the average encrypted data size per SQL operation.


Figure 1.2 describes the data transmission cost analysis of the proposed system, again showing the data size per SQL operation and the average encrypted data size per SQL operation. The comparison of the existing and proposed systems' data transfer costs shows that the proposed multi-key distribution model performs better for distributed cloud environments.

ACKNOWLEDGMENT
I express my deep gratitude and sincere thanks to my supervisor, Dr. P. Sumitra, M.Sc., M.Phil., MCA., Ph.D., Assistant Professor, Department of Computer Science, Vivekanandha College of Arts and Sciences for Women, whose valuable suggestions, innovative ideas, constructive criticism, and inspiring guidance enabled me to complete the present work successfully.

VI. CONCLUSION
We address data privacy concerns by proposing a novel cloud database model that uses adaptive encryption techniques with no intermediate servers. This scheme provides tenants with the best level of privacy for any database workload, even one that changes over a medium-term period. We investigate the feasibility and performance of the proposed architecture through a large set of experiments based on a software prototype. Our analysis shows that the network latencies typical of cloud database environments hide most of the overheads related to static and adaptive encryption. Moreover, we propose a model and a methodology that allow a tenant to estimate the costs of plain and encrypted cloud database services, even in the case of workload and cloud price variations over a medium-term period. By applying the model to actual cloud provider prices, we can determine the encryption and adaptive encryption costs for data privacy. Future research could analyze the proposed model for distributed user key schemes and under different threat model hypotheses.

REFERENCES:
1. R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, "Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility," Future Generation Comput. Syst., vol. 25, no. 6, pp. 599-616, 2009.
2. T. Mather, S. Kumaraswamy, and S. Latif, Cloud Security and Privacy: An Enterprise Perspective on Risks and Compliance. Sebastopol, CA, USA: O'Reilly Media, Inc., 2009.
3. H.-L. Truong and S. Dustdar, "Composable cost estimation and monitoring for computational applications in cloud computing environments," Procedia Comput. Sci., vol. 1, no. 1, pp. 2175-2184, 2010.
4. E. Deelman, G. Singh, M. Livny, B. Berriman, and J. Good, "The cost of doing science on the cloud: The Montage example," in Proc. ACM/IEEE Conf. Supercomputing, 2008, pp. 1-12.
5. H. Hacigumus, B. Iyer, and S. Mehrotra, "Providing database as a service," in Proc. 18th IEEE Int. Conf. Data Eng., Feb. 2002, pp. 29-38.
6. G. Wang, Q. Liu, and J. Wu, "Hierarchical attribute-based encryption for fine-grained access control in cloud storage services," in Proc. 17th ACM Conf. Comput. Commun. Security, 2010, pp. 735-737.
7. Google. (2014, Mar.). Google Cloud Platform Storage with server-side encryption [Online]. Available: http://googlecloudplatform.blogspot.it/2013/08/google-cloud-storage-now-provides.html
8. H. Hacigumus, B. Iyer, C. Li, and S. Mehrotra, "Executing SQL over encrypted data in the database-service-provider model," in Proc. ACM SIGMOD Int'l Conf. Manage. Data, Jun. 2002, pp. 216-227.
9. L. Ferretti, M. Colajanni, and M. Marchetti, "Distributed, concurrent, and independent access to encrypted cloud databases," IEEE Trans. Parallel Distrib. Syst., vol. 25, no. 2, pp. 437-446, Feb. 2014.


Performance Enhancement of Three Phase Squirrel Cage Induction Motor using BFOA

M. Elakkiya (1), D. Muralidharan (2)
(1) PG Student, Power Systems Engineering, Department of EEE, V.S.B. Engineering College, Karur - 639111, India
(2) Assistant Professor, Department of EEE, V.S.B. Engineering College, Karur - 639111, India
elakkiya.kavin@gmail.com, dmuralidharan20@gmail.com

Abstract: This paper describes an intelligent bio-inspired optimization technique to minimize the square of the errors in the parameters of an induction motor. Torque ripples and speed error degrade the performance of the machine. In order to enhance the performance parameters of a cage-rotor induction motor, the speed error and torque ripples should be minimized. This can be achieved by tuning the gain parameters of the PI controllers. Hence, this research work focuses on a new optimization technique called the Bacterial Foraging Optimization Algorithm (BFOA). Here, Bacterial Foraging Optimization (BFO) is used to efficiently tune the derivative-free Proportional Integral (PI) controller to an optimum value. This algorithm is used for tuning the speed controller, flux controller, and torque controller to achieve the desired control performance. Hence, the machine can run at the reference speed under dynamically varying conditions. Also, the peak overshoot, undershoot, and settling time can be minimized. Simulation results are presented using MATLAB, and the hardware implementation will be future work.

Keywords: Bacterial foraging optimization, PI controller, Squirrel cage induction motor, Sensorless speed estimation, chemotaxis, swarming, dispersal.
INTRODUCTION

Three-phase induction motors are widely used in industrial, domestic, as well as commercial applications [6]. The squirrel-cage rotor type, especially, is used because of its advantages such as a simple and rugged design, low maintenance, and low cost. On the other hand, speed control is one of the major difficult tasks in the case of an AC induction motor. Therefore, improving the performance of the machine is essential, and controlling the speed is very important. For the control of AC motors, the two methods are:

Field oriented control
Direct torque control

In the Field Oriented Control or vector control scheme, torque and speed control is achieved by decoupling the stator components. But there is still complexity in implementation, and it also requires the necessary coordinate transformations. These drawbacks are overcome by the introduction of the Direct Torque Control (DTC) scheme for AC motors. In this scheme, only the stator resistance is required for the estimation of torque and flux, and the dynamic torque response is very fast. Here, decoupling of the stator flux components can be achieved by directly controlling the magnitude of the stator flux. The stator voltage measurements should have as low an offset error as possible in order to minimize the flux estimation error. Hence, the stator voltages are usually estimated from the measured DC intermediate circuit voltage. PI controllers are used to keep the measured components, such as torque or flux, at their reference values. The classical PI (proportional, integral) control method is mostly used in motor control systems to eliminate forced oscillations and steady-state error. But such controllers are slow in adapting to parameter variations, load disturbances, and speed changes. Several design techniques for PI controllers are found in the literature, from the Ziegler-Nichols method up to modern ones (ANN, fuzzy, evolutionary programming, sliding mode, etc.). Thus, many intelligent techniques have been used for tuning the controllers. In genetic algorithms and particle swarm optimization, premature convergence degrades the performance of the system.
In this paper, an evolutionary optimization technique called the bacterial foraging optimization algorithm is proposed for efficient tuning of the PI controller. BFO comprises the steps of chemotaxis, swarming, reproduction, and elimination and dispersal. Two characteristics, swimming and tumbling, are used for the movement of a bacterium. In the next step, it signals the neighbouring bacteria to form a swarm (group). The healthier bacteria reach the reproduction stage and each gets split into two
groups. The least healthy bacteria are eliminated and dispersed. Several iterations are carried out to find the best solution. Hence, this BFO technique enhances the search capability and also overcomes premature convergence. Thus, error minimization can be achieved for the controllers through optimal tuning of the gain parameters.
ESTIMATION OF INDUCTION MOTOR PARAMETERS

For the estimation of the induction motor parameters, sensorless speed estimation is used. The conventional speed sensor is replaced by sensorless speed estimation to achieve more economical control. In order to minimize the torque ripples, sensorless estimates of speed, torque, flux, and theta are calculated using the stator current. Sensorless speed estimation improves reliability and decreases the maintenance requirements. The torque ripples can be minimized by the following estimation of the induction motor parameters. The stator flux and the electromagnetic torque can be controlled directly by directly controlling the voltage and current. Park's transformation of the stator voltage and current is performed to reduce the machine complexity caused by the angle- and time-varying inductance terms. Also, the three-phase to two-phase (d-q) conversion makes the estimation quite easy. The current equations in d-q terms are obtained from the abc quantities using the following equations,
I_d = (2/3) [ I_a sin(ωt) + I_b sin(ωt - 2π/3) + I_c sin(ωt + 2π/3) ]   (1)

I_q = (2/3) [ I_a cos(ωt) + I_b cos(ωt - 2π/3) + I_c cos(ωt + 2π/3) ]   (2)

I_0 = (1/3) (I_a + I_b + I_c)   (3)

Similarly the voltage equations are,


V_d = (2/3) [ V_a sin(ωt) + V_b sin(ωt - 2π/3) + V_c sin(ωt + 2π/3) ]   (4)

V_q = (2/3) [ V_a cos(ωt) + V_b cos(ωt - 2π/3) + V_c cos(ωt + 2π/3) ]   (5)

V_0 = (1/3) (V_a + V_b + V_c)   (6)

The stator flux estimation is given as,

ψ_sd = ∫ (V_sd - R_s I_sd) dt   (7)

ψ_sq = ∫ (V_sq - R_s I_sq) dt   (8)

where R_s is the stator resistance. From the d-axis and q-axis stator flux components, the magnitude of the stator flux is obtained as √(ψ_sd² + ψ_sq²), and the torque equation is,

T_e = (3/2) P (ψ_sd I_sq - ψ_sq I_sd)   (9)

where T_e is the electromagnetic torque and P is the number of pole pairs. The stator currents for the d-axis and q-axis are denoted I_sd and I_sq, respectively; similarly, the stator voltages for the d-axis and q-axis are denoted V_sd and V_sq, respectively. With sensorless speed estimation, the electrical speed is obtained from the estimated torque, current, and voltage. The rotor flux is obtained as (L_m / L_r) times the stator flux. The rotor angle is given as,

θ = tan⁻¹(ψ_sq / ψ_sd)   (10)

The estimated speed is given as,

N_e = N_r(field) - S   (11)

where the speed of the rotor field and the slip are,

N_r(field) = (ψ_rd dψ_rq/dt - ψ_rq dψ_rd/dt) / (rotor flux)²   (12)

S = (T_e R_r / 2) / ((3/2) (rotor flux)²)   (13)
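Assuming uniformly sampled phase measurements, the estimation equations above can be evaluated numerically as in the following Python sketch; the numeric values are illustrative only, and the discrete integration is a simple rectangular rule rather than the estimator used in the Simulink model.

# Numerical sketch of Eqs. (1)-(2), (7)-(8) and (9).
import math

def park_dq(ia, ib, ic, wt):
    """Eqs. (1)-(2): three-phase currents to d-q components."""
    two_pi_3 = 2.0 * math.pi / 3.0
    i_d = (2.0 / 3.0) * (ia * math.sin(wt)
                         + ib * math.sin(wt - two_pi_3)
                         + ic * math.sin(wt + two_pi_3))
    i_q = (2.0 / 3.0) * (ia * math.cos(wt)
                         + ib * math.cos(wt - two_pi_3)
                         + ic * math.cos(wt + two_pi_3))
    return i_d, i_q

def stator_flux(v_samples, i_samples, r_s, dt):
    """Eqs. (7)-(8): discrete integration of (v - Rs*i)."""
    psi = 0.0
    for v, i in zip(v_samples, i_samples):
        psi += (v - r_s * i) * dt
    return psi

def torque(psi_sd, psi_sq, i_sd, i_sq, pole_pairs):
    """Eq. (9): electromagnetic torque."""
    return 1.5 * pole_pairs * (psi_sd * i_sq - psi_sq * i_sd)

i_d, i_q = park_dq(10.0, -5.0, -5.0, 0.3)
psi_d = stator_flux([230.0] * 100, [i_d] * 100, 0.5, 1e-4)
psi_q = stator_flux([230.0] * 100, [i_q] * 100, 0.5, 1e-4)
print(torque(psi_d, psi_q, i_d, i_q, pole_pairs=2))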

CONTROLLER USING OPTIMIZATION TECHNIQUES

a) PI CONTROLLER
The Proportional and Integral (PI) controller is widely used in the speed control of motor drives. The proportional controller improves the steady-state tracking accuracy and load disturbance signal rejection. It also decreases the sensitivity of the system to parameter variations. Proportional control is not used alone because it produces a constant steady-state error. Hence, a Proportional plus Integral (PI) controller is used to eliminate the forced oscillations, i.e., peak overshoot and undershoot, as well as the steady-state error. The PI control law is,

u(t) = K_p e(t) + K_i ∫ e(t) dt   (14)

where K_p is the proportional gain and K_i is the integral gain.

Fig 1. Classical PI controller


Fig. 1 shows the classical PI controller. The speed controller compares the actual motor speed with the corresponding reference speed and outputs the electromagnetic torque reference. Tuning is the adjustment of the control/gain parameters (proportional, integral) to the optimum values for the desired control response. The Ziegler-Nichols method is one of the tuning methods for controllers, but it has a major drawback: very aggressive tuning.
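A minimal discrete-time PI controller of the form in Eq. (14) can be written as follows; the gains, sampling time, and saturation limits are illustrative (the 0 to 12 saturation range follows the controller settings reported later in this paper).

# Discrete PI speed controller with output saturation.
class PIController:
    def __init__(self, kp, ki, dt, out_min=0.0, out_max=12.0):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def step(self, reference, measured):
        error = reference - measured
        self.integral += error * self.dt
        u = self.kp * error + self.ki * self.integral
        # saturation limit on the controller output
        return min(max(u, self.out_min), self.out_max)

controller = PIController(kp=5.0, ki=100.0, dt=1e-3)
print(controller.step(reference=1500.0, measured=1200.0))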

b) PI CONTROLLER WITH BFOA


The tuning of the PI controller gain parameters is a difficult task. For efficient tuning, the bacterial foraging optimization algorithm is used to select the proportional (Kp) and integral (Ki) gain constants. Consider the speed controller block, which tunes the gain parameters Kp and Ki.

Fig 2. PI controller with BFOA

The following block diagram shows the controller block with BFOA:


Fig 3. Overall block diagram with BFOA tuning

BACTERIAL FORAGING OPTIMIZATION ALGORITHM


The following are the steps in BFOA

1. CHEMOTAXIS
In this step, the process of swimming and tumbling of bacteria (such as E. coli) searching for a food location is carried out using flagella. Through the swimming action, a bacterium can move in a specified direction, and during tumbling it can modify the direction of search. In computational chemotaxis, the movement of the i-th bacterium is given by,

θ^i(j+1, k, l) = θ^i(j, k, l) + C(i) Δ(i) / √(Δ(i)^T Δ(i))   (15)

2. SWARMING
In this step, after success in the direction of the best food location, the bacterium that has knowledge of the optimum path to the food source attempts to communicate with the other bacteria by means of an attraction signal. If the attractant between the cells is strong and very deep, the cells will have a strong tendency to swarm. The cell-to-cell interaction is given by the following function,

J_cc(θ, P(j, k, l)) = Σ_{i=1}^{S} J_cc(θ, θ^i(j, k, l))   (16)

3. REPRODUCTION
The least healthy bacteria eventually die, while each of the healthier bacteria splits into two bacteria, which remain in the same place. This keeps the swarm size constant; the bacteria that did not split die.

4. ELIMINATION AND DISPERSAL


According to a preset probability, an individual bacterium selected for elimination is replaced by a new bacterium at a random new location within the optimization domain. The bacterium is dispersed to a new area, which disrupts the chemotaxis progress, but the bacteria may find more abundant areas. This mimics the real-world process by which bacteria are dispersed to new locations. The step size of each bacterium is the main factor determining both the speed of convergence and the error in the final output.
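The following simplified Python sketch shows how these four BFOA steps can be combined to tune (Kp, Ki); the cell-to-cell swarming term is omitted for brevity, and the cost function is a placeholder standing in for running the drive simulation and measuring overshoot and settling time.

# Simplified BFOA sketch: chemotaxis (tumble/swim), reproduction,
# and elimination-dispersal over a 2-D parameter vector (Kp, Ki).
import random, math

def cost(theta):
    kp, ki = theta
    return (kp - 3.3) ** 2 + 0.01 * (ki - 80.0) ** 2   # placeholder objective

def random_direction(dim):
    d = [random.uniform(-1, 1) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in d))
    return [x / norm for x in d]

def bfoa(dim=2, S=6, Nc=10, Ns=4, Nre=3, Ned=2, Ped=0.25, C=0.1):
    bacteria = [[random.uniform(0, 10), random.uniform(0, 200)] for _ in range(S)]
    best = min(bacteria, key=cost)
    for _ in range(Ned):                       # elimination-dispersal events
        for _ in range(Nre):                   # reproduction steps
            health = [0.0] * S
            for i in range(S):
                for _ in range(Nc):            # chemotaxis steps
                    j_last = cost(bacteria[i])
                    d = random_direction(dim)  # tumble
                    for _ in range(Ns):        # swim while improving
                        trial = [bacteria[i][k] + C * d[k] for k in range(dim)]
                        if cost(trial) < j_last:
                            bacteria[i], j_last = trial, cost(trial)
                        else:
                            break
                    health[i] += j_last
                if cost(bacteria[i]) < cost(best):
                    best = list(bacteria[i])
            # reproduction: the healthier half splits, the weaker half dies
            order = sorted(range(S), key=lambda i: health[i])
            survivors = [bacteria[i] for i in order[:S // 2]]
            bacteria = survivors + [list(b) for b in survivors]
        # elimination-dispersal: some bacteria restart at random positions
        for i in range(S):
            if random.random() < Ped:
                bacteria[i] = [random.uniform(0, 10), random.uniform(0, 200)]
    return best

print("tuned (Kp, Ki):", bfoa())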

The following points summarize the advantages of using this optimization algorithm: 1) control and accuracy are high compared to other methods; 2) minimal torque-ripple response compared with other control schemes; 3) auto-tuning is introduced; 4) better DC link voltage response; 5) good performance of the system under load- and speed-varying conditions.

SIMULATION AND RESULTS


The simulation of the three-phase squirrel cage induction motor is carried out using MATLAB, as shown in Fig. 4. The speed output is given as feedback to the workspace for adjusting the gain parameters in the controller using BFOA.

Fig 4. Simulink block with BFOA


During the tuning of the PI controller using the BFO algorithm, several trials of the speed waveform are made with different values of the gain parameters. Consider the following results, which show the improvement in settling time with each trial.

Fig 5(a) speed during trial 1 (TS= 1.8 sec)

Fig 5(b) speed during trial 2(TS= 1.7 sec)

Fig 5(c) speed during trial 3 (TS = 1.6 sec)



Thus there is an improvement in settling time (Ts) with each trial, from Fig. 5(a) to 5(c). There is also minimum overshoot in the torque waveforms, as shown in Figs. 6(a) to 6(c).

Fig 6(a) torque during trial 1

Fig 6(b) torque during trial 2

Fig 6(c) torque during trial 3


From the Simulink controller block in Fig. 4, a reference value of 0.9 is set. The stator fluxes for the d and q axes are given in this waveform. To achieve the desired performance, the rotor flux should reach the reference flux, as shown in Fig. 8. The DC link voltage then reaches its constant value, as shown in Fig. 9.


Fig 7. Stator reference flux

Fig 8. Rotor flux

Fig 9. DC link voltage


Comparison of speed waveform with and without BFOA is shown below:

Fig 10(a) normal speed without BFOA


Fig 10(b) optimized speed with BFOA


Thus, from the comparison of speed with and without BFOA, the optimized result is obtained at a settling time (Ts) of 0.8 seconds. This shows the improvement in the performance parameters.
SPECIFICATIONS

Power                        4 kW
Voltage                      400 V
Current                      10 A
Frequency                    50 Hz
Rated Speed                  1500 rpm
Stator Resistance            0.5 ohm
Stator Inductance            0.0415 H
Rotor Resistance             0.25 ohm
Rotor Inductance             0.0413 H
Mutual Inductance            0.0403 H
Capacitor                    1200 µF
Pole pairs                   2
Kp, Ki (speed controller)    5, 100
Kp, Ki (torque controller)   20, 10
Kp, Ki (flux controller)     20, 70
Kp, Ki (limit)               5, 100
Kp, Ki (optimized)           3.3, 80

In the BFO algorithm, the following settings are used: number of bacteria = 5, number of chemotaxis steps = 3, swim length = 2, number of elimination-dispersal events = 2, and probability of elimination and dispersal = 0.25. The cost function is designed to reduce the peak overshoot and settling time. The saturation limit in the controller module is set from a lower limit of zero to an upper limit of 12. The overall system performance depends strongly on the DC link voltage response; hence the peak overshoot, peak undershoot, and settling time can be minimized.

CONCLUSION
In this paper, a new bio-inspired optimization technique is presented. The direct torque controlled, space-vector-modulated VSI feeds the induction machine to improve the performance of the system. The closed-loop control of the induction motor with the BFOA technique gives minimal torque ripples, and the machine can run at the reference speed under different loading conditions. Hence, the peak overshoot, undershoot, and settling time can be minimized by optimal tuning of the gain parameters of the PI controllers using the bacterial foraging algorithm. During running conditions, the rotor flux reaches the circular reference frame, achieving good performance of the system. This is verified with the various results of the Simulink model using MATLAB.

REFERENCES:
[1] M. Hafeez, M. Nasir Uddin, Nasrudin Abd. Rahim, and Hew Wooi Ping, "Self-Tuned NFC and Adaptive Torque Hysteresis-Based DTC Scheme for IM Drive," IEEE Trans. Industry Applications, vol. 50, no. 2, March/April 2014.
[2] S. V. Jadhav, J. Srikanth, and B. N. Chaudhari, "Intelligent controllers applied to SVM-DTC based induction motor drives: A comparative study," in Proc. IEEE Power India Int. Joint Conf. Power Electron., Drives Energy Syst., Dec. 2010, pp. 1-8.
[3] V. P. Sakthivel and S. Subramanian, "Using MPSO Algorithm to Optimize Three-Phase Squirrel Cage Induction Motor Design," IEEE, 2011.
[4] O. S. El-Laban, H. A. Abdel Fattah, H. M. Emara, and A. F. Sakr, "Particle swarm optimized direct torque control of induction motors," IEEE Trans., 2006.
[5] T. Vamsee Kiran and N. Renuka Devi, "Particle swarm optimization based direct torque control (DTC) of induction motor," vol. 2, issue 7, July 2013.
[6] Ahmud Iwan Solihin, Lee Fook Tack, and Moey Leap Kean, "Tuning of PID controller using particle swarm optimization (PSO)," 2011.
[7] S. Nomura, T. Shintomi, S. Akita, T. Nitta, R. Shimada, and S. Meguro, "Technical and cost evaluation on SMES for electric power compensation," IEEE Trans. Appl. Supercond., vol. 20, no. 3, pp. 1373-1378, Jun. 2010.
[8] Ping Guo, Dagui Huang, Daiwei Feng, Wenzheng Yu, and Hailong Zhang, "Optimized Design of Induction Motor Parameters Based on PSO (Particle Swarm Optimization)," IEEE, 2012.
[9] Flah Aymen and Sbita Lassaad, "BFO control tuning of a PMSM high speed drive," IEEE, 2012.
[10] Kevin M. Passino, "Biomimicry of Bacterial Foraging for Distributed Optimization and Control," IEEE Control Systems, June 2002.
[11] M. Tripathy and S. Mishra, "Bacteria Foraging-Based Solution to Optimize Both Real Power Loss and Voltage Stability Limit," IEEE Trans. Power Systems, vol. 22, no. 1, February 2007.
[12] Xiabo Shi and Wei-xing Lin, "PID Control Based on an Improved Cooperative Particle Swarm-Bacterial Hybrid Optimization Algorithm for the Induction Motor," AISS, vol. 4, no. 21, Nov. 2012.


ACTIVE RESOURCES ALLOCATION FOR CLOUD VIRTUAL ENVIRONMENTS USING EASJSA

Mrs. S. ANITHA, MCA., M.Phil., ASSISTANT PROFESSOR, selvianithas@gmail.com
S. GAYATHRI, M.PHIL FULL-TIME RESEARCH SCHOLAR, Gayushanmugam2@gmail.com, 9677665858
DEPARTMENT OF COMPUTER SCIENCE AND APPLICATIONS,
VIVEKANANDHA COLLEGE OF ARTS AND SCIENCES FOR WOMEN, TAMILNADU, INDIA.

Abstract: Cloud computing allows business customers to scale their resource usage up and down based on need. Many of the touted gains in the cloud model come from resource multiplexing through virtualization technology. This work presents a system that uses virtualization technology to allocate data center resources dynamically based on application demands and supports green computing by optimizing the number of servers in use. In the existing system, a skewness model is introduced to measure the unevenness in the multidimensional resource utilization of a server. By minimizing skewness, different types of workloads can be combined nicely and the overall utilization of server resources improved. A set of heuristics is developed that prevents overload in the system effectively while saving energy. In addition, this paper proposes an Enhanced Adaptive Scoring Job Scheduling Algorithm (EASJSA) for the cloud environment. Compared to other methods, it can decrease the completion time of submitted jobs, which may be composed of computing-intensive jobs and data-intensive jobs.

Keywords: Cloud Computing, Virtualization Technology, Data-intensive Jobs, EASJSA, Multidimensional Resource Utilization
I. INTRODUCTION
Studies have found that servers in many existing data centers are often severely underutilized due to over-provisioning for the peak demand. The cloud model is expected to make such practice unnecessary by offering automatic scale-up and scale-down in response to load variation. Besides reducing the hardware cost, it also saves on electricity, which contributes to a significant portion of the operational expenses of large data centers. Virtual machine monitors (VMMs) like Xen provide a mechanism for mapping virtual machines (VMs) to physical resources [1]. This mapping is largely hidden from the cloud users. Users of the Amazon EC2 service, for example, do not know where their VM instances run. It is up to the cloud provider to make sure the underlying physical machines (PMs) have sufficient resources to meet their needs.
VM live migration technology makes it possible to change the mapping between VMs and PMs while applications are running. However, a policy issue remains as to how to decide the mapping adaptively so that the resource demands of the VMs are met while the number of PMs used is minimized. This paper considers a system that introduces the concept of skewness to measure the unevenness in the multidimensional resource utilization of a server [2]. By minimizing skewness, different types of workloads can be combined nicely and the overall utilization of server resources improved. We develop a set of heuristics that prevent overload in the system effectively while saving energy.
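The skewness metric mentioned above can be sketched as follows; this follows one common formulation (the root of the summed squared deviations of each resource's utilization from the mean utilization), which may differ in detail from the model used in the existing system.

# Sketch of a per-server skewness metric over multiple resource types.
import math

def skewness(utilizations):
    """utilizations: e.g. {'cpu': 0.9, 'mem': 0.3, 'net': 0.2}"""
    values = list(utilizations.values())
    mean = sum(values) / len(values)
    return math.sqrt(sum((u / mean - 1.0) ** 2 for u in values))

balanced = {"cpu": 0.5, "mem": 0.5, "net": 0.5}
skewed   = {"cpu": 0.9, "mem": 0.2, "net": 0.1}
print(skewness(balanced), skewness(skewed))   # lower skewness is better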
The rest of this paper is organized as follows. Section 2 presents the related work, followed by the main contribution, dynamic resource allocation, as well as the problem definition in Section 3. Section 4 gives a brief introduction to the Enhanced Adaptive Scoring Job Scheduling algorithm and explains the proposed approach. Finally, Section 5 presents the evaluation of the algorithm, followed by the conclusions and future work described in Section 6 and Section 7.


II. RELATED WORK


Data centers server farms that run networked applicationshave become popular in a variety of domains such as web
hosting, enterprise systems, and e-commerce sites. Server resources in a data center are multiplexed across multiple applications each
server runs one or more applications and application components may be distributed across multiple servers [3]. Further, each
application sees dynamic workload fluctuations caused by incremental growth, time-of-day effects, and flash crowds. Since
applications need to operate above a certain performance level specified in terms of a service level agreement, effective management
of data center resources while meeting SLAs is a complex task [7]. One possible approach for reducing management complexity is to
employ virtualization. In this approach, applications run on virtual servers that are constructed using virtual machines, and one or
more virtual servers are mapped onto each physical server in the system.
Virtualization of data center resources provides numerous benefits [4]. It enables application isolation since malicious or
greedy applications can not impact other applications co-located on the same physical server. It enables server consolidation and
provides better multiplexing of data center resources across applications. Perhaps the biggest advantage of employing virtualization is
the ability to flexibly remap physical resources to virtual servers in order to handle workload dynamics. Data center energy savings
can come from a number of places: on the hardware and facility side, e.g., by designing energy efficient servers and data center
infrastructures, and on the software side, e.g., through resource management [8]. In this paper, we take a software-based approach,
consisting of two interdependent techniques: dynamic provisioning that dynamically turns on a minimum number of servers required
to satisfy application specific quality of service, and load dispatching that distributes current load among the running machines [5].
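As a rough picture of these two techniques, the fragment below provisions just enough servers to carry the current load at a target utilization and then spreads the load evenly over the running machines. It is a deliberately simplified illustration with made-up capacities and loads, not the controller described in [5].

```python
# Simplified illustration of dynamic provisioning plus load dispatching.
# Capacities, loads and the target utilization are made-up example values.
import math

def provision_and_dispatch(total_load, server_capacity, target_util=0.8):
    # Dynamic provisioning: smallest number of servers that carries the load
    # while keeping each server at or below the target utilization.
    n_servers = max(1, math.ceil(total_load / (server_capacity * target_util)))
    # Load dispatching: spread the current load evenly over the running servers.
    share = total_load / n_servers
    return n_servers, [share] * n_servers

n, shares = provision_and_dispatch(total_load=340.0, server_capacity=100.0)
print(n, shares)   # 5 servers, 68.0 units of load each
```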

III. MAIN CONTRIBUTIONS


The main contribution, which addresses the problem of the existing system, is a resource allocation system that can avoid overload effectively while minimizing the number of servers used. It introduces the concept of skewness to measure the uneven utilization of a server. By minimizing skewness, it can improve the overall utilization of servers in the face of multidimensional resource constraints [6]. It designs a load prediction algorithm that can capture the future resource usage of applications accurately. The algorithm can capture the rising trend of resource usage patterns and helps reduce placement churn significantly. It could also look inside a VM for application-level statistics, e.g., by parsing logs of pending requests; doing so, however, requires modification of the VM, which may not always be possible. The objectives of the proposed approach are:

To split jobs into sub tasks and assign them to multiple cloud nodes.

To take into account both dependent and independent task scheduling.

To consider job replication strategy.

To decrease completion time of jobs.

IV. PROPOSED PROTOCOL


The proposed system covers the entire existing system approach. In addition, among all the cloud nodes, the Enhanced Adaptive Scoring Job Scheduling (EASJS) algorithm is applied for cloud node resource scheduling, so that a given job is split into N tasks and scheduled along with a replication strategy.
Enhanced Adaptive Scoring Job Scheduling aims to decrease job completion time. It considers not only the computing power of each resource in the grid but also the transmission power of each cluster in the grid system. The computing power of each resource is defined as the product of CPU speed and available CPU percentage. The transmission power of each cluster is defined as the average bandwidth between different clusters. The status of each resource in the grid is used as a parameter to initialize the cluster score of all clusters.

Each job is treated as a set of sub tasks.

A single job is given to multiple selected clusters, since jobs are split into tasks.

Cluster score values are recalculated even while the job is only partially completed; this happens whenever a particular sub task finishes.

Storage capacity of cluster resources is taken into account.

Multiple α and β values are calculated for each sub task, so cluster assignment is more effective than in the existing system.

Replication method assists in faster job completion.


ENHANCED ADAPTIVE SCORING ALGORITHM
Get Job Information and Split into N sub tasks and Replication Count R

Calculate Jobs N Data Intensive And Computation Intensive (Alpha and Beta)
Values for each sub task
Get Cluster Information from Information Server

Get CPU_Speeds And CPU_Loads Of All Resources In All Clusters

Calculate Available CPU_Speed Of All Resources In All Clusters

Calculate COMPUTING POWER (CP) of all resources in all clusters

Calculate CLUSTER SCORE (CS) of all clusters

Assign Job (N Sub Tasks) to Clusters having Top R maximum Cluster Score (CS).

Send Alert (Job Completion Status) to Information Server after Each Task completion.

Information Server Alerts Clusters (assigned with given completed Task) to rollback
the given Task.

Recalculate COMPUTING POWER (CP) of all resources and CLUSTER SCORE (CS) of all clusters.

FIG 1.1 SYSTEM ARCHITECTURE



Algorithm Work:
In this module, the cluster score is calculated based on the following formula.
CSi = α · ATPi + β · ACPi

where CSi is the cluster score for cluster i, α and β are the weight values of ATPi and ACPi respectively, the sum of α and β is 1, and ATPi and ACPi are the average transmission power and average computing power of cluster i respectively. ATPi is the average available bandwidth that cluster i can supply to the job and is defined as:

ATPi = (sum over all clusters j ≠ i of Bandwidth_available i,j) / (m − 1)

where Bandwidth_available i,j is the available bandwidth between cluster i and cluster j, and m is the number of clusters in the entire grid system.
Similarly, ACPi is the average available CPU power that cluster i can supply to the job and is defined as:

ACPi = (sum over all resources k in cluster i of CPU_speed k × (1 − load k)) / n

where CPU_speed k is the CPU speed of resource k in cluster i, load k is the current load of resource k in cluster i, and n is the number of resources in cluster i. The available computing power of resource k is accordingly CPk = CPU_speed k × (1 − load k).


Because the transmission power and the computing power of a resource will actually affect the performance of job
execution, these two factors are used for job scheduling. Since the bandwidth between resources in the same cluster is usually very
large, we only consider the bandwidth between different clusters. Local update and global update are used to adjust the score. After a
job is submitted to a resource, the status of the resource will change and local update will be applied to adjust the cluster score of the
cluster containing the resource. What local update does is to get the available CPU percentage from Information Server and
recalculate the ACP, ATP and CS of the cluster. After a job is completed by a resource, global update will get information of all
resources in the entire grid system and recalculate the ACP, ATP and CS of all clusters.
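To make the scoring concrete, the following is a minimal Python sketch of the cluster-score calculation and the top-R assignment described above. It is an illustration under the stated formulas, not the authors' implementation; the data layout (plain dictionaries) and the equal weighting α = β = 0.5 are assumptions made only for the example.

```python
# Minimal sketch of cluster scoring (ATP, ACP, CS) and top-R assignment.
# The dictionary layout and the weights alpha/beta are assumptions for illustration.

def available_cp(resource):
    # CP_k = CPU_speed_k * (1 - load_k): available computing power of a resource.
    return resource["cpu_speed"] * (1.0 - resource["load"])

def acp(cluster):
    # ACP_i: average available CPU power the cluster can supply to the job.
    return sum(available_cp(r) for r in cluster["resources"]) / len(cluster["resources"])

def atp(cluster, clusters):
    # ATP_i: average available bandwidth between this cluster and the other clusters.
    others = [c for c in clusters if c is not cluster]
    return sum(cluster["bandwidth"][c["id"]] for c in others) / len(others)

def cluster_score(cluster, clusters, alpha=0.5, beta=0.5):
    # CS_i = alpha * ATP_i + beta * ACP_i, with alpha + beta = 1.
    return alpha * atp(cluster, clusters) + beta * acp(cluster)

def assign_to_top_r(subtasks, clusters, r):
    # Assign the N sub tasks to the R clusters with the highest cluster score.
    ranked = sorted(clusters, key=lambda c: cluster_score(c, clusters), reverse=True)
    return {c["id"]: list(subtasks) for c in ranked[:r]}

if __name__ == "__main__":
    clusters = [
        {"id": "c1", "resources": [{"cpu_speed": 3.0, "load": 0.2}],
         "bandwidth": {"c2": 100.0}},
        {"id": "c2", "resources": [{"cpu_speed": 2.4, "load": 0.5}],
         "bandwidth": {"c1": 100.0}},
    ]
    print(assign_to_top_r(["t1", "t2", "t3"], clusters, r=1))
```

After each task submission or completion, the local and global updates described above would simply re-run these score calculations with refreshed load and bandwidth figures.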

V. EXPERIMENTAL RESULTS
The following findings result from our experimental work:


It is found that the cluster selection is efficient if the job is split into sub tasks.

Resources are effectively utilized and waiting time is less in scheduling next successive job in queue.

Resources with limited capacity also have a chance of job allocation if the job is split into sub tasks.

Instead of calculating the right cluster only after each job completion, the proposed system calculates cluster availability at regular intervals, so that a new job can be assigned even during the execution of the current job.

The overall efficiency of the grid is higher compared to the existing system.

The approach is better suited to jobs that can be split based on RAM, CPU speed and storage location.

The experimental results show that EASJS is capable of decreasing the completion time of jobs and that its performance is better than that of other methods.

Interdependent jobs are not combined in the proposed system, which may be taken up as future work.

Studying and improving EASJS for such kinds of jobs may be carried out in the future.

ACKNOWLEDGMENT
I express my deep gratitude and sincere thanks to my supervisor Mrs. S. Anitha, MCA., M.Phil., Assistant Professor, Department of Computer Science at Vivekanandha College of Arts and Sciences for Women, whose valuable suggestions, innovative ideas, constructive criticism and inspiring guidance enabled me to complete the present work successfully.

VI.CONCLUSION
This paper proposed the Enhanced Adaptive Scoring method to schedule jobs in a cloud environment. EASJS selects the fittest resource to
execute a job according to the status of resources. Local and global update rules are applied to get the newest status of each resource.
Local update rule updates the status of the resource and cluster which are selected to execute the job after assigning the job and the Job
Scheduler uses the newest information to assign the next job. Global update rule updates the status of each resource and cluster after a
job is completed by a resource. It supplies the Job Scheduler the newest information of all resources and clusters such that the Job
Scheduler can select the fittest resource for the next job. The experimental results show that EASJS is capable of decreasing the completion time of jobs and that its performance is better than that of other methods.

VII. FUTURE ENHANCEMENTS


In future, EASJS can be applied to real grid applications. This paper focuses on job scheduling; the work can be extended to consider file division and the replica strategy in data-intensive jobs. Jobs are independent in this project, but they may have some precedence relations in real-life situations. Studying and improving EASJS for such kinds of jobs may be carried out in the future.
REFERENCES
1) P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt, and A. Warfield, Xen and the Art of Virtualization, Proc. ACM Symp. Operating Systems Principles (SOSP 03), Oct. 2003.

2) C. Clark, K. Fraser, S. Hand, J.G. Hansen, E. Jul, C. Limpach, I. Pratt, and A. Warfield, Live Migration of Virtual Machines, Proc. Symp. Networked Systems Design and Implementation (NSDI 05), May 2005.
3) M. Nelson, B.-H. Lim, and G. Hutchins, Fast Transparent Migration for Virtual Machines, Proc. USENIX Ann. Technical Conf., 2005.

4) T. Wood, P. Shenoy, A. Venkataramani, and M. Yousif, Black-Box and Gray-Box Strategies for Virtual Machine Migration, Proc. Symp. Networked Systems Design and Implementation (NSDI 07), Apr. 2007.
5) G. Chen, W. He, J. Liu, S. Nath, L. Rigas, L. Xiao, and F. Zhao, Energy-Aware Server Provisioning and Load Dispatching for Connection-Intensive Internet Services, Proc. USENIX Symp. Networked Systems Design and Implementation (NSDI 08), Apr. 2008.
6) P. Padala, K.-Y. Hou, K.G. Shin, X. Zhu, M. Uysal, Z. Wang, S. Singhal, and A. Merchant, Automated Control of Multiple Virtualized Resources, Proc. ACM European Conf. Computer Systems (EuroSys 09), 2009.
7) J.S. Chase, D.C. Anderson, P.N. Thakar, A.M. Vahdat, and R.P. Doyle, Managing Energy and Server Resources in Hosting Centers, Proc. ACM Symp. Operating Systems Principles (SOSP 01), Oct. 2001.
8) C. Tang, M. Steinder, M. Spreitzer, and G. Pacifici, A Scalable Application Placement Controller for Enterprise Data Centers, Proc. Int'l World Wide Web Conf. (WWW 07), May 2007.
9) M. Zaharia, A. Konwinski, A.D. Joseph, R.H. Katz, and I. Stoica, Improving MapReduce Performance in Heterogeneous Environments, Proc. Symp. Operating Systems Design and Implementation (OSDI 08), 2008.
10) M. Isard, V. Prabhakaran, J. Currey, U. Wieder, K. Talwar, and A. Goldberg, Quincy: Fair Scheduling for Distributed Computing Clusters, Proc. ACM Symp. Operating Systems Principles (SOSP 09), Oct. 2009.


ANALYSIS OF WORKFLOW SCHEDULING PROCESS USING ENHANCED SUPERIOR ELEMENT MULTITUDE OPTIMIZATION IN CLOUD
Mrs. D.PONNISELVI, M.Sc., M.Phil.,1
ASSISTANT PROFESSOR,
gokulponnics@gmail.com

E.SEETHA,2
M.PHIL FULL-TIME RESEARCH SCHOLAR,
seethashines@gmail.com ,8754079904

DEPARTMENT OF COMPUTER SCIENCE AND APPLICATIONS,


VIVEKANANDHA COLLEGE OF ARTS AND SCIENCES FOR WOMEN, NAMAKKAL, TAMIL NADU, INDIA

Abstract

In this paper the resource provisioning and scheduling model for logical workflows on Infrastructure as a Service (IaaS) clouds is described. This paper presents an algorithm based on Enhanced Superior Element Multitude Optimization (ESEMO), which aims to minimize the overall workflow execution cost while meeting deadline constraints. The main scope of the proposed system is to identify the best available resource in the cloud, based on the total execution time and total execution cost, which are compared from one process to another. The cloud provider compares the results of the two processes in the application; if the second result is better than the previous one, the earlier process is terminated. The administrator thus utilizes the best resource in the area on the basis of execution cost and execution time.

Keywords Cloud Environments, logical workflow, ESEMO, Cost and Execution Time
I. INTRODUCTION
Cloud computing environments facilitate applications by providing virtualized resources that can be provisioned dynamically.
However, users are charged on a pay-per-use basis. User applications may incur large data retrieval and execution costs when they are
scheduled taking into account only the execution time [1] [2]. In addition to optimizing execution time, the cost arising from data
transfers between resources as well as execution costs must also be taken into account.
A workflow is a set of ordered tasks that are linked by data dependencies. A Workflow Management System (WMS) is generally
employed to define, manage and execute these workflow applications on Grid resources [3]. A WMS may use a specific scheduling
strategy for mapping the tasks in a workflow to suitable cloud resources in order to satisfy user requirements.
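As a generic illustration of the workflow notion described here, a workflow is commonly held as a directed acyclic graph in which each task lists the tasks whose output it consumes. The small Python sketch below shows one such representation and a dependency-respecting ordering of the tasks; it is not part of the paper's system, and the task names are made up.

```python
# A toy workflow held as a DAG: each task maps to the tasks it depends on.
from graphlib import TopologicalSorter

workflow = {
    "preprocess": [],                     # no dependencies
    "simulate":   ["preprocess"],         # needs the preprocessed data
    "analyse":    ["simulate"],
    "report":     ["analyse", "preprocess"],
}

# A WMS must dispatch tasks in an order that respects the data dependencies.
order = list(TopologicalSorter(workflow).static_order())
print(order)   # e.g. ['preprocess', 'simulate', 'analyse', 'report']
```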
The main challenge of dynamic workflow scheduling on virtual clusters lies in how to reduce the scheduling overhead to adapt to the
workload dynamics with heavy fluctuations [4] [5]. In a cloud platform, resource profiling and stage simulation on thousands or
millions of feasible schedules are often performed, if an optimal solution is demanded. An optimal workflow schedule on a cloud may
take weeks to generate.
The rest of this paper is organized as follows. Section 2 presents the related work followed by the main contribution workflow as
well as the problem definition in Section 3. Section 4 gives a brief introduction to logical workflow scheduling and explains the proposed
approach. Finally, Section 5 presents the evaluation of the algorithm followed by the conclusions described in Section 6.

II. RELATED WORK


The advent of Cloud computing as a new model of service provisioning in distributed systems encourages researchers to

investigate its benefits and drawbacks on executing scientific applications such as workflows. One of the most challenging problems
in Clouds is workflow scheduling, i.e., the problem of satisfying the QoS requirements of the user as well as minimizing the cost of
workflow execution. A two-phase scheduling algorithm for utility Grids, called Partial Critical Paths (PCP), had previously been designed and analyzed; it aims to minimize the cost of workflow execution while meeting a user-defined deadline. However, Clouds are believed to differ from utility Grids in three ways: on-demand resource provisioning, homogeneous networks, and the pay-as-you-go pricing model. The PCP algorithm was therefore adapted for the Cloud environment into two workflow scheduling algorithms: a one-phase algorithm called IaaS Cloud Partial Critical Paths, and a two-phase algorithm called IaaS Cloud Partial Critical Paths with Deadline Distribution [6] [7].
Workflows are usually represented by graphs. Workflows for large computational problems are often composed of several interrelated workflows (ensembles). Workflows in an ensemble typically have a similar structure, but they differ in their input data, number of tasks and individual task sizes. Many applications require scientific workflows within a single cloud provider. In cloud environments, utility computing has emerged as a new service provisioning model and is capable of supporting diverse computing services such as servers, storage, networks and applications for e-Business and e-Science over a global network [8] [9].

III. MAIN CONTRIBUTIONS


Workflow scheduling on distributed systems has been widely studied over the years and is NP-hard by a reduction from the
multiprocessor scheduling problem. Therefore an optimal solution cannot, in general, be generated within polynomial time, and algorithms focus on generating approximate or near-optimal solutions. Numerous algorithms that aim to find a schedule that meets the user's quality of service (QoS) requirements have been developed.
A vast range of the proposed solutions target environments similar or equal to community grids. This means that minimizing
the application's execution time is generally the scheduling objective, a limited pool of computing resources is assumed to be
available and the execution cost is rarely a concern. The solutions provide a valuable insight into the challenges and potential
solutions for workflow scheduling. However, they are not optimal for utility-like environments such as IaaS clouds. There are various
characteristics specific to cloud environments that need to be considered when developing a scheduling algorithm.

IV. PROPOSED ALGORITHM


The proposed algorithm, named ESEMO (Enhanced Superior Element Multitude Optimization), compares the total execution time and total execution cost of one process against another. In addition, it extends the resource model to consider the data transfer cost between data centers in the cloud environment, so that nodes can be deployed in different regions.
It also offers different options for the selection of the initial resources, and each given task is assigned a different set of initial resource requirements. The data transfer cost between data centers is likewise calculated so as to minimize the cost of execution in a multi-cloud service provider environment.
The proposed system contains workflow scheduling strategies and dynamic resource allocation for execution on IaaS in a cloud environment. The scenario is modeled as a multi-cloud environment, and the optimization problem is to minimize the overall execution cost.
The proposed approach incorporates basic IaaS cloud principles: heterogeneity, multi-cloud deployment, and provider-supplied resources. The proposed system implements workflow scheduling strategies using superior element multitude optimization to reduce execution cost. It aims to minimize the overall execution cost and time in executing the processes in a multi-cloud environment.
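As a rough illustration of the objective being minimised, the sketch below totals the execution cost of the tasks on their assigned resources together with the transfer cost of data moved between resources in different regions. It is a simplified cost model written for this discussion, with made-up field names and prices; it is not the ESEMO cost function itself.

```python
# Simplified total-cost model: execution cost plus inter-region data transfer cost.

def total_cost(assignment, tasks, resources, transfer_price_per_gb=0.02):
    """assignment maps task name -> resource name (all names are illustrative)."""
    cost = 0.0
    for name, task in tasks.items():
        res = resources[assignment[name]]
        cost += task["runtime_h"] * res["price_per_h"]           # execution cost
        for parent, gb in task["inputs"].items():                # data transfer cost
            if resources[assignment[parent]]["region"] != res["region"]:
                cost += gb * transfer_price_per_gb
    return cost

resources = {"vm-a": {"price_per_h": 0.10, "region": "eu"},
             "vm-b": {"price_per_h": 0.05, "region": "us"}}
tasks = {"t1": {"runtime_h": 2.0, "inputs": {}},
         "t2": {"runtime_h": 1.0, "inputs": {"t1": 5.0}}}

print(total_cost({"t1": "vm-a", "t2": "vm-a"}, tasks, resources))  # same region
print(total_cost({"t1": "vm-a", "t2": "vm-b"}, tasks, resources))  # cross-region transfer
```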

PROPOSED ALGORITHM
Initialize resources (particles)
Repeat
  For each resource do
    Calculate the fitness value
    If the fitness value is better than the best fitness value (optimal best) in the list
      Set the current value as the new optimal best
    End if
  End for
  Choose the resource with the best fitness value of all the resources as the global best
  For each resource do
    Calculate the resource capacity according to the capacity update equation
    Update the resource position according to the position update equation
  End for
Until maximum iterations or the minimum error criterion is attained
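For concreteness, the following is a small self-contained Python sketch of the particle-swarm style loop outlined above, applied to a toy one-dimensional fitness function. It only mirrors the structure of the pseudocode (fitness evaluation, personal and global best, capacity/velocity and position updates); the inertia and acceleration coefficients and the toy fitness are assumptions made for illustration, not the paper's actual parameters.

```python
# Toy particle-swarm loop mirroring the pseudocode above (coefficients are illustrative).
import random

def fitness(x):
    # Toy objective standing in for "total execution cost and time"; lower is better.
    return (x - 3.0) ** 2

def sem_optimize(n_particles=10, iterations=50, w=0.7, c1=1.5, c2=1.5):
    pos = [random.uniform(-10, 10) for _ in range(n_particles)]   # initialise resources
    vel = [0.0] * n_particles
    pbest = list(pos)                                             # personal (optimal) best
    gbest = min(pos, key=fitness)                                 # best of all particles

    for _ in range(iterations):
        for i in range(n_particles):
            if fitness(pos[i]) < fitness(pbest[i]):               # better than optimal best?
                pbest[i] = pos[i]
        gbest = min(pbest, key=fitness)
        for i in range(n_particles):
            # Velocity ("capacity") update followed by the position update.
            vel[i] = (w * vel[i]
                      + c1 * random.random() * (pbest[i] - pos[i])
                      + c2 * random.random() * (gbest - pos[i]))
            pos[i] += vel[i]
    return gbest

print(sem_optimize())   # converges towards 3.0 on this toy objective
```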

V. EXPERIMENTAL RESULTS
Table 1.1 and Table 1.2 present the total execution cost and total execution time in the multi-resource environment. The values vary dynamically depending on the number of resources and the tasks assigned to them, and are presented on the basis of the number of tasks assigned to each resource according to its capability.
Resource Id    Total Execution Cost    Total Execution Time
1,2                    34                      36
2,3                    39                      34
3,4                    28                      32
4,5                    14                      34
1,2,3                  35                      22
2,3,4                  28                      36
3,4,5                  22                      40
1,2,3,4                40                      34
2,3,4,5                32                      40
1,2,3,4,5              30                      34

Table 1.1 Process value of Total execution cost and Total execution time in multi resource


Table 1.2 Chart representation of Total execution cost and Total execution time in multi resource

ACKNOWLEDGMENT
My abundant thanks to Dr. S. SEETHALAKSHMI, M.Sc., M.Phil., Ph.D., PGDEM., PGDCA., FICS, Principal, Vivekananda College of Arts and Sciences for Women, Namakkal, who gave me the opportunity to do this research work. I am deeply indebted to Dr. S. DHANALAKSHMI, MCA., M.Phil., Ph.D., M.E., Head, Department of Computer Science and Applications at Vivekananda College of Arts and Sciences for Women, Namakkal, for her timely help during this work. I express my deep gratitude and sincere thanks to my supervisor Mrs. D. PONNISELVI, M.Sc., M.Phil., Assistant Professor, Department of Computer Science and Applications at Vivekananda College of Arts and Sciences for Women, Namakkal, whose valuable suggestions, innovative ideas, constructive criticism and inspiring guidance enabled me to complete the work successfully.

VI.CONCLUSION
In conclusion, the paper presents the ESEMO (Enhanced Superior Element Multitude Optimization) algorithm, which is used to predict the least-time computation in the cloud provider area. The work compared the time evaluation of one dynamic resource flow against another process flow of dynamic resources in the cloud environment. It also extends the resource model to consider the data transfer cost between data centers so that nodes can be deployed in different regions. Extending the algorithm to include heuristics that ensure a task is assigned to a node with sufficient memory to execute it is planned as further work. The system also offers different options for the selection of the initial resource pool. Finally, the data transfer cost between data centers is calculated so as to minimize the cost of execution in multi-cloud service provider environments.
REFERENCES:
1. G. Thickins, Utility Computing: The Next New IT Model, Darwin Magazine, April 2003.
2. T. Eilam et al., A utility computing framework to develop utility systems, IBM Systems Journal, 43(1):97-120, 2004.
3. S. Benkner et al., VGE - A Service-Oriented Grid Environment for On-Demand Supercomputing, in the Fifth IEEE/ACM International Workshop on Grid Computing (Grid 2004), Pittsburgh, PA, USA, November 2004.
4. M. Malawski, G. Juve, E. Deelman, and J. Nabrzyski, Cost- and deadline-constrained provisioning for scientific workflow ensembles in IaaS clouds, in Proc. Int. Conf. High Perform. Computing, Networking, Storage Anal., 2012, vol. 22, pp. 1-11.
5. E. Deelman, G. Singh, M. Livny, B. Berriman, and J. Good, The cost of doing science on the cloud: The montage example, in 2008 ACM/IEEE Conference on Supercomputing (SC 08), 2008.

6. J. Vockler, G. Juve, E. Deelman, M. Rynge, and G.B. Berriman, Experiences using cloud computing for a scientific workflow application, in 2nd Workshop on Scientific Cloud Computing (ScienceCloud 11), 2011.
7. S. Abrishami, M. Naghibzadeh, and D. Epema, Deadline-constrained workflow scheduling algorithms for IaaS clouds, Future Generation Computer Systems, vol. 23, no. 8, pp. 1400-1414, 2012.
8. S. Pandey and R. Buyya, A particle swarm optimization-based heuristic for scheduling workflow applications in cloud computing environments, in Proc. IEEE Int. Conf. Adv. Inform. Network Appl., 2010, pp. 400-407.
9. S. Callaghan, P. Maechling, P. Small, K. Milner, G. Juve, T. Jordan, E. Deelman, G. Mehta, K. Vahi, D. Gunter, K. Beattie, and C.X. Brooks, Metrics for heterogeneous scientific workflows: A case study of an earthquake science application.


Review of Literature Survey on Effect of Thermal Coating on Cylinder and Piston Crown
Mr. Bhavin D. Patel1, Prof. Ramesh N. Mevada2, Prof. Dhaval P. Patel3
1 Student, Mechanical Engineering Department, Smt. S.R. Patel Engineering College, Dabhi-Unjha, GTU.
2 Professor, Mechanical Engineering Department, Smt. S.R. Patel Engineering College, Dabhi-Unjha, GTU.
3 Assistant Professor, Mechanical Engineering Department, Gandhinagar Institute of Technology, Moti Bhoyan, GTU.

Abstract The desire to reach higher efficiencies, lower specific fuel consumptions and reduce emissions in modern internal
combustion (IC) engines has become the focus of engine researchers and manufacturers for the past three decades. The global concern
over the decreasing supply of fossil fuels and the more stringent emissions regulations has placed the onus on the engine industry to
produce practical, economical and environmentally conscious solutions to power our vehicles.
Over the years, a variety of different approaches have been taken to attain improvements in efficiency and reduce emissions in
existing engine designs. The introduction of new technologies has played a role in making advancements to this century-old
technology. Lighter and stronger materials, advanced manufacturing processes, improved combustion chamber designs, advanced
exhaust after-treatment technologies, and new computational means for designing, analyzing and optimizing the internal combustion
engine are just a few of the advancements we have made to achieve significant improvements in performance, efficiency and
emissions. The reviewed research papers reflect different methodologies such as thermal barrier coating, HVOF (high velocity oxygen fuel) coating, particle image velocimetry (PIV), and coating thickness and roughness effects, applied to systems such as metal sheets, gas turbines, parabolic reflectors and automotive components. Some papers report optimization methods applied to the coating structure together with FEA or CFD analysis. In CI and SI engines, coating was carried out at component level according to design requirements for surface roughness and component life. No work was found on energy saving during the power stroke, and no provision for an inside coating method was reported in the surveyed papers.

Keywords Internal Combustion (IC) Engines, Performance, Manufacturing Processes, Cylinder, Emission, Efficiency,
Compression.
INTRODUCTION

The quantity of energy obtained from the fuel does not reach the intended level because of several factors in the combustion chamber of the engine. Some of these factors are the design of the combustion chamber, lack of adequate turbulence in the combustion chamber, poor oxygen availability in the medium, low combustion temperature, compression ratio and injection timing advance. Combustion temperature is thought to be one of the most important among these factors, since not all of the hydrocarbons can react chemically with oxygen during the combustion period. For this reason, coating combustion chamber components with low thermal conductivity materials has become an increasingly important subject, and combustion chamber components of internal combustion engines are coated with ceramic materials using various methods.
The efficiency of most commercially available diesel engines ranges from 38% to 42%. Therefore, between 58% and 62% of the fuel
energy content is lost in the form of waste heat. Approximately 30% is retained in the exhaust gas and the remainder is removed by
the cooling, etc. More than 55% of the energy which is produced during the combustion process is removed by cooling water/air and
through the exhaust gas. In order to save energy, it is an advantage to protect the hot parts by a thermally insulating layer. This will
reduce the heat transfer through the engine walls, and a greater part of the produced energy can be utilized, involving an increased
efficiency.


The major promises of thermal barrier coated engines were increased thermal efficiency and elimination of the cooling system. A
simple first law of thermodynamics analysis of the energy conversion process within a diesel engine would indicate that if heat
rejection to the coolant was eliminated, the thermal efficiency of the engine could be increased.
Thermal barrier coatings were used to not only for reduced in-cylinder heat rejection and thermal fatigue protection of underlying
metallic surfaces, but also for possible reduction of engine emissions.
Thermal insulation brings, according to the second law of thermodynamics, to engine heat efficiency improvement and fuel
consumption reduction. Exhaust energy rise can be effectively used in turbocharged engines. Higher temperatures in the combustion
chamber can also have a positive effect in diesel engines, due to the ignition delay drop and hardness of engine operation.

LITERATURE SURVEY
Abdullah Cahit Karaoglanli, Kazuhiro Ogawa,Ahmet Trk and Ismail Ozdemir, Thermal Shock and Cycling Behavior of
Thermal Barrier Coatings (TBCs) Used in Gas Turbines [1] has presented Gas turbine engines work as a power generating facility
and are used in aviation industry to provide thrust by converting combustion products into kinetic energy. Basic concerns regarding
the improvements in modern gas turbine engines are higher efficiency and performance. Increase in power and efficiency of gas
turbine engines can be achieved through increase in turbine inlet temperatures. The materials used should have perfect mechanical
strength and corrosion resistance and thus be able to work under aggressive environments and high temperatures. The temperatures
that turbine blades are exposed to can be close to the melting point of the super alloys. Internal cooling by cooling channels and
insulation by thermal barrier coatings (TBCs) is used in order to lower the temperature of turbine blades and prevent the failure of
super alloy substrates.
L. Wang, X.H. Zhong, Y.X. Zhao, S.Y. Tao, W. Zhang, Y. Wang, X.G. Sun, Design and optimization of coating structure
for the thermal barrier coatings fabricated by atmospheric plasma spraying via finite element method [2] has presented fabricating the
thermal barrier coatings (TBCs) with excellent performance is to find an optimized coating structure with high thermal insulation
effect and low residual stress. This paper discusses the design and optimization of a suitable coating structure for the TBCs prepared
by atmospheric plasma spraying (APS) using the finite element method. The design and optimization processes comply with the rules
step by step, as the structure develops from a simple to a complex one. The research results indicate that the suitable thicknesses of the
bond-coating and top-coating are 60-120 μm and 300-420 μm, respectively, for the single ceramic layer YSZ/NiCoCrAlY APS-TBC.
D. Freiburg, D. Biermann, A. Peukera, P. Kersting,H. -J. Maier, K. Mhwald, P. Kndler, M.Otten, Development and
Analysis of Microstructures for the Transplantation of Thermally Sprayed Coatings [3] has presented thermally sprayed coatings and
tribological surfaces are a point of interest in many industrial sectors. They are used for better wear resistance of lightweight materials
or for oil retention on surfaces. Lightweight materials are often used in the automotive industry as a weight-saving solution in the
production of engine blocks. It is necessary to coat the cylinder liners to ensure wear resistance. In most cases, the coating is sprayed
directly onto the surface. Previous research has shown that it is possible to transfer these coatings inversely onto other surfaces. This
was achieved with plasma sprayed coatings which were transplanted onto pressure-casted surfaces.
Mr. Atul A. Sagade, Prof. N.N. Shinde, Prof. Dr. P.S. Patil, Effect of receiver temperature on performance evaluation of
silver coated selective surface compound parabolic reflector with top glass cover[4] has presented the experimental results of the
prototype compound parabolic trough made of G.I and silver coated selective surface. The performance of collector has been
evaluated with three kinds of receiver coated with two kinds of receiver coatings black copper and black zinc and top cover. This line
focusing parabolic trough yields an instantaneous efficiency of 60% with top cover. A simple relationship between the parameters has
been worked out with the regression analysis. [Latitude: 16.42 N, Longitude: 74.13W]
Andrew Roberts, Richard Brooks, Philip Shipway, Internal combustion engine cold-start efficiency: A review of the
problem, causes and potential solutions [5] notes that legislation on vehicle emissions continues to become more stringent in an effort to
minimise the impact of internal combustion engines on the environment. One area of significant concern in this respect is that of the
cold-start; the thermal efficiency of the internal combustion engine is significantly lower at cold start than when the vehicle reaches
steady state temperatures owing to sub-optimal lubricant and component temperatures. The drive for thermal efficiency (of both the
internal combustion engine and of the vehicle as a whole) has led to a variety of solutions being trialed to assess their merits and

effects on other vehicle systems during this warm-up phase (and implemented where appropriate). The approaches have a common
theme of attempting to reduce energy losses so that systems and components reach their intended operating temperature range as soon
as possible after engine start. In the case of the engine, this is primarily focused on the lubricant system. Lubricant viscosity is highly
sensitive to temperature and the increased viscosity at low temperatures results in higher frictional and pumping losses than would-be
observed at the target operating temperature.
T. Karthikeya Sharma, Performance and emission characteristics of the thermal barrier coated SI engine by adding argon
inert gas to intake mixture [6] investigates dilution of the intake air of the SI engine with inert gases, which is one of the emission control techniques, like exhaust gas recirculation, water injection into the combustion chamber and cyclic variability control, that work without sacrificing
power output and/or thermal efficiency (TE). This paper investigates the effects of using argon (Ar) gas to mitigate the spark ignition
engine intake air to enhance the performance and cut down the emissions mainly nitrogen oxides. The input variables of this study
include the compression ratio, stroke length, and engine speed and argon concentration. Output parameters like TE, volumetric
efficiency, heat release rates, brake power, exhaust gas temperature and emissions of NOx, CO2 and CO were studied in a thermal
barrier coated SI engine, under variable argon concentrations. Results of this study showed that the inclusion of Argon to the input air
of the thermal barrier coated SI engine has significantly improved the emission characteristics and engines performance within the
range studied.
J. Barriga, U. Ruiz-de-Gopegui, J. Goikoetxeaa, B. Coto, H. Cachafeiro, Selective coatings for new concepts of parabolic
trough collectors [7] has presented the CSP technology based on parabolic trough solar collector for large electricity generation
purposes is currently the most mature of all CSP designs in terms of previous operation experience and scientific and technical
research and development. The current parabolic trough design deals with a maximum operating temperature around 400 °C in the absorber collector tube, but some recent designs are planned to increase the working temperature to 600 °C, increasing the performance by 5-10% to attain the improved productivity that the market demands. These systems are expected to be working during 20-25 years.
One of the key points of the receiver is the stack of layers forming the selective absorber coating. With this new design the coating has
to fulfill new requirements, as the collector will be working at 600 °C and in a low vacuum of 10^-2 mbar.
V.D. Zhuravlev, V.G. Bamburov, A.R. Beketov, L.A. Perelyaeva, I.V. Baklanova,O.V. Sivtsova, V.G. Vasilev, E.V.
Vladimirova, V.G. Shevchenko, I.G. Grigorov, Solution combustion synthesis of α-Al2O3 using urea [8] presents the processes involved in the solution combustion synthesis of α-Al2O3 using urea as an organic fuel. The data describing the influence of the relative urea content on the characteristic features of the combustion process, the crystalline structure and the morphology of the aluminum oxide are presented therein. The data demonstrate that the combustion of stable aluminum nitrate and urea complexes leads to the formation of α-alumina at temperatures of approximately 600-800 °C. The results, obtained using differential thermal analysis and IR spectroscopy methods, reveal that the low-temperature formation of α-alumina is associated with the thermal decomposition of an α-AlO(OH) intermediate, which crystallizes in the crystal structure of diaspore.
Helmisyah Ahmad Jalaludin, Shahrir Abdullah, Mariyam Jameelah Ghazali, Bulan Abdullah, Nik Rosli Abdullah,
Experimental Study of Ceramic Coated Piston Crown for Compressed Natural Gas Direct Injection Engines [9] reports that the high temperature produced in a compressed natural gas with direct injection (CNGDI) engine may contribute to high thermal
stresses. Without appropriate heat transfer mechanism, the piston crown would operate ineffectively. Bonding layer NiCrAl and
ceramic based yttria partially stabilized zirconia (YPSZ) were plasma sprayed onto AC8A aluminium alloy CNGDI piston crowns and
normal CamPro piston crowns in order to minimize thermal stresses.
Vinay Kumar. D, Ravi Kumar.P, M.Santosha Kumari, Prediction of Performance and Emissions of a Biodiesel Fueled
Lanthanum Zirconate Coated Direct Injection Diesel engine using Artificial Neural Networks [10] has presented different techniques
are being attempted over the years to use low pollution emitting fuels in diesel engines to reduce tail pipe emissions with improved
engine efficiency. Especially, Biodiesel fuel, derived from different vegetable oils, animal fat and waste cooking oil has received a
great attention in the recent past. Trans esterification is a proven simplest process to prepare biodiesel in labs with little infrastructure.
Application of thermal barrier coatings (TBC) on the engine components is a seriously pursued area of interest with low-grade fuels
like biodiesel fuels. Artificial neural networks (ANN) are gaining popularity to predict the performance and emissions of diesel
engines with fairly accurate results besides the thermodynamic models with considerably less complexity and lower computing time.

Nitesh Mittal, Robert Leslie Athony, Ravi Bansal, C. Ramesh Kumar, Study of performance and emission characteristics of
a partially coated LHR SI engine blended with n-butanol and gasoline [11] describes how, to meet the present requirements of the automotive industry, there is a continuous search to improve the performance, exhaust emissions, and life of IC engines. To meet the first two challenges, researchers are working both on newer engine technologies and fuels. Some of the published work indicates that
coating on the combustion surface of the engine with ceramic material results in improved performance and reduced emission levels
when fueled with alternate fuel blended fuels, and this serves as a base for this work. Normal-Butanol has molecular structure that is
adaptable to gasoline, and it is considered as one of the alternative fuels for SI engines.
Helmisyah A.J., Ghazali M.J., Characterisation of Thermal Barrier Coating on Piston Crown for Compressed Natural Gas
Direct Injection (CNGDI) Engines [12] has presented the high temperature and pressure produced in an engine that uses compressed
natural gas with direct injection system (CNGDI) may lead to high thermal stresses. The piston crown fails to operate effectively with
insufficient heat transfer. In this study, partially stabilized zirconia (PSZ) ceramic thermal barrier coatings were plasma sprayed on
CNGDI piston crowns (AC8A aluminium alloys) to reduce thermal stresses. Several samples were deposited with NiCrAl bonding
layers prior to the coating of PSZ for comparison purposes. Detailed analyses of microstructure, hardness, surface roughness, and
interface bonding on the deposited coating were conducted to ensure its quality. High stresses were mainly concentrated above the
pinhole and edge areas of the piston. In short, the PSZ/ NiCrAl coated alloys demonstrated lesser thermal stresses than the uncoated
piston crowns despite a rough surface. Extra protection is thus given during combustion operation.
Ming SONG, Yue MA, Sheng-kai GONG, Analysis of residual stress distribution along interface asperity of thermal barrier
coating system on macro curved surface [13] has presented the Royal Automotive Club of Victoria (RACV) Energy Breakthrough
annual event is to provide an opportunity to school students to design and develop human powered vehicles (HPVs) and race a
nonstop 24 hours event that requires energy conservation, endurance and reliability. The event involves primary and secondary school
students, teachers, parents and local industry to work together on the design and use of energy efficient vehicles. The key areas with
interest of HPVs are the significance of aerodynamic design and ways to improve overall aerodynamics as most HPVs are designed
with minimal or no aerodynamic consideration.
A. Moridi, M. Azadi and G.H. Farrahi, Coating thickness and roughness effect on stress distribution of A356.0 under thermomechanical loadings [14] has presented Cast aluminium-silicon alloy, A356.0, is widely used in automotive components such as
diesel engine cylinder heads and also in aerospace industries because of its outstanding mechanical, physical, and casting properties.
Thermal barrier coatings are applied to combustion chamber in order to reduce fuel consumption and pollutions and also improve
fatigue life of components. However, studies on behaviour of A356.0 with thermal barrier coating are still rare. The purpose of the
present work is to simulate stress distribution of A356.0 under thermo-mechanical cyclic loadings, using a two-layer elastic-viscoplastic model of ABAQUS software. The results of stress strain hysteresis loop are validated by an out of phase thermo-mechanical
fatigue test. Ceramic coating thickness effect on stress distribution of test specimens is investigated. Different thicknesses from 300 to
800 microns of top coat and also roughness of the interfaces are simulated to get best stress gradient which can cause an improvement
of fatigue life.
M. Fadaei, H. Vafadar, A. Noorpoor, New thermo-mechanical analysis of cylinder heads using a multi-field approach
[15] presents a thermo-mechanical analysis of a natural gas internal combustion engine cylinder head.
The results are pertinent to the evaluation of overheating damage in critical areas. The three-dimensional geometries of the cylinder
head and the water jacket were modeled by means of a computer-aided engineering tool. Commercial finite element and
computational fluid dynamics codes were used to compute details of mechanical stress in the head and flow details in the cylinder and
cooling jacket, respectively. A six-cylinder, four-stroke diesel engine and a spark-ignition natural gas engine were modeled over a
range of speeds at full load. Computed results, such as maximum allowable cylinder pressure, output power, BMEP and BSFC, were
validated by experimented data in the diesel engine model. The results were in good agreement with experimental data. The results
show high stresses at the valve bridge. Cylinder head temperatures and comparison of output power with high stress measurements,
often exceeding the elastic limit, were found at the valve bridge.


B. Murali Krishna and J.M. Mallikarjuna, Effect of Engine Speed on In-Cylinder Tumble Flows in a Motored Internal
Combustion Engine-An Experimental Investigation Using Particle Image Velocimetry [16] has presented the stratified and direct
injection spark ignition engines are becoming very popular because of their low fuel consumption and exhaust emissions. But, the
challenges to them are the formation and control of the charge which is mainly dependent on the in-cylinder fluid flows. An optical
tool like particle image velocimetry (PIV) is extensively used for in-cylinder fluid flow measurements. The paper reports experimental investigations of the in-cylinder fluid tumble flows in a motored internal combustion engine with a flat piston at different engine speeds during intake and compression strokes using PIV. The two-dimensional in-cylinder flow measurements and analysis of tumble
flows have been carried out in the combustion space on a vertical plane at the cylinder axis. To analyze the fluid flows, ensemble
average velocity vectors have been used.
A.Triwiyanto, E. Haruman, M. Bin Sudin. S. Mridha and P.Hussain, Structural and Properties Development of Expanded
Austenite Layers on AISI 316L after Low Temperature Thermo chemical Treatments [17] has presented low temperature thermo
chemical treatments in fluidized bed furnace involving nitriding, carburizing and hybrid treating, sequential carburizing-nitriding,
have been conducted with the aim to improve surface properties of AISI 316L. The resulting layer is expanded austenite which is
responsible to the higher hardness and better wears properties. Characterization of this expanded austenite layer were performed
including XRD analysis, SEM and SPM and micro hardness indentation were used to reveal the characters of the produced thin layers.
B.M. Krishna and J.M. Mallikarjuna, Characterization of Flow through the Intake Valve of a Single Cylinder Engine Using
Particle Image Velocimetry [18] presents investigations of the in-cylinder flow pattern around the intake valve of a single-cylinder
internal combustion engine using Particle Image Velocimetry (PIV) at different intake air flow rates. The intake air flow rates are
corresponding to the three engine speeds of 1000, 2000 and 3000 rev/min., at all the static intake valve opening conditions. In-cylinder
flow structure is characterized by the tumble ratio and maximum turbulent kinetic energy of the flow fields. Two specified lines of the
combustion chamber, the radial and axial velocity profiles have been plotted. It is found that the overall airflow direction at the exit of
the intake valve does not change significantly with the air flow rate and intake valve opening conditions.
F.S. Silva, Fatigue on engine pistons A compendium of case studies [19] has presented engine pistons are one of the most
complex components among all automotive or other industry field components. The engine can be called the heart of a car and the
piston may be considered the most important part of an engine. There are lots of research works proposing, for engine pistons, new
geometries, materials and manufacturing techniques, and this evolution has undergone with a continuous improvement over the last
decades and required thorough examination of the smallest details. Notwithstanding all these studies, there are a huge number of
damaged pistons. Damage mechanisms have different origins and are mainly wearing, temperature, and fatigue related. Among the
fatigue damages, thermal fatigue and mechanical fatigue, either at room or at high temperature, play a prominent role.
Nitesh Krishnan J, Hariharan P, Influence Of Hardness By WC Based Coating on Alsi Alloy And Grey Cast Iron using HVOF
Coating Method [20] has presented Grey Cast Iron and AlSi alloy are the more commonly used materials for cylinder liner
applications in automobiles. With the upcoming need for an efficient utilization of fuel resources and alternate fuel resources, there is
a subsequent need for the improvement of surface properties as well as to reduce the engine weight. Hard Chromium coatings exhibit
attractive properties such as high hardness and excellent wear resistance and have been widely used in the automotive, aerospace and
manufacturing industries.
ACKNOWLEDGMENT
It is indeed a great pleasure for me to express my sincere gratitude to those who have always helped me for this dissertation
work.
I am extremely thankful to my thesis guide Prof. Ramesh N. Mevada, Professor in the Mechanical Engineering Department, Smt. S.R. Patel Engineering College, Unjha, for his valuable guidance, motivation, cooperation and constant support with an encouraging attitude at all stages of my work.

CONCLUSION
The performance of a four-stroke diesel engine depends largely on the heat lost inside the engine; the papers reviewed here present different methodologies and formulations that attempt to reduce these losses and make the vehicle more efficient.
The survey helped the authors justify their research area in the direction of engine performance and to plan a new experimental setup that would help reduce heat losses and improve the fuel consumption rate and efficiency.

The survey covered different methodologies such as thermal barrier coating, HVOF (high velocity oxygen fuel) coating, particle image velocimetry (PIV), and coating thickness and roughness effects, applied to systems such as metal sheets, gas turbines, parabolic reflectors and automotive components.
Some papers applied optimization methods to the coating structure together with FEA or CFD analysis.
In CI and SI engines, coating was carried out at component level according to design requirements for surface roughness and component life. No work was found on energy saving during the power stroke, and no provision for an inside coating method was reported in the surveyed papers.

REFERENCES:
[1] Abdullah Cahit Karaoglanli, Kazuhiro Ogawa, Ahmet Trk and Ismail Ozdemir, Thermal Shock and Cycling Behavior of Thermal
Barrier Coatings (TBCs) Used in Gas Turbines 2014 Karaoglanli et al.; licensee In Tech.
[2] L. Wang, X.H. Zhong, Y.X. Zhao, S.Y. Tao, W. Zhang, Y. Wang, X.G. Sun, Design and optimization of coating structure for the thermal
barrier coatings fabricated by atmospheric plasma spraying via finite element method Journal of Asian Ceramic Societies 2 (2014) 102-116.
[3]D. Freiburg, D. Biermann, A. Peukera, P. Kersting, H. -J. Maier, K. Mhwald, P. Kndler, M.Otten, Development and Analysis of
Microstructures for the Transplantation of Thermally Sprayed Coatings Procedia CIRP 14 (2014) 245-250.
[4]Mr. Atul A. Sagade, Prof. N.N. Shinde, Prof. Dr. P.S. Patil, Effect of receiver temperature on performance evaluation of silver coated
selective surface compound parabolic reflector with top glass cover Energy Procedia 48 (2014) 212-222.
[5]Andrew Roberts, Richard Brooks, Philip Shipway, Internal combustion engine cold-start efficiency: A review of the problem, causes and
potential solutions Energy Conversion and Management 82 (2014) 327-350.
[6]T. Karthikeya Sharma, Performance and emission characteristics of the thermal barrier coated SI engine by adding argon inert gas to
intake mixture Journal of Advanced Research (2014).
[7]J. Barriga, U. Ruiz-de-Gopegui, J. Goikoetxeaa, B. Coto, H. Cachafeiro, Selective coatings for new concepts of parabolic trough
collectors Energy Procedia 49 (2014) 30-39.
[8] V.D. Zhuravlev, V.G. Bamburov, A.R. Beketov, L.A. Perelyaeva, I.V. Baklanova, O.V. Sivtsova, V.G. Vasilev, E.V. Vladimirova, V.G. Shevchenko, I.G. Grigorov, Solution combustion synthesis of α-Al2O3 using urea Ceramics International 39 (2013) 1379-1384.
[9] Helmisyah Ahmad Jalaludin, Shahrir Abdullah, Mariyam Jameelah Ghazali, Bulan Abdullah, Nik Rosli Abdullah, Experimental Study of Ceramic Coated Piston Crown for Compressed Natural Gas Direct Injection Engines Procedia Engineering 68 (2013) 505-511.
[10]Vinay Kumar. D, Ravi Kumar.P, M.Santosha Kumari, Prediction of Performance and Emissions of a Biodiesel Fueled Lanthanum
Zirconate Coated Direct Injection Diesel engine using Artificial Neural Networks Procedia Engineering 64 (2013) 993-1002.
[11]Nitesh Mittal, Robert Leslie Athony, Ravi Bansal, C. Ramesh Kumar, Study of performance and emission characteristics of a partially
coated LHR SI engine blended with n-butanol and gasoline Procedia Engineering 56 (2013) 521-530.
[12]Helmisyah A.J., Ghazali M.J., Characterisation of Thermal Barrier Coating on Piston Crown for Compressed Natural Gas Direct
Injection (CNGDI) Engines AIJSTPME (2012) 5(4): 73-77.
[13] Ming SONG, Yue MA, Sheng-kai GONG, Analysis of residual stress distribution along interface asperity of thermal barrier coating
system on macro curved surface Progress in Natural Science: Materials International 21 (2011) 262-267.
[14]A. Moridi, M. Azadi and G.H. Farrahi, Coating thickness and roughness effect on stress distribution of A356.0 under thermo-mechanical
loadings Procedia Engineering 10 (2011) 1372-1377.

[15]M. Fadaei, H. Vafadar, A. Noorpoor, New thermo-mechanical analysis of cylinder heads using a multi-field approach Scientia Iranica B
(2011) 18 (1), 66-74.
[16]B. Murali Krishna and J.M. Mallikarjuna, Effect of Engine Speed on In-Cylinder Tumble Flows in a Motored Internal Combustion
Engine-An Experimental Investigation Using Particle Image Velocimetry Journal of Applied Fluid Mechanics, Vol. 4, No. 1, pp. 1-14, 2011.
[17]A.Triwiyanto, E. Haruman, M. Bin Sudin. S. Mridha and P.Hussain, Structural and Properties Development of Expanded Austenite
Layers on AISI 316L after Low Temperature Thermochemical Treatments Journal of Applied Sciences 11(9). 1536-1543, 2011.
[18]B.M. Krishna and J.M. Mallikarjuna, Characterization of Flow through the Intake Valve of a Single Cylinder Engine Using Particle
Image Velocimetry Journal of Applied Fluid Mechanics, Vol. 3, No. 2, pp. 23-32, 2010.
[19] F.S. Silva, Fatigue on engine pistons - A compendium of case studies Engineering Failure Analysis 13 (2006) 480-492.
[20]Nitesh Krishnan J, Hariharan P, Influence Of Hardness By WC Based Coating On Alsi Alloy And Grey Cast Iron Using HVOF Coating
Method IOSR Journal of Mechanical and Civil Engineering (IOSR-JMCE), e-ISSN: 2278-1684, p-ISSN: 2320-334X, PP 36-4


Flow Simulation and Performance Prediction of Cross Flow Turbine Using CFD Tool
Misrak Girma1* and Edessa Dribssa2
1 School of Mechanical and Industrial Engineering, Addis Ababa University, Institute of Technology, Addis Ababa, Ethiopia, E-mail: misrak_mgh@yahoo.com
2 School of Mechanical and Industrial Engineering, Addis Ababa University, Institute of Technology, Addis Ababa, Ethiopia, E-mail: Edessa_dribssa@yahoo.com

Abstract: The need for small hydropower schemes and the availability of small hydropower potential sites in Tanzania led Dar es Salaam University and NBCBN to work together towards designing and manufacturing a cross flow turbine for the rural electrification program. The aim of this paper is to study the internal flow and performance characteristics of the cross flow turbine
using CFD-Tools.
A 2D-CFD steady state flow simulation has been performed using GAMBIT for geometry creation and ANSYS FLUENT for
analysis. Simulations were carried out using viscous, standard K-epsilon and standard wall function model. The velocity and pressure
distribution within the internal surface runner of the cross-flow turbine were analyzed.
From the simulation results it was observed that there is a high pressure area inside the nozzle and near to the first stage runner blades.
Then it is going to decrease at the second stage and the out let of the turbine. And even it can be negative at the region where there is
no cross flow.
In general, the simulation using CFD-Tool is very important to show the velocity and pressure distribution inside the nozzle, runner
and casing.
Keywords: Cross Flow Turbine, Simulation, ANSYS FLUENT, GAMBIT, Performance Prediction, CFD-Tool
______________________________________________________________________________________

1. INTRODUCTION

Micro hydropower has recently become attractive because it is a clean, renewable energy source with good prospects for future development. However, the turbine type must fit the conditions of the site where it is built. Relatively high production costs and complex construction are the biggest obstacles to developing micro hydro, and the cross-flow turbine is therefore adopted because it has a relatively simple structure [5].
A classical cross-flow turbine consists of two main parts, a nozzle and a runner. The main characteristic feature of a cross-flow
turbine is the water jet of rectangular cross-section, which passes twice through the blade cascade. Water flows through the runner
blades first from the periphery towards the centre, and then, after crossing the internal space, from the inside outwards. This
machine is therefore a double stage turbine and the water fills only a part of the runner at a time [6].
The energy from the water is transferred to the rotor in two stages, hence the name of the machine as the turbine of double effect.
The first stage on average transfers 70% of the total energy to the rotor whilst the second transfers the remaining 30%. As the flow
enters the second stage, a compromise direction is achieved which causes significant shock losses.
Cross flow turbines are characterized by simple design and ease of construction, and their main attraction lies in their low cost and potential for use in small-scale operations. The Tanzanian research group's work on the design and manufacturing of a cross flow turbine is a good example of how simple the design and how easy the construction can be. In addition, the machine is simple to operate and maintain, and it can operate over a range of water flow and head conditions. These characteristics make cross flow turbines suitable for addressing the rural electrification problem in developing countries [2].
The overall efficiency of the cross flow turbine is lower than that of conventional turbines, but it remains at practically the same level over a wide range of flows and heads. The objective of this paper is therefore to study the internal flow and performance characteristics of the cross flow turbine designed and manufactured by the Tanzanian research group, using commercial CFD software.


2. LITERATURE REVIEW

The cross flow turbine was originally designed and patented by the Australian engineer Anthony Michell in 1903. His work was further developed by Donat Banki and presented in a series of publications between 1917 and 1919. Banki's work resulted in a theory of operation and experimental results indicating an efficiency of 80%. The popularity of the turbine increased after these publications, and it became known for its ability to be efficient at low head with a wide range of flow [6].
Many researchers have studied the flow and performance characteristics of cross flow turbines experimentally, theoretically and using CFD tools. Some of these studies are reviewed below.
Chiyembeke, Cuthbert and Torbjorn [7] conducted research titled "A Numerical Investigation of Flow Profile and Performance of a Low Cost Cross Flow Turbine" in 2014. The paper studied the flow profile in the turbine at the best efficiency point and at operating conditions away from the best efficiency point, using numerical methods, and the numerically obtained flow patterns showed the positions where the flow gives maximum efficiency. According to this study, the researchers obtained a maximum efficiency of 82% and concluded that numerical methods are a superior design tool for cross flow turbines.
Vincenzo, Costanza, Armando, Oreste and Tullio [8] focused on a theoretical framework for a sequential design of the turbine parameters, taking full advantage of recently expanded computational capability. To this aim, they describe a two-step procedure for Banki-Michell parameter design. In the first step, some design parameters were theoretically estimated on the basis of some simplifying assumptions. In the second step, the influence of the remaining design parameters on efficiency was analyzed by means of CFD numerical testing. They applied this new design procedure to a specific power plant, for a given design point. In this test case the turbine, with 35 blades and an attack angle equal to 22 degrees, exhibited a high efficiency of 86% at the design point.
Eve Cathrin Walseth [9] conducted two different experiments to determine the flow pattern and torque transfer through the runner. A cross flow turbine manufactured by Remote HydroLight in Afghanistan was installed in the Waterpower Laboratory at NTNU in September 2008 and efficiency measurements were performed on the turbine. A maximum efficiency of 78.6% was obtained at a 5 m head. The first experiment visualized the flow through the runner with the use of a high speed camera; the second measured the torque transferred to the runner by the use of strain gauges. The main objective was to determine the flow pattern and torque transfer through the runner.
In Tanzania, a research team at the University of Dar es Salaam designed and manufactured a cross flow turbine. All the manufacturing was carried out at the TDTC workshop, with the exception of standard parts such as bearings, bolts and nuts, which were purchased. Manufacturing can be carried out by a team of three or four people, consisting of a trained mechanic, a skilled worker trained on the job, and a semi-skilled helper. Unfortunately, the team had very limited resources for performance testing of the turbine; only two simple experimental tests were conducted, which was not satisfactory.
This paper therefore analyzes the flow field through the cross flow turbine manufactured by the research team at the University of Dar es Salaam and predicts its performance characteristics using commercial CFD software (ANSYS Fluent).

3. DESCRIPTION OF CROSS FLOW TURBINE

A. Hydraulic Parameter and Operation Principle

The main characteristic of the Cross-Flow turbine is the water jet of rectangular cross-section which passes twice through the rotor
blades -arranged at the periphery of the cylindrical rotor - perpendicular to the rotor shaft. The water flows through the blades first
from the periphery towards the centre and then, after crossing the open space inside the runner, from the inside outwards. Energy
conversion takes place twice; first upon impingement of water on the blades upon entry, and then when water strikes the blades
upon exit from the runner [2].
Cross-Flow turbines may be applied over a head range from less than 2 m to more than 100 m (Ossberger has supplied turbines for
heads up to 250 m). A large variety of flow rates may be accommodated with a constant diameter runner, by varying the inlet and
runner width. This makes it possible to considerably reduce the need for tooling, jigs and fixtures in manufacture. Ratios of rotor width to diameter from 0.2 to 4.5 have been made. For wide rotors, supporting discs (intermediate discs) welded to the
The effective head driving the Cross-Flow turbine runner can be increased by inducing a partial vacuum inside the casing. This is
done by fitting a draft tube below the runner which remains full of tail water at all time. Any decrease in the level creates a greater
vacuum which is limited by an air- bleed valve in the casing. Careful design of the valves and casing is necessary to avoid
conditions where water might back up and submerge the runner. This principle is in fact applicable to other impulse type of
turbines but is not used in practice on any other than the cross flow; it has the additional advantage of reducing the spray around the bearings by tending to suck air into the machine [2].
B. Sizing of Main Elements

Table 1 below summarizes the main parts of the cross flow turbine manufactured by the research team at the University of Dar es Salaam and their dimensions.
TABLE 1: CROSS FLOW TURBINE DIMENSIONS

Parameter              Specification
Runner Diameter        230 mm
Runner Width           200 mm
No. of Blades          30
Entry Angle            16 degree
Shaft Speed            354 rpm
Nozzle Width           180 mm
Power Calculated       2.5 kW
Jet Velocity           9.7 m/s
Nozzle Arc             73
Overall Dimensions     523 x 343 x 520
Area of the Jet        0.00824 m2
Bearing                SKF 62206C
Source: Wakati (2010)

4. NUMERICAL ANALYSIS OF FLOW THROUGH THE TURBINE

The general purpose of the numerical computation of the flow through the cross flow turbine was to determine the pressure and velocity fields. The analysis was conducted on a two-dimensional model of the whole flow area, from the inlet stub pipe to the outflow part of the turbine. Computations in the third (axial) dimension were omitted because of the invariability of the flow channel geometry in this direction. This step somewhat decreased the accuracy of the results, but on the other hand substantially reduced the computation time. The analysis was performed using the ANSYS Fluent solver, which is based on the finite volume method.
A. Grid Generation

In the process of grid generation, one software tool was used: the Gambit software was applied to build the initial geometry of the turbine and to generate the grid. A triangular mesh was used in the whole flow area; this decision resulted from the substantial deformations a structured grid would undergo in many crucial areas of the flow field. In areas with higher gradients of the analyzed parameters, a higher grid density was used to obtain an acceptable solution.


Fig. 1 Cross Flow Turbine Meshing

B. Boundary Conditions

Velocity inlet boundary conditions were used to define the fluid velocity at the flow inlet. In incompressible flow, the inlet total pressure and the static pressure are related to the inlet velocity by Bernoulli's equation; hence, the velocity magnitude and the mass flow rate could be assigned at the inlet boundary. An outflow boundary condition was defined at the outlet of the turbine.
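For reference, the incompressible Bernoulli relation assumed at the velocity inlet is

p0 = ps + (1/2) * rho * V^2

where p0 is the total pressure, ps the static pressure, rho the water density and V the inlet velocity magnitude.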
C. Flow field in the rotating elements of the turbine

To analyze the flow in the rotating elements of the turbine, the Fluent Moving Reference Frame option was used. The calculations
were performed in the domain moving with the runner. In this case, the flow was referred to the rotating frame of reference, which
simplified the analysis. As no averaging process of the inflow parameters at the interface between the rotating and stationary zone
was applied, computations were performed in the entire flow field.
D. Turbulence model

In the computation process the Renormalization Group (RNG) k-ε turbulence model was used. Unlike the standard k-ε model, the RNG-based k-ε turbulence model is derived from the instantaneous Navier-Stokes equations. The idea of this model is to eliminate the direct influence of small-scale eddies through mathematical procedures; this treatment reduces the computational requirements for solving the system of Navier-Stokes equations. In the presented examples the turbulence intensity and hydraulic diameter were used to describe the parameters of the model.

5. RESULT AND DISCUSSION

A. Introduction

The results of the analysis in ANSYS FLUENT can be shown using graphical displays and numerical outputs, which are used to investigate the internal flow characteristics of the cross flow turbine. The outputs illustrated here are graphical displays and numerical outputs for a 2-D model of the cross flow turbine. The graphical outputs can be displayed either as two-dimensional (x-y) plots or as contour plots. In this section the graphical displays are shown, and the performance characteristics of the cross flow turbine are also plotted using MATLAB after processing the numerical results obtained from the Fluent analysis.
B. Graphical Display

The overall flow pattern in the cross flow turbine can be clearly seen in the graphical displays. The relevant graphical displays for the flow in the cross flow turbine, namely the velocity vectors colored by velocity magnitude, the contours of static pressure, the contours of velocity magnitude, and the tangential velocity, are shown below.

Fig. 2 Velocity Vector Colored by Velocity Magnitude

Figure 2 shows the velocity vectors colored by velocity magnitude for the cross flow turbine. The velocity magnitude is constant at the inlet of the nozzle and increases inside the nozzle. The flow inside the nozzle is guided by the guide vane towards the runner blades, at which the velocity becomes high.

Fig. 3 Contours of Velocity Magnitude


Fig. 4 Contour of Static Pressure

Figure 4 shows the contour plot of static pressure for the cross flow turbine. According to the figure, the static pressure is maximum inside the nozzle at the inlet area and minimum at the outlet of the turbine, or discharge area. The static pressure on the runner blades is also divided into two regions: the first region, stage one, in which the static pressure has a certain value, and the second region, stage two, in which the static pressure is less than in stage one.

Fig. 5 Contour of Total Pressure


Figure 5 shows the contour of total pressure for the cross flow turbine. The total pressure is high inside the nozzle area and decreases through the first and second stages of the runner blades. According to the figure, the pressure is negative at the blades where there is no cross flow.
C. Numerical Result

After convergence is reached printed quantitative results like flux balances, moments and surface integrated quantities are taken for
the evaluation of design parameters. Reports of area-weighted average field variables on inlet and outlet surfaces, mass flow rate
on inlet and outlet surfaces and moment about center (0, 0, 0) are illustrated below.
Area-weighted average field variables are the average value of field variables on inlet and outlet surfaces. They are computed by
dividing the summation of the product of the selected field variable and facet area by the total area of the surface.
The total moment vector about center (0, 0, 0) is computed by summing the pressure and viscous moment vectors on the impeller
wall. The z-component of total moment is then taken for the evaluation of shaft power.
The volume flow rate through the inlet and outlet boundaries is also computed by FLUENT. As described in the previous section, there should be a volume flow rate balance between the inlet and outlet boundaries.
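As a post-processing illustration (not code from the paper), the short Python sketch below converts the z-moment reported by FLUENT and the shaft speed into shaft power, and combines it with the flow rate and head to estimate the hydraulic efficiency; the numerical values at the bottom are placeholders, not values taken from the Fluent reports.

import math

RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def shaft_power(moment_z, speed_rpm):
    # Shaft power = torque (z-moment about the runner axis) times angular speed
    omega = 2.0 * math.pi * speed_rpm / 60.0   # rad/s
    return moment_z * omega                    # W

def hydraulic_efficiency(moment_z, speed_rpm, flow_rate, head):
    # Ratio of shaft power to the hydraulic power rho*g*Q*H
    return shaft_power(moment_z, speed_rpm) / (RHO * G * flow_rate * head)

# Placeholder inputs for illustration only
M_z = 65.0    # N*m, z-component of the total moment about (0, 0, 0)
N = 354.0     # rpm, shaft speed (Table 1)
Q = 0.08      # m^3/s, volume flow rate at the inlet boundary
H = 4.0       # m, effective head
print("Shaft power: %.0f W, efficiency: %.2f" % (shaft_power(M_z, N), hydraulic_efficiency(M_z, N, Q, H)))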

6. EVALUATION OF PERFORMANCE CHARACTERISTICS OF THE TURBINE

Dimensional and non-dimensional design parameters, i.e. head coefficient, flow coefficient, power coefficient, head, power and hydraulic efficiency, are evaluated from the numerical output results of ANSYS FLUENT. These parameters are used to compare the performance characteristics of different turbine models.
Both the operating characteristics and the non-dimensional characteristics are plotted in MATLAB after processing the numerical results from Fluent.
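The paper does not restate the definitions of these coefficients, so the standard turbomachinery forms are assumed here, with N the rotational speed, D the runner diameter, Q the flow rate, H the head, P the shaft power and rho the water density:

Head coefficient:      CH = g * H / (N^2 * D^2)
Flow coefficient:      CQ = Q / (N * D^3)
Power coefficient:     CP = P / (rho * N^3 * D^5)
Hydraulic efficiency:  eta = P / (rho * g * Q * H)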

[Figure: Flow Rate vs. Head curve for the cross flow turbine; y-axis: Effective Head (m), x-axis: Turbine Flow (m3/s), at N = constant]
Fig. 6 Performance Curve of Cross Flow Turbine (Head vs. Flow Rate)

Figure 6 shows the variation of head with flow rate. Theoretically it is expected that the head goes on increasing as the flow rate
increases because the head is the difference in height between the intake and the turbine.

[Figure: Flow Rate vs. Efficiency curve for the cross flow turbine; y-axis: Efficiency, x-axis: Flow Rate (m3/s), at N = constant]
Fig. 7 Performance Curve of Cross Flow Turbine (Efficiency vs. Flow Rate)

Figure 7 shows the variation of efficiency with flow rate. According to the figure, the efficiency is maximum at 0.42 m3/s; beyond this point the efficiency drops as the flow rate is increased.

[Figure: Shaft Speed vs. Head curve for the cross flow turbine; y-axis: Effective Head (m), x-axis: Shaft Speed (RPM), at Q = constant]
Fig. 7 Performance Curve of Cross Flow Turbine (Effective Head vs. Shaft Speed)

According to figure 7, effective head and shaft speed are directly proportional; as the head increases, the possible shaft speed also increases.


[Figure: Shaft Speed vs. Torque curve for the cross flow turbine; y-axis: Torque (N-m), x-axis: Shaft Speed (RPM), at Q = constant]
Fig. 8 Performance Curve of Cross Flow Turbine (Torque vs. Shaft Speed)

As seen from figure 8, torque and shaft speed are inversely proportional: as the shaft speed increases, the torque produced decreases.

[Figure: Shaft Speed vs. Efficiency curve for the cross flow turbine; y-axis: Efficiency, x-axis: Shaft Speed (RPM), at Q = constant]
Fig. 9 Performance Curve of Cross Flow Turbine (Efficiency vs. Shaft Speed)

The above figure indicates that the maximum efficiency occurs at 700 RPM at constant Q.


[Figure: combined performance curve for the cross flow turbine; left y-axis: Head (m), right y-axis: Efficiency (%), x-axis: Flow Rate (m3/s), at N = constant]
Fig. 5 Performance Curve of Cross Flow Turbine (Head and Efficiency vs. Flow Rate)

CONCLUSION
In this study a steady-state CFD analysis of a 2-D model of a forward-curved 18-blade cross flow turbine was carried out. The contour and vector plots of the pressure and velocity distributions in the flow passage are displayed, and the operating characteristics of the turbine are computed from the Fluent numerical results. Different performance curves for the Dar es Salaam University cross flow turbine are plotted using a MATLAB program by analyzing the numerical results from Fluent.
From the study it was observed that there is a high-pressure area inside the nozzle and near the first-stage runner blades. The pressure then decreases at the second stage and the outlet of the turbine, and can even become negative in the region where there is no cross flow.
The flow velocity at the inlet of the nozzle is constant; it increases inside the nozzle near the first-stage runner blades and then decreases at the second stage and the outlet of the turbine. The static pressure also drops after the flow passes the first-stage region and becomes negative in the region where there is no cross flow.
REFERENCES
[1] Vincenzo Sammartano, Costanza Aricò, Armando Carravetta, Oreste Fecarotta and Tullio Tucciarelli, "Cross-flow (Banki-Michell) Optimal Design by Computational Fluid Dynamics Testing and Hydrodynamic Analysis", Energies 2013, 6, 2362-2385; doi:10.3390/en6052362, ISSN 1996-1073.
[2] Felix Mtalo, Ramadhani Wakati, A. Towo, Sibilike K. Makhanu, Omar Munyaneza and Biniyam Abate, "Design and Fabrication of Cross Flow Turbine", Nile Basin Capacity Building Network (NBCBN-SEC) Office, 2010.
[3] Bilal Abdullah Nasir, "Design of High Efficiency Cross-Flow Turbine for Hydro-Power Plant", International Journal of Engineering and Advanced Technology (IJEAT), ISSN: 2249-8958, Volume-2, Issue-3, February 2013.
[4] Mohammad Durali, "Design of Small Water Turbine for Farms and Small Communities", Massachusetts Institute of Technology, May 10, 1976.
[5] Jusuf Haurissa, Slamet Wahyudi, Yudy Surya Irawan, Rudy Soenoko, "The Cross Flow Turbine Behavior towards the Turbine Rotation Quality, Efficiency, and Generated Power", Mechanical Engineering Doctorate Program, University of Brawijaya, and Mechanical Engineering, Jayapura Science and Technology University, Jl. Raya Sentani Padang Bulan-Jayapura; Mechanical Engineering, University of Brawijaya, Jl. MT. Haryono 167, Malang, Indonesia.
[6] Jusuf Haurissa, Slamet Wahyudi, Yudy Surya Irawan, Rudy Soenoko, "The Cross Flow Turbine Behavior towards the Turbine Rotation Quality, Efficiency, and Generated Power", Mechanical Engineering Doctorate Program, University of Brawijaya, and Mechanical Engineering, Jayapura Science and Technology University, Jl. Raya Sentani Padang Bulan-Jayapura; Mechanical Engineering, University of Brawijaya, Jl. MT. Haryono 167, Malang, Indonesia.
[7] Maciej Kaniecki, "Modernization of the Outflow System of Cross Flow Turbine", Institute of Fluid-Flow Machinery, Polish Academy of Sciences, Fiszera 14, 80-952 Gdansk, Poland, 2002.
[8] Chiyembeke, Cuthbert and Torbjorn, "Experimental study on a simplified cross flow turbine", Waterpower Laboratory, Department of Energy and Process Engineering, Norwegian University of Science and Technology, Trondheim 7491, Norway; Department of Mechanical and Industrial Engineering, University of Dar es Salaam, P.O. Box 35091, Dar es Salaam, Tanzania, 2014.
[9] Vincenzo, Costanza, Armando, Oreste and Tullio, "Banki-Michell Optimal Design by Computational Fluid Dynamics Testing and Hydrodynamic Analysis", Energies 2013, ISSN 1996-1073.
[10] Eve Cathrin Walseth, "Investigation of the Flow through the Runner of a Cross-Flow Turbine", Master of Science in Product Design and Manufacturing, Norwegian University of Science and Technology, Department of Energy and Process Engineering, July 2009.
[11] Jesús De Andrade, Christian Curiel, Frank Kenyery, Orlando Aguillón, Auristela Vásquez, and Miguel Asuaje, "Numerical Investigation of the Internal Flow in a Banki Turbine", July 7, 2011.
[12] Bryan Ho-Yan, W. David Lubitz, "Performance Evaluation of Cross-flow Turbine for Low Head Application", World Renewable Energy Congress 2011, Sweden, 8-13 May 2011.
[13] Incheol Kim, Joji Wata, M. Rafiuddin Ahmed, Youngho Lee, "CFD study of a ducted cross flow turbine concept for high efficiency tidal current energy extraction", AWTEC 2012, Asian Wave and Tidal Conference Series, Nov. 27-30, Jeju Island, Korea.


Analysis of Jammer Circuit


Chirag Gupta, Nitin Garg
Department of ECE, ITM University, chirag26121992@gmail.com, 9910917959

Abstract: The number of users of mobile communication devices has increased dramatically in the last two decades. This has created the need for a more effective and secure signal scrambler. Mobile jammer circuits are used to prevent mobile phones from receiving or transmitting signals. A mobile jammer effectively disables mobile phones within defined range zones without causing any disturbance or interference to other communication means. Mobile jammers can be used in practically any location, but are generally used in places where phone calls would be disruptive, such as schools, churches, libraries and hospitals. As with other radio jamming, a mobile jammer sends radio waves of the same frequency that mobile phones use in order to block the signal. This signal causes interference with the communication between mobile phones and base stations and makes the phones unusable. When a mobile jammer is activated, all mobiles within range will indicate "NO NETWORK" service. The design and study of a mobile jammer is discussed in this paper.

Keywords: GSM, Radio Frequency, Antenna, RF Generator, RF Amplifier, Tuning Circuit, Voltage Controlled Oscillator
INTRODUCTION

Communication jamming devices were first developed and used by the military, where tactical commanders used RF jamming to prevent hostile encroachment on their communication links. The idea comes from the fundamental aim of denying the successful transport of information from sender to receiver. Mobile jammers were originally developed for law enforcement and the military to block communications by criminals and terrorists and to foil the use of certain remotely detonated explosives. The civilian applications came into account with growing public resentment over the usage of mobiles in public areas and the reckless intrusion of privacy. As a result, many companies originally contracted to design mobile jammers for government switched over to selling these devices to private firms. As with other radio jamming, mobile jammers block mobile phone use by sending out radio waves along the same frequencies that mobile phones use. This causes enough interference with the communication between mobile phones and communication towers to render the phones unusable. When a mobile jammer is activated, all mobile phones will indicate "NO NETWORK" service. Incoming calls to the mobile phone are blocked as if the mobile were off. When the mobile jammer is switched off, all mobile phones will automatically re-establish their communication link and provide full service. A mobile jammer's effect can vary widely based on factors such as the transparency of walls to the signal, proximity to towers, and the presence of buildings and landscape; even temperature and humidity play a role. The size of a mobile jammer varies with the required range, starting with the personal pocket mobile jammer that can be carried along to ensure an undisrupted meeting with one's client, to a portable jammer for a classroom, to medium- and high-power mobile jammers for an organization, up to very high power military jammers able to jam large campuses.

JAMMER TECHNOLOGY
Five types of devices are developed (or being considered for development) for preventing mobile phones from ringing in certain
specified locations [1].
A. Type A Device (JAMMERS): This type of device has several independent oscillators transmitting jamming signals capable of blocking the different frequencies used by paging devices as well as those used by cellular systems' control channels for call establishment.
B. Type B Device (Intelligent Cellular Disablers): Unlike jammers, Type B devices do not transmit an interfering signal on the mobile control channels. The device, when located in a designated quiet area, functions as a detector. It has a unique identification number for communicating with the cellular base station.
C. Type C Device (Intelligent Beacon Disablers): Unlike jammers, Type C devices do not transmit an interfering signal on the
control channels. The device, when located in a designated quiet area, functions as a beacon and any compatible terminal is
instructed to disable its ringer or disable its operation, while within the coverage area of beacon.

D. Type D Device (Direct Receive & Transmit Jammers): This jammer acts like a small, independent and portable base station,
which can directly interact with the operation of the local mobile. The jammer is predominantly in receive mode and will intelligently
choose to interact and block the cell phone directly if it is within close range of the jammer.
E. Type E Device (EMI Shield - Passive Jamming): EMI suppression techniques are used in this approach to turn a room into what is called a Faraday cage. Although labor intensive to construct, the Faraday cage essentially blocks, or greatly attenuates, virtually all electromagnetic radiation from entering or leaving the cage - or in this case a target room.

EXPERIMENTAL DETAIL, METHODS & MATERIALS


The effects of jamming depend on the jamming-to-signal ratio (J/S), the range between transmitter and receiver, the modulation scheme, the channel coding and interleaving of the target system, and the bandwidths of the transmitter and receiver. The jamming-to-signal ratio can generally be expressed by the following equation.

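A commonly used form of this ratio, assuming the standard communications-jamming expression and written with the quantities defined below, is:

J/S = (Pj * Gjr * Grj * Rtr^2 * Br * Lr) / (Pt * Gtr * Grt * Rjt^2 * Bj * Lj)

where: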
Pj = jammer power
Pt = transmitter power
Gjr = antenna gain from jammer to receiver
Grj = antenna gain from receiver to Jammer
Gtr = antenna gain from transmitter to receiver
Grt = antenna gain from receiver to transmitter
Br = communications receiver bandwidth
Bj = jamming transmitter bandwidth
Rtr = range between communications transmitter and receiver
Rjt = range between jammer and communications receiver
Lj = jammer signal loss (including polarization mismatch)
Lr = communication signal loss
The mobile jammer circuit is shown in figure 1 below. The tuning circuit, voltage controlled oscillator and RF amplifier combine to form the jammer circuit.

Figure 1
The transistor Q1, capacitors C4 and C5 and resistor R1 constitute the RF amplifier circuit, which amplifies the signal generated by the tuned circuit. The antenna receives the amplified signal through capacitor C6, whose job is to remove the DC component and pass only the AC signal, which is transmitted into the air.

When the transistor Q1 is turned ON, the tuned circuit at its collector is turned ON. The tuned circuit consists of capacitor C1 and inductor L1 and acts as an oscillator with near-zero resistance, producing a very high frequency with minimum damping. Both the inductor and the capacitor of the tuned circuit oscillate at its resonant frequency.
The operation of the tuned circuit is simple. When the circuit is switched ON, a potential difference appears across the plates of the capacitor and electrical energy is stored in the capacitor. Once the capacitor is completely charged, it allows the charge to flow through the inductor, which stores magnetic energy. As current flows through the inductor, the voltage across the capacitor decreases; the electrical energy of the capacitor is converted into magnetic energy stored in the inductor, and the charge across the capacitor falls to zero. A moment later, the magnetic field of the inductor decreases and the current charges the capacitor with reverse polarity. After some time the capacitor is again completely charged and the magnetic energy in the inductor has gradually reduced to zero, and once more electrical energy is converted into magnetic energy. The inductor and capacitor keep exchanging energy in this way, so the circuit oscillates and generates the resonant frequency.
This cycle runs until the internal resistance damps the oscillations. Capacitor C5 feeds part of the RF amplifier output back to the collector terminal before C6, providing gain or a boost to the tuned circuit signal. The capacitors C2 and C3 are used to generate noise on top of the frequency generated by the tuned circuit: electronic pulses in a random fashion (technically called noise) are generated by capacitors C2 and C3.
All of these, the feedback or boost given by the RF amplifier, the frequency generated by the tuned circuit, and the noise signal generated by the capacitors C2 and C3, are combined, amplified and transmitted into the air.
The cell phone band considered here operates at a frequency of 450 MHz. To block this frequency, the jammer also needs to generate a 450 MHz signal with some noise, which acts as a simple blocking signal, since the cell phone receiver cannot distinguish which signal it has received. In this way the legitimate signal can be prevented from reaching the cell phones. The circuit therefore generates a 450 MHz signal to block the actual cell phone signal, which is why the circuit shown acts as a jammer for blocking mobile signals.
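For orientation, the resonant frequency of the L1-C1 tank is set by f = 1 / (2*pi*sqrt(L*C)). The short Python sketch below evaluates this relation for the 450 MHz target; the component values are hypothetical, since the actual values of L1 and C1 are not given in the paper.

import math

def resonant_frequency(inductance_h, capacitance_f):
    # Resonant frequency of an ideal LC tank: f = 1 / (2*pi*sqrt(L*C))
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

def capacitance_for(frequency_hz, inductance_h):
    # Capacitance needed to resonate a given inductor at the target frequency
    return 1.0 / ((2.0 * math.pi * frequency_hz) ** 2 * inductance_h)

L1 = 5e-9                              # henries (assumed value, not from the paper)
C1 = capacitance_for(450e6, L1)        # farads, approximately 25 pF
print("C1 = %.1f pF" % (C1 * 1e12))
print("f  = %.1f MHz" % (resonant_frequency(L1, C1) / 1e6))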
The circuit works over roughly a 100 meter range, that is, it is capable of blocking signals up to about 100 meters. The circuit can also be used in TV transmission and remote control applications. To obtain greater efficiency, the values of the circuit components can be altered.

RESULT
The simulation results for the various circuit components were observed and are shown below. The first part of the jammer is the tuning circuit, shown in figure 2, and the second graph is drawn between input power and output power, as shown in figure 3. Both power quantities are in dBm.

Figure 2

Figure 3

CONCLUSION
Mobile jammer circuits have both advantages and disadvantages; they can be considered good or bad depending on the signal intended to be jammed. Simple jammer circuits can be constructed quite easily; however, more subtle circuits can be made for more demanding requirements.


REFERENCES:
[1] Ahmad Hijazi, "GSM-900 Mobile Jammer".
[2] Vijay S. Kamble, Mrs. Archna Wasule, Dilip S. Kale, Mrs. Neema Shikha, "Antenna for mobile phone jammer", First International Conference on Emerging Trends in Engineering & Technology, ICETET '08, 16-18 July 2008, pp. 856-859.
[3] Daniel S. V. Araujo, Jose C. A. Santos, Mauricio H. C. Cias, "A dual band steerable cell phone jammer", Microwave and Optoelectronics Conference 2007, IMOC 2007, SBMO/IEEE MTT-S, Oct 29-Nov 2007, pp. 611-615.
[4] Mobile & Personal Communications Committee of the Radio Advisory Board of Canada, "Use of jammer and disabler devices for blocking PCS", www.rabc.ottawa.on.ca
[5] Braun, T.; Carle, G.; Koucheryavy, Y.; Tsaoussidis, V.; Wired/Wireless Internet Communication, Third International Conference, WWIC 2005, Xanthi, Greece.
[6] http://tcil-india.com/new/White%20Papers.htm
[7] Theodore S. Rappaport, Wireless Communications, Second Edition, Pearson Education.
[8] http://pt.com/page/tutorials/gsm-tutorial
[9] http://datasheets.maxim-ic.com/en/ds/MAX2361-2365.pdf
[10] http://www.antennafactor.com/documents/ANT-916-PW-t.pdf
[11] K.D. Prasad, Antenna & Wave Propagation, Satya Prakashan.
[12] http://www.electronics-manufacturers.com/products/rf-microwaveomponents/antenna


Design and Implementation of an On-Chip Timing Based Permutation Network for Multiprocessor System on Chip
Ms Lavanya Thunuguntla1, Saritha Sapa 2
1 Associate Professor, Department of ECE, HITAM, Telangana (S), India
2 Research Scholar, Department of ECE, HITAM, Telangana (S), India

Abstract: The communication mechanism employed in systems on a single chip (SoC) is an important contributor to their overall performance. To date, bus-based mechanisms have been applied in many real-time SoC applications realized on FPGAs because of their flexibility and the simplicity of the design tools. This paper presents a novel on-chip network to support traffic permutation in multiprocessor SoC applications. The proposed work employs dynamic path setup with a round-robin algorithm with word-length control, together with a circuit-switching approach, to enable runtime path arrangement for arbitrary traffic permutations. The work is carried out on the Xilinx FPGA family using Verilog HDL.
Keywords: SoC, FPGA and Dynamic path setup
I. INTRODUCTION

Today, field programmable gate arrays (FPGAs) are used for a wide range of applications. In former times their usage was focused on rapid prototyping systems for integrating test systems; after the test phase, an ASIC approach often substituted these systems for mass production. Due to the dramatic growth in circuit-design complexity following Moore's Law, the ability to implement complex architectures in a single chip continually presents new challenges. One of the issues faced by designers while implementing large SoCs is the communication among their components. Buses are an increasingly inefficient way to communicate, since only one source can drive the bus at a time, thus limiting bandwidth.
Networks on chip (NoCs) are increasing in popularity because of their advantages: larger bandwidth and lower power dissipation through shorter wire segments. Communications in large SoCs are so important that many designers have adopted the NoC approach. The challenge consists in offering the best connectivity and throughput with the simplest and cheapest architecture, and many topologies and architectures have been investigated. This is well illustrated in [4], where researchers propose a two-level FIFO approach in order to simplify the design of the arbitration algorithm and improve the bandwidth. However, this method tends to be expensive in terms of hardware.
Although the fully embedded tools of FPGA manufacturers such as Xilinx and Altera are offered to help their customers design complex Multiprocessor Systems-on-Chip (MPSoCs), their environments only offer the bus-based paradigm or point-to-point connections. More complex MPSoCs may require higher bandwidths than a bus-based system can offer, or may need to be more efficient than point-to-point connections.
Most on-chip networks in practice are general-purpose and use routing algorithms such as dimension-ordered routing and minimal adaptive routing. To support permutation traffic patterns, on-chip permutation networks using application-aware routing are needed to achieve better performance compared to the general-purpose networks. The idea of designing a reconfigurable crossbar switch for NoCs to gain high data throughput and to be capable of adapting topologies on demand was presented in [5]. Their evaluation results showed that output latency, resource usage, and power consumption were better than for a traditional crossbar switch. Nevertheless, they did not address the problem of operating many-to-one and one-to-many data communication.
This paper is organized as follows: Section II presents the communication problem; Section III presents the on-chip network with dynamic path setup; Section IV covers the implementation of the on-chip network; and Section V concludes the work.
II. COMMUNICATION PROBLEM
In Fig. 1, all communication in a process group becomes many communications involving an arbitrary subset of processors from the system's point of view. In order to support data communication efficiently, a system has to support one-to-one (unicast), one-to-many (multicast), many-to-one (gathering) and many-to-many communication primitives in hardware. In practice, most interconnects can support unicast, but not gathering and multicasting.


Fig. 1. System's point of view


Fig. 2(a) shows a multicast communication with three destinations, where process P0 has to send the same data to three processors: P1, P2 and P3. Without multicast functionality in the interconnect, the system can use unicast to send the data to all destinations sequentially. However, blocking will occur: if P0 is executing the sending state to P1 and P1 has not yet executed its receiving state, P0 is blocked; meanwhile, P2 is executing its receiving state and is blocked because P0 has not yet executed the corresponding sending state. Obviously, system resources are wasted due to unnecessary blocking. Fig. 2(b) shows gathering data communication from three sources to one destination. Because all sources send data, congestion occurs at the destination; it can be reduced by applying source priorities. Suppose P1, P2 and P3 have the first, second and third priorities. When their data arrive at P0, the data from P1 will be forwarded first, while the rest are buffered. Thus, the output latency of the system increases, and buffering is also required.

Fig. 2. a) One-to-many b) Many-to-one


III. ON CHIP NETWORK DESIGN
The key idea of the proposed on-chip network design is based on a pipelined circuit-switching approach with a dynamic path-setup scheme supporting runtime path arrangement.

Fig. 3. Switch-by-switch interconnection and path-diversity capacity.


As shown in Fig. 3, the bit format of the handshake includes a 1-bit Request (Req) and a 2-bit Answer (Ans). Req=1 is used when a switch requests an idle link leading to the corresponding downstream switch in the setup phase; Req=1 is also kept during data transfer along the set-up path. Req=0 denotes that the switch releases the occupied link; this code is used in both the setup and the release phases. Ans=01 (Ack) means that the destination is ready to receive data from the source. When Ans=01 propagates back to the source, it denotes that the path is set up, and a data transfer can then be started immediately. Ans=11 (Nack) is reserved for end-to-end flow control when the receiving circuit is not ready to receive data because it is busy with other tasks, the receiving buffer has overflowed, etc. Ans=10 (Block) means that the link is blocked; this code is used for backpressure flow control in the dynamic path-setup scheme.
Switching Node Designs
Three kinds of switches are designed for the proposed on-chip network. These switches are all based on a common switch
architecture shown in Fig. 4, with the only difference being in the probe routing algorithms. This common architecture has basic

components: INPUT CONTROLs (ICs), OUTPUT CONTROLs (OCs), an ARBITER, and a CROSSBAR. Incoming probes in the setup phase can be transported through the data paths to save on wiring costs. The ARBITER has two functions: first, cross-connecting the Ans_Outs and the ICs through the Grant bus, and second, acting as a referee for the requests from the ICs. When an incoming probe arrives at an input, the corresponding IC observes the output status through the Status bus and requests the ARBITER to grant it access to the corresponding OC through the Request bus. When accepting this request, the ARBITER cross-connects the corresponding Ans_Out with the IC through the Grant bus (its first function). With its second function, the ARBITER, based on a pre-defined priority rule, resolves contention when several ICs request the same free output. After this resolution, only one IC is accepted, whereas the rest are answered as facing a blocked link. Each IC is implemented as a finite-state machine (FSM); the probe routing algorithm and the operation of the switches are controlled according to this FSM implementation.
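To make the contention rule concrete, the toy Python sketch below resolves several ICs requesting the same free output using a fixed priority order; the actual priority rule is not specified in the paper, so lowest-index-wins is only an assumption.

def resolve_contention(requests, output_free):
    # Grant at most one input controller (IC) access to the requested output.
    # requests    : list of IC indices requesting the same output
    # output_free : True if the output is currently idle
    # Returns (granted_ic, blocked_ics).
    if not output_free or not requests:
        return None, list(requests)        # every requester sees a blocked link
    granted = min(requests)                # assumed rule: lowest index wins
    blocked = [ic for ic in requests if ic != granted]
    return granted, blocked

# ICs 2, 4 and 5 request the same free output; IC 2 is granted, the rest are blocked
print(resolve_contention([2, 4, 5], True))   # (2, [4, 5])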

Fig. 4. Common switch architecture.

IV. IMPLEMENTATION OF ON-CHIP NETWORK

Fig. 5. 6x6 switch-based crossbar structure.


The major components that make up the switch-based crossbar are the Input Port module, the Switch module and the Output Port module. Depending on the kind of channels and switches, there are two alternatives for designing a switch-based crossbar: unidirectional and bidirectional [6]. In this paper, a unidirectional switch-based crossbar is selected and simplified to the 6x6 switch-based crossbar structure shown in Fig. 5, because of resource constraints. Obviously, transmitting data from a Source Node to a Destination Node requires crossing the link between the Source Node and the Input Port module, and the link between the Output Port module and the Destination Node, where the Switch module in the data path dynamically establishes the link to the Output Port module according to the switching protocol.
The switching protocol for the 6x6 switch-based crossbar architecture is shown in Fig. 6. Because of resource constraints on the target FPGA [2], the data width, the N.of word register and the destination port register are 16, 10 and 6 bits respectively. Moreover, a synchronous protocol, with Clock, Ready and Acknowledge signals, is applied to synchronize all modules in the switch-based crossbar.

Fig. 6. 16-bit switching protocol


Fig. 6 shows the switching protocol word: the 10 least significant bits (LSBs) define the number of packets to be transferred from a Source Node to a Destination Node, with a maximum of 1,024 packets at a time, and the remaining bits define the Destination Nodes. For example, when Source Node 1 wants to transfer 100 packets to Destination Nodes number 3 and 4, the 16-bit switching protocol word has to be 0011_0000_0110_0100 (0x3064).
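A small Python sketch of this packing is shown below; it assumes, as the worked example suggests, that bit i-1 of the 6-bit destination field corresponds to Destination Node i, and it reproduces the 0x3064 value from the text.

def pack_protocol_word(dest_nodes, num_packets):
    # Pack the 16-bit switching word: 6-bit destination mask (high) + 10-bit packet count (low)
    assert 0 <= num_packets < 1024, "packet count must fit in 10 bits"
    dest_mask = 0
    for node in dest_nodes:           # nodes are numbered 1..6
        assert 1 <= node <= 6
        dest_mask |= 1 << (node - 1)  # assumed mapping: node i -> bit i-1
    return (dest_mask << 10) | num_packets

# Worked example from the text: 100 packets to Destination Nodes 3 and 4
word = pack_protocol_word([3, 4], 100)
print("0x%04X" % word)        # prints 0x3064
print(format(word, "016b"))   # prints 0011000001100100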
Input Port module
Its behavior, shown in Fig. 7, is based on the switching concept of [1]. At the beginning the state machine is in an Idle state, and the header ready signal is driven HIGH to inform the Source Node that the module is ready to read the switching protocol word. As soon as the 16-bit switching protocol word is asserted on the Data in signal together with the Data in valid signal, the state machine goes to the Check state. In this state the 16-bit switching protocol word is separated and written to the N.of word register and the destination port register residing in the FSM module, where the 10 low bits are written to the N.of word register and the 6 high bits to the destination port register; meanwhile, the register port signal is read to check whether or not the required destination ports are free.
Fig. 7. State machine


If they are free, the state machine goes to the Req state. In the Req state, the N.of words register and the destination port register are read and presented on the N.of words signal and the N.of word valid signal, and the req signal is asserted. When the rsp signal goes HIGH, the channel for transferring data is guaranteed and the state machine goes to the Run state. In the Run state the data in ready signal is driven HIGH, and the 16-bit data in signal and the data in valid signal are sequentially asserted into the channel. When the last data word has been completely forwarded, the rsp signal from the Output Port module goes LOW, and the state machine starts the next cycle.
Output Port module

Fig. 8. a) Output Port module architecture; b) Flow control diagram

Fig. 8(b) explains its behavior. At first the state machine is in an Idle state and the mode signals are read; there are two possible modes, normal mode and interleaving mode. In the normal mode the Round Robin component is enabled, while in the interleaving mode it is disabled. Whenever a Source Node needs to transfer data to a Destination Node, the req signal of the Input Port module connected with that Source Node goes HIGH. Then, at the Output Port module connected with the Destination Node, the bit of the 6-bit grant register related to the req signal of that Input Port module is set to 1. The N.of words signal and the N.of words valid signal are read and stored into a temporary register, and the state machine goes to the Write state.
In the Write state, each bit of the grant register is read to check the Source Nodes' requirements and to identify the next state. As a simple example, suppose the grant register contains 101100; this implies that Input Port modules number 3, 4 and 6 will forward their data to the Output Port module. Therefore, the state machine starts at the Run3 state, and the Data in ready signals number 3, 4 and 6 are driven HIGH. Once the Run6 state has been executed, the state machine goes to the Check state, where the counter register counting the forwarded packets is checked to see whether all packets have been sent. When they have been sent properly, this counter register will be 0000000000 and the Clear state is entered. In the Clear state, the rsp signals 3, 4 and 6 are driven LOW and the grant register is reset to 000000, and the state machine begins the next cycle.
V. CONCLUSION
This paper proposes and implements an interleaving mechanism to overcome the output delay (latency) when operating many-to-one and one-to-many data communication. All primitive modules are described in Verilog HDL, their behaviors are verified with ModelSim 6.2c, and their resource usage and estimated frequency on the Xilinx FPGA are obtained with the ISE tool.

REFERENCES:
[1] D. Bafumba-Lokilo, Z. Savaria, J. David, "Generic Crossbar Network on Chip for FPGA MPSoCs", Circuits and Systems and TAISA Conference, 2008, pp. 269-272.
[2] Xilinx, "On-chip Peripheral Bus V2.0 with OPB Arbiter", Processor Local Bus (PLB) v4.6 (v1.03a), http://www.xilinx.com.
[3] H. C. d. Freitas, P. Navaux, "On the Design of Reconfigurable Crossbar Switch for Adaptable On-Chip Topologies in Programmable NoC Routers", ACM Special Interest Group on Design Automation, 2009, pp. 129-132.
[4] H. Po-Tsang, H. Wei, "2-Level FIFO Architecture Design for Switch Fabrics in Network-on-Chip", Circuits and Systems, ISCAS 2006, Proceedings of the IEEE International Symposium on, 2006, pp. 4863-4866.
[5] M. Hübner, L. Braun, D. Göhringer, J. Becker, "Run-Time Reconfigurable Adaptive Multilayer Network-On-Chip for FPGA-Based Systems", IEEE International Symposium on Parallel and Distributed Processing, 2008, pp. 1-6.
[6] C. Hilton, B. Nelson, "PNoC: a flexible circuit-switched NoC for FPGA based systems", IEE Proceedings Computing, Vol. 153, 2006, pp. 181-188.


Fast Feature Subset Selection Algorithm Based on Clustering for High Dimensional Data
Mrs. Komal Kate, Prof. S. D. Potdukhe

PG Scholar, Department of Computer Engineering, ZES COER, pune, Maharashtra

Assistant Professor, Department of Computer Engineering, ZES COER, pune, Maharashtra


Abstract: A feature selection algorithm is employed for removing irrelevant and redundant information from the data. Amongst feature subset selection algorithms, filter methods are used because of their generality, and they are usually a good choice when the number of features is large. In cluster analysis, graph-theoretic clustering methods are applied to the features; in particular, minimum spanning tree (MST) based clustering algorithms are adopted. A Fast clustering-bAsed feature Selection algoriThm (FAST) is based on the MST method. In the FAST algorithm, features are divided into clusters by using graph-theoretic clustering methods and then the most representative feature that is strongly related to the target classes is selected from each cluster. Features in different clusters are relatively independent. The feature subset selection algorithm (FAST) is tested on publicly available high dimensional image, microarray, and text data sets. Traditionally, feature subset selection research has focused on searching for relevant features. The clustering-based strategy of FAST has a high probability of producing a subset of useful and independent features.

Keywords: Cluster analysis, Graph-theoretic clustering, Minimum spanning tree, Feature selection, Feature subset selection algorithm (FAST), High dimensional data, Filter method
INTRODUCTION

Data mining is a process of analyzing data and summarizing it into useful information. In order to achieve successful data mining, feature selection is an essential component. In machine learning, feature selection is also known as variable selection or attribute selection. The main idea of feature selection is to choose a subset of features by eliminating irrelevant or non-predictive information; it is a process of selecting a subset of the original features according to specific criteria. Feature selection is an important and frequently used technique in data mining for dimension reduction. It is employed for removing irrelevant and redundant information from the data, thereby speeding up the data mining algorithm, improving learning accuracy, and leading to better model comprehensibility. Supervised, unsupervised and semi-supervised feature selection algorithms have been developed. A supervised feature selection algorithm determines features' relevance by evaluating their correlation with the class or their utility for achieving accurate prediction; without labels, an unsupervised feature selection algorithm may exploit data variance or data distribution in its evaluation of features' relevance; and a semi-supervised feature selection algorithm uses a small amount of labelled data as additional information to improve unsupervised feature selection [2].
Feature subset selection methods can be divided into four major categories: embedded, wrapper, filter, and hybrid. Embedded methods incorporate feature selection as a part of the training process and are usually specific to given learning algorithms, and thus possibly more efficient than the other three categories. Machine learning algorithms like decision trees or artificial neural networks are examples of embedded approaches. Wrapper methods assess subsets of variables according to their relevance to a given predictor; the method conducts a search for a good subset using the learning algorithm itself as part of the evaluation function. Filter methods are pre-processing methods: they attempt to assess the useful features from the data, ignoring the effects of the selected feature subset on the performance of the learning algorithm. Examples are methods that select variables by ranking them through compression techniques or by computing correlation with the output. Hybrid methods are a combination of filter and wrapper methods, using a filter method to reduce the search space that will be considered by the subsequent wrapper. The important point of hybrid methods is that they combine filter and wrapper methods to achieve the best possible performance with a particular learning algorithm, with a time complexity similar to that of the filter methods [1].
In cluster analysis, the graph-theoretic approach is used in many applications. In general graph-theoretic clustering, a complete graph is formed by connecting each instance with all its neighbours. Zahn's clustering algorithm:
1. Construct the MST for the set of n patterns given.
2. Identify inconsistent edges in MST.
3. Remove the inconsistent edges to form connected components and call them clusters.
In the FAST algorithm, features are divided into clusters by using graph-theoretic clustering methods and then, the most
representative feature that is strongly related to target classes is selected. Features in different clusters are relatively independent. A
feature subset selection algorithm (FAST) is used to test high dimensional available image, microarray, and text data sets.
Traditionally, feature subset selection research has focused on searching for relevant features. The clustering-based strategy of FAST
having a high probability of producing a subset of useful and independent features.
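As a rough illustration of the MST-based clustering step that FAST adopts (following Zahn's three steps above), the Python sketch below builds a complete graph over features from a user-supplied dissimilarity function, computes its minimum spanning tree with networkx, and removes edges that are much heavier than the average MST edge; the inconsistency threshold and the toy dissimilarity measure are placeholders, not the exact criteria used by FAST.

import itertools
import networkx as nx

def mst_clusters(features, dissimilarity, k=2.0):
    # Zahn-style MST clustering.
    # features      : list of feature identifiers
    # dissimilarity : function(f1, f2) -> non-negative weight (e.g. 1 - SU(f1, f2))
    # k             : MST edges heavier than mean + k*std are treated as
    #                 "inconsistent" (a simple placeholder rule)
    g = nx.Graph()
    g.add_nodes_from(features)
    for f1, f2 in itertools.combinations(features, 2):
        g.add_edge(f1, f2, weight=dissimilarity(f1, f2))     # 1. complete graph
    mst = nx.minimum_spanning_tree(g)                         # 2. minimum spanning tree
    weights = [d["weight"] for _, _, d in mst.edges(data=True)]
    mean = sum(weights) / len(weights)
    std = (sum((w - mean) ** 2 for w in weights) / len(weights)) ** 0.5
    threshold = mean + k * std
    mst.remove_edges_from([(u, v) for u, v, d in mst.edges(data=True)
                           if d["weight"] > threshold])       # 3. cut inconsistent edges
    return [set(c) for c in nx.connected_components(mst)]     # remaining components = clusters

# Toy usage with a made-up dissimilarity between feature names
feats = ["f1", "f2", "f3", "f4"]
dis = lambda a, b: abs(int(a[1:]) - int(b[1:]))   # placeholder measure
print(mst_clusters(feats, dis, k=1.0))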

RELATED WORK
Feature selection aims at choosing a subset of features by eliminating irrelevant or non-predictive information. It is a process of selecting a subset of the original features according to specific criteria. Irrelevant features do not contribute to accuracy, and redundant features mostly provide information that is already present in other features.
Many feature selection algorithms exist; some of them are effective at removing irrelevant features but not at handling redundant features, while others can eliminate irrelevant features while also taking care of redundant features [1]. The FAST algorithm falls into the second group.
One of the feature selection algorithms is Relief [3], which weighs each feature according to its ability to discriminate instances under different targets based on a distance-based criterion. However, Relief is ineffective at removing redundant features, as two predictive but highly correlated features are likely both to be highly weighted [4]. Relief-F [5] extends Relief, enabling the method to work with noisy and incomplete data sets and to deal with multiclass problems, but it still cannot identify redundant features.
Redundant features also affect the accuracy and speed of learning algorithms; hence it is necessary to remove them. CFS [6], FCBF [7], and CMIM [9] are examples that take redundant features into consideration. CFS [6] is built on the hypothesis that a good feature subset is one that contains features highly correlated with the target, yet uncorrelated with each other. FCBF ([7], [8]) is a fast filter method which can identify relevant features as well as redundancy among relevant features without pairwise correlation analysis. CMIM [9] iteratively picks features which maximize their mutual information with the class to predict, conditionally on the response of any feature already picked.
Different from the above algorithms, the FAST algorithm uses a minimum spanning tree-based method to cluster features.

FEATURE SUBSET SELECTION ALGORITHM


FRAMEWORK
Fig.1. Framework of the feature subset selection algorithm: Dataset → Symmetric Uncertainty computation → Irrelevant Feature Removal (T-Relevance) → MST Construction → Redundant Feature Elimination (Tree Partitioning and representative feature selection) → Selected features

Feature subset selection algorithms aim to identify and remove as many irrelevant and redundant features as
possible. Good feature subsets contain features highly correlated with (predictive of) the class, yet uncorrelated with (not predictive of) each other [10].
The feature selection framework can effectively and efficiently deal with irrelevant and redundant features. It is made up of two connected components: irrelevant feature removal and redundant feature elimination. The former obtains features relevant to the target concept by eliminating irrelevant ones, and the latter removes redundant features from the relevant ones by choosing representatives from different feature clusters, thus producing the final subset [1].
The FAST algorithm involves 1) the construction of a minimum spanning tree from a weighted complete graph; 2) the partitioning of the minimum spanning tree into a forest, with each tree representing a cluster; and 3) the selection of representative features from the clusters.
Relevant features have strong correlation with the target concept and are therefore always needed for a good subset, while redundant features are not needed because their values are completely correlated with those of other features. Thus, notions of feature redundancy and feature relevance are normally defined in terms of feature correlation and feature-target concept correlation.

SYMMETRIC UNCERTAINTY
Mutual information measures how much the feature values and target classes depend on each other; it is a nonlinear estimation of the correlation between feature values, or between feature values and target classes [1]. The symmetric uncertainty (SU) [11] is derived from the mutual information by normalizing it to the entropies of the feature values, or of the feature values and target classes, and has been used to evaluate the goodness of features for classification by a number of researchers (e.g., Hall [6], Hall and Smith [10], Yu and Liu [7], [8], Zhao and Liu [12], [13]).
The symmetric uncertainty is defined as follows:

SU(X, Y) = 2 * Gain(X|Y) / (H(X) + H(Y))

where

Gain(X|Y) = H(X) - H(X|Y) = H(Y) - H(Y|X),
H(X) = - Σx p(x) log2 p(x),
H(X|Y) = - Σy p(y) Σx p(x|y) log2 p(x|y),

and p(x) is the probability density function and p(x|y) is the conditional probability density function.
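For discrete (or discretized) feature values and class labels, the quantities above reduce to sums over empirical frequencies. The following Python sketch is only an illustration of those definitions; the helper names (entropy, conditional_entropy, symmetric_uncertainty) are ours and not from the FAST paper.

```python
import numpy as np
from collections import Counter

def entropy(values):
    """H(X) = - sum_x p(x) log2 p(x) over the empirical distribution."""
    counts = np.array(list(Counter(values).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def conditional_entropy(x, y):
    """H(X|Y) = - sum_y p(y) sum_x p(x|y) log2 p(x|y)."""
    x, y = np.asarray(x), np.asarray(y)
    h = 0.0
    for y_val, cnt in Counter(y.tolist()).items():
        h += (cnt / len(y)) * entropy(x[y == y_val].tolist())
    return h

def symmetric_uncertainty(x, y):
    """SU(X, Y) = 2 * Gain(X|Y) / (H(X) + H(Y)), a value in [0, 1]."""
    hx, hy = entropy(x), entropy(y)
    gain = hx - conditional_entropy(x, y)      # information gain = I(X; Y)
    denom = hx + hy
    return 2.0 * gain / denom if denom > 0 else 0.0

if __name__ == "__main__":
    feature = [0, 0, 1, 1, 2, 2]                # a discretized feature
    target = ['a', 'a', 'b', 'b', 'b', 'b']     # class labels
    print(round(symmetric_uncertainty(feature, target), 3))
```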

T-RELEVANCE
The relevance between a feature Fi ∈ F and the target concept C is referred to as the T-Relevance of Fi and C, and is denoted by SU(Fi, C). If SU(Fi, C) is greater than a predetermined threshold, then Fi is a strong T-Relevance feature. After the relevance values are computed, the irrelevant attributes (those below the threshold) are removed.

F-CORRELATION
The correlation between any pair of features Fi and Fj (Fi, Fj ∈ F, i ≠ j) is called the F-Correlation of Fi and Fj, and is denoted by SU(Fi, Fj). The same symmetric uncertainty equation used for finding the relevance between a feature and the target class is applied again to find the similarity between two attributes with respect to each label.

MINIMUM SPANNING TREE


Viewing features Fi and Fj as vertices, and SU(Fi, Fj) (i ≠ j) as the weight of the edge between vertices Fi and Fj, a weighted complete graph G = (V, E) is constructed. As symmetric uncertainty is symmetric, the F-Correlation SU(Fi, Fj) is symmetric as well, and thus G is an undirected graph. The complete graph G reflects the correlations among all the target-relevant features.
Unfortunately, graph G has k vertices and k(k-1)/2 edges. For high-dimensional data it is heavily dense, and the edges with different weights are strongly interwoven. Moreover, the decomposition of a complete graph is NP-hard [15]. Thus, for graph G we build an MST, which connects all vertices such that the sum of the weights of the edges is minimal, using the well-known Prim algorithm [14]. The weight of edge (Fi, Fj) is the F-Correlation SU(Fi, Fj). After building the MST, we remove from it each edge whose weight is smaller than both of the T-Relevances SU(Fi, C) and SU(Fj, C). Each deletion results in two disconnected trees T1 and T2.
This can be illustrated by an example. Suppose fig.2 shows the MST generated from the complete graph. We first traverse all the edges and then decide to remove the edge (F0, F4), because its weight SU(F0, F4) = 0.2 is smaller than both SU(F0, C) = 0.6 and SU(F4, C) = 0.7. This cut partitions the MST into two clusters. The details of the FAST algorithm are shown in Algorithm 1.
Fig.2. Example of clustering: an MST over features F0-F6, where edge weights are F-Correlations SU(Fi, Fj) and each node is annotated with its T-Relevance SU(Fi, C) (e.g., SU(F0, C) = 0.6, SU(F4, C) = 0.7 and SU(F0, F4) = 0.2).


ALGORITHM 1: FAST

Inputs: D(F1, F2, ..., Fm, C) - the given data set
        θ - the T-Relevance threshold.
Output: S - selected feature subset.

//------- Part 1: Irrelevant Feature Removal -------
1  for i = 1 to m do
2      T-Relevance = SU(Fi, C)
3      if T-Relevance > θ then
4          S = S ∪ {Fi};

//------- Part 2: Minimum Spanning Tree Construction -------
5  G = NULL;  // G is a complete graph
6  for each pair of features {Fi, Fj} ⊂ S do
7      F-Correlation = SU(Fi, Fj)
8      Add Fi and/or Fj to G with F-Correlation as the weight of the corresponding edge;
9  minSpanTree = Prim(G);  // Using the Prim algorithm to generate the minimum spanning tree

//------- Part 3: Tree Partition and Representative Feature Selection -------
10 Forest = minSpanTree
11 for each edge Eij ∈ Forest do
12     if SU(Fi, Fj) < SU(Fi, C) ∧ SU(Fi, Fj) < SU(Fj, C) then
13         Forest = Forest - Eij
14 S = ∅
15 for each tree Ti ∈ Forest do
16     FR = argmax Fk∈Ti SU(Fk, C)
17     S = S ∪ {FR};
18 return S
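The three parts of Algorithm 1 can be sketched compactly as below. This is an illustrative Python transcription, not the authors' implementation; it assumes the symmetric_uncertainty() helper from the earlier sketch, discretized feature values, and networkx's Prim-based minimum spanning tree.

```python
import networkx as nx

def fast_feature_selection(features, target, threshold):
    """Sketch of FAST: irrelevant-feature removal, Prim MST, tree partitioning.

    `features` maps a feature name to its (discretized) value list, `target`
    is the class label list, and `threshold` is the T-Relevance cut-off.
    Relies on symmetric_uncertainty() defined in the earlier sketch.
    """
    # Part 1: keep only features whose T-Relevance exceeds the threshold.
    relevance = {f: symmetric_uncertainty(v, target) for f, v in features.items()}
    relevant = [f for f, su in relevance.items() if su > threshold]

    # Part 2: complete graph weighted by F-Correlation, then a Prim MST.
    g = nx.Graph()
    g.add_nodes_from(relevant)
    for i, fi in enumerate(relevant):
        for fj in relevant[i + 1:]:
            g.add_edge(fi, fj, weight=symmetric_uncertainty(features[fi], features[fj]))
    forest = nx.minimum_spanning_tree(g, weight="weight", algorithm="prim")

    # Part 3: cut edges whose F-Correlation is below both endpoints' T-Relevance,
    # then keep the most relevant feature of each remaining tree (cluster).
    for fi, fj, d in list(forest.edges(data=True)):
        if d["weight"] < relevance[fi] and d["weight"] < relevance[fj]:
            forest.remove_edge(fi, fj)
    return [max(cluster, key=lambda f: relevance[f])
            for cluster in nx.connected_components(forest)]
```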
TIME COMPLEXITY
The first part of the algorithm has a linear time complexity () in terms of the number of features . When 1 < , the second part
of the algorithm firstly constructs a complete graph from relevant features and the complexity is (2), and then generates a MST
from the graph using Prim algorithm whose time complexity is (2). The third part partitions the MST and chooses the
representative features with the complexity of (). Thus when 1 < , the complexity of the algorithm is (+2).

CONCLUSION
The FAST cluster-based subset selection algorithm involves three important steps: 1) removal of irrelevant features; 2) elimination of redundant features using a minimum spanning tree; and 3) partitioning of the MST and collection of the selected representative features. Each cluster consists of redundant features and is treated as a single feature, so that dimensionality is reduced. The feature subset selection algorithm (FAST) has been tested on available high-dimensional image, microarray, and text data sets. The clustering-based strategy of FAST produces a subset of useful and independent features. The FAST algorithm can efficiently and effectively deal with both irrelevant and redundant features, and obtain a good feature subset.

REFERENCES:

[1] Qinbao Song, Jingjie Ni and Guangtao Wang, A Fast Clustering-Based Feature Subset Selection Algorithm for High Dimensional Data, IEEE Transactions on Knowledge and Data Engineering, vol. 25, no. 1, 2013.
[2] Almuallim H. and Dietterich T.G., Algorithms for Identifying Relevant Features, In Proceedings of the 9th Canadian
Conference on AI, pp 38-45,1992.
[3] Kira K. and Rendell L.A., The feature selection problem: Traditional methods and a new algorithm, In Proceedings of
Ninth National Conference on Artificial Intelligence, pp 129-134, 1992.
[4] Koller D. and Sahami M., Toward optimal feature selection, In Proceedings of International Conference on Machine Learning,
pp 284-292, 1996.
[5] Kononenko I., Estimating Attributes: Analysis and Extensions of RELIEF, In Proceedings of the 1994 European Conference on
Machine Learning, pp171-182, 1994.
[6] Hall M.A., Correlation-Based Feature Subset Selection for Machine Learning, Ph.D. dissertation Waikato, New Zealand: Univ.
Waikato, 1999.
[7] Yu L. and Liu H., Feature selection for high-dimensional data: a fast correlation-based filter solution, in Proceedings of 20th
International Conference on Machine Leaning, 20(2), pp 856-863, 2003.
[8] Yu L. and Liu H., Efficient feature selection via analysis of relevance and redundancy, Journal of Machine Learning Research,
10(5), pp 1205-1224, 2004.
[9] Fleuret F., Fast binary feature selection with conditional mutual Information,Journal of Machine Learning Research, 5, pp
1531-1555, 2004.
[10] Hall M.A. and Smith L.A., Feature Selection for Machine Learning: Comparing a Correlation-Based Filter Approach to the
Wrapper, In Proceedings of the Twelfth international Florida Artificial intelligence Research Society Conference, pp 235-239,
1999.
[11] Press W.H., Flannery B.P., Teukolsky S.A. and Vetterling W.T., Numerical recipes in C. Cambridge University Press,
Cambridge, 1988.
[12] Zhao Z. and Liu H., Searching for interacting features, In Proceedings of the 20th International Joint Conference on AI, 2007.
[13] Zhao Z. and Liu H., Searching for Interacting Features in Subset Selection, Journal Intelligent Data Analysis, 13(2), pp 207-228, 2009.
[14] Prim R.C., Shortest connection networks and some generalizations, Bell System Technical Journal, 36, pp 1389-1401, 1957.
[15] Garey M.R. and Johnson D.S., Computers and Intractability: a Guide to the Theory of Np-Completeness. W. H. Freeman & Co,
1979.
[16] Almuallim H. and Dietterich T.G., Learning boolean concepts in the presence of many irrelevant features, Artificial
Intelligence, 69(1-2), pp 279-305, 1994.
[17] Arauzo-Azofra A., Benitez J.M. and Castro J.L., A feature set measure based on relief , In Proceedings of the fifth international
conference on Recent Advances in Soft Computing, pp 104-109, 2004
[18] Baker L.D. and McCallum A.K., Distributional clustering of words for text classification, In Proceedings of the 21st Annual
international ACM SIGIR Conference on Research and Development in information Retrieval, pp 96-103, 1998.
[19] Battiti R., Using mutual information for selecting features in supervised neural net learning, IEEE Transactions on Neural
Networks, 5(4), pp 537- 550, 1994.
771
www.ijergs.org

International Journal of Engineering Research and General Science Volume 2, Issue 6, October-November, 2014
ISSN 2091-2730

[20] Bell D.A. and Wang, H., A formalism for relevance and its application in feature subset selection, Machine Learning, 41(2), pp
175-195, 2000.
[21] Biesiada J. and Duch W., Feature selection for high-dimensional data: a Pearson redundancy based filter, Advances in Soft Computing, 45, pp 242-249, 2008.
[22] Butterworth R., Piatetsky-Shapiro G. and Simovici D.A., On Feature Selection through Clustering, In Proceedings of the Fifth
IEEE international Conference on Data Mining, pp 581-584, 2005.
[23] Cardie, C., Using decision trees to improve case-based learning, In Proceedings of Tenth International Conference on Machine
Learning, pp 25-32, 1993.
[24] Chanda P., Cho Y., Zhang A. and Ramanathan M., Mining of Attribute Interactions Using Information Theoretic Metrics, In
Proceedings of IEEE international Conference on Data Mining Workshops, pp 350-355, 2009


Extending Knowledge Bases using various Neural Network Models - A Survey


1Sagarika Sahoo, 2Avani Jadeja
1M.E. Scholar, Computer Engineering, Hashmukh Goswami College of Engineering, GTU
2Assistant Professor, Computer Engineering, Hashmukh Goswami College of Engineering, GTU

Abstract: Knowledge bases are an important resource for easily accessible, systematic relational knowledge. They provide the benefit of organizing knowledge in relational form but suffer from incompleteness with respect to new entities and relationships. Moreover, each of them (e.g., Freebase, WordNet, etc.) is based on a different, strictly symbolic framework, which makes it hard to use their data for other purposes in the field of Artificial Intelligence (AI), such as natural language processing (word-sense disambiguation, natural language understanding, ...), vision (scene classification, image semantic annotation, ...) or collaborative filtering. Much work has been done on relation extraction from knowledge bases and on extending knowledge bases using various Neural Network models. This paper provides a survey of approaches for extending knowledge bases using various Neural Network models, along with the ideas and strengths of those models.

Key Words: Knowledge bases, semi-supervised learning, score function, entity vectors.


I. INTRODUCTION

The fundamental challenge for AI has always been to gather, organize and make intelligent use of the colossal amounts of information generated daily. Recent developments and collaborative processes have accomplished part of this task by creating ontologies and knowledge bases such as WordNet [1], Yago [2] or the Google Knowledge Graph, which are extremely useful resources for query expansion [3], coreference resolution [4], question answering (Siri), information retrieval, and for providing structured knowledge to users. Much work has focused on extending knowledge bases using patterns or classifiers applied to large text corpora. However, much of the recognizable knowledge is not expressed in large text corpora; because of this, knowledge bases suffer from incompleteness and a lack of reasoning capability.
To take advantage of these knowledge bases, much work has focused on relation extraction from them and on extending them using various Neural Network models. The aim of this paper is to provide an overview of the various methodologies used in extending knowledge bases.

II. STRUCTURAL EMBEDDING OF KNOWLEDGE BASES

The main idea [8] behind the structural embedding of KBs is the following:
(i) Entities can be modelled in a d-dimensional vector space, termed the embedding space. The ith entity is assigned a vector Ei ∈ R^d.
(ii) Within that embedding space, for any given relation type, there is a specific similarity measure that captures that relation between entities. For example, the "part of" relation would use one measure of similarity, whereas "similar to" would use another. Note that these similarities are not generally symmetric, as e.g. "part of" is not a symmetric relation. This is modelled by assigning to the kth relation a pair Rk = (Rlhs,k, Rrhs,k), where Rlhs,k and Rrhs,k are both d×d matrices. The similarity function for a given pair of entities is thus defined as:

Sk(Ei, Ej) = ||Rlhs,k Ei - Rrhs,k Ej||p

using the p-norm. In this work p = 1 is chosen, due to the simplicity of gradient learning in that case. That is, the entity embedding vectors Ei and Ej are transformed by the corresponding left-hand and right-hand relation matrices for relation Rk, and similarity is then measured according to the 1-norm distance in the transformed embedding space.
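As an illustration (not the authors' code), the similarity Sk can be evaluated with a few lines of numpy; the matrices and embeddings below are random placeholders.

```python
import numpy as np

def relation_similarity(E_i, E_j, R_lhs_k, R_rhs_k, p=1):
    """S_k(E_i, E_j) = ||R_lhs,k E_i - R_rhs,k E_j||_p (lower means more similar)."""
    return float(np.linalg.norm(R_lhs_k @ E_i - R_rhs_k @ E_j, ord=p))

if __name__ == "__main__":
    d = 4
    rng = np.random.default_rng(0)
    E = rng.normal(size=(10, d))                     # toy embeddings for 10 entities
    R_lhs, R_rhs = rng.normal(size=(d, d)), rng.normal(size=(d, d))
    print(relation_similarity(E[0], E[3], R_lhs, R_rhs))
```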


III. VARIOUS MODELS

The different models compute a score of how probable it is that two entities are in a certain relationship. Let e1, e2 ∈ R^d be the vector representations of two entities; the different Neural Network based functions then predict the relationship of the two entities in the knowledge base. Each model assigns a score to a triplet using a function g measuring how likely it is that the triplet is correct. We now introduce several related models in increasing order of expressiveness and complexity.

A. DISTANCE MODEL
The model of Bordes et al. [8] scores relationships by mapping the left and right entities to a common space using a relationship-specific mapping matrix and measuring the L1 distance between the two. The scoring function for each triplet has the following form:

g(e1, R, e2) = ||WR,1 e1 - WR,2 e2||1

where WR,1, WR,2 ∈ R^{d×d} are the parameters of relation R's classifier. This similarity-based model scores correct triplets lower (entities that are most certainly in a relation have distance 0); all the other functions below are trained to score correct triplets higher. The function f is trained to rank the training samples below all other triplets in terms of 1-norm distance. It is parameterized by the following neural network:

f(el_i, r_i, er_i) = ||Rlhs_{r_i} E v(el_i) - Rrhs_{r_i} E v(er_i)||1

Rlhs and Rrhs are both d×d×Dr tensors, where e.g. Rlhs_i means selecting the ith component along the third dimension of Rlhs, resulting in a d×d matrix. E is a d×De matrix containing the embeddings of the De entities, and the function v(n): {1, ..., De} → R^{De} maps the entity dictionary index n into a sparse vector of dimension De consisting of all zeros and a one in the nth dimension.

B. SINGLE LAYER MODEL


The Single Layer model [20] tries to alleviate the problems of the distance model through multitask learning and semi-supervised learning. It connects the entity vectors implicitly through the nonlinearity of a standard neural network. Since the architecture deals with raw words, the first layer has to map words into real-valued vectors for processing by subsequent layers of the Neural Network.
Lookup-Table Layer: Each word i ∈ D is embedded into a d-dimensional space using a lookup table LTW(·):

LTW(i) = Wi

where W ∈ R^{d×|D|} is a matrix of parameters to be learnt, Wi ∈ R^d is the ith column of W and d is the word vector size (wsz) to be chosen by the user. In the first layer of the architecture an input sentence {s1, s2, ..., sn} of n words in D is thus transformed into a series of vectors {Ws1, Ws2, ..., Wsn} by applying the lookup table to each of its words. The structure of the model is shown in figure 1. When a word is decomposed into K elements (features), it can be represented as a tuple i = {i1, i2, ..., iK} ∈ D1 × ... × DK, where Dk is the dictionary for the kth element. We associate with each element a lookup table LTWk(·) with parameters Wk ∈ R^{dk×|Dk|}, where dk ∈ N is a user-specified vector size. Such a word is then embedded in a d' = Σk dk dimensional space by concatenating all lookup-table outputs:

LTW1,...,WK(i)T = (LTW1(i1)T, ..., LTWK(iK)T)

The scoring function has the following form:

g(e1, R, e2) = uT f(WR,1 e1 + WR,2 e2)

where f = tanh, and WR,1, WR,2 ∈ R^{k×d} and u ∈ R^{k×1} are the parameters of relation R's scoring function.
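A minimal numpy sketch of the single-layer scoring function is shown below; the column-lookup line mimics LTW(i) = Wi, and all sizes and arrays are illustrative assumptions rather than values from [20].

```python
import numpy as np

def single_layer_score(e1, e2, W_R1, W_R2, u):
    """g(e1, R, e2) = u^T tanh(W_R1 e1 + W_R2 e2), one score per triplet."""
    return float(u.T @ np.tanh(W_R1 @ e1 + W_R2 @ e2))

if __name__ == "__main__":
    d, k = 8, 5                              # embedding and hidden sizes (illustrative)
    rng = np.random.default_rng(1)
    lookup = rng.normal(size=(d, 100))       # W: one d-dim column per dictionary entry
    e1, e2 = lookup[:, 3], lookup[:, 42]     # LT_W(i) = W_i, i.e. a column lookup
    W_R1, W_R2 = rng.normal(size=(k, d)), rng.normal(size=(k, d))
    u = rng.normal(size=(k, 1))
    print(single_layer_score(e1, e2, W_R1, W_R2, u))
```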


C. HADAMARD MODEL
This model was introduced by Bordes et al. [10] and tackles the issue of weak entity vector interaction through multiple matrix products followed by Hadamard products. It differs from the other models in this comparison in that it represents each relation simply as a single vector that interacts with the entity vectors through several linear products, all of which are parameterized by the same parameters.

Figure 1: A general deep NN architecture for NLP. Given an input sentence, the NN outputs class probabilities for one chosen word.

The semantic matching energy function has a parallel structure: first, the pairs (lhs, rel) and (rel, rhs) are combined separately, and then these semantic combinations are matched. More precisely, the model can be represented as:

Elhs(rel) = (Went,l Elhs) ⊙ (Wrel,l Erel) + bl
Erhs(rel) = (Went,r Erhs) ⊙ (Wrel,r Erel) + br
h(Elhs(rel), Erhs(rel)) = Elhs(rel) · Erhs(rel)

where Went,l, Wrel,l, Went,r and Wrel,r are d×d weight matrices, bl and br are d-dimensional bias vectors, and ⊙ denotes the element-wise vector product. This bilinear parameterization is appealing because the operation ⊙ allows encoding conjunctions between lhs and rel, and between rhs and rel.


D. BILINEAR MODEL
The fourth model [11, 9] fixes the issue of weak entity vector interaction through a relation-specific bilinear form. The scoring function is as follows: g(e1, R, e2) = e1^T WR e2, where WR ∈ R^{d×d} are the only parameters of relation R's scoring function. This is a big improvement over the two previous models as it incorporates the interaction of two entity vectors in a simple and efficient way.
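The scoring functions of the Hadamard and Bilinear models can be sketched as follows; this is an illustration with randomly initialized parameters, not the trained models of [10], [11] or [9].

```python
import numpy as np

def hadamard_score(E_lhs, E_rel, E_rhs, W_ent_l, W_rel_l, W_ent_r, W_rel_r, b_l, b_r):
    """Semantic matching energy: combine (lhs, rel) and (rel, rhs), then match."""
    lhs_rel = (W_ent_l @ E_lhs) * (W_rel_l @ E_rel) + b_l    # element-wise product
    rhs_rel = (W_ent_r @ E_rhs) * (W_rel_r @ E_rel) + b_r
    return float(lhs_rel @ rhs_rel)                          # dot-product matching

def bilinear_score(e1, e2, W_R):
    """g(e1, R, e2) = e1^T W_R e2 with a relation-specific d x d matrix."""
    return float(e1 @ W_R @ e2)

if __name__ == "__main__":
    d = 6
    rng = np.random.default_rng(2)
    e1, e2, rel = (rng.normal(size=d) for _ in range(3))
    mats = [rng.normal(size=(d, d)) for _ in range(5)]
    print(hadamard_score(e1, rel, e2, mats[0], mats[1], mats[2], mats[3],
                         rng.normal(size=d), rng.normal(size=d)))
    print(bilinear_score(e1, e2, mats[4]))
```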

CONCLUSION
We presented four Neural Network models which enhance knowledge bases without any external data other than the knowledge base itself. The models consider the structural embedding of a knowledge base as entities (objects) and the relationships between them and, using the scoring function of their respective Neural Network model, unseen relations can be found. Much work is still in progress to improve the above models so as to extend knowledge bases further and make them more useful for various purposes in the field of AI.

REFERENCES
[1] G.A. Miller. WordNet: A Lexical Database for English. Communications of the ACM, 1995.
[2] M. Suchanek, G. Kasneci, and G. Weikum. Yago: a core of semantic knowledge. In Proceedings of the 16th international conference on World Wide Web, 2007.
[3] J. Graupmann, R. Schenkel, and G. Weikum. The SphereSearch engine for unified ranked retrieval of heterogeneous XML and
web documents. In Proceedings of the 31st international conference on Very large data bases, VLDB, 2005.
[4] Ng and C. Cardie. Improving machine learning approaches to coreference resolution. In ACL, 2002.
[5] R. Snow, D. Jurafsky, and A. Y. Ng. Learning syntactic patterns for automatic hypernym discovery. In NIPS, 2005.
[6] A. Fader, S. Soderland, and O. Etzioni. Identifying relations for open information extraction. In EMNLP, 2011.
[7] G. Angeli and C. D. Manning. Philosophers are mortal: Inferring the truth of unseen facts. In CoNLL, 2013.
[8] A. Bordes, J. Weston, R. Collobert, and Y. Bengio. Learning structured embeddings of knowledge bases. In AAAI, 2011.
[9] R. Jenatton, N. Le Roux, A. Bordes, and G. Obozinski. A latent factor model for highly multi-relational data. In NIPS, 2012.
[10] A. Bordes, X. Glorot, J. Weston, and Y. Bengio. Joint Learning of Words and Meaning Representations for Open-Text Semantic
Parsing. AISTATS, 2012.
[11] I. Sutskever, R. Salakhutdinov, and J. B. Tenenbaum. Modelling relational data using Bayesian clustered tensor factorization. In
NIPS, 2009.
[12] M. Ranzato and A. Krizhevsky G. E. Hinton. Factored 3-Way Restricted Boltzmann Machines For Modeling Natural Images.
AISTATS, 2010.
[13] D. Yu, L. Deng, and F. Seide. Large vocabulary speech recognition using deep tensor neural networks. In INTERSPEECH, 2012.
[14] R. Socher, A. Perelygin, J. Wu, J. Chuang, C. D. Manning, A. Ng, and C. Potts. Recursive deep models for semantic
compositionality over a sentiment treebank. In EMNLP, 2013.
[15] A. Yates, M. Banko, M. Broadhead, M. J. Cafarella, O. Etzioni, and S. Soderland. Textrunner: Open information extraction on the
web. In HLT-NAACL (Demonstrations), 2007.
[16] M. Nickel, V. Tresp, and H. Kriegel. A three-way model for collective learning on multirelational data. In ICML, 2011.
[17] A. Bordes, N. Usunier, A. García-Durán, J. Weston, and O. Yakhnenko. Irreflexive and hierarchical relations as translations. CoRR, abs/1304.7158, 2013.
[18] J. Turian, L. Ratinov, and Y. Bengio. Word representations: a simple and general method for semi-supervised learning. In Proceedings of ACL, pages 384-394, 2010.
[19] R. Socher, B. Huval, C. D. Manning, and A. Y. Ng. Semantic Compositionality Through Recursive Matrix-Vector Spaces. In
EMNLP, 2012.
[20] R. Collobert and J.Weston. A unified architecture for natural language processing: deep neural networks with multitask learning.
In ICML, 2008.
[21] R. Socher, E. H. Huang, J. Pennington, A. Y. Ng, and C. D. Manning. Dynamic Pooling and Unfolding Recursive Autoencoders
for Paraphrase Detection. In NIPS. MIT Press, 2011.
[22] E. H. Huang, R. Socher, C. D. Manning, and A. Y. Ng. Improving Word Representations via Global Context and Multiple Word
Prototypes. In ACL, 2012.
[23] Sagarika Sahoo, Nishtha Tripathi, Avani Jadeja. Neural Tensor Networks for capturing new facts from knowledge bases. International Journal of Computer Science and Mobile Computing, Vol. 3, Issue 10, October 2014, pg. 860-863.


Fuzzy Based Model Predictive Control for a Quadruple Tank System


Harihara Subramaniam S C*, Harish C
St. Joseph's College of Engineering, Chennai
*scharihara@gmail.com

ABSTRACT: This paper presents a control strategy that combines a predictive controller and fuzzy logic. A fuzzy based model predictive control has been employed to cope with multivariable systems. The fuzzy controller acts as a predictor that makes the Model Predictive Control (MPC) work with future data. The performance of this control strategy is studied using a quadruple tank problem. The results prove that incorporating fuzzy logic into predictive control outperforms MPC and PID controllers.

Keywords: Fuzzy logic, MPC, Quadruple tank, PID, ANFIS, Neural network, MIMO

1. INTRODUCTION
The basic need for a controller lies in the aim of bringing the process variable to the given set point value in a smooth way. Besides this, the controller should be flexible with respect to changes in set point and disturbances. In the modern control scenario, various intelligent controllers have emerged, such as predictive controllers, fuzzy control, neural networks and expert systems. These controllers can either be used independently or be used in combination to obtain the best control solution.
The need to achieve tighter control of strongly nonlinear processes has led to a more general MPC formulation in which a nonlinear dynamic model is used for prediction. One technique is to use non-linear approximators such as Neural Networks (NNs) and Fuzzy Logic (FL), which have been used in various engineering domains. As subsets of Artificial Intelligence, both emulate the human way of using past experiences, adapting accordingly and generalizing. When applying model based predictive control with a Takagi-Sugeno fuzzy model, it is always important how the fuzzy sets and corresponding membership functions are chosen.
In most fuzzy systems, fuzzy if-then rules are obtained from a human expert. However, this method has great disadvantages because human expertise can lead to erroneous rules [1]. For this reason, NNs were incorporated into fuzzy systems, which can acquire knowledge automatically through the learning algorithms of NNs. These systems are called Neuro-Fuzzy systems and have advantages over plain fuzzy systems. In order to achieve faster response and accurate set point tracking, the predictive control is combined with a fuzzy controller. This paper uses a predictive control strategy based on a fuzzy controller with a self-learning capability for achieving the prescribed control objectives.

2. MODEL PREDICTIVE CONTROL

2.1 Classical MPC


An advanced control methodology for discrete-time applications is Model Predictive Control (MPC), which is the most commonly used technique in chemical process industries [2]. The basic idea behind MPC is that at each time step k, an optimization problem is solved using an objective function based on output predictions over a prediction horizon of P samples. A sequence of manipulated variable moves is selected over a control horizon of M control moves. Although M moves are optimized, only the first move is implemented.
The MPC optimization is performed for a sequence of hypothetical future control moves over the control horizon and only the first
move is implemented. The problem is solved again at time k+1 with the measured output y(k+1) as the new starting point.
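A minimal receding-horizon sketch of this idea is given below, assuming a toy first-order discrete model y(k+1) = a*y(k) + b*u(k), an unconstrained quadratic cost and scipy's general-purpose optimizer; it is not the MPC formulation used later in the paper, only an illustration of "optimize M moves, apply the first".

```python
import numpy as np
from scipy.optimize import minimize

# Toy first-order plant y(k+1) = a*y(k) + b*u(k); a, b and horizons are illustrative.
a, b = 0.9, 0.1
P, M = 10, 3              # prediction and control horizons
lam = 0.05                # move-suppression weight

def predict(y0, u_seq):
    """Simulate the model P steps ahead; the last move is held beyond M."""
    y, out = y0, []
    for k in range(P):
        u = u_seq[min(k, M - 1)]
        y = a * y + b * u
        out.append(y)
    return np.array(out)

def mpc_step(y0, u_prev, setpoint):
    """Solve the finite-horizon problem and return only the first move."""
    def cost(u_seq):
        err = setpoint - predict(y0, u_seq)
        du = np.diff(np.concatenate(([u_prev], u_seq)))
        return float(err @ err + lam * (du @ du))
    res = minimize(cost, np.full(M, u_prev))
    return float(res.x[0])

y, u = 0.0, 0.0
for k in range(30):                      # receding-horizon loop
    u = mpc_step(y, u, setpoint=1.0)
    y = a * y + b * u                    # apply the first move, measure, repeat
print(round(y, 3))
```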

2.2 FUZZY BASED MPC


Predictive control is closely related to decision making, and decision making can be carried out effectively using fuzzy logic. In our paper, the system consists of two inputs and two outputs and hence the rule set is as follows:
Rij: if y(k) is A1i and u(k) is A2j then yp(k+1) = dij1 and yq(k+1) = dij2
where Rij is the fuzzy implication and dij1, dij2 are the output parameters of the fuzzy rule (a small predictor sketch follows below). Using fuzzy inference, the rule set can be framed for the system as per our requirements. However, framing rule sets manually for multivariable systems is error prone and time consuming, so automatic estimation of the fuzzy rules can be achieved by training and testing a neural network model. This neuro-fuzzy inference system is an adaptive estimator model which combines decision making and artificial intelligence.
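The rule structure above can be illustrated with a zero-order Takagi-Sugeno one-step-ahead predictor; the Gaussian membership functions, centres and consequent values in this Python sketch are arbitrary placeholders (in the paper they come from the ANFIS training described later), and only one output yp is shown.

```python
import numpy as np

def ts_predict(y_k, u_k, centers_y, centers_u, sigma, d):
    """Zero-order Takagi-Sugeno one-step prediction yp(k+1).

    Rule R_ij: if y(k) is A1_i and u(k) is A2_j then yp(k+1) = d[i, j].
    Each antecedent membership is Gaussian; the prediction is the
    firing-strength-weighted average of the rule consequents d[i, j].
    """
    mu_y = np.exp(-0.5 * ((y_k - centers_y) / sigma) ** 2)   # A1_i(y(k))
    mu_u = np.exp(-0.5 * ((u_k - centers_u) / sigma) ** 2)   # A2_j(u(k))
    w = np.outer(mu_y, mu_u)                                 # rule firing strengths
    return float((w * d).sum() / w.sum())

if __name__ == "__main__":
    cy = np.array([0.0, 0.5, 1.0])        # fuzzy set centres for y(k)
    cu = np.array([0.0, 1.0])             # fuzzy set centres for u(k)
    d = np.array([[0.0, 0.2],             # consequents d_ij (would be learnt / ANFIS-tuned)
                  [0.4, 0.7],
                  [0.8, 1.0]])
    print(ts_predict(0.6, 0.8, cy, cu, sigma=0.4, d=d))
```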

3. AIR - CONDITIONING SYSTEM


Conventionally, a Comfort Air Conditioning (CAC) system is installed with a PID or ON/OFF control system. The control objective is focused on the indoor temperature, and energy efficiency is seldom considered; hence a more sophisticated control strategy is needed [3]. We have considered a SISO-type CAC as a case study and applied the predictive control strategy to it. The system considered is a first-order system without delay but with a negative gain. The model obtained using the complementary response method [4] is as follows:
G(s) = -0.634 / (300s + 1)

The negative gain indicates that energy consumption increases with decrease in internal temperature.

4. QUADRUPLE TANK SYSTEM


The quadruple tank process is a commonly used MIMO process in the modern control literature to illustrate the dynamics of a multivariable system and its non-linearity. It consists of four interconnected water tanks and two pumps. Its manipulated variables are the voltages to the pumps (ultimately the input flows) and the controlled variables are the water levels in the two lower tanks [5]. The quadruple-tank process can easily be built by using two double-tank processes. The overall setup of the system is as shown in figure 1. The transfer functions of the individual tanks [6] are as follows:
G1(s) = 0.0509 / (4.16s + 1) for tank 1
G2(s) = 0.0184 / ((11.46s + 1)(3.5s + 1)) for tank 2
G3(s) = 0.0293 / ((7.63s + 1)(4.17s + 1)) for tank 3
G4(s) = 0.03706 / (3.506s + 1) for tank 4
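As a quick, illustrative check of the reconstructed open-loop models (not the FMPC simulation of the paper), the step responses can be computed with scipy; time units are assumed to be seconds.

```python
import numpy as np
from scipy import signal

# Transfer functions written as (numerator, denominator) polynomials in s;
# e.g. 0.0184 / ((11.46s + 1)(3.5s + 1)) has denominator np.polymul([11.46, 1], [3.5, 1]).
tanks = {
    "tank 1": ([0.0509], [4.16, 1]),
    "tank 2": ([0.0184], np.polymul([11.46, 1], [3.5, 1])),
    "tank 3": ([0.0293], np.polymul([7.63, 1], [4.17, 1])),
    "tank 4": ([0.03706], [3.506, 1]),
}

t = np.linspace(0, 60, 600)
for name, (num, den) in tanks.items():
    _, y = signal.step(signal.TransferFunction(num, den), T=t)
    print(f"{name}: step response after 60 s = {y[-1]:.4f}")
```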

Figure 1 Setup of quadruple tank system

5. ANFIS AS AN ESTIMATOR
ANFIS is a sophisticated tool for the fuzzy modelling procedure, in which the membership function parameters are computed in such a way that the associated fuzzy inference system can track the given input-output data [7]. It serves as a basis for building the fuzzy if-then rule set with appropriate membership functions to generate the input-output pairs. As a first step in designing an estimator, training data sets should be generated that consist of estimator inputs and desired output values. In our study, the input-output data of the MPC is taken and trained to generate the ANFIS structure. It is then loaded into the Fuzzy controller block and executed to observe the results. The steps in the ANFIS estimator design, utilizing the SIMULINK fuzzy logic toolbox, are as follows:
1. Training data is generated and loaded into the Editor GUI.
2. The number of input MFs and the types of input and output MFs are chosen. Thus, the initial ANFIS structure is formed.
3. The FIS file is generated for the training data and exported to the model file.
4. The ANFIS structure is imported into the Fuzzy controller block, saved and simulated.
The response of PID, MPC and FMPC applied to SISO system is shown in figures 2, 3 and 4 respectively.

Figure 2- PID response SISO



Figure 3- MPC response SISO

Figure 4- FMPC response SISO


The developed controller was checked for servo and regulatory operation to verify its ability to track changes in set point and reject unwanted disturbances. As far as the CAC is concerned, set point tracking is mandatory because the set point temperature for an AC keeps differing from one day to another.
The set point tracking is shown in figure 5 and disturbance rejection is shown in figure 6.

Figure 5- Set point tracking

Figure 6- Disturbance rejection


From the above simulations, it is observed that PID controller produces slow response. Though with bearable overshoots, MPC was
able to outperform PID in terms of faster settling. FMPC did not produce any overshoot and had a smooth response.

6. SIMULATED RESULTS FOR QUADRUPLE TANK

The following figures show the responses for PID, MPC, FMPC controllers incorporated for a MIMO system.

Figure 7- PID response MIMO

Figure 8- MPC response MIMO

Figure 9- MPC response MIMO

Figure 10- FMPC response (tank 1)


Figure 11- FMPC response (tank 2)

Figure 12- Set point tracking

Figure 13- Disturbance rejection


The results clearly show that interacting MIMO systems cannot be controlled with good accuracy using a PID controller. MPC can provide a feasible solution for controlling MIMO systems, but it cannot handle non-linear models directly. FMPC can deal with non-linear models directly and provides a faster, offset-free and smooth response.

7. CONCLUSION
For the CAC system, the PID controller was very slow. MPC was able to provide a faster response but with small overshoots. The incorporation of fuzzy logic into MPC gave a smoother response that was both devoid of overshoots and faster. As far as the quadruple tank system is concerned, PID was not able to handle the interaction between inputs and outputs. MPC and FMPC both gave a feasible solution for the MIMO system, but FMPC has the advantage of handling non-linear models directly.

REFERENCES:
1. Katsuhiko Ogata, Modern Control Engineering (3rd Edition), Beijing: Electronics Industry, 2000.

2. Wayne Bequette, Process Control, Eastern Economy Edition, 2003.

3.Yonghong Huang, Nianping Li, Yixun Yi and Jihong Zhan Fuzzy Model Predictive Control for a Comfort Air-Conditioning
System, International Conference on Automation Science and Engineering, 2006.

4. Ana Paula Batista, Maria Eugenia de Almeida Freitas, Euler Cunha Martins and Fabio Goncalves Jota Optimization of the
operation of an Air-Conditioning system by means of a Distributed Control System, International Building Performance Simulation
Association,2007.

5. Ivan Drca, Nonlinear Model Predictive Control of the Four Tank Process, Masters Degree Project, KTH Electrical Engineering,
Stockholm, Sweden, July 2007, XR-EE-RT 2007:016.

6. Prof D. Angeline Vijula, Anu K, Honey Mol P, Poorna Priya S, Mathematical Modelling of Quadruple Tank System, International Journal of Emerging Technology and Advanced Engineering, ISSN 2250-2459, Volume 3, Issue 12, December 2013.

7. S. N. Engin, J. Kuvulmaz, V. E. Omurlu, Fuzzy control of an ANFIS model representing a nonlinear liquid-level system, Neural
Computing and Applications 13(3):202-210, 2004


The Sun and its Activities


Geeta Rana1, M K Yadav2
1Department of Physics, K L Mehta Dayanand College for Women, Faridabad, Haryana
2Department of Humanities and Applied Sciences, YMCA University of Science and Technology, Faridabad, Haryana

ABSTRACT: The objective of this paper, 'The Sun and its Activities', is to explain solar activities. The paper aims to provide the reader with the main elements that characterize the peculiar structures, the prominences and their environment, as deduced from observations of solar radiation and solar activity.
As modern life is very technology dependent, studying solar activity has become a major concern. It is well known that the technology of satellites for telecommunication and human space activities is very sensitive to solar eruptions. For all these reasons, the past decade has seen the development of space weather as a new branch of science aimed at forecasting solar activity and its consequences on earth.
Key words: solar activity, solar eruptions, solar radiations

INTRODUCTION
This topic was selected for review for several reasons. Firstly, solar activities and solar flares do not affect only a particular region on earth; they pertain to the entire earth. Secondly, people are not very aware of the existence of solar flares produced by the Sun, so the immense effects of solar flares should be known to everyone.
The first key concept is to understand what solar activities are, how they occur, and what their effects on earth are.
The Sun plays a fundamental role in our life on Earth. The electromagnetic radiation emitted by the Sun is our primary source of energy. This electromagnetic radiation heats the Earth's surface, which leads to temperature gradients and drives the climate system. The Sun also emits, continuously and sometimes explosively, plasma from its surface. This plasma, called the solar wind, carries solar magnetic fields. It forms the heliosphere, which acts as a magnetic shield against galactic cosmic rays. As galactic cosmic rays are not good for the earth or near-earth space, the intensity of these rays is naturally reduced, firstly by the heliosphere and secondly by the geomagnetic field (the Earth's magnetic field) (5).
The Sun appears much more complicated and active than a static hot plasma ball and exhibits a great variety of non-stationary active processes. Such transient non-stationary processes are known as solar activity, in contrast to the so-called quiet Sun. Solar activity includes transient and long-lived phenomena on the solar surface such as spectacular solar flares, sunspots, prominences, coronal mass ejections (CMEs), etc.


Although scientists had some limited knowledge of the existence of strange spots on the Sun since the early 17th century, it was only in the 19th century that they recognized that solar activity varies during the course of an 11-year solar cycle. Solar variability was later found to have many different manifestations, including the fact that the solar constant, or the total solar irradiance, TSI (the amount of total incoming solar electromagnetic radiation in all wavelengths per unit area at the top of the atmosphere), is not in fact constant.

THE SUN
The Sun is a huge gaseous globe situated at the center of the solar system, a star with an effective temperature of about 5800 K. It is almost perfectly spherical, having a diameter of about 1,392,684 km, around 109 times that of the earth, and its mass is 1.989×10^30 kilograms (approximately 330,000 times the mass of the Earth), which accounts for about 99.86% of the total mass of the Solar System. Chemically, about three quarters of the Sun's mass consists of hydrogen, while the rest is mostly helium, so it owes its energy to internal hydrogen-helium fusion reactions. All matter present in the Sun is in the form of gas and plasma because of its high temperature (11). This makes it possible for the Sun to rotate faster at its equator (about 25 days) than it does at higher latitudes (about 35 days near its poles). The differential rotation of the Sun's latitudes causes its magnetic field lines to become twisted together over time, which increases their magnetic field strength and makes them buoyant. As a result, these magnetic field loops rise and erupt from the Sun's surface, block the convective flow of energy, cool their region of the photosphere and thus trigger the formation of the Sun's dramatic sunspots and solar prominences. This twisting action creates the solar dynamo and an 11-year solar cycle of magnetic activity, as the Sun's magnetic field reverses itself about every 11 years.
The physics of convection near the surface of the Sun is also greatly influenced by the fact that the solar surface is a radiating surface, where the mode of energy transport suddenly changes from convective, with energy carried by moving fluid, to radiative, with energy carried by essentially free-streaming photons (3).
The solar magnetic field extends far beyond the Sun itself. The magnetized solar wind plasma carries the Sun's magnetic field into space and forms the interplanetary magnetic field.

Parts of Sun
The Sun mainly consists of three parts: the photosphere, the chromosphere and the corona. The photosphere is the visible surface of the Sun, having a temperature of about 6400 K. Above it there are regions which are transparent to light: the chromosphere and the corona. The chromosphere may be seen during eclipses; it extends about 2000 km above the photosphere and has a temperature of up to 50,000 K. The corona, having a temperature of about 1.5×10^6 K, is observable for more than 10^6 km but in fact has no apparent termination (10).
The Sun is surrounded by a hot, tenuous, irregular cloud of plasma called the solar corona. This hot corona continuously expands in space, creating the solar wind (a stream of charged particles). The magnetic field of the Sun leads to many effects, collectively called solar activity. Solar activity includes sunspots on the surface of the Sun, solar flares, and solar winds or CMEs (coronal mass ejections). Being the prime determinant of space weather, solar activity clearly has enormous technical, scientific, and financial impact on activities ranging from space exploration to civil aviation and everyday communication (4).


The Sun and Its Zones:

SUN SPOTS
The Sun is always active, and its variable activity can easily be observed through sunspots. The best-known example of this is the era of the Maunder minimum (the Little Ice Age), when the Sun nearly stopped forming sunspots (7).
There are some well-defined regions on the Sun's surface that appear darker than their surroundings because of their lower temperature; these are called sunspots. Here, convection is inhibited by strong magnetic fields, reducing energy transport from the hot interior to the surface. At a sunspot the magnetic field strength can be up to 3500 gauss (approximately 7000 times the average strength of the magnetic field at the earth's surface). The magnetic field causes strong heating in the corona, forming active regions (2).
Sunspots occur in the active regions of the Sun. The active regions, which are about 200,000 km (in longitude) and 50,000 km (in latitude) in size, contain a large number of flux tubes. Each of these tubes is confined magnetically and carries electric currents of 10^9 amperes or even more. Dissipation of their magnetic energy causes locally heated areas in the active regions, the so-called faculae, which have temperatures around 10,000 K compared to other parts of the solar surface. Because of their higher temperature, the faculae are a source of relatively strong UV radiation.
When the solar activity is high, it is called solar maximum, and when the solar activity is low, it is called solar minimum.
The number of sunspots is not constant but varies with the solar cycle. During solar maximum the number of sunspots increases and they move closer to the equator of the Sun. Sunspots usually exist in groups with two sets of spots of opposite magnetic polarity. One set will have positive or north magnetic field while the other set will have negative or south magnetic field. The field is stronger in the darker parts of the sunspots, the umbra. The field is weaker and more horizontal in the lighter parts, the penumbra. The magnetic polarity of the leading sunspot alternates at every solar cycle, so it is a north magnetic pole in one solar cycle and a south magnetic pole in the next one. Sunspots and solar activity also appear to cluster in active longitudes. It is noticed that new active regions grow in areas previously occupied by old active regions, and this can result in a periodic signal that is evident in the sunspot number record (12).

SOLAR FLARES
When the field lines become distorted, they are upthrust and ascend, and then they penetrate the surface of the Sun in a variety of spectacular forms, one of which is solar flares. In simpler terms, a great quantity of plasma from the surface of the Sun is released outwards. When this plasma returns back to the surface of the Sun, it encounters the denser material located in the chromosphere, the second uppermost layer of the Sun between the photosphere and the corona. This interaction ejects a great quantity of energy in the form of electromagnetic radiation, called solar flares, and more powerful events, i.e. coronal mass ejections. The energy released during a flare is typically ten million times greater than the energy released from a volcanic explosion. In fact, in just a few minutes a flare can heat material to many millions of degrees and can release as much energy as a billion megatons of TNT (13). Solar flares can occur everywhere on the Sun: in active regions, penumbras, on the boundaries of the magnetic network of the quiet Sun, and even in the network interior.
There are different types of flares, which are classified according to their intensity, such as X-class flares, M-class flares and C-class flares. The largest flares are the X-class flares. M-class flares are smaller and C-class flares are a tenth the intensity of the M-class flares (8).

CORONAL MASS EJECTIONS (CME)


A coronal mass ejection (CME) is another feature of the activity within the Sun, related to but not caused by solar flares. In a CME, plasma is emitted from the active centre, sometimes over a large region of the solar surface that may span longitudinal intervals of more than 90 degrees. The CMEs carry magnetic fields along with them, which tend to fill the heliosphere (the magnetically closed region around the solar system). These interplanetary magnetic fields slowly diffuse away. The rate of occurrence of CMEs correlates with solar activity, but the size scales of CMEs are much larger, and their latitude distributions different, than those of near-surface activity like flares or active regions (6). CMEs may be important for understanding Sun-Earth relations, because a CME changes the magnetic components of the heliosphere in an important way, and this happens over a fairly long period of time because of slow diffusion (2).
Coronal mass ejections are more likely to have a significant effect on our activities than solar flares because they carry more material into a larger volume of interplanetary space, increasing the likelihood that they will interact with the Earth. CMEs typically drive shock waves that produce energetic particles that can be damaging both to electronic equipment and to astronauts who venture outside the protection of the Earth's magnetic field (1).
While a flare alone produces high-energy particles near the Sun, a CME can reach the Earth and disturb the Earth's magnetosphere, setting off a geomagnetic storm. Often these storms produce surges in the power grid and static on the radio, and if the waves of energetic particles are strong enough, they can overload power grids and drown out radio signals. This type of activity can affect the region from ground to sky and ship to shore, and can also disturb navigational communication, military detection, and early warning systems.
Observing the ejection of CMEs from the Sun provides an early warning of geomagnetic storms. Recently, with SOHO, it has been possible to observe continuously the emission of CMEs from the Sun and to determine whether they are aimed at the Earth.


Effects on Earth
Solar activity impacts us in many ways. It poses hazards to our satellites in space, our technology on the ground, and even ourselves as we venture into space. Solar activity has well-known impacts on the Earth's magnetosphere, ionosphere and terrestrial climate.

Terrestrial Organisms
The impact of the solar cycle on living organisms has been investigated and found to have some connections with human health. The amount of ultraviolet UVB light at 300 nm reaching the Earth varies by as much as 400% over the solar cycle due to variations in the protective ozone layer. In the stratosphere, ozone is continuously regenerated by the splitting of O2 molecules by ultraviolet light. During a solar minimum, the decrease in ultraviolet light received from the Sun leads to a decrease in the concentration of ozone, allowing increased UVB to penetrate to the Earth's surface.

Radio Communication
Sky wave modes of radio communication operate by bending (refracting) radio waves (electromagnetic radiation) through
the Ionosphere. During the "peaks" of the solar cycle, the ionosphere becomes increasingly ionized by solar photons and cosmic rays.
This affects the path (propagation) of the radio wave in complex ways which can either facilitate or hinder local and long distance
communications. Forecasting of sky wave modes is of considerable interest to commercial marine, aircraft communications, amateur
radio operators, and shortwave broadcasters. These users utilize frequencies within the High Frequency or 'HF' radio spectrum which
are most affected by these solar and Ionospheric variances. Although TV and commercial radio broadcasts are rarely affected, longer
distance communication, like ground-to-air, ship-to-shore, Voice of America, and amateur radio, are frequently disrupted (9). A
number of military systems, like early warning, over-the-horizon radar, and submarine detection, are greatly hampered during times of
high solar activity.

Terrestrial climate
Both long-term and short-term variations in solar activity are hypothesized to affect global climate, but it has proven extremely challenging to directly quantify the link between solar variation and the earth's climate. The topic continues to be a subject of active study.
Early research attempted to find a correlation between weather and sunspot activity, mostly without notable success. Later research has concentrated more on correlating solar activity with global temperature. More recent research suggests that there may also be regional climate impacts due to the solar cycle. Measurements from the Spectral Irradiance Monitor on NASA's Solar Radiation and Climate Experiment show that solar UV output is more variable over the course of the solar cycle than scientists had previously thought, resulting in, for example, colder winters in the US and southern Europe and warmer winters in Canada and northern Europe during solar minima.
There are three suggested mechanisms by which solar variations are hypothesized to have an effect on climate:

Solar irradiance changes directly affect the climate ("radiative forcing").

Variations in the ultraviolet component: if the UV component varies more than the standard level, this might cause an effect on climate.


Effects mediated by changes in cosmic rays (which are affected by the solar wind) such as changes in cloud cover.

Although the changes in total solar irradiance and solar variations seem too small to produce significant climatic effects, there is also good evidence that, to some extent, the earth's climate heats and cools as solar activity rises and falls.

Effects on spacecraft
Satellites are placed in orbits that are above most of the Earth's atmosphere so that there is little frictional drag affecting them. Communications satellites, in geosynchronous orbits, are about 6 Earth radii up. Low-orbiting satellites, which circle the earth every 2 hours or so, are barely above the Earth's atmosphere. During times of high solar activity there is an increase in ultraviolet radiation and auroral energy input, and this heats up the Earth's atmosphere, causing it to expand. The low-orbiting satellites then encounter increased drag, which causes them to drop in their orbits (9). The high satellites, in geosynchronous orbits, are not subject to drag from atmospheric heating, but they are subjected to the solar wind. These satellites are usually well protected from solar wind particles by the magnetosphere, which normally has a minimum thickness of about 10 Earth radii. But when a surge in the solar wind reaches Earth, the front side of the magnetosphere can be compressed or eroded away to a thickness of about 4 Earth radii. This places the high satellites outside the protective shield of the magnetosphere. The impact of high-speed particles has a corrosive effect on satellites, and charge buildup can result from these particles. Electrical discharges can arc across spacecraft components, causing damage.
If astronauts on a space mission are above the shielding effect produced by the Earth's magnetic field, the radiation from a CME would also be dangerous to humans; many future mission designs (e.g., for a Mars mission) therefore incorporate a radiation-shielded "storm shelter" for astronauts to protect them during such a radiation event.
In view of the problems in space flight occurring during high solar activity, prediction of the latter becomes more and more important.

CONCLUSION
The Sun is the only star which can be studied in great detail and can thus be considered a proxy for cool stars. Quite a number of dedicated ground-based and space-borne experiments are being carried out to learn more about solar variability. Studying and modeling solar activity can increase our level of understanding of nature. On the other hand, the study of solar activity is not of purely academic interest; it directly affects the terrestrial environment as well. Although changes in the sun are barely visible without the aid of precise scientific instruments, the changes in the solar cycle have a great impact on many aspects of our lives. In particular, the heliosphere is mainly controlled by the solar magnetic field. This leads to the modulation of galactic cosmic rays (GCRs) by the solar magnetic activity. Additionally, eruptive and transient phenomena on the sun can lead to sporadic acceleration of energetic particles with greatly enhanced flux. Such processes can modify the radiation environment at earth and need to be taken into account when planning and maintaining space missions and even transpolar jet flights.
For a complete understanding of the solar atmosphere or environment, ideally all of its layers should be observed at the same time, but because of the spatial extent and vastly differing temperatures of its layers, multi-wavelength and multi-resolution observations are needed. Unfortunately, they are not always available.

Radiation from the Sun ultimately provides the only energy source for the Earth's atmosphere, and changes in solar activity clearly
have the potential to affect climate. Changes in total solar irradiance (TSI) undoubtedly impact the Earth's energy balance, but
uncertainties in the historical record of TSI mean that the magnitude of even this direct influence is not well known. Variations in
solar UV radiation impact the thermal structure and composition of the middle atmosphere, but the detailed responses of
temperature and ozone concentration are still not well established. Although various theories are now being developed by which the direct
solar impacts on the atmosphere can be observed, the influences are complex and nonlinear. Hence, many questions remain,
such as to what extent, where, and when the solar influence can occur.

Further advances in this field require work on a number of fronts. One important issue is to establish a precise magnitude of total solar
irradiance (TSI). This may be achieved by careful analysis and understanding of the satellite instruments involved in collecting data
over the past two-and-a-half solar cycles and the current solar cycle.

Finally, there are many areas where work remains to be done.
The first is obviously the Sun itself, because there are still many mysteries about what is going on inside the Sun, what triggers flares and
why sunspots actually form. The second is the interplanetary medium filled with solar wind plasma, and the third is the
geomagnetosphere. The last two domains are also directly or indirectly related to the Sun and its activity. We therefore hope for a
comprehensive model of the Sun, its activity and the Earth, which remains a serious challenge for the physicists of the future.

REFERENCES:
[1] www.epa.gov/Solar activity (Environmental Protection Agency) - Radiations from solar activity.
[2] Netherlands Journal of Geosciences, "Solar activity & its influence on climate", by C. DE JAGER.
[3] Living Reviews in Solar Physics, "Solar surface convection", by AKE NORDLUND, ROBERT F. STEIN, MARTIN ASPLUND.
[4] Living Reviews in Solar Physics, "Solar cycle prediction", by KRISTOF PETROVAY (2010).
[5] "Prediction of solar activity for the next 500 years", by FRIDHELM STEINHILBER and JURG BEER, Journal of Geophysical Research: Space Physics.
[6] Living Reviews in Solar Physics, "Coronal mass ejections: Observations", by DAVID F. WEBB, TIMOTHY A. HOWARD.
[7] "Long-Term Solar Variability: Evolutionary Time Scales", by RICHARD R. RADICK (National Solar Observatory, New Mexico).
[8] "Ionospheric disturbances due to solar activity detection using SDR", by DIVYA HARIDAS, K.P. SOMAN & SHANMUGHA SUNDARAM G.A.
[9] www.swpc.noaa.gov: Solar-Terrestrial Interaction (Solar Physics and terrestrial effects) pdf.
[10] The Solar-Terrestrial Environment: An Introduction to Geospace, by JOHN KEITH HARGREAVES.
[11] Sun - Wikipedia, the free encyclopedia.
[12] Hathaway, D. H. and Wilson, R. M., 2004, "What the Sunspot Record Tells Us about Space Climate", Solar Phys. 224, 5.
[13] "Solar flare observations", by ARNOLD O. BENZ.


Aluminium Alloy Metal Matrix Composite: Survey Paper


Vengatesh.D [1], Chandramohan.V [2]
[1] PG Scholar, Department of Mechanical Engineering, Nandha Engineering College, Perundurai Main Road, Erode-638 052, Tamil Nadu. Phone: 7598656116
[2] Assistant Professor, Department of Mechanical Engineering, Nandha Engineering College, Perundurai Main Road, Erode-638 052, Tamil Nadu. Phone: 9952680556
kunamandithaya@gmail.com [1], chandru_arcfriends@yahoo.co.in [2]

ABSTRACT - For the last few years there has been a rapid increase in the utilisation of aluminium alloys, particularly in the
automobile industries, due to their low weight, density and coefficient of thermal expansion, and their high strength and wear resistance.
Among the materials of tribological importance, aluminium metal matrix composites have received extensive attention for practical
as well as fundamental reasons. Aluminium alloys and aluminium-based metal matrix composites have found applications in the manufacture of
various automotive engine components. Compound workpieces are developed to combine favourable properties of different materials,
and many composite materials are used in home and industrial production. Reducing the weight and wear of rapidly moving parts of
automobile engines, such as the crankshaft and connecting rod, is one of the main motivations. This review paper discusses recent
composite technology and performance behaviour, with a focus on metal matrix composites (MMCs) in which the alloy is mixed with a
non-metal reinforcement, and reviews their mechanical properties and fabrication techniques.

Keywords: aluminium alloys, fabrication technique, MMC


1. INTRODUCTION

In the last few years, in many industrial applications the important parameters in material selection have been specific strength,
weight and cost, and the papers reviewed here are discussed with these in mind. Before going to the review section we must note the
difference between a composite and an MMC: a composite is made of several parts or elements combining different materials, whereas
in a metal matrix composite (MMC) a non-metal reinforcement is mixed into the metal. In the papers reviewed, the matrix material
is most commonly an aluminium alloy and its group, and the reinforcements include silicon carbide, fly ash, graphite, boron carbide,
rice husk ash (RHA), fly ash cenospheres, silicon nitride, etc. These materials are fabricated by different methods selected with
respect to the grain size; generally the stir casting and gas pressure infiltration (GPIT) techniques are used, the particle distribution is
checked by SEM and analysed with FEA models, and finally the mechanical properties such as tensile strength, ductility, compressive
strength, hardness and elongation are tested.
2. LITERATURE REVIEW

Rama Rao et al. [1] fabricated aluminium alloy-boron carbide composites by liquid metallurgy techniques with different
particulate weight fractions (2.5, 5 and 7.5%). Phase identification was carried out on the boron carbide by X-ray diffraction studies,
microstructure analysis was done with SEM, and the composites were characterized by hardness and compression tests. The results
show that as the amount of boron carbide increases, the density of the composites decreases whereas the hardness increases. The
compressive strength of the composites also increased with increase in the weight percentage of boron carbide in the composites.
Balasivanandha Prabu et al. [2] investigated the effect of stirring speed and stirring time on a high-silicon-content
aluminium alloy-silicon carbide MMC with 10% SiC, produced using various stirring speeds and stirring times. The
microstructure of the produced composite was examined by optical microscope and scanning electron microscope. The
results show that stirring speed and stirring time influenced the microstructure and the hardness of the composite.
They also found that at lower stirring speed and lower stirring time, particle clustering was greater; increasing the
stirring time and speed resulted in better distribution of particles. The mechanical test results also revealed that stirring
speed and stirring time affect the hardness of the composite. Uniform hardness values were achieved at
600 rpm with 10 min of stirring, but above this stirring speed the properties degraded again. This study established the trend
between processing parameters such as stirring speed and stirring time and the microstructure and hardness of the composite.
Karunamoorthy et al. [3] developed 2D microstructure-based FEA models to study the mechanical behaviour of MMCs.
The models take into account randomness and particle clustering effects, and the effects of particle clustering on the
stress-strain response and the failure behaviour were studied. The optimization of properties was carried out from analysis
of the microstructure of the MMC, since the properties depend on the particle arrangement in the microstructure. In order to
model the microstructure for finite element analysis (FEA), the microstructure image was converted from raster to vector
form, exported in IGES format and meshed in an FEA model in ANSYS 7. Failures such as particle-interface decohesion
and fracture were predicted for particle-clustered and non-clustered microstructures, and the failure mechanisms and the
effects of particle arrangement were analysed.
Sozhamannan et al. [4] presented a methodology of microstructure-based elastic-plastic finite element
analysis of particle-reinforced MMCs (PRMMC). The model is used to predict the failure of two-dimensional microstructure
models under tensile loading conditions. Analyses were carried out on microstructures of random and clustered particles to
determine their effect on strength and failure mechanisms. The FEA models were generated in ANSYS using SEM images. The
percentage of major failures and the stress-strain responses were predicted numerically for each microstructure. Here the
materials were an Al alloy matrix with SiC reinforcement.
Rohatgi et al. [5] showed that A356-fly ash cenosphere composites can be synthesized using a gas pressure
infiltration technique over a wide range of reinforcement volume fractions, from 20 to 65%. The densities of the A356-fly ash
cenosphere composites, made under various experimental conditions, are in the range of 1250-2180 kg/m3, corresponding
to cenosphere volume fractions in the range 20-65%. For the same cenosphere volume fraction, the density of the composites
increased with increasing particle size, applied pressure and melt temperature. This appears to be related to a
decrease in the voids present near particles through enhancement of the melt flow in the bed of cenospheres. The compressive
plateau stress and modulus of the composites increased with the composite density.
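As a rough, back-of-the-envelope check on how cenosphere content drives composite density, a rule-of-mixtures sketch is given below. The A356 and cenosphere densities used are typical handbook values assumed only for illustration; they are not taken from [5].

```python
# Rule-of-mixtures estimate of composite density versus cenosphere volume fraction.
# Assumed densities (illustrative, not taken from [5]):
RHO_A356 = 2680.0        # kg/m^3, typical for A356 aluminium alloy (assumed)
RHO_CENOSPHERE = 700.0   # kg/m^3, typical for hollow fly ash cenospheres (assumed)

def composite_density(vf_cenosphere: float) -> float:
    """Density of an A356/cenosphere composite by the rule of mixtures."""
    return vf_cenosphere * RHO_CENOSPHERE + (1.0 - vf_cenosphere) * RHO_A356

if __name__ == "__main__":
    for vf in (0.20, 0.35, 0.50, 0.65):
        print(f"Vf = {vf:.2f} -> density ~ {composite_density(vf):.0f} kg/m^3")
```

With these assumed densities the estimate reproduces the reported trend: higher cenosphere fractions give markedly lighter composites.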
Venkat Prasat et al. [6] investigated the tribological behaviour of aluminium alloy reinforced with alumina and
graphite, fabricated by the stir casting process. The wear and frictional properties of the hybrid metal matrix
composites were studied by performing dry sliding wear tests using a pin-on-disc wear tester. Experiments were conducted
based on the plan of experiments generated through Taguchi's technique; an L27 orthogonal array was selected for analysis
of the data. The investigation examined the influence of sliding speed, applied load and sliding distance on the wear rate, as well as on the
coefficient of friction. The results show that sliding distance has the highest influence, followed by load and sliding speed.
Finally, confirmation tests were carried out to verify the experimental results, and scanning electron microscopic studies
were done on the worn surfaces. The incorporation of graphite as the primary reinforcement increases the wear resistance of
the composites by forming a protective layer between the pin and the counterface, and the inclusion of alumina as a secondary
reinforcement also has a significant effect on the wear behaviour. The regression equations generated for the present model
were used to predict the wear rate and coefficient of friction of the HMMC for intermediate conditions with reasonable
accuracy.
Keshavamurthy et al. [7] fabricated an Al6061 matrix composite reinforced with nickel-coated silicon nitride
particles by liquid metallurgy. The microstructure and tribological properties of both the matrix alloy and the
developed composites were evaluated. Dry sliding friction and wear tests were carried out using a pin-on-disc type
machine over a load range of 20-100 N and sliding velocities of 0.31-1.57 m/s. The results revealed that the nickel-coated
silicon nitride particles are uniformly distributed throughout the matrix alloy, and the Al6061-Ni-P-Si3N4 composite exhibited
a lower wear rate and coefficient of friction compared to the matrix alloy. The coefficient of friction decreased with increase in
load up to 80 N; with further increase in load, the coefficient of friction increased with load and sliding velocity.
Mahendra Boopathi et al. [8] noted that the development of hybrid metal matrix composites has become an
important area of research interest in materials science. In view of this, their study was aimed at evaluating the
physical properties of aluminium 2024 in the presence of fly ash, silicon carbide and their combinations. Aluminium MMCs
combine the strength of the reinforcement with the toughness of the matrix to achieve a combination
of desirable properties not available in any single conventional material. The stir casting method was used for the fabrication
of the aluminium MMCs. Structural characterization was carried out on the MMCs by X-ray diffraction studies, and optical
microscopy was used for the microstructural studies. The mechanical behaviours of the MMCs, such as density, elongation,
hardness, yield strength and tensile strength, were ascertained by performing carefully designed laboratory experiments that
replicate as nearly as possible the service conditions. In the presence of fly ash and silicon carbide [SiC (5%) + fly ash
(10%) and fly ash (10%) + SiC (10%)] with aluminium, the results show decreasing density with increasing
hardness and tensile strength, but the elongation of the hybrid MMCs in comparison with unreinforced
aluminium decreased. The hybrid metal matrix composites differed significantly in all of the properties measured.
Aluminium in the presence of SiC (10%)-fly ash (10%) was the hardest, rather than the aluminium-SiC and aluminium-fly ash
composites.
Bienias et al. [9] presented and discussed the microstructural characteristics of aluminium matrix Ak12 composites
containing fly ash particles, obtained by gravity and squeeze casting techniques, together with their pitting corrosion behaviour and
corrosion kinetics. It was found that, first, in comparison with squeeze casting, gravity
casting technology is advantageous for obtaining higher structural homogeneity with minimum possible porosity levels,
good interfacial bonding and quite a uniform distribution of reinforcement; second, the fly ash particles lead to
enhanced pitting corrosion of the Ak12/9% fly ash (75-100 μm fraction) composite in comparison with the unreinforced
matrix (Ak12 alloy); and third, the presence of the nobler second phase of fly ash particles, cast defects such as pores, and the
higher silicon content formed as a result of the reaction between aluminium and silica in the Ak12 alloy and aluminium-fly ash
composite determine the pitting corrosion behaviour and the properties of the oxide film forming on the corroding surface.
Anilkumar et al. [10] investigated the mechanical properties of fly ash reinforced aluminium alloy (Al6061)
composites fabricated by stir casting. Three sets of composites with fly ash particle sizes of 75-100, 45-50 and 4-25 μm were used.
Each set had three types of composite samples with reinforcement weight fractions of 10, 15 and
20%. The mechanical properties studied were the compressive strength, tensile strength, ductility and hardness.
Unreinforced Al6061 samples were also tested for the mechanical properties. It was found that the compressive strength, tensile
strength and hardness of the aluminium alloy composites decreased with increase in particle size of the reinforcing fly ash.
Increase in the weight fraction of the fly ash particles increases the ultimate tensile strength, compressive strength and hardness and
decreases the ductility of the composite. SEM of the samples indicated uniform distribution of the fly ash particles in
the matrix without any voids.
Dora Siva Prasad et al. [11] showed that hybrid metal matrix composites with up to 8% rice husk ash (RHA) and SiC
particles can be easily fabricated using a double stir casting process. A uniform distribution of rice husk ash and SiC was
observed in the matrix. The porosity and hardness increase with increase in the percentage of reinforcement, whereas
the density of the hybrid composites decreases. The yield strength and ultimate tensile strength increase with increase in
RHA and SiC content. It was found that, in comparison with the base aluminium alloy, the precipitation kinetics were
accelerated by adding the reinforcement, allowing the maximum hardness to be obtained by the aging heat treatment in a
reduced time.
Udaya Prakash et al. [12] experimentally investigated the machinability of aluminium alloy (A413)/fly ash/B4C
hybrid composites using wire EDM. The objective of this work was to investigate the effect of parameters such as pulse-off
time, wire feed, pulse-on time, gap voltage and percentage reinforcement on the responses of material removal rate as well
as surface roughness while machining aluminium alloy (A413)/fly ash/B4C hybrid composites using wire EDM.
Experimentation was done on Taguchi's L27 orthogonal array under different combinations of parameters. Analysis
of variance was used to determine the design parameters significantly influencing the responses, and the responses were
evaluated using signal-to-noise ratio analysis. The experimental results proposed an optimal combination of parameters
which gives the maximum material removal rate and minimum surface roughness.
Smith et al. [13] made measurements and predictions of residual stresses on four thick-section steel components created by
electron beam (EB) welding. All components were measured in the welded state, with one ferritic steel component then subjected
to post-weld heat treatment (PWHT). In two ferritic components, the peak residual stresses for the as-welded state were found to be
about equal to the yield strength of the parent material. At the entrance and exit positions of the ferritic steel EB welds, compressive
residual stresses were found. This was in contrast to the stainless steel EB welds, in which tensile stresses were measured in the
welded state. After PWHT of a ferritic EBW component, the measured peak stresses reduced from about 600 MPa to 90 MPa.
Numerical simulations of the EBW process predicted overall profiles of the residual stresses that matched the measurements, but
the FE analyses always predicted peak values. It was found that the measured distribution of residual stresses across the ferritic
steel components was very similar irrespective of component thickness, and confined to distances of about 40% of the product
thickness. In contrast, in a stainless steel component the stresses were much more broadly distributed about the weld centreline.
Weglewski et al. [14] analysed the effect of grain size on thermal residual stresses and damage in sintered
chromium-alumina composites. They present the results of experimental measurements and numerical modelling of the effect of
particle size on the residual thermal stresses arising in sintered metal matrix composites after cooling down from the
fabrication temperature. Using novel Cr(Re)/Al2O3 composites processed by (i) spark plasma sintering and (ii)
hot pressing as examples, the residual thermal stresses were measured by the neutron diffraction technique and determined by an FEM model
based on micro-CT scans of the material microstructure. A numerical model of microcracking induced by residual
stresses was then applied to predict the effective Young's modulus of the damaged composite. Comparison of the numerical results
with the measured data for the residual stresses and Young's modulus is presented, and fairly good agreement is noted.

3. CONCLUSION

From the literature review related to aluminium alloy metal matrix composite materials, we conclude that pure aluminium
alloys mixed with other reinforcement materials through processes such as stir casting and GPIT, followed by different
fabrication procedures, give improved mechanical properties while reducing weight and cost.

REFERENCES:
[1] S. Rama Rao, G. Padmanabhan, "Fabrication and mechanical properties of aluminium-boron carbide composites", International Journal of Materials and Biomaterials Applications 2 (2012) 15-18.
[2] S. Balasivanandha Prabu, L. Karunamoorthy, S. Kathiresan, B. Mohan, "Influence of stirring speed and stirring time on distribution of particles in cast metal matrix composite", Journal of Materials Processing Technology 171 (2006) 268-273.
[3] S. Balasivanandha Prabu, L. Karunamoorthy, "Microstructure-based finite element analysis of failure prediction in particle-reinforced metal-matrix composite", Journal of Materials Processing Technology 207 (2008) 53-62.
[4] G. G. Sozhamannan, S. Balasivanandha Prabu, R. Paskaramoorthy, "Failures analysis of particle reinforced metal matrix composites by microstructure based models", Materials and Design 31 (2010) 3785-3790.
[5] P. K. Rohatgi, J. K. Kim, N. Gupta, Simon Alaraj, A. Daoud, "Compressive characteristics of A356/fly ash cenosphere composites synthesized by pressure infiltration technique", Science Direct 37 (2006) 430-437.
[6] N. Radhika, R. Subramanian, S. Venkat Prasat, "Tribological behaviour of aluminium/alumina/graphite hybrid metal matrix composite using Taguchi's techniques", Journal of Minerals and Materials Characterization and Engineering 10 (2011) 427-443.
[7] C. S. Ramesh, R. Keshavamurthy, B. H. Channabasappa, S. Pramod, "Friction and wear behaviour of Ni-P coated Si3N4 reinforced Al6061 composites", Tribology International 43 (2010) 623-634.
[8] Mahendra Boopathi, K. P. Arulshri, N. Iyandurai, "Evaluation of mechanical properties of aluminium alloy 2024 reinforced with silicon carbide and fly ash metal matrix composites", American Journal of Applied Sciences 10 (2013) 219-229.
[9] J. Bienias, M. Walczak, B. Surowska, J. Sobczak, "Microstructure and corrosion behaviour of aluminium fly ash composites", Journal of Optoelectronics and Advanced Materials 5 (2003) 493-502.
[10] H. C. Anilkumar, H. S. Hebbar, K. S. Ravishankar, "Mechanical properties of fly ash reinforced aluminium alloy (Al6061) composites", International Journal of Mechanical and Materials Engineering 6 (2011) 41-45.
[11] Dora Siva Prasad, Chintada Shoba, Nallu Ramanainah, "Investigations on mechanical properties of aluminium hybrid composites", Mater. Res. Technol. 3 (2014) 79-85.
[12] J. Udaya Prakash, T. V. Moorthy, J. Milton Peter, "Experimental investigation on machinability of aluminium alloy (A413)/flyash/B4C hybrid composites using wire EDM", Procedia Engineering 64 (2013) 1344-1353.
[13] D. J. Smith, G. Zheng, P. R. Hurrell, C. M. Gill, B. M. E. Pellereau, K. Ayres, D. Goudar, E. Kingston, "Measured and predicted residual stresses in thick section electron beam welded steels", International Journal of Pressure Vessels and Piping 120-121 (2014) 66-79.
[14] W. Weglewski, M. Basista, A. Manescu, M. Chmielewski, K. Pietrzak, Th. Schubert, "Effect of grain size on thermal residual stresses and damage in sintered chromium-alumina composites", Composites: Part B 67 (2014) 119.

www.ijergs.org

International Journal of Engineering Research and General Science Volume 2, Issue 6, October-November, 2014
ISSN 2091-2730

SIMULATION AND EXPERIMENTAL VALIDATION OF FEEDING EFFICIENCY IN FG 260 GREY CAST IRON CASTINGS

Sarath Paul*, Rathish R**
* M.Tech Research Scholar, Department of Mechanical Engineering, SCMS School of Engineering and Technology, India, sarathpp7@gmail.com
** Assistant Professor, Department of Mechanical Engineering, SCMS School of Engineering and Technology, India

Abstract - Significant amounts of cast iron are used to fabricate components such as engine blocks and cylinder heads. In
response to consumer demands for increased performance, together with its excellent castability and machinability, the use of cast iron has
grown dramatically. Production of cast iron castings is a complex process which involves many parameters that affect
the quality of castings. In this study, a new approach is attempted to produce sound FG 260 grey iron castings by computer simulation
with experimental validation in a cast iron foundry. The entire casting process is simulated by means of finite element simulation
software and the results are then compared with shop floor trials. A simple rectangular plate casting of dimensions 200x100x15 mm is
produced with different combinations of riser dimensions. The aim is to increase the yield as well as to reduce the defects that occur in
cast iron castings due to design parameters, for which a cylindrical riser with hemispherical bottom and h/d = 1.3 is considered for
analysis. Solidification simulation is performed with ANSYS software to compute the solidification time and the optimal riser combination to
obtain defect-free castings on the shop floor. The experimental results revealed that the simulation performed using ANSYS holds
good for producing defect-free cast iron castings in the foundry.

Keywords Grey cast iron casting, Feeder Design, Solidification simulation, Experimental verification
INTRODUCTION

Metal casting is one of the most ancient techniques used to manufacture metal parts. It is the process of producing
metal parts of the desired shape by pouring molten metal into a prepared mould and then allowing the metal to cool and solidify. The
casting is then removed from the mould and excess metal is removed by shot blasting, grinding or welding processes. The product
undergoes a wide range of further processes such as heat treatment, polishing, surface coating or finishing, and inspection [2].
Use of cast iron in the automotive industry has grown dramatically in recent years as a fast-track response to consumer demands, due to
its machinability and castability together with increased performance, fuel economy and reduced labour dependence. Grey cast iron is widely used to
fabricate components such as engine blocks, cylinder heads, motor casings and flywheels.

Figure 1-Cross Section of Typical Casting



In order to produce a sound grey iron casting in an economical manner, a new approach to riser design is needed to avoid defects
such as shrinkage, porosity, hot spots and unfilled moulds, since most casting defects occur due to poor riser design.
LITERATURE REVIEW

A detailed literature review was carried out to identify the major defects which lead to rejection of castings. An attempt
is made to eliminate these defects at the design stage using a simulation model. Previous literature shows that there are a few research
studies related to rejection control of castings in foundries using different simulation models, and also some research papers which give
solutions to the major defects occurring during the feeding process in castings.
The sand casting process and other casting processes, the melting of metals and alloys, fluid flow and gating design, and the solidification and
processing of metal castings all account for defects in metal casting [10]. Reference [11] provides an introduction to
techniques used to compensate for the solidification shrinkage of castings and explains the basic principles of how to design a feeder
system to produce a shrinkage-free casting.
A well-designed feeding system will provide better quality castings. Design of the feeding system involves decisions about the exact
location and number of risers used in the casting process. Riser design is to be performed for new castings and for modifications to
castings with high rejection rates. These modifications are traditionally done manually, which involves huge time, cost and other resources [15].
An attempt has been made to produce FG 260 grey iron castings by optimizing the feeding system using simulation software. A simple
standard plate casting of dimensions 240x150x25 mm was produced with combinations of different riser dimensions
in the foundry. A cylindrical riser with hemispherical bottom and h/d = 1.3 was considered for the analysis. Solidification simulation was performed
with ANSYS software, and the solidification times and optimal riser diameters were then compared with experimental results [16].
The use of computers in the foundry is nascent, spanning less than three decades. Computer application in the foundry is a knowledge-intensive
process, and metal casting stands to benefit greatly from the development and deployment of tailor-made software tools. The evolution of
software tools for casting design can be broadly segregated into three phases spanning the decades of the 1980s (basic
CAD), 1990s (desktop simulation) and 2000s (intelligent design). The latest software tools combine heuristic knowledge, geometric
reasoning and information management [17].
A theoretical study has been made of the factors which control the rate of heat flow from solidifying sand and chill castings. The
investigation focused on three shapes of high symmetry: spherical, cylindrical and slab-shaped castings. An evaluation of
Chvorinov's rule for sand casting indicates that this simplified relationship is valid only for comparing castings having similar
shapes. For the purpose of calculation, the study shows that (i) superheat may be added accurately and calorifically to the latent heat of
fusion, and (ii) alloys which freeze over a wide range of temperatures and evolve their latent heat uniformly over this range have an
effective melting point one-fourth of the way from the solidus to the liquidus [19].
The literature review is background work carried out to support this project. It is based on various journals, and the topics surveyed were
selected so that they support the study. The results obtained from the literature review act as the backbone of this project
work.

OBJECTIVE OF THE STUDY


The objective of this study is to optimize the feeding parameters required for producing defect free FG 260 grey cast iron
castings by means of simulation and to verify the same by conducting experimental trials.

SCOPE OF THE STUDY


The study provides a better understanding of casting production and process performance in a foundry shop. It will also
provide an insight into the drawbacks of the foundry process and the reasons behind high rejection rates due to improper understanding of
the solidification mechanism and riser design. The methodology adopted in this study may be applied in almost all foundries to
produce good quality castings.

PROBLEM IDENTIFICATION
During the casting process, high rejection rates occur due to improper design of the riser and gating system. High consumption of
materials, high expenses, and wastage of human resources, power and time may lead to heavy losses in a foundry. Hence it is
important to minimize losses and improve the acceptance rate of castings in a foundry in order to minimize unwanted rework.


PROCESS DESCRIPTION OF CASTING


The process of casting involves the basic operations of pattern making, sand preparation, moulding, melting of the metal, pouring
of the moulds, cooling, shake-out, fettling, heat treatment, finishing and inspection. In the casting process, the solidification of the liquid metal in
the mould cavity plays a major role. Such a phase change from the liquid to the solid state involves phenomena such as changes in fluidity,
volumetric shrinkage, segregation, evolution of absorbed gases, and changes in grain size, which have a profound influence on the quality
of the final casting obtained. A proper understanding of the solidification mechanism helps in avoiding major casting defects. The general
process diagram for casting production in a casting industry is shown below [2].

Figure 2-Process Flow Chart


A. CASTING DESIGN
Initially, the design of the casting is made through discussions between the design engineer and the foundry engineer to evolve a good design at a
minimum casting cost.
B. DRAWING
A drawing of the casting component is prepared. A detailed pattern drawing should be made to avoid any difficulties.
C. PATTERN MAKING
A pattern is made of wood, aluminium or other metals, or plastic. The pattern has slightly different dimensions from the component
drawing due to pattern allowances.
D. MOULDING
Cores are prepared if necessary and finally the mould is produced. For sand casting methods, specially prepared sands are used. For
permanent mould or die castings, metal moulds or dies are required.
E. MELTING AND POURING
The metal is then melted in a furnace, alloying additions are made, the composition is adjusted and an adequate pouring temperature is
attained. The metal is then transferred to ladles and subsequently poured into the moulds.
F. FETTLING
Once the casting has cooled, it is taken out of the mould and unwanted portions such as the sprue, riser and runner are removed. The surface
of the casting is then cleaned.
G. INSPECTION

The casting may require certain heat treatment. It is then inspected. Inspectors check the dimensions, mechanical properties,
and soundness or pressure-tightness as per specifications, and may employ non-destructive tests such as the dye penetrant test, X-ray or gamma
ray radiography if required.
H. SHIPPING
The approved castings are then weighed, packed and shipped according to the need.

COMPOSITION OF FG 260 GREY CAST IRON

Table 1 Average Composition of FG 260 Grey Cast Iron [21]

Element          Composition
Iron, Fe         90%
Carbon, C        3 to 3.5%
Silicon, Si      1 to 2.8%
Magnesium, Mg    0.5 to 1%
Sulphur, S       0.02 to 0.15%

METHODOLOGY OF WORK

Fig.3 Methodology of Work

RISERING AND GATING IN CASTING


A good risering and gating system has been recognized as a major factor in producing good castings. In order to
function properly, a good gating and feeding system must be designed to take into consideration and overcome certain characteristics, namely [16]:

Gas entrainment
Gas absorption
Solidification shrinkage (feeding requirements)
Difficulty in eliminating macro-shrinkage
Slag formation tendency

RISERING

A riser, also known as a feeder, is a passage made in the sand mould while ramming the cope. The primary function of the
riser (attached to the mould) is to act as a reservoir of molten metal in order to feed the solidifying casting properly, so that
shrinkage cavities are eliminated during the solidification of the casting [3]. Shrinkage is found to be one of the most
common defects in castings. In order to eliminate these defects, a riser is added to provide a sufficient quantity of liquid metal
to compensate for the liquid shrinkage and solidification shrinkage within the casting. Thus a sound casting can be produced without internal
shrinkage voids or porosity and without external shrinkage defects such as sinks.
A riser may need to be larger than the casting section it feeds, because it must supply feed metal for as long as the casting is solidifying. Riser size
can be reduced by making solidification more directional. Besides riser size, riser location is also important as regards directional
solidification [16]. A properly sized and located riser performs economically well in sand casting, because it provides molten
metal to the mould cavity to compensate for shrinkage. In sand casting, the cooling rate is
comparatively low, so that it can be effectively manipulated by the size and placement of the riser [25]. Using risers, of course,
slows the cooling, which is economically undesirable, and therefore a riser should perform its function in the most economical
manner.

DRAWBACKS OF THE RISER SIZE CONDITIONS


a) If too large:
The material in the riser is scrapped and must be recycled.
The riser has to be cut off, and a larger riser will cost more to machine.
An excessively large riser slows solidification [25] & [26].
b) If too small:
It is mainly associated with defects in the casting, either due to insufficient feeding of liquid metal to compensate for solidification
shrinkage, or due to shrinkage pores because the solidification front is not uniform [25] & [26].

TYPES OF RISERS
There are two types of risers [3]:
A. OPEN RISER
Open risers are exposed to the atmosphere and are easy to mould. Open risers help to show whether the mould is completely filled or
not. Open risers can be top risers or side risers, and must be large in size. The advantage of a top riser is that the pressure due to the
height of the metal assists feeding through thin sections, and it is preferred for light metals such as aluminium. A side riser should be
placed at a higher level for proper feeding, which helps the riser to receive hot metal.
B. BLIND RISER
Blind risers are fully enclosed in the mould. A blind riser loses heat slowly and can be smaller in size than an open riser. Risers act as
reservoirs of liquid metal for a casting in regions where shrinkage is expected to occur, i.e. areas which are the last to
solidify. Thus, risers must be made large enough that the riser solidifies completely only after the casting has solidified. If a riser
solidifies before the cavity it is to feed, it is useless and produces an unsound casting. As a result, an open riser, being in contact with air, must
be larger to ensure that it will not solidify first, whereas a blind riser is in contact with the mould on all surfaces and thus may
be made smaller. A blind riser reduces the energy and time required to remove the riser from the casting [3].

CASTING PLATE DIMENSIONS


Table 2 Casting Plate Dimensions


Fig.4 Casting Plate

RISER-NECK DIMENSIONS

The riser-neck dimensions are an important factor because they determine:
how well the riser can feed the casting;
how readily the riser can be removed from the casting formed;
the depth of the shrinkage cavity, since the neck solidifies just before the riser freezes, thereby preventing the cavity from
extending into the casting.

FORMULAS INVOLVED FOR CALCULATING THE RISER NECK DIMENSIONS


Figure 5 Riser Neck Dimensions (Side and Top View)


HN = (0.6 to 0.8) t
Max. LN = D/3
WN = 2.5 LN + 0.18 D
Where HN = Height of the gate, LN = Length of the gate, WN = width of the gate, D= diameter of riser, t= plate thickness [1].

CALCULATION OF RISER NECK DIMENSIONS


a) Case (i)
Height of gate, HN1 = 0.8*15 = 12 mm
Length of gate, LN1 = 60/3 = 20 mm
Width of gate, WN1 = 2.5*20 - 0.18*60 = 39.2 mm
Similarly we can calculate the riser neck dimensions for the rest of the cases.

THE BASIC REQUIREMENTS OF A FEEDER SYSTEM FOR A CASTING


Feeder must be thermally adequate:
The solidification time of the metal in the feeder must be greater than the solidification time of the metal in the mould, so that the feeder can
supply enough metal to the casting to compensate for the volume shrinkage during solidification. Therefore the shrinkage in the riser/casting system
must be concentrated in the riser, which can be removed from the finished casting [3].
According to Chvorinov's equation,
Freeze time, t = K [V/SA]^2
where t = freeze time of the casting (s)
V = volume of the casting (mm^3)
SA = surface area of the casting (mm^2)
K = solidification constant (s/mm^2)
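This thermal-adequacy requirement can be checked numerically by applying Chvorinov's equation to both the casting and the feeder and comparing the two freeze times. The sketch below is only illustrative: the solidification constant K = 2 s/mm^2 is an assumed value, not one taken from this paper, and the 1.2 factor anticipates the modulus relation used later in the riser design.

```python
def chvorinov_freeze_time(modulus_mm: float, k_s_per_mm2: float = 2.0) -> float:
    """Freeze time t = K * (V/SA)^2, with the modulus V/SA supplied directly.

    K is an assumed solidification constant (s/mm^2), not a value from the paper.
    """
    return k_s_per_mm2 * modulus_mm ** 2

if __name__ == "__main__":
    # Modulus of the 200 x 100 x 15 mm plate used in this study (V/SA ~ 6.12 mm)
    m_casting = (200 * 100 * 15) / (2 * (200 * 100 + 200 * 15 + 100 * 15))
    m_feeder = 1.2 * m_casting   # 20% margin, as in the modulus method used later
    t_casting = chvorinov_freeze_time(m_casting)
    t_feeder = chvorinov_freeze_time(m_feeder)
    print(f"t_casting ~ {t_casting:.0f} s, t_feeder ~ {t_feeder:.0f} s, "
          f"thermally adequate: {t_feeder >= t_casting}")
```

Because freeze time scales with the square of the modulus, a feeder whose modulus exceeds the casting modulus automatically stays liquid longer, which is the condition stated above.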
Type of Riser Used
An open riser is used for the experiments, since open risers are exposed to the atmosphere, are easy to mould and also help to
show whether the mould is completely filled or not.

RISER DESIGN
According to Chvorinov's equation [3],
Solidification time (t) ∝ [Volume / Surface Area]^2
This indicates that, for a feeder to have a solidification time equal to or greater than that of the casting, the minimum feeder size would be
obtained from a sphere. Spheres, however, are usually difficult to mould and would present feeding problems as well, since the last metal to freeze
would be near the centre of the sphere, where it could not be used to feed the casting. Practicalities dictate the use of cylinders for most
risers, so a cylindrical riser with a hemispherical base is used here to provide the smallest practical surface-area-to-volume ratio.

RISER DESIGN PARAMETERS


The riser design involves deciding the following parameters:
Total volume of the riser, which depends on the shrinkage characteristics of the metal and the shape of the casting, etc.
The number of risers and their functions in relation to the casting.
The types, shape and size of the risers, etc.

CALCULATION OF RISER DIMENSIONS


From Chvorinov's equation [3],
Solidification time (t) ∝ [Volume / Surface Area]^2
i.e., using the modulus method, the empirical relation is
Mriser = 1.2 Mcasting            ... (1)
From the casting plate dimensions,
Mcasting = 6.12 mm
Therefore the value of Mriser can be calculated using eq. (1):
Mriser = 1.2 * 6.12 = 7.344 mm
Now taking the volume and area of a hemisphere of diameter d:
Volume of hemisphere = πd^3/12
Surface area of hemisphere = 3π(d/2)^2
Modulus of riser, Mriser = volume / area
Mriser = (πd^3/12) / (3π(d/2)^2)
Mriser = 0.11d                   ... (2)
Equating equations (1) & (2),
7.344 = 0.11d
d = 65 mm (approx.)              ... (a)
The ratio of height to diameter (h/d) varies from 1 to 1.5 for cylindrical risers [foundry rule].
Considering an h/d ratio of 1.3,
h = 1.3d                         ... (b)
Substituting (a) into (b),
h = 1.3 * 65
h = 85 mm (approx.)
This gives the calculated riser dimensions for designing an optimum riser. The remaining riser dimensions are assumed similarly.
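The modulus calculation above can be reproduced for other plate sizes with a short script such as the following sketch. It follows the same steps (plate modulus, Mriser = 1.2 Mcasting, riser modulus taken as 0.11d, h = 1.3d); the function names are purely illustrative.

```python
def plate_modulus(length_mm: float, width_mm: float, thickness_mm: float) -> float:
    """Casting modulus V/SA of a rectangular plate (all dimensions in mm)."""
    volume = length_mm * width_mm * thickness_mm
    area = 2 * (length_mm * width_mm + length_mm * thickness_mm + width_mm * thickness_mm)
    return volume / area

def riser_dimensions(m_casting_mm: float, hd_ratio: float = 1.3):
    """Riser diameter and height (mm) by the modulus method used above.

    M_riser = 1.2 * M_casting, the riser modulus is approximated as 0.11*d,
    so d = M_riser / 0.11 and h = hd_ratio * d.
    """
    m_riser = 1.2 * m_casting_mm
    diameter = m_riser / 0.11
    height = hd_ratio * diameter
    return diameter, height

if __name__ == "__main__":
    m_c = plate_modulus(200, 100, 15)   # ~6.12 mm for the plate in this study
    d, h = riser_dimensions(m_c)
    # The text rounds these values down to d = 65 mm and h = 85 mm.
    print(f"M_casting = {m_c:.2f} mm, riser d ~ {d:.1f} mm, h ~ {h:.1f} mm")
```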

RISER DIMENSIONS
Table 3- Riser Dimensions

A cylindrical riser with a hemispherical bottom is used in this project since the hemispherical bottom consumes 16-17% less metal
than the standard cylindrical side riser [16].

Figure 6- Feeder with Hemispherical Bottom


Thus the dimensions of the casting plate, riser and gates are found.

COMPUTER SIMULATION
In the past, the optimal casting design was achieved by the trial and error method. This conventional method is time consuming and
ineffective, and can no longer satisfy the needs of the foundry. This problem can be addressed with the use of computer-aided
design/engineering techniques.
Computer design and casting simulation have gradually become popular in recent years. With the help of the computer, a number of
commercial CAD/CAM packages for the simulation of castings have been developed and implemented in foundries.
In this work, ANSYS 10.0 (APDL version) software has been used to find the optimum dimensions of the riser for rectangular plate FG
260 grey iron castings with a cast iron end-chill. The solidification simulation facilitates visualization of the temperature distribution at
various locations of the casting, and the cooling curve obtained from the simulation helps to find the solidification times of the casting and riser.

ASSUMPTIONS INCORPORATED
It is difficult to incorporate all the environmental conditions existing in the foundry; hence the following assumptions were
made during simulation.

The mould is filled instantaneously and uniform temperature is assumed at all points of the casting and mould, at that instant.
Heat transfer by convection takes place to the atmosphere from the outer surface of the mould and top surface of riser and
runner.
The convective heat transfer within the liquid metal is neglected for sand moulds.

PREPROCESSING
The pre-processing stage contains the following steps to create a finite element model:
1. Defining element types and options
2. Defining element real constants
3. Defining material properties
4. Creating model geometry
5. Defining meshing controls
6. Applying boundary conditions

MODEL
The model is created in SolidWorks, saved in .iges format and then imported into ANSYS to obtain the solution.

Figure 7-Casting Model Imported into ANSYS Software in .iges Format


Once the model has been subjected to the various boundary conditions, ANSYS solves the set of equations generated by the finite element model.

COMPUTER SIMULATION USING ANSYS


The casting and mould assembly is modelled in the pre-processor stage of the simulation. One half of the casting and mould
assembly is modelled, as it is symmetric. Convection takes place through all surfaces of the sand except the bottom surface. The mould is
assumed to be instantaneously filled with the molten metal at a pouring temperature of 1400 °C. The outside surface of the mould is assumed
to lose heat by convection to surroundings at 40 °C; because moisture in the sand would affect the cast metal, the sand is preheated up to
40 °C.
a) Mould Material: Silica Sand
Density: 1580 x 10^-9 kg/mm^3
Thermal conductivity: 0.393 x 10^-3 W/mm·°C
Specific heat: 1046 J/kg·°C
b) Initial Conditions:
Casting temperature: 1400 °C

Mould temperature: 40 °C

Figure 8- Solution Obtained From ANSYS Software


c) Boundary Conditions:
Sand top surface convection: 3.48 W/m^2·K
Sand side surface convection: 4.09 W/m^2·K
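For reference, the thermal inputs quoted above can be gathered into a single, unit-consistent structure before being passed to any solver. The grouping and field names below are illustrative only and are not part of the ANSYS model.

```python
# Thermal inputs for the plate-casting simulation, converted to SI units.
# Values are those quoted in the text; the grouping and names are illustrative.
simulation_inputs = {
    "mould_material": "silica sand",
    "mould": {
        "density_kg_per_m3": 1580.0,              # 1580 x 10^-9 kg/mm^3
        "thermal_conductivity_W_per_mK": 0.393,   # 0.393 x 10^-3 W/mm.degC
        "specific_heat_J_per_kgK": 1046.0,
    },
    "initial_conditions": {
        "pouring_temperature_C": 1400.0,
        "mould_temperature_C": 40.0,
    },
    "boundary_conditions": {
        "sand_top_convection_W_per_m2K": 3.48,
        "sand_side_convection_W_per_m2K": 4.09,
        "ambient_temperature_C": 40.0,
    },
}

if __name__ == "__main__":
    for group, values in simulation_inputs.items():
        print(group, "->", values)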

CAST METAL: FG 260 GREY CAST IRON


Table 4-Thermo Physical Properties of FG 260 Grey Cast Iron

RESULTS OBTAINED FROM COMPUTER SIMULATION


The solidification simulation facilitates visualization of the temperature distribution at various locations of the casting.
The cooling curve obtained from the simulation helps to find the solidification times of the casting and riser and thereby the optimum configuration.
From the results obtained through ANSYS, cooling curves are constructed and the temperature at different times for the different configurations
is plotted in the graphs. This is an indirect method of assessing the soundness of castings.
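To illustrate how a cooling curve and a solidification time are read off a temperature history, a deliberately simplified lumped-capacitance sketch is given below. It is not the ANSYS finite element model used in this work: the cast iron property values, freezing range and metal-mould interface coefficient are assumed typical figures, since Table 4 is not reproduced here.

```python
# Highly simplified lumped enthalpy balance for the plate casting, used only to show
# how a cooling curve and a solidification time can be extracted from a temperature
# history. Values marked "assumed" are typical literature figures, NOT Table 4 data.
RHO = 7100.0                   # kg/m^3, grey cast iron density (assumed)
CP = 750.0                     # J/kg.K, specific heat (assumed)
LATENT = 2.3e5                 # J/kg, latent heat of fusion (assumed)
T_POUR = 1400.0                # degC, pouring temperature (from the text)
T_LIQ, T_SOL = 1200.0, 1150.0  # degC, assumed freezing range
T_MOULD = 40.0                 # degC, mould temperature (from the text)
H_INT = 300.0                  # W/m^2.K, metal-mould interface coefficient (assumed)
V = 0.200 * 0.100 * 0.015      # m^3, plate volume
A = 2 * (0.200 * 0.100 + 0.200 * 0.015 + 0.100 * 0.015)  # m^2, plate surface area

def cooling_curve(dt: float = 1.0, t_end: float = 3600.0):
    """Return time and temperature lists plus the time at which freezing completes."""
    mass = RHO * V
    temperature = T_POUR
    frozen = 0.0                       # solidified fraction
    times, temps = [0.0], [temperature]
    solidification_time = None
    t = 0.0
    while t < t_end:
        q = H_INT * A * (temperature - T_MOULD) * dt   # heat lost in this step (J)
        if temperature > T_LIQ or frozen >= 1.0:
            temperature -= q / (mass * CP)             # sensible cooling
        else:
            frozen += q / (mass * LATENT)              # latent-heat "arrest"
            temperature = T_LIQ - (T_LIQ - T_SOL) * min(frozen, 1.0)
            if frozen >= 1.0 and solidification_time is None:
                solidification_time = t
        t += dt
        times.append(t)
        temps.append(temperature)
    return times, temps, solidification_time

if __name__ == "__main__":
    _, _, t_solid = cooling_curve()
    print(f"Lumped-model solidification time of the plate: ~{t_solid:.0f} s")
```

The plateau produced by the latent-heat term is the feature that, in the real simulation, distinguishes the last region to solidify (ideally the riser) from the rest of the casting.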

COOLING CURVES
Cooling curves are obtained by plotting temperature against solidification time. The cooling curve obtained from the simulation helps
to find the solidification times of the casting and riser and thereby the optimum result. In this experiment, cooling curves are taken at
five different nodes placed on the casting, namely:

Top of the riser
On the gating system
On the left side of the plate
On the right side of the plate
On the corner of the plate

EFFECTS OF FEEDER CONFIGURATIONS ON SOLIDIFICATION IN CASTING OF PLATE


The solidification cooling curves obtained from the five locations identified are presented in Figure 9 for the riser of diameter 65 mm.
The aim in this project is to delay the solidification of the riser and thereby to obtain the optimum feeder configuration.
The foundry rules say that, in order to obtain a sound casting, the solidification of the riser should be delayed,
i.e. the last part to solidify should be the riser (Node 1). Here the cooling curves are taken at five different nodes placed on the casting, namely:

Top of the riser
On the gating system
On the left side of the plate
On the right side of the plate
On the corner of the plate

Figure 9- Cooling Curve for Riser of Diameter 65mm & Height 85mm

In Figure 9, a riser of diameter 65 mm and height 85 mm is used, and it is observed that the temperature at the node on the riser (Node 1)
remains steady, so that the last part to solidify is the riser, which leads to the formation of sound castings.

INFERENCE FROM COMPUTER SIMULATION

From the results of the solidification cooling curves and the temperature distributions at the five different nodes for the different riser
configurations, it is observed that the temperature at Node 1 remains steady (Fig. 10) in the case of the riser of diameter 65 mm and
height 85 mm, so that the last part to solidify is the riser, which leads to the formation of sound castings.
It can therefore be concluded that a riser of diameter 65 mm and height 85 mm is the optimum solution to produce a sound casting.

EXPERIMENTAL VERIFICATION
Experimental verification is carried out to verify the optimum results obtained through computer simulation, as well as Caine's
analysis, for producing sound castings, and thereby to confirm the optimum riser size for defect-free castings. The experimental results are also
validated with casting properties obtained from ultimate tensile strength (UTS) tests and hardness (Brinell hardness) tests.

EXPERIMENTAL PROCEDURE

A. MOULDING SAND
Silica sand moulds are prepared using a sand mix with a composition of 8% calcium-based bentonite and approximately 5% moisture,
to which 2% saw dust and coal powder are added.
B. MELTING AND POURING
The FG 260 grey cast iron is melted in a crucible furnace. As soon as the molten metal reaches a temperature of 1400 °C, it is taken
out and degassed using the required quantity of ALDEGAS (hexachloroethane) degassing tablets in order to remove
dissolved hydrogen gas. During cooling and solidification from 1400 °C, evolution of gas can produce gas bubbles, pinholes and
microscopic porosity; hence degassing is performed to minimize such defects [24].
C. CASTING EXPERIMENTAL SETUP
A standard rectangular plate casting of size 200 x 100 x 15 mm is selected as the test casting for conducting the experiment. A wooden
pattern for the test casting, together with the runner and a cylindrical riser with hemispherical bottom of h/d ratio = 1.3, is made. Castings are
collected from the experiments, cut into small pieces and then checked for internal and external defects [16] & [24].

Figure 10-Cast Plate

TENSILE STRENGTH TEST SPECIMEN


Tensile testing is carried out in the casting using an universal testing machine. The casting is made into a a tensile test
specimen in a conventional lathe. The ultimate tensile strength and hardness is determined experimentally. The standard tensile
specimen is prepared as shown in the Fig 11. The minimum ultimate tensile strength and hardness of each piece is determined in order
to categorise the castings whether sound or unsound.

Figure11- Dimensions of the Tensile Test Specimen


Figure 12-Ultimate Tensile Strength Testing

HARDNESS TESTING
In order to conduct the hardness test, the plate casting is cleaned properly, placed on the workpiece holder
and subjected to Brinell hardness testing (BHN).
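For completeness, the quantities reported later in Table 5 can be computed from the raw machine readings with the standard relations sketched below; the load, specimen diameter and indentation values in the example are placeholders for illustration, not measurements from this work.

```python
import math

def ultimate_tensile_strength(max_load_N: float, gauge_diameter_mm: float) -> float:
    """UTS in MPa from the maximum load and the gauge diameter of a round specimen."""
    area_mm2 = math.pi * (gauge_diameter_mm / 2.0) ** 2
    return max_load_N / area_mm2          # N/mm^2 is numerically equal to MPa

def brinell_hardness(load_kgf: float, ball_diameter_mm: float, indent_diameter_mm: float) -> float:
    """Brinell hardness number from the standard BHN formula."""
    D, d = ball_diameter_mm, indent_diameter_mm
    return (2.0 * load_kgf) / (math.pi * D * (D - math.sqrt(D * D - d * d)))

if __name__ == "__main__":
    # Placeholder readings for illustration only (not measured values from this work):
    print(f"UTS ~ {ultimate_tensile_strength(51000.0, 16.0):.0f} MPa")
    print(f"BHN ~ {brinell_hardness(3000.0, 10.0, 3.5):.0f}")
```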

TENSILE AND HARDNESS TEST RESULTS


Table 5- Experimental Results Obtained

INFERENCE FROM EXPERIMENT


The experimental results are compared with the theoretical results for validation, and a close agreement between the experimental and
theoretical values is observed. The optimal riser size of diameter 65 mm and height 85 mm produced a tensile
strength of 260 MPa, with a corresponding hardness on the Brinell scale of 300 BHN.
It is found that these optimal settings also produced sound castings, which reflects the quality consciousness in the foundry.
ACKNOWLEDGMENT
First of all I should thank God almighty for blessing us with the wisdom to complete this work. I should also thank my
project guide Mr. Rathish R, Assistant Professor for his kind support and encouragement. Finally I should also thank my Department
staffs, parents and friends who always supported me in this work.
CONCLUSION
The optimal riser combination to produce a sound casting for a standard rectangular plate (200 mm x 100 mm x 15 mm) is determined as
a riser of diameter 65 mm and height 85 mm, with h/d = 1.3 and a hemispherical bottom. The experimental results confirm the simulated
results obtained from the ANSYS software. It is confirmed that the optimum riser combination promoted directional solidification, which
is responsible for casting soundness. The degree of solidification is obtained from the cooling curves from the ANSYS software.
Hence the cooling curve is an effective tool to predict the degree of solidification in any casting.
REFERENCES:
[1] "Riser Design", Casting Design and Performance, ASM International, p. 61-72.
[2] Dr. N K Srinivasan, "Casting Introduction", Foundry Technology, 3rd Edition, p. 1-5, 2012.
[3] Dr. N K Srinivasan, "Riser Design", Foundry Technology, 3rd Edition, p. 105-114, 2012.
[4] R C Creese, "Cylindrical Top Riser Designs Relationship for Evaluating Insulating Materials", AFS Transactions, Vol. 89, p. 354-348, 1981.
[5] R C Creese, "An Evaluation of Cylinder Riser Designs with Insulating Materials", AFS Transactions, Vol. 87, p. 665-668, 1979.
[6] Piotr Mikolajczak, Zenon Ignaszak, "Feeding Parameters for Ductile Iron in Solidification Simulation", p. 5, 2007.
[7] R A Johns, "Risering Steel Castings Easily and Efficiently", AFS Transactions, Vol. 88, p. 77-96.
[8] E N Pan, C S Lin and C R Lopper, "Effects of Solidification Parameters on the Feeding Efficiency of A356 Aluminum Alloy", AFS Transactions, Vol. 98, p. 135-146, 1990.
[9] Haider Hussain, Dr. A I Khandwawala, "Optimal Design of Two Feeder System: Simulation Studies for Techno-Economic Feasibility", American Journal of Mechanical Engineering, Vol. 2, No. 3, p. 93-98, 2014.
[10] Prof. Karl B. Rundman, "Metal Casting", Reference Book for MY4130, 1986.
[11] John Campbell and Richard A. Harding, "The Feeding of Castings", Training in Aluminium Application Technologies (TALAT), Lecture 3206, 1994.
[12] V V Mane, Amit Sata and M Y Khire, "New Approach to Casting Defects Classification and Analysis Supported by Simulation".
[13] Rajesh Rajkolhe, J. G. Khan, "Defects, Causes and Their Remedies in Casting Process: A Review", International Journal of Research in Advent Technology (IJRAT), Vol. 2, No. 3, March 2014, E-ISSN: 2321-9637.
[14] Clyde Melvin Adams, Jr., "Heat Flow in the Solidification of Casting", 1953.
[15] Harshil Bhatt, Rakesh Barot, Kamlesh Bhatt, Hardik Beravala, Jay Shah, "Design Optimization of Feeding System and Solidification Simulation for Cast Iron", Procedia Technology 14 (2014) 357-364, Science Direct, ICIAME 2014.
[16] V Gopinath, N Balanarasimman, "Effect of Solidification Parameters on the Feeding Efficiency of Lm6 Aluminium Alloy Casting", Journal of Mechanical and Civil Engineering (IOSR-JMCE), Volume 4, Issue 2, p. 32-38, Dec. 2012.
[17] Dr. B. Ravi, "Computer-Aided Casting Design - Past, Present and Future", Indian Foundry Journal, 2010.
[18] Durgesh Joshi, "10-year Survey of Computer Applications in Indian Foundry Industry", Indian Foundry Journal, January 2010.
[19] Clyde Melvin Adams, Jr., "Heat Flow in the Solidification of Casting", 1953.
[20] R A Johns, "Risering Steel Castings Easily and Efficiently", AFS Transactions, Vol. 88, p. 77-96.
[21] Indian Standard Grey Iron Castings, IS 210 (superseding IS 6331: 1987) Specification, Bureau of Indian Standards (BIS), fifth revision, 2009.
[22] Nimesh A Khirsariya, M S Kagthara, P J Mandalia, "Reduction of Shrinkage Defect in Valve Body Casting Using Simulation Software", International Journal of Engineering Science & Research Technology (IJESRT), April 2014, ISSN: 2277-9655.
[23] P J Mandaliya, J T Dave, "Study of Piston Sleeve Manufactured by Sand Casting Process to Reduce Rejection Rate Using Simulation Software", International Journal of Mechanical and Production Engineering Research and Development (IJMPERD), ISSN 2249-6890, Vol. 3, Issue 2, Jun 2013, 161-168.
[24] "Pouring, Cooling, Shakeout, Iron, Step Core, Beach Box Binder, Comparison to Test GZ 1412-116 HD", Casting Emission Reduction Program (CERP), US Army Contract W15QKN-05-D-0030, November 2006.
[25] ME355 Winter 2014, Introduction to Manufacturing Processes, HW #3 solutions, 1/31/2014.
[26] "Metal Casting Process", Qualitative Problems, Chapter 11.

811

www.ijergs.org

International Journal of Engineering Research and General Science Volume 2, Issue 6, October-November, 2014
ISSN 2091-2730

Homomorphic Authenticable Ring Signature (HARS) Mechanism for Public Auditing on Shared Data in the Cloud (Oruta)

Ms. Sonam M. Kamble#1, Prof. A. C. Lomte*2

# Student, Department of Computer Engineering, University of Pune, BSIOTR Wagholi, Pune, Maharashtra, India
* Professor, Department of Computer Engineering, University of Pune, BSIOTR Wagholi, Pune, Maharashtra, India

Abstract: Users in a particular group need to compute signatures on the blocks in shared data so that the integrity of the shared data can be verified publicly. Different blocks in shared data are usually signed by different users, owing to data modifications performed by individual users. Once a user is revoked from the group, an existing user must re-sign the data blocks of the revoked user in order to preserve the security of the data. Because of the massive size of shared data in the cloud, the usual process, which requires an existing user to download the corresponding part of the shared data and re-sign it during user revocation, is inefficient. A new public auditing scheme for shared data with efficient user revocation in the cloud is proposed, in which, when a user in the group is revoked, the semi-trusted cloud re-signs the blocks that were previously signed by the revoked user using valid proxy re-signatures.

Keywords: Public auditing, privacy-preserving, shared data, digital signature, cloud computing
INTRODUCTION

With cloud computing and storage, users are able to access and share resources offered by cloud service providers at a lower marginal cost. It is routine for users to leverage cloud storage services to share data with others in a group, as data sharing has become a standard feature in most cloud storage offerings, including Dropbox, iCloud and Google Drive. The integrity of data in cloud storage, however, is subject to skepticism and scrutiny, as data stored in the cloud can easily be lost or corrupted due to inevitable hardware/software failures and human errors.
The traditional approach for checking data correctness is to retrieve the entire data from the cloud, and then verify data integrity by checking the correctness of signatures (e.g., RSA) or hash values (e.g., MD5) of the entire data. Certainly, this conventional approach is able to successfully check the correctness of cloud data; however, its efficiency on cloud data is in doubt (a minimal sketch of such a whole-data check is given below).
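As a small illustration of this traditional approach (our sketch, not part of the original paper), the following Python fragment computes a digest of the entire object before upload and re-checks it after a full download; the MD5 hash is used only because the paper names it as an example.

    import hashlib

    def digest(data: bytes) -> str:
        # Hash of the *entire* data object; any cryptographic hash works the same way.
        return hashlib.md5(data).hexdigest()

    # The owner computes and keeps a digest before uploading.
    original = b"block1 block2 block3 ... the entire shared file"
    stored_digest = digest(original)

    # Later, the verifier must download the whole object from the cloud to re-check it.
    downloaded = original                         # in reality: a full download from cloud storage
    assert digest(downloaded) == stored_digest    # integrity holds only if every byte matches

The sketch makes the drawback obvious: verification cost grows with the size of the data, which motivates the public auditing mechanisms discussed next.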
The main reason is that the size of cloud data is large in general. Downloading the entire cloud data to verify data integrity will cost, or even waste, users' computation and communication resources, especially when data have been corrupted in the cloud. Recently, many mechanisms have been proposed to allow not only the data owner itself but also a public verifier to efficiently perform integrity checking without downloading the entire data from the cloud, which is referred to as public auditing.
In these mechanisms, data is divided into many small blocks, where each block is independently signed by the owner, and a random combination of all the blocks, instead of the whole data, is retrieved during integrity checking. A public verifier could be a data user (e.g., a researcher) who would like to utilize the owner's data via the cloud, or a third-party auditor (TPA) who can provide expert integrity checking services. Existing public auditing mechanisms can actually be extended to verify shared data integrity and data freshness. However, a significant new privacy issue introduced in the case of shared data with the use of existing mechanisms is the leakage of identity privacy to public verifiers. To protect confidential information, it is essential and critical to preserve identity privacy from public verifiers during public auditing.
In our model, privacy is accomplished by allowing the parties to upload their data to multiple clouds, and the data is split into multiple parts, which gives more protection. The critical reasons why our system is beneficial are:
1. The current working scenario involves paper-based work for data analysis and verification.
2. Data storage is one way to mitigate the privacy concern.
3. Unauthorized users can leak or misuse the data; this problem still remains due to the paper-based work.
These reasons compel us to propose Oruta, a novel privacy-preserving public auditing mechanism. More specifically, we utilize ring signatures to construct homomorphic authenticators in Oruta, so that a public verifier is able to verify the
integrity of shared data without retrieving the entire data, while the identity of the signer on each block in shared data is kept private from the public verifier. In addition, we extend this mechanism to support batch auditing, which can perform multiple auditing tasks simultaneously and improve the efficiency of verification for multiple auditing tasks. Meanwhile, Oruta is compatible with random masking. Oruta stands for "One Ring to Rule Them All".
When data is first inserted, the encryption service generates an encryption key, which is stored separately in the key storage area, while the encrypted data is stored in the cloud storage area. In the decryption process, when the user requests the data, the key and the data are collected at the decryption service, but the service will not decrypt the data until the user enters the OTP sent to his or her e-mail. When the user enters this OTP correctly, the data is decrypted by the decryption service and provided to the user. Some researchers have suggested that user data stored on a service provider's equipment must be encrypted. Encrypting data prior to storage is a common method of data protection, and service providers may be able to build firewalls to ensure that the decryption keys associated with encrypted user data are not disclosed to outsiders. However, if the decryption key and the encrypted data are held by the same service provider, it raises the possibility that high-level administrators within the service provider would have access to both the decryption key and the encrypted data, thus presenting a risk of unauthorized disclosure of the user data. Existing methods for protecting data stored in cloud environments are user authentication and building secure channels for the transmission of data. For this, various cryptographic and security algorithms are used, such as AES (Advanced Encryption Standard), DES (Data Encryption Standard), Triple DES, and the RSA algorithm with digital signatures.
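A minimal sketch of this split-storage and OTP-gated flow is shown below; it is our illustration, not the authors' implementation. It assumes the third-party `cryptography` package for symmetric encryption, and the names `key_store`, `cloud_store` and `retrieve` are hypothetical.

    import secrets
    from cryptography.fernet import Fernet

    key_store, cloud_store = {}, {}           # hypothetical: separate storage areas for keys and ciphertext

    def store(record_id: str, plaintext: bytes) -> None:
        key = Fernet.generate_key()            # the encryption service generates a key ...
        key_store[record_id] = key             # ... which is kept in the key-storage area
        cloud_store[record_id] = Fernet(key).encrypt(plaintext)   # only ciphertext goes to cloud storage

    def retrieve(record_id: str, supplied_otp: str, expected_otp: str) -> bytes:
        # The decryption service refuses to decrypt until the OTP sent by mail is entered correctly.
        if supplied_otp != expected_otp:
            raise PermissionError("OTP mismatch")
        return Fernet(key_store[record_id]).decrypt(cloud_store[record_id])

    otp = secrets.token_hex(3)                 # e.g. a short one-time password mailed to the user
    store("doc-1", b"confidential shared data")
    print(retrieve("doc-1", otp, otp))

Keeping the key store and the cloud store on different providers is what prevents a single administrator from holding both the key and the ciphertext.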
A trusted computing environment for a cloud computing system can be built by integrating a trusted computing platform into the security of the cloud system. In such a system, the cloud is combined with a trusted platform module, and important security services, including authentication, confidentiality and integrity, are provided in the cloud computing system. This tends to provide an extra level of security. In that proposed model, the encryption-decryption service only encrypts the data; the encrypted data is sent to cloud storage and the original data is then deleted from the encryption-decryption service. The need to overcome these flaws and include some of the above advantages has led to the implementation of our proposed algorithm.

LITERATURE SURVEY
For some years, tools for defending against hackers have been in the form of software to be installed on each device being protected, or appliances deployed on premise. However, to be effective, such protection needs to be constantly updated. Common methods for ensuring the security of data in the cloud consist of data encryption (a cryptographic process) before storage, an authentication process before storage or retrieval, and constructing secure channels for data transmission. These protection methods find their roots in cryptographic algorithms and digital signature techniques.
Cryptographic algorithms are classified into two categories: symmetric and asymmetric. A symmetric algorithm uses a single key, known as the secret key, both for encryption and decryption, whereas an asymmetric algorithm uses two keys: a public key made available publicly and a private key, which is kept secret and used to decrypt the data. Breaking the private key is rarely possible even if the corresponding public key is known well in advance. Examples of symmetric algorithms include the Data Encryption Standard (DES), the International Data Encryption Algorithm (IDEA) and the Advanced Encryption Standard (AES), while asymmetric algorithms include the RSA algorithm. Asymmetric algorithms are well suited for real-world use and provide undeniable advantages in terms of functionality, whereas symmetric algorithms are ideally suited for security applications such as remote authentication for restricted websites, which do not require a full-fledged asymmetric setup. The use of passwords for authentication is popular among users, but the transmission of messages containing the password may be vulnerable to illegal recording by hackers, posing a security breach in the system. Some more advanced authentication techniques employ the concept of a single-use password, where the system generates a challenge token and expects the user to respond with a message computed from his secret key, so that the password is converted into a derived value that is valid only once (a minimal sketch of such a challenge-response exchange is given below).
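The following short sketch (ours, not from the paper) illustrates the challenge-response idea using the Python standard library: the secret never travels over the network, only a keyed digest of a fresh challenge does.

    import hmac, hashlib, secrets

    def respond(secret: bytes, challenge: bytes) -> str:
        # The user sends only a keyed digest of the fresh challenge, never the password itself.
        return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

    secret = b"user-password-or-derived-key"    # shared in advance, e.g. at registration
    challenge = secrets.token_bytes(16)         # the server issues a fresh challenge token per login

    answer = respond(secret, challenge)                             # computed on the user's side
    ok = hmac.compare_digest(answer, respond(secret, challenge))    # server-side verification
    print("authenticated" if ok else "rejected")

Because the challenge changes every time, a recorded response cannot be replayed later.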
While using cryptographic techniques for ensuring data security, care should be taken in storing the encryption and decryption keys. Rigorous methods should be adopted to prevent insiders and privileged users from gaining access to the encrypted data and the decryption key simultaneously. Thus, the importance of SLAs is recognized in this context. The policies responsible for user data protection must be clearly mentioned in the provider's contract. After reviewing the data security requirements, the following recommendations have been included in the multiparty SLA suggested at the end to ensure data security in the cloud:
1. Encrypted data and the decryption key must not be stored at the same place.
2. Access control techniques should be applicable to malicious insiders and privileged users.
3. Independent audits must be conducted to assess the effectiveness of the techniques employed for data storage.
4. Service providers must abide by ethical and legal norms and should be responsible for any discrepancies.
5. Backup and recovery methods must be in place against system crashes and failures.
In many applications, it is desirable to work with signatures that are both short and such that many messages from different signers can be verified very quickly. RSA signatures satisfy the latter condition, but are generally thousands of bits in length. Recent developments in pairing-based cryptography have produced a number of short signatures which provide equivalent security in a fraction of the space. Unfortunately, verifying these signatures is computationally intensive due to the expensive pairing operation. In an attempt to simultaneously achieve short and fast signatures, it was shown how to batch-verify two pairing-based schemes so that the total number of pairings is independent of the number of signatures to verify. On the theoretical side, new batch verifiers have been introduced for a wide variety of regular, identity-based, group, ring and aggregate signature schemes. Our goal is to test whether batching is practical; that is, whether the benefit of removing pairings significantly outweighs the cost of the additional operations required for batching, such as group membership testing, randomness generation, and additional modular exponentiations and multiplications.

MATERIAL AND METHODS


Users are able to access and share resources offered by cloud service providers at a lower marginal cost. It is routine for users to leverage cloud storage services to share data with others in a group, as data sharing has become a standard feature in most cloud storage offerings, including Dropbox, iCloud and Google Drive. The integrity of data in cloud storage, however, is subject to skepticism and scrutiny, as data stored in the cloud can easily be lost or corrupted due to inevitable hardware/software failures and human errors. The traditional approach for checking data correctness is to retrieve the entire data from the cloud and then verify data integrity by checking the correctness of signatures (e.g., RSA) or hash values (e.g., MD5) of the entire data. Certainly, this conventional approach is able to successfully check the correctness of cloud data; however, its efficiency on cloud data is in doubt.

Figure 1: System Architecture

The main reason is that the size of cloud data is large in general. Downloading the entire cloud data to verify data integrity will cost, or even waste, users' computation and communication resources, especially when data have been corrupted in the cloud. Recently, many mechanisms have been proposed to allow not only the data owner itself but also a public verifier to efficiently perform integrity checking without downloading the entire data from the cloud, which is referred to as public auditing. In these mechanisms, data is divided into many small blocks, where each block is independently signed by the owner, and a random combination of all the blocks, instead of the whole data, is retrieved during integrity checking. A public verifier could be a data user (e.g., a researcher) who would like to utilize the owner's data via the cloud, or a third-party auditor (TPA) who can provide expert integrity checking services. Existing public auditing mechanisms can actually be extended to verify shared data integrity and data freshness. However, a significant new privacy issue introduced in the case of shared data with the use of existing mechanisms is the leakage of identity privacy to public verifiers. To protect confidential information, it is essential and critical to preserve identity privacy from public verifiers during public auditing.
To solve the above privacy issue on shared data, we propose Oruta, a novel privacy-preserving public auditing mechanism. More specifically, we utilize ring signatures to construct homomorphic authenticators in Oruta, so that a public verifier is able to verify the integrity of shared data without retrieving the entire data, while the identity of the signer on each block in shared data is kept private from the public verifier. In addition, we extend this mechanism to support batch auditing, which can perform multiple auditing tasks simultaneously and improve the efficiency of verification for multiple auditing tasks.
In cloud computing, the user outsources data to the cloud, and a third-party auditor checks the authorization of users who access the cloud. If an unauthorized user is found trying to access the data of an authorized user, the third-party auditor notifies the authorized user that someone is trying to access its private data. The concept of cloud computing represents a shift in thought, in that end users need not know the details of a specific technology; the service is fully managed by the provider. This on-demand service can be provided while cloud service providers make a substantial effort to secure their systems, in order to minimize the threat of insider attacks and reinforce the confidence of customers. In this scenario, however, if the third-party auditor itself gets hacked, the authorized user will not receive any notification of unauthorized access to its data. Therefore, in the proposed method, the service eliminates the third-party auditor.

DESIGN PROCESS
The features included in our design are as follows:

1. Ring Signatures:
The concept of ring signatures was first proposed by Rivest et al. in 2001. With ring signatures, a verifier is convinced that a signature is computed using one of the group members' private keys, but the verifier is not able to determine which one. This property can be used to preserve the identity of the signer from a verifier. The ring signature scheme introduced by Boneh et al. (referred to as BGLS in this paper) is constructed on bilinear maps. We extend this ring signature scheme to construct our public auditing mechanism.

2. Integrity Threats:
Two kinds of threats related to the integrity of shared data are possible. First, an adversary may try to corrupt the integrity of shared data and prevent users from using the data correctly. Second, the cloud service provider may inadvertently corrupt (or even remove) data in its storage due to hardware failures and human errors. Making matters worse, in order to avoid jeopardizing its reputation, the cloud service provider may be reluctant to inform users about such corruption of data.

3. Privacy Threats:
The identity of the signer on each block in shared data is private and confidential to the group. During the process of auditing, a semi-trusted TPA, who is only responsible for auditing the integrity of shared data, may try to reveal the identity of the signer on each block in shared data based on verification information. Once the TPA reveals the identity of the signer on each block, it can easily distinguish a high-value target (a particular user in the group or a special block in shared data).

OBSERVATIONS
The observations of our proposed system are summarized below:
1. Encrypted data and keys are stored separately on different storage media.
2. Before decrypting the data, the user has to enter an OTP sent to his or her e-mail; the combination of OTP, key and encrypted data is used to recover the original data.
3. For accessing the data, the user is restricted to read-only mode; for insert, modify and delete operations a notification is sent to the admin.
4. After encryption or decryption the original data is deleted.
5. To guard against account and service hijacking, we eliminate the TPA; its work is done by the admin and our proposed system.

ACKNOWLEDGMENT
We would like to sincerely thank Prof. A. C. Lomte, our mentor (Professor, Department of Computer Engineering, BSIOTR, Wagholi, Pune), for her support and encouragement.

CONCLUSION
Security in the cloud is a major concern; we have therefore proposed a novel system that can process requests in a grouped or batch manner, which enhances the performance and efficiency of data transfer. The algorithm clearly shows improvements over its predecessor in aspects such as security, data transfer and scalability.

REFERENCES
[1] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. H. Katz, A. Konwinski, G. Lee, D. A. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, "A View of Cloud Computing," Communications of the ACM, vol. 53, no. 4, pp. 50-58, April 2010.
[2] G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson, and D. Song, "Provable Data Possession at Untrusted Stores," in Proc. ACM Conference on Computer and Communications Security (CCS), 2007, pp. 598-610.
[3] C. Wang, Q. Wang, K. Ren, and W. Lou, "Privacy-Preserving Public Auditing for Data Storage Security in Cloud Computing," in Proc. IEEE International Conference on Computer Communications (INFOCOM), 2010, pp. 525-533.
[4] R. L. Rivest, A. Shamir, and Y. Tauman, "How to Leak a Secret," in Proc. International Conference on the Theory and Application of Cryptology and Information Security (ASIACRYPT). Springer-Verlag, 2001, pp. 552-565.
[5] D. Boneh, C. Gentry, B. Lynn, and H. Shacham, "Aggregate and Verifiably Encrypted Signatures from Bilinear Maps," in Proc. International Conference on the Theory and Applications of Cryptographic Techniques (EUROCRYPT). Springer-Verlag, 2003, pp. 416-432.
[6] H. Shacham and B. Waters, "Compact Proofs of Retrievability," in Proc. International Conference on the Theory and Application of Cryptology and Information Security (ASIACRYPT). Springer-Verlag, 2008, pp. 90-107.
[7] M. Blaze, G. Bleumer, and M. Strauss, "Divertible Protocols and Atomic Proxy Cryptography," in Proc. EUROCRYPT '98. Springer-Verlag, 1998, pp. 127-144.
[8] Y. Zhu, H. Wang, Z. Hu, G.-J. Ahn, H. Hu, and S. S. Yau, "Dynamic Audit Services for Integrity Verification of Outsourced Storage in Clouds," in Proc. ACM Symposium on Applied Computing (SAC), 2011, pp. 1550-1557.


Video Shot Detection Techniques: A Brief Overview

Mohini Deokar1, Ruhi Kabra2

1 Student, Department of Computer Engineering, University of Pune, GHRCOEM, Ahmednagar, Maharashtra, India, deokar.mohini@gmail.com
2 Asst. Professor, Department of Computer Engineering, University of Pune, GHRCEM, Pune, Maharashtra, India, ruhi.kabra@raisoni.net

Abstract: The first step in video processing is temporal segmentation, i.e., shot boundary detection. Camera shot transitions can be either abrupt (e.g., cuts) or gradual (e.g., fades, dissolves, wipes). Video segmentation is essential for video structure analysis and content-based video management. One of the most challenging domains for shot boundary detection is sports video. Here we present a classification of shot boundary detection algorithms, including those that deal with gradual shot transitions. Current research topics on video include video abstraction or summarization, video classification, video annotation and content-based video retrieval.

Keywords: Video segmentation, cut detection, gradual transition detection, shot, frames, threshold.

I. INTRODUCTION
In the multimedia environment the use of digital data is increasing rapidly, so tools to handle large volumes of video data are required. Temporal video segmentation is the first step towards automatic annotation of digital video sequences. Its goal is to partition a video into manageable segments. A shot is defined as an unbroken sequence of frames taken from one camera. There are two basic types of shot transitions: abrupt and gradual. Gradual transitions include fades, wipes and dissolves. The simplest method for cut detection is to calculate the absolute sum of pixel differences and compare it against a threshold [2]. The problem with this method is its sensitivity to camera and object movements. A better method is to compare block regions instead of individual pixels. Sensitivity to camera and object movements can be further reduced by comparing histograms of successive images. The idea behind histogram-based approaches is that two frames with the same background and the same object (although moving) will have little difference in their histograms. The techniques mentioned so far are single-threshold approaches for cut detection and are not suitable for detecting gradual transitions. A simple and effective two-threshold approach for gradual transition detection is the twin comparison method [3], sketched below. These approaches for video segmentation process uncompressed video, but it is desirable to use methods that can operate directly on the encoded stream.
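The following Python fragment is a simplified sketch of the twin-comparison idea (ours, not the exact algorithm of [3]): a high threshold declares cuts, while differences above a low threshold are accumulated to spot gradual transitions. Frames are assumed to be grayscale numpy arrays, and the thresholds are illustrative parameters.

    import numpy as np

    def twin_comparison(frames, t_low, t_high):
        # frames: list of 2-D grayscale arrays; returns detected boundaries.
        boundaries, acc, start = [], 0.0, None
        for i in range(1, len(frames)):
            d = np.abs(frames[i].astype(float) - frames[i - 1].astype(float)).mean()
            if d >= t_high:                      # sharp change: declare an abrupt cut
                boundaries.append(("cut", i))
                acc, start = 0.0, None
            elif d >= t_low:                     # moderate change: possible gradual transition
                if start is None:
                    start = i
                acc += d
                if acc >= t_high:                # accumulated change looks like a fade/dissolve
                    boundaries.append(("gradual", start, i))
                    acc, start = 0.0, None
            else:                                # small change: abandon any pending candidate
                acc, start = 0.0, None
        return boundaries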
This paper is organized as follows. Section II gives a brief literature survey on shot boundary detection. Section III describes the major shot boundary detection methods. Section IV describes shot change detection techniques. Section V presents classification techniques.

II. LITERATURE SURVEY


Ravi Mishra et al. [2014] presented a comparative study of the block matching algorithm and the dual-tree complex wavelet transform for shot detection in videos. The paper compares the two detection methods in terms of parameters such as false rate, hit rate and miss rate, tested on a set of different video sequences.
Sowmya R. et al. [2013] proposed an analysis and verification of video summarization using shot boundary detection. The analysis is based on block-based histogram difference and block-based Euclidean distance difference for varying block sizes. Zhe Ming Lu et al. [2013] present a fast video shot boundary detection method based on segment selection and singular value decomposition (SVD) with pattern matching. Ravi Mishra et al. [2013] proposed video shot boundary detection using the dual-tree complex wavelet transform, an approach that processes encoded video sequences prior to complete decoding. The algorithm first extracts structure features from each video frame using the dual-tree complex wavelet transform, and then the spatial-domain structure similarity is computed between adjacent frames. Sandip T. et al. [2012] proposed key-frame-based video summarization using an automatic threshold and edge matching rate: first the histogram difference of every frame is calculated, and then the edges of the candidate key frames are extracted by the Prewitt operator. Goran J. Zajić et al. [2011] proposed video shot boundary detection based on multifractal analysis. Low-level features (color and texture features) are extracted from each frame
in the video sequence, then concatenated into feature vectors (FVs) and stored in a feature matrix. Donate et al. [2010] presented shot boundary detection in videos using robust three-dimensional tracking; the proposal is to extract salient features from a video sequence and track them over time in order to estimate shot boundaries within the video. Lihong Xun et al. [2010] proposed a novel shot boundary detection algorithm based on K-means clustering: color feature extraction is done first, the dissimilarity of video frames is defined, and the video frames are then divided into several sub-clusters by K-means clustering. Jinchang Ren et al. [2009] proposed shot boundary detection in MPEG videos using local and global indicators, operating directly in the compressed domain. Several local indicators are extracted from MPEG macroblocks, and AdaBoost is employed for feature selection and fusion. The selected features are then used to classify candidate cuts into five sub-spaces via pre-filtering and rule-based decision making; the global indicators of frame similarity between boundary frames of cut candidates are then examined using phase correlation of DC images.
III. SHOT BOUNDARY DETECTION METHODS
A lot of research is going on in automatic content-based video retrieval. Earlier techniques focused on cut detection, while more recent work has focused on gradual transition detection. The major methods for shot boundary detection are pixel differences, statistical differences, histogram comparisons, edge differences and motion vectors.
A. Pixel Comparison
This is the most primitive way to find a change of scene. Two frames are taken as input and the pixel intensity differences are calculated; if the overall difference is greater than a certain threshold value, a scene change is declared (a minimal sketch is given below). Some modified pixel-comparison algorithms exist: [1], [2] used a 3 x 3 averaging filter and adjust the threshold value manually. These methods are comparatively slow, and setting the threshold manually is not ideal. Some authors [5] divided a frame into 12 regions and compared each region with the same region in the next frame, but this method is not suitable for scenes with camera and object motion.
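A minimal pixel-comparison sketch (ours, with illustrative thresholds) is shown below: a cut is declared when the fraction of pixels whose intensity changed by more than a per-pixel tolerance exceeds a global threshold. Frames are assumed to be grayscale numpy arrays.

    import numpy as np

    def pixel_change_fraction(prev, curr, pixel_t=20):
        # Fraction of pixels whose grayscale intensity changed by more than pixel_t.
        changed = np.abs(curr.astype(int) - prev.astype(int)) > pixel_t
        return changed.mean()

    def detect_cuts(frames, frame_t=0.5):
        # Declare a cut when more than frame_t of the pixels changed between consecutive frames.
        return [i for i in range(1, len(frames))
                if pixel_change_fraction(frames[i - 1], frames[i]) > frame_t]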
B. Transform-Based Difference
This computes differences in the compressed domain using transform coefficients; Discrete Cosine Transform (DCT) coefficients are one example.
C. Histogram-Based Difference
This method computes a color histogram of each frame and compares the histograms of successive frames to detect shot boundaries (a minimal sketch follows).
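The sketch below (ours, with an illustrative threshold) compares normalized grayscale histograms of consecutive frames; because histograms ignore pixel positions, this metric is less sensitive to camera and object motion than direct pixel differences.

    import numpy as np

    def hist_diff(prev, curr, bins=64):
        # Sum of absolute differences between normalized grayscale histograms of two frames.
        h1, _ = np.histogram(prev, bins=bins, range=(0, 256), density=True)
        h2, _ = np.histogram(curr, bins=bins, range=(0, 256), density=True)
        return np.abs(h1 - h2).sum()

    def detect_cuts_by_histogram(frames, threshold=0.5):
        return [i for i in range(1, len(frames))
                if hist_diff(frames[i - 1], frames[i]) > threshold]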
D. Edge Based Difference
In this method the edges of successive aligned frames are detected first and then the edge pixels are paired with nearby edge pixels in
the other image to find out if any new edges have entered the image or if some old edges have disappeared.
E. Statistical Difference
This method is an extension of the pixel comparison method; [6] and [8] used it. The frame is divided into blocks, and for each block the mean and standard deviation are calculated. This method is not noise tolerant, and it is a bit slow because it computes statistics.
F. Motion Vector
Motion estimation computes vector fields that describe how the image content changes over time; estimating them from two frames suffers from the well-known aperture problem. The input frame is divided into blocks and motion vectors are extracted from those blocks; [9] used this method. A motion vector is the two-dimensional displacement (u, v) of a pixel (or block) from one frame to the next.

IV. SHOT CHANGE DETECTION METHOD



A. Thresholding
Thresholding means comparing the computed discontinuity value with a constant threshold [4, 10, and 2]. This method only performs
well if video content exhibits stationarity with time, and only if the threshold is adjusted by hand.
B. Adaptive Thresholding
The obvious solution to the problems of simple thresholding is to vary the threshold depending on the average discontinuity within a temporal domain, as in [1, 10] (a small sketch is given below).
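The following sketch (ours, not taken from [1] or [10]) illustrates one common adaptive-thresholding heuristic: a frame-to-frame discontinuity value is declared a boundary when it exceeds the mean plus a multiple of the standard deviation of the surrounding temporal window. The `window` and `alpha` parameters are illustrative.

    import numpy as np

    def adaptive_cuts(diffs, window=10, alpha=3.0):
        # diffs: per-frame discontinuity values; returns indices declared as boundaries.
        diffs = np.asarray(diffs, dtype=float)
        cuts = []
        for i in range(len(diffs)):
            lo, hi = max(0, i - window), min(len(diffs), i + window + 1)
            neighbourhood = np.delete(diffs[lo:hi], i - lo)   # exclude the value being tested
            if neighbourhood.size == 0:
                continue
            if diffs[i] > neighbourhood.mean() + alpha * neighbourhood.std():
                cuts.append(i)
        return cuts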
C. Probabilistic Detection
A rigorous way to detect shot changes is to model the pattern of specific types of shot transitions and perform optimal a posteriori shot
change estimation, presupposing specific probability distributions for shots. This is demonstrated in [3,6].

V. SHOT CLASSIFICATION ALGORITHM


A. Spatial Feature Domain
The size of the region from which individual features are extracted plays a great role in the performance of shot change detection. A
small region tends to reduce detection invariance with respect to motion, while a large region tends to miss
transitions between similar shots.
1) Single frame pixel per feature:
Some algorithms use a single frame pixel per feature. This feature can be color [9], edge strength [4] or other. However, such an approach results in a very large feature vector and is very sensitive to motion.
2) Rectangular block:
The rectangular-block method segments each frame into equal-sized blocks and extracts a set of features per block [13, 3, 6]. This approach is invariant to small camera and object motion. By computing block motion it is possible to enhance motion invariance further, or to use the motion vector itself as a feature.
3) Arbitrarily shaped region:
In this method, feature extraction is applied to arbitrarily shaped and sized regions [12]. It exploits the most homogeneous regions, enabling better detection of discontinuities; object-based feature extraction is also included in this category. The main disadvantages are high computational complexity and instability due to the complexity of the algorithms involved.
4) Whole frame:
The algorithms that extract features from the whole frame at once [2, 9] have the advantage of being very resistant to motion, but have
poor performance at detecting the change between two similar shots.
B. Temporal Domain of Continuity Metric
Another important aspect of shot boundary detection algorithms is the temporal window of frames used to perform shot change detection. This can be one of the following.

1) Two Frames:
Two-frame approaches do not work properly when there is a large variation in activity among different parts of the video, or when certain shots contain events that cause short-lived discontinuities. The simplest way to detect a discontinuity is to look for a high value of the discontinuity metric between two successive frames [10, 3, 8, 7].

2) N-frame Window:
The most common technique for solving the above problems is to detect the discontinuity by using the features of all frames within a temporal window [1, 5], either by computing a dynamic threshold or by computing the discontinuity metric directly on the window.

3) Interval since last shot change:


In this method, one or more statistics are computed from the last detected shot change up to the current frame. The problem with this approach is the variability in shot length [9, 10].

ACKNOWLEDGMENT
With immense pleasure, the author presents this paper on video shot boundary detection techniques. It has been a privilege to complete it under the valuable guidance of Prof. Ruhi Kabra. I am also extremely grateful to all members who supported me in writing this paper.

CONCLUSION
Different techniques for detecting a shot boundary have been discussed, depending upon the video content and the changes within it. As the key frames need to be processed for annotation purposes, important information must not be missed.

REFERENCES:
[1] Ravi Mishra, S. K. Singhai, M. Sharma, "Comparative study of block matching algorithm and dual tree complex wavelet transform for shot detection in videos," Electronic Systems, Signal Processing and Computing Technologies (ICESC), 2014 International Conference, Jan. 2014.
[2] Zhe Ming Lu and Yong Shi, "Fast Video Shot Boundary Detection Based on SVD and Pattern Matching," IEEE Transactions on Image Processing, vol. 22, no. 12, Dec. 2013.
[3] Sandip T. Dhagdi, P. R. Deshmukh, "Keyframe Based Video Summarization Using Automatic Threshold & Edge Matching Rate," International Journal of Scientific and Research Publications, vol. 2, issue 7, July 2012.
[4] Goran J. Zajić, Irini S. Reljin, and Branimir D. Reljin, "Video Shot Boundary Detection Based on Multifractal Analysis," Telfor Journal, vol. 3, no. 2, 2011.
[5] Arturo Donate and Xiuwen Liu, "Shot Boundary Detection in Videos Using Robust Three-Dimensional Tracking," IEEE Transactions on Consumer Electronics, 2010.
[6] Wu and Takayuki Yoshigahara, "Enhanced Sports Video Shot Boundary Detection Based on Middle Level Features and a Unified Model," IEEE Transactions on Consumer Electronics, vol. 53, no. 3, August 2007.
[7] Costas Cotsaces, Nikos Nikolaidis, and Ioannis Pitas, "Video Shot Boundary Detection and Condensed Representation: A Review."
[8] Purnima S. Mittalkod, G. N. Srinivasan, "Shot Boundary Detection Algorithms and Techniques: A Review," Research Journal of Computer Systems Engineering, An International Journal, vol. 2, issue 02, June 2011.
[9] Swati D. Bendale, Bijal J. Talati, "Analysis of Popular Video Shot Boundary Detection Techniques in Uncompressed Domain," International Journal of Computer Applications (0975-8887), vol. 60, no. 3, 2012.
[10] S. Y. Lu, Z. Y. Wang, M. Wang, M. Ott, and D. Feng, "Adaptive reference frame selection for near-duplicate video shot detection," in Proc. IEEE 17th Int. Conf. Image Process., Sep. 2010, pp. 2341-2344.


Privacy-Preserving in Outsourced Transaction Databases from Association Rules Mining
Ms. Deokate Pallavi B.1, Prof. M. M. Waghmare2

1 Student of ME (Information Technology), DGOFFE, University of Pune, Pune
2 Asst. Professor, Department of Computer Engineering, DGOIFE, Bhigwan, University of Pune, Pune

Abstract: Data-mining-as-a-service has attracted considerable attention from researchers. An organization (data owner) lacking resources or expertise can outsource its mining needs to a third-party service provider (server). However, both the association rules and the items of the outsourced transaction database are private property of the data owner. To protect privacy, the data owner encrypts its data, sends the data and mining queries to the server, and recovers the true patterns from the encrypted patterns received from the server. The problem of outsourcing a transaction database within a corporate privacy framework is studied in this paper. We describe an attack model based on prior knowledge and a scheme for privacy-preserving outsourced data mining. The scheme ensures that the transformed data resists attacks based on the attacker's prior information. Experimental results on a real transaction database show that the techniques are scalable, efficient and privacy-preserving.
Index Terms: Privacy-preserving outsourcing, association rule mining

Introduction
With the advent of cloud computing and its model for IT services based on the Internet and large data centers, the outsourcing of data and computing services is acquiring a new connotation, which is expected to grow rapidly in the near future. Business intelligence and knowledge discovery services are expected to be among the services amenable to externalization on the cloud, owing to their data-intensive nature as well as the complexity of data mining algorithms. Thus, the paradigm of mining and managing data as a service will presumably grow as the popularity of cloud computing grows [1]. This is the data-mining-as-a-service paradigm, aimed at enabling organizations with limited computational resources and/or data mining expertise to outsource their data mining needs to a third-party service provider [2], [3]. We adopt a conservative frequency-based attack model in which the server knows the exact set of items in the owner's data and, in addition, the exact support of each item in the original data. In this paper, our goal is to devise an encryption scheme that permits formal privacy guarantees to be proved, and to validate this model over large-scale real-life transaction databases (TDB).

Fig.1. Architecture of mining-as-service paradigm


The design behind our model is illustrated in Fig. 1. The client/owner encrypts its data using an encrypt/decrypt (E/D) module; the details of this module are explained in the Encryption/Decryption section. The server conducts the data mining and sends the (encrypted) patterns to the owner. Our encoding scheme has the property that the returned supports are not the true supports. The E/D module recovers the true identity of the returned patterns as well as their true supports.
Contributions:
First, an attack model is defined for the attacker, making precise the background knowledge the attacker may possess. Our notion of privacy requires that, for every cipher item, there are at least k-1 distinct cipher items that are indistinguishable from it with regard to their supports.
Second, we have developed an encryption scheme, called RobFrugal, which the E/D module uses to transform the client data before it is shipped to the server.
Third, to permit the E/D module to recover the true patterns and their correct supports, we propose that it create and keep a compact structure, referred to as the synopsis. We also provide the E/D module with an efficient strategy for incrementally maintaining the synopsis against updates in the form of appends.
Related work is described in the next section. The pattern mining task is reviewed next; our privacy model is then presented, and the subsequent section develops the encryption/decryption scheme we use. Finally, we conclude the paper and discuss directions for future research in the last section.

Related Work
The specific problem attacked in our paper is the outsourcing of pattern mining within a corporate privacy framework: not only the underlying data but also the mined results are not intended for sharing. Even if the server possesses background knowledge and conducts attacks on that basis, it should be unable to guess the correct candidate item or itemset corresponding to a given cipher item or itemset.
Another issue is secure multiparty mining over distributed datasets. The data on which mining is to be performed is partitioned and distributed among several parties. This body of work was pioneered by [7] and has been followed up by many papers since [8]. The partitioned data cannot be shared and must remain private, but the results of mining on the union of the data are shared among the participants by means of secure multiparty protocols [9]-[11]. These works do not consider third parties. This approach partially implements corporate privacy, but it is too weak for our outsourcing problem, because the resulting patterns are disclosed to multiple parties.
The works most related to ours are [2] and [12]. A recent paper [5] has formally proved that the encryption system in [2] can be broken without using context-specific information. The success of the attacks in [5] mainly depends on the existence of unique, common, and fake items, defined in [2]; our scheme does not create any such items. Tai et al. [12] assumed that the attacker knows the exact frequency of single items, as we do. Compared with these two works, our scheme can always achieve a provable privacy guarantee with respect to the attacker's background knowledge.
(a) TDB: {Apple}, {Orange, Apple}, {Apple, Orange}, {Banana, Orange}, {Apple, Milk}, {Apple, Chocolate}, {Banana}

(b) Item support table: Apple 5, Orange 3, Banana 2, Milk 1, Chocolate 1

Fig. 2. Example of TDB and its support table. (a) TDB. (b) Item support table.

Pattern Mining Task


We let I = {i1, ..., in} be the set of items and D = {t1, ..., tm} a TDB of transactions, each of which is a set of items. We denote the support of an itemset S ⊆ I by suppD(S) and its frequency by freqD(S). Recall that freqD(S) = suppD(S)/|D|. For every item i, suppD(i) and freqD(i) denote, respectively, the individual support and frequency of i. The function suppD(·), projected over items, is also called the item support table of D, described in tabular form [Fig. 2(b)]. The well-known frequent pattern mining problem [13] is: given a TDB D and a support threshold σ, find all itemsets whose support in D is at least σ.
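As a small illustration (ours, not the authors' code), the following sketch computes the item support table and the frequent itemsets of the example TDB of Fig. 2 by brute-force enumeration; the threshold value sigma is illustrative.

    from itertools import combinations

    # The example TDB of Fig. 2(a).
    D = [{"Apple"}, {"Orange", "Apple"}, {"Apple", "Orange"}, {"Banana", "Orange"},
         {"Apple", "Milk"}, {"Apple", "Chocolate"}, {"Banana"}]

    def supp(S, D):
        # Support of itemset S: number of transactions containing every item of S.
        return sum(1 for t in D if S <= t)

    items = sorted(set().union(*D))
    print({i: supp({i}, D) for i in items})      # the item support table of Fig. 2(b)

    sigma = 2                                     # support threshold
    frequent = [set(S) for r in range(1, len(items) + 1)
                for S in combinations(items, r) if supp(set(S), D) >= sigma]
    print(frequent)                               # all itemsets with support >= sigma

Brute force is only feasible here because the example has five items; real miners use Apriori-style pruning.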

Privacy Model
We let D denote the original TDB that the owner has. To protect the identity of individual items, the owner applies an encryption function to D and transforms it into D*, the encrypted database. We refer to items in D as plain items and to items in D* as cipher items.
A. Adversary Knowledge
The server, or an intruder who gains access to it, may possess some knowledge using which it can conduct attacks on the encrypted database D*. We generically refer to any such agent as the attacker. We adopt a conservative model and assume that the attacker knows exactly the set of (plain) items I in the original TDB D and their true supports in D, i.e., suppD(i), for every i ∈ I. The attacker may have access to similar data from a competing company, may read published reports, etc.
B. Attack Model
The data owner (i.e., the company) considers the true identity of: 1) every cipher item; 2) every cipher transaction; and 3) every cipher frequent pattern as intellectual property that should be protected. We consider the following attack model.
1) Item-based attack: For each cipher item e ∈ E, the attacker constructs a set of candidate plain items Cand(e) ⊆ I. The probability that the cipher item e can be broken is prob(e) = 1/|Cand(e)|.
2) Set-based attack: Given a cipher itemset E, the attacker constructs a set of candidate plain itemsets Cand(E), where for every X ∈ Cand(E), X ⊆ I and |X| = |E|. The probability that the cipher itemset E can be broken is prob(E) = 1/|Cand(E)|.

Encryption/Decryption Scheme
A. Encryption
In this section, we introduce the encryption scheme, called RobFrugal, which transforms a TDB D into its encrypted version D*. Our scheme is parametric with respect to k > 0 and consists of three main steps: 1) using a 1-1 substitution cipher for each plain item; 2) using a specific item k-grouping method; and 3) adding new fake transactions to achieve k-privacy. The generated fake transactions are added to D to form D*, which is transmitted to the server. A record of the fake transactions, i.e., DF = D* \ D, is stored by the E/D module in the form of a compact synopsis, as discussed in Sections C and D.
B. Decryption
When the client requests the execution of a pattern mining query from the server, specifying a minimum support threshold σ, the server returns the frequent patterns computed from D*. Clearly, for every itemset S and its corresponding cipher itemset E, we have suppD(S) ≤ suppD*(E). For every cipher pattern E returned by the server together with suppD*(E), the E/D module recovers the corresponding plain pattern S. It must reconstruct the exact support of S in D and decide on this basis whether S is a frequent pattern. To achieve this goal, the E/D module adjusts the support of E by removing the effect of the fake transactions: suppD(S) = suppD*(E) - suppD*\D(E). This follows from the fact that the support of an itemset is additive over a disjoint union of transaction sets. Finally, the pattern S with adjusted support is kept in the output if suppD(S) ≥ σ. The calculation of suppD*\D(E) is performed by the E/D module using the synopsis of the fake transactions in D* \ D.
C. Grouping Items for k-Privacy
Given the item support table, several strategies can be adopted to group the items into groups of size k. We start from a simple grouping method called Frugal. The item support table is sorted in descending order of support, and we refer to the cipher items in this order as e1, e2, etc. Assuming e1, e2, ..., en is the list of cipher items in descending order of support (with respect to D), the groups created by Frugal are {e1, ..., ek}, {ek+1, ..., e2k}, and so on. The last group, if smaller than k, is merged with the previous group. We denote the grouping obtained using the above definition as Gfrug. For instance, consider the example TDB and its associated (cipher) item support shown in Fig. 2. For k = 2, Gfrug has two groups, corresponding to the partitioning shown in Table I(a). Thus, in D* the support of e4 is brought up to that of e2, and the supports of e1 and e3 are brought up to that of e5.
For example, given the item support table in Fig. 2, the grouping illustrated in Table I(b), obtained by exchanging e4 and e5 between the two groups of Frugal, is now robust: neither of the two groups, considered as an itemset, is supported by any transaction in D. A small sketch of this grouping and of the resulting noise table is given below.
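As an illustration (ours, not the authors' code), the following sketch builds the Frugal groups for the cipher items of Fig. 2 and computes the per-item noise, i.e., how much each support must be raised to match its group's maximum; applying the noise computation to the swapped grouping of Table I(b) reproduces the noise column of Table II(a).

    def frugal_groups(support, k):
        # Group cipher items k at a time in descending order of support (the Frugal method).
        order = sorted(support, key=support.get, reverse=True)
        groups = [order[i:i + k] for i in range(0, len(order), k)]
        if len(groups) > 1 and len(groups[-1]) < k:        # merge an undersized last group
            groups[-2] += groups.pop()
        return groups

    def noise_table(groups, support):
        # Noise = amount each item's support must be raised to reach its group's maximum.
        return {e: max(support[x] for x in g) - support[e] for g in groups for e in g}

    # Cipher item supports from Fig. 2 (a 1-1 substitution cipher preserves supports).
    support = {"e2": 5, "e4": 3, "e5": 2, "e1": 1, "e3": 1}
    print(frugal_groups(support, k=2))           # [['e2', 'e4'], ['e5', 'e1', 'e3']], as in Table I(a)
    robust = [["e2", "e5"], ["e4", "e1", "e3"]]   # Table I(b): e4 and e5 swapped to avoid real itemsets
    print(noise_table(robust, support))           # {'e2': 0, 'e5': 3, 'e4': 0, 'e1': 2, 'e3': 2}, Table II(a)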
TABLE I
GROUPING WITH k = 2

(a) Frugal                 (b) RobFrugal
Item    Support            Item    Support
e2      5                  e2      5
e4      3                  e5      2
e5      2                  e4      3
e1      1                  e1      1
e3      1                  e3      1

TABLE II
NOISE TABLE AND ITS HASH TABLE

(a) Noise table                    (b) Hash tables for k = 2
Item    Support    Noise           Table 1: <e5,1,2>, <e3,2,0>
e2      5          0               Table 2: <e1,2,0>
e5      2          3
e4      3          0
e1      1          2
e3      1          2

D. Constructing Fake Transactions


Given a noise table specifying the noise N(e) required for each cipher item e, we generate the fake transactions as follows. First, we drop the rows with zero noise, corresponding to the most frequent item of each group or to other items with support equal to the maximum support of a group. Second, we sort the remaining rows in descending order of noise. Continuing the example, consider the cipher items with nonzero noise in Table II(a). The following fake transactions are generated: two instances of the transaction {e5, e1, e3} and one instance of the transaction {e5}.
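Continuing the illustration above (ours, not the authors' code), the sketch below turns a noise table into fake transactions by repeatedly emitting the set of items with remaining noise; on the example it reproduces the two fake transactions just described.

    def fake_transactions(noise):
        # Generate fake transactions adding exactly noise[e] occurrences of each cipher item.
        residual = {e: n for e, n in noise.items() if n > 0}    # drop rows with zero noise
        fakes = []
        while residual:
            batch = set(residual)                               # one fake-transaction pattern per round
            count = min(residual.values())                      # how many identical copies to emit
            fakes.append((batch, count))
            residual = {e: n - count for e, n in residual.items() if n - count > 0}
        return fakes

    print(fake_transactions({"e2": 0, "e5": 3, "e4": 0, "e1": 2, "e3": 2}))
    # -> [({'e5', 'e1', 'e3'}, 2), ({'e5'}, 1)]: two copies of {e5, e1, e3} and one copy of {e5}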

ACKNOWLEDGMENT
I wish to express my sincere thanks to our Principal, HOD, Professors and staff members of the Computer Engineering Department at Dattakala Faculty of Engineering, Swami Chincholi, Bhigwan. Last but not least, I would like to thank all my friends and family members who have always been there to support me and helped me complete this research work.

CONCLUSION
In this paper, we studied the problem of (corporate) privacy-preserving mining of frequent patterns on an encrypted
outsourced transaction database. We have considered that the attacker knows the domain of items and their exact frequency and can
use this knowledge to identify cipher items and cipher itemsets. An encryption scheme, called RobFrugal, is proposed that is based on
1-1 substitution ciphers for items and adding fake transactions. It makes use of a compact synopsis of the fake transactions from
which the true support of mined patterns from the server can be efficiently recovered. We also proposed a strategy for incremental
maintenance of the synopsis against updates. This method is robust against an adversarial attack based on the original items and their
exact support.
Currently, our privacy analysis is based on the assumption of equal likelihood of candidates. It would be interesting to
enhance the framework and the analysis by appealing to cryptographic notions such as perfect secrecy [14]. Moreover, our work
considers the ciphertext-only attack model, in which the attacker has access only to the encrypted items. It could be interesting to
consider other attack models where the attacker knows some pairs of items and their cipher values. We will investigate encryption
schemes that can resist such privacy vulnerabilities. We are also interested in exploring how to improve the RobFrugal algorithm to
minimize the number of spurious patterns.

REFERENCES:
[1] F. Giannotti, L. V. S. Lakshmanan, A. Monreale, D. Pedreschi, and H. Wang, "Privacy-Preserving Mining of Association Rules From Outsourced Transaction Databases," IEEE Systems Journal, vol. 7, no. 3, September 2013.
[2] W. K. Wong, D. W. Cheung, E. Hung, B. Kao, and N. Mamoulis, "Security in outsourcing of association rule mining," in Proc. Int. Conf. Very Large Data Bases, 2007, pp. 111-122.
[3] L. Qiu, Y. Li, and X. Wu, "Protecting business intelligence and customer privacy while outsourcing data mining tasks," Knowledge Inform. Syst., vol. 17, no. 1, pp. 99-120, 2008.
[4] C. Clifton, M. Kantarcioglu, and J. Vaidya, "Defining privacy for data mining," in Proc. Nat. Sci. Found. Workshop Next Generation Data Mining, 2002, pp. 126-133.
[5] I. Molloy, N. Li, and T. Li, "On the (in)security and (im)practicality of outsourcing precise association rule mining," in Proc. IEEE Int. Conf. Data Mining, Dec. 2009, pp. 872-877.
[6] F. Giannotti, L. V. Lakshmanan, A. Monreale, D. Pedreschi, and H. Wang, "Privacy-preserving data mining from outsourced databases," in Proc. SPCC 2010 in Conjunction with CPDP, 2010, pp. 411-426.
[7] R. Agrawal and R. Srikant, "Privacy-preserving data mining," in Proc. ACM SIGMOD Int. Conf. Manage. Data, 2000, pp. 439-450.
[8] S. J. Rizvi and J. R. Haritsa, "Maintaining data privacy in association rule mining," in Proc. Int. Conf. Very Large Data Bases, 2002, pp. 682-693.
[9] M. Kantarcioglu and C. Clifton, "Privacy-preserving distributed mining of association rules on horizontally partitioned data," IEEE Trans. Knowledge Data Eng., vol. 16, no. 9, pp. 1026-1037, Sep. 2004.
[10] B. Gilburd, A. Schuster, and R. Wolff, "k-ttp: A new privacy model for large-scale distributed environments," in Proc. Int. Conf. Very Large Data Bases, 2005, pp. 563-568.
[11] P. K. Prasad and C. P. Rangan, "Privacy preserving BIRCH algorithm for clustering over arbitrarily partitioned databases," in Proc. Adv. Data Mining Appl., 2007, pp. 146-157.
[12] C. Tai, P. S. Yu, and M. Chen, "K-support anonymity based on pseudo taxonomy for outsourcing of frequent itemset mining," in Proc. Int. Knowledge Discovery Data Mining, 2010, pp. 473-482.
[13] R. Agrawal and R. Srikant, "Fast algorithms for mining association rules," in Proc. Int. Conf. Very Large Data Bases, 1994, pp. 487-499.
[14] C. E. Shannon, "Communication theory of secrecy systems," Bell Syst. Tech. J., vol. 28, pp. 656-715, 1949.


Study of MashQL Querying Language for the Data Web


Ms. Taware Priyanka1, Ms. M. M. Waghmare2

1 Student of ME (Information Technology), DGOIFE, University of Pune, Pune
2 Asst. Professor, Department of Computer Engineering, DGOIFE, Bhigwan, University of Pune, Pune

Abstract: This paper studies a query formulation language for data on the web. The language helps users to easily query data on the web. The main advantage of MashQL is that it allows people with limited IT knowledge to explore and query one or more data sources. It does not require knowledge of the schema, structure or any technical details of these sources; more importantly, it is robust for untrained users, and a data source does not need to have a schema, offline or inline. To illustrate query formulation we choose the Data Web scenario, querying RDF; since RDF is the most primitive data model, MashQL can also be used for querying XML and relational databases. Two implementations of MashQL are presented: a Firefox add-on and an online mashup editor. The paper illustrates how MashQL can be used to query and mash up the Data Web simply.
Keywords: Query formulation, RDF, SPARQL, indexing methods, Data Web

INTRODUCTION:
The main challenge is to allow users to search and consume structured data, a topic that has received attention from both the Web 2.0 and the Data Web communities. The increasing growth of structured data on the web has generated more demand for making this content reusable and consumable. Many companies, such as Yahoo Local, Freebase, Flickr, Google Base, eBay, Amazon and LinkedIn, are competing to gather structured data and make it public. These companies have made their content publicly accessible using APIs and have also started to adopt web metadata standards. For example, Yahoo started to support web sites embedding RDF and microformats, and MySpace also started adopting RDF for profile and data portability. Google, Upcoming, SlideShare, Digg and the White House have started to publish their content in RDFa, the W3C standard for embedding RDF inside web pages, so that content can be better understood, searched and filtered. The trend of structured data on the web (the Data Web) is now shifting towards new paradigms of structured-data retrieval. Traditional search engines cannot serve this data well, since the results of a keyword-based query will not be accurate or clean; the query itself is ambiguous. To serve the structured data on the web to its full potential, it must be queryable by ordinary people, and formulating queries should be fast and should not require programming skills.

1 Challenges:
The first challenge in formulating a query is that it requires knowledge of the structure of the data and its attribute labels, i.e., the schema. End users should not be expected to investigate the schema or structure of the data when searching or filtering it. In many cases the data schema is also dynamic, because data items with different schemas are continually added or dropped. Allowing end users to query structured data on the web flexibly and effectively, especially when a query spans multiple sources, is therefore a challenge.

2 Contribution:
This paper views the web as a database, where each data source is represented as a table. It focuses mainly on data-web scenarios and on querying RDF [1]. RDF is regarded as the primitive data model, so other models such as XML and relational databases can easily be mapped into it.

Query language:
MashQL is designed to be an expressive and intuitive query language.

Query formulation algorithm:

The MashQL editor uses a query formulation algorithm that helps users query a data graph without any knowledge of the structure or schema of the underlying database.

Graph Signature (GS) index:

The query formulation algorithm has to query the data in real time, which degrades performance because the queries involve many self-joins. Hence a new way of indexing RDF, called the Graph Signature, is introduced; it is smaller than the original graph and yields fast query response times.

RELATED WORK:
Query formulation is the art of allowing people to easily query a data source; such queries are then translated into formal languages.

Query-by-form:
This is the simplest but least flexible querying method. A form needs to be developed for each query, and when the query changes, its form must change as well. Some methods have been proposed to semi-automate form generation [16] and modification [17], but they may fail.

Query-by-example:
In this method, queries are formulated by filling in tables [2]; hence the data needs to be schematized.

Conceptual Queries:
Since databases are modeled using EER, ORM or UML diagrams, formulating these queries requires good knowledge of the conceptual schema.

Natural language queries:

These allow users to write queries in natural language, which is then translated into a formal language (e.g., SQL [3], XQuery [4]). The problem is that natural language is inherently ambiguous.

Visual queries:
Many web approaches propose to formulate a SPARQL query by visualizing its triple patterns, which requires fewer technical skills. Tools that help to formulate XQueries graphically include Altova XMLSpy [5], BEA XQuery Builder [6], QURSED [7], Stylus Studio [8] and XML-GL [9].

Visual Scripting and Mashup Editors:

People write query scripts inside a module and then visualize these modules together with their inputs and outputs. Examples of mashup editors are Popfly [10], Yahoo Pipes [11], and SMash [12]. Users still have to use the formal language of these editors when they need to write a query. In the semantic web community, two approaches (DERI Pipes [13], SparqlMotion [14]) are inspired by this visual scripting.

Interactive Queries:
Lorel [15] was developed for querying schema-free XML and does not require knowledge of a schema; it is the approach closest to MashQL. There are, however, differences between the Lorel and MashQL approaches. First, schema-free queries are only partially handled by Lorel. Lorel uses a summary of the data, called a DataGuide, whereas MashQL uses the Graph Signature. A DataGuide groups unrelated data items, which can lead to incorrect query formulation; to solve this, strong DataGuides were proposed, but their size can grow so much that a DataGuide may become larger than the original graph. Second, Lorel does not support writing multiple queries over a database. Third, it is expressed in a basic way, whereas path conjunction, disjunction and negation, union, reverse properties, and variables are all supported by the MashQL approach.
With an interactive searching box [18], a new user directly writes a keyword. This approach is simple and does not require any knowledge of the structure of the data.

DESCRIPTION OF MASHQL:
1 Data Model:

The editor assumes that the queried dataset is structured as a directed labeled graph, similar to the RDF syntax. A dataset D is a set of triples <subject S, predicate P, object O>, where a subject and a predicate can only be unique identifiers I, while an object can be an identifier I or a literal L. XML and relational data can be mapped into this primitive data model. Fig. 1 shows a simple example of viewing a database as a graph.
Fig. 1 (tables not reproduced here in full) shows a small relational database, with tables Book (ID, Title, Year), Author (Book, Person), Person (ID, Name, Affiliation), Affiliation (ID, Country) and Country (ID, Name), flattened into RDF triples of the form <Subject, Predicate, Object> using predicates such as :Type, :Title, :Year, :Author, :Name, :Affiliation and :Country.
Fig1. Mapping relational database to RDF
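The mapping idea of Fig. 1 can be illustrated with a short sketch. The code below is only a minimal, hypothetical example (the table contents and predicate names are illustrative assumptions, not the figure reproduced verbatim): each relational row becomes a set of <subject, predicate, object> triples.

```python
# Minimal sketch: turning relational rows into RDF-style triples (S, P, O).
# Table contents and predicate names are illustrative assumptions.

books = [
    {"ID": "B1", "Title": "ABCD", "Year": 2011},
    {"ID": "B2", "Title": "EFGH", "Year": 2010},
]
authorship = [("B1", "A1"), ("B1", "A2"), ("B2", "A2")]

def row_to_triples(row, rdf_type, id_column="ID"):
    """Emit one :Type triple plus one triple per remaining column."""
    subject = ":" + row[id_column]
    triples = [(subject, ":Type", ":" + rdf_type)]
    for column, value in row.items():
        if column != id_column:
            triples.append((subject, ":" + column, value))
    return triples

triples = []
for book in books:
    triples.extend(row_to_triples(book, "Book"))
for book_id, person_id in authorship:
    triples.append((":" + book_id, ":Author", ":" + person_id))

for s, p, o in triples:
    print(s, p, o)
```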



2 Intuition of MashQL:
A MashQL query is seen as a tree. The root of the tree is called the query subject Q(S), which is the subject matter being inquired about. A subject can be a particular instance I or a user variable V. Each branch is a restriction R on a property of the subject. Branches can be expanded to allow subtrees, called query paths; in this case, the value of a property is itself the subject of a subquery. This allows one to navigate through the underlying dataset and build complex queries. As will be explained, every level by which a query is expanded costs a join when the query is executed; hence, the deeper the query path, the higher the execution complexity.
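The tree shape described above can be made concrete with a small sketch. The representation below is a hypothetical data structure, not the MashQL editor's actual internals: the query subject is the root, each restriction is a branch, and a branch whose value is itself a subject forms a query path (each extra level costing one join at execution time).

```python
# Hypothetical sketch of a MashQL-style query tree: a subject (root) with
# restriction branches; a nested dict models a query path (sub-query).
query = {
    "subject": "?book",                 # user variable as the query subject
    "restrictions": [
        {"property": ":Type",   "value": ":Book"},
        {"property": ":Year",   "value": 2011},
        {"property": ":Author", "value": {          # query path: one join deeper
            "subject": "?author",
            "restrictions": [{"property": ":Name", "value": "Ram"}],
        }},
    ],
}

def depth(node):
    """Each level of nesting adds one join at execution time."""
    sub = [r["value"] for r in node["restrictions"] if isinstance(r["value"], dict)]
    return 1 + max((depth(s) for s in sub), default=0)

print(depth(query))  # 2 -> the deeper the path, the higher the execution cost
```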

PROPOSED SYSTEM:
Given an RDF data source as input, the data is buffered into Oracle 11g and the Graph Signature is then generated. AJAX is used to dispatch background queries, and the MashQL queries are translated into SPARQL for execution. Oracle 11g supports RDF storage and querying.

Fig2. Block Diagram of Proposed system

Alternatively, the MashQL editor has been developed as a Firefox add-on with the same functionality as the online editor. No RDF indexing is used for indexing and querying the data sources; instead, SPARQL queries are implemented inside the editor, so the size of the input source is limited.

ACKNOWLEDGEMENT
I wish to express my sincere thanks to our Principal, HOD, Professors and staff members of the Computer Engineering Department at Dattakala Faculty of Engineering, Swami Chincholi, Bhigwan. Last but not least, I would like to thank all my friends and family members who have always been there to support me and helped me to complete this research work.
CONCLUSION
The system designs and formally specifies the syntax and semantics of MashQL as a language, not just a single-purpose interface. It also specifies the query formulation algorithm, by which the complexity of understanding a data source (even a schema-free one) is moved to the query editor, and it addresses the challenge of achieving interactive performance during query formulation by introducing a new approach for indexing RDF data. Furthermore, we are collaborating with colleagues to use MashQL as a business rules language, and therefore to include several reaction and production operators. We also plan to support aggregation functions as soon as their semantics are defined and standardized in SPARQL.

REFERENCES:
[1] Mustafa Jarrar and Marios D. Dikaiakos, "A Query Formulation Language for the Data Web," IEEE Computer Society, vol. 24, no. 5, May 2012.
[2] M. Zloof, "Query-by-Example: A Data Base Language," IBM Systems, vol. 16, no. 4, pp. 324-343, 1977.
[3] A. Popescu, O. Etzioni, and H. Kautz, "Towards a Theory of Natural Language Interfaces to Databases," Proc. Eighth Int'l Conf. Intelligent User Interfaces, 2003.
[4] Y. Li, H. Yang, and H. Jagadish, "NaLIX: An Interactive Natural Language Interface for Querying XML," Proc. ACM SIGMOD Int'l Conf. Management of Data, 2005.
[5] Altova XMLSpy, http://www.altova.com/solutions/xquery-tools.html, Feb. 2010.
[6] BEA Systems, Inc., "BEA AquaLogic Data Services Platform - XQuery Developer's Guide," Version 2.5, 2005.
[7] M. Petropoulos, Y. Papakonstantinou, and V. Vassalos, "Graphical Query Interfaces for Semistructured Data," ACM Trans. Internet Technology, vol. 5, no. 2, pp. 390-438, 2005.
[8] Stylus Studio, http://www.stylusstudio.com/xquery_editor.html, Feb. 2010.
[9] S. Comai and E. Damiani, "Computing Graphical Queries over XML Data," ACM Trans. Information Systems, vol. 19, no. 4, pp. 371-430, 2001.
[10] E. Griffin, Foundations of Popfly. Springer, 2008.
[11] Yahoo Pipes, http://pipes.yahoo.com/pipes, Feb. 2010.
[12] F. De Keukelaere, S. Bhola, M. Steiner, S. Chari, and S. Yoshihama, "SMash: Secure Component Model for Cross-Domain Mashups on Unmodified Browsers," Proc. Int'l Conf. World Wide Web (WWW), 2008.
[13] G. Tummarello, A. Polleres, and C. Morbidoni, "Who the FOAF Knows Alice?" Proc. Int'l Semantic Web Conf. (ISWC), 2007.
[14] SparqlMotion, http://www.topquadrant.com/sparqlmotion, Feb. 2010.
[15] R. Goldman and J. Widom, "DataGuides: Enabling Query Formulation and Optimization in Semistructured Databases," Proc. Int'l Conf. Very Large Data Bases (VLDB), 1997.
[16] M. Jayapandian and H. Jagadish, "Automated Creation of a Forms-Based Database Query Interface," Proc. VLDB Endowment, vol. 1, no. 1, pp. 695-709, 2008.
[17] M. Jayapandian and H. Jagadish, "Expressive Query Specification through Form Customization," Proc. Int'l Conf. Extending Database Technology: Advances in Database Technology (EDBT), 2008.
[18] A. Nandi and H. Jagadish, "Assisted Querying Using Instant-Response Interfaces," Proc. ACM SIGMOD Int'l Conf. Management of Data, 2007.


A Web Based System for Classification of Remote Sensing Data


Dhananjay G. Deshmukh
ME CSE, Everest College of Engineering, Aurangabad.
Guide: Prof. Neha Khatri Valmik.
Abstract- The availability of satellite imagery has expanded over the past few years, and the possibility of performing fast processing of massive databases comprising this kind of imagery has opened ground-breaking perspectives in many different fields. Remote sensing data have become very widespread in recent years, and the exploitation of this technology has gone from developments mainly conducted by government intelligence agencies to those carried out by general users and companies. This paper describes a web-based system which allows an inexperienced user to perform unsupervised classification of satellite/airborne images. The processing chain adopted in this work has been implemented in the C language and integrated into our proposed tool, developed with HTML5, JavaScript, PHP, AJAX and other web programming languages. Image acquisition with the application programming interface (API) is fast and efficient. An important added functionality of the developed tool is its capacity to exploit a remote server to speed up the processing of large satellite/airborne images at different zoom levels. The ability to process images at various zoom levels gives the tool improved interaction with the user, who is able to supervise the final result. These functionalities are necessary in order to use efficient techniques for the classification of images and to incorporate content-based image retrieval (CBIR). Several experimental validations of the classification results obtained with the proposed system are performed by comparing the classification accuracy of the proposed chain with techniques available in the well-known Environment for Visualizing Images (ENVI) software package.

Keywords- Remote sensing data processing, remote server, satellite/airborne image classification, web-based system.

I. INTRODUCTION
REMOTE sensing image analysis and interpretation have become key approaches that rely on the availability of web mapping services and programs. This paper introduces a web-based remote sensing application that can provide advanced image comparison and processing functions for natural habitat conservation and environmental monitoring. With a web-based system, users only require a simple web browser to access remotely sensed imagery and perform spatial analyses, without the requirements or costs of installing GIS and image processing software packages. This increase in resources has led to the exponential growth of the user community for satellite/airborne images, not long ago only accessible to government intelligence agencies [1], [2]. In particular, the wealth of satellite/airborne imagery available from Google Maps, which now provides high-resolution images from various locations around the Earth, has opened the appealing perspective of performing classification and retrieval tasks via the Google Maps application programming interface (API). Given this rather general definition, the term remote sensing has come to be associated more specifically with the gauging of interactions between earth surface materials and electromagnetic energy. The combination of an easily searchable mapping and satellite/airborne imagery tool such as Google Maps with advanced image classification and retrieval features [3] can expand the functionalities of the tool and also allow end-users to extract relevant information from a massive and widely available database of satellite/airborne images (this service is also free for non-commercial purposes). It should be noted that the current version of Google Maps does not allow using maps outside a web-based application (except with a link to Google Maps). Here we use Google Maps purely as an example to demonstrate that, given a data repository, the tool we propose can be used, and the Google Maps logo and terms of service are always in place. The characteristics of Yahoo Maps are similar to Google Maps (though the spatial resolution of the satellite/airborne imagery in Yahoo Maps is generally lower than in Google Maps). Development of Internet mapping facilities providing analytical tools and imagery archives, combined with basic Internet access, can become an effective management tool for environmental monitoring programs. Such capabilities and analytical resources can be easily shared by multiple users from different locations at the same time. The rest of the paper is organized as follows: in Section II we discuss the related work, in Section III we describe the system architecture, in Section IV the proposed approach, and in Section V we present the conclusion, followed by the references.

II. LITERATURE SURVEY
For illustrative purposes, Table I shows a comparison between the main functionalities of the previous map servers. As shown by Table I, Google Maps offers important competitive advantages, such as the availability of high-resolution satellite/airborne imagery,

the smoothness in the navigation and interaction with the system, the availability of a hybrid satellite view which can be integrated
with other views (e.g., maps view), and adaptability for general-purpose web applications. It should be noted that other open standards
for geospatial content such as those included within the open geospatial consortium (OCG) cannot currently provide complete world
coverage at high spatial resolution as it is the case of Google Maps. That is why we have decided to use Google Maps service as a
baseline for our system. On the other hand, a feature which is currently lacking in Google Maps is the unsupervised or supervised
classification of satellite/airborne images at different zoom levels [4], [5], even though image classification is widely recognized as
one of the most powerful approaches in order to extract information from satellite/airborne imagery [6][8].
TABLE I
Comparison between the Main Functionalities of Google Maps, Yahoo Maps and Openstreetmap.

III. SYSTEM ARCHITECTURE


This section describes the architecture of the system, displayed in Fig. 1. It is a web application which contains several layers or modules. Each module serves a different purpose, and the technology adopted for the development of the system is based on open standards and free software; a combination of these has been used for the development of the system. As shown by the architecture model described in Fig. 1, the proposed system can be described from a high-level viewpoint using three different layers, which are completely independent from each other. Due to the adopted modular design, any of the layers can be replaced, and the system is fully scalable. The communication between two layers is carried out over the Internet via the hypertext transfer protocol (HTTP). As a result, the performance of the system will depend largely (as expected) on the available bandwidth. Both the map layer (currently provided by Google Maps) and the server layer (provided by us) are available from any location in the world.

A. Map Layer
The Map layer contains the source imagery data to be used by the system, i.e., the image repository. Google Maps is used in the current version by means of the Google Maps API V3 as a programming interface intended for accessing the provided maps. The current framework is limited to the types of maps provided by Google Maps. All types of maps provided by the API V3 can be used, including roadmaps (2D mosaics), satellite/airborne images, hybrid view (mixed satellite/airborne images and roadmap, superimposed), or terrain (physical relief). Also, all the capabilities and functionalities provided by the Google Maps API V3 are included (this comprises management of zoom levels, image centering, and location by geo-spatial coordinates).

B. Server Layer
The server layer is one of the main components of the system. It consists of two sub-modules: the web server and the compute server. The former is the part of the system hosting the source code of the application (developed using HTML5, PHP, JavaScript and CSS) and deals with the incoming traffic and requests from client browsers. We have used the Apache web server due to its wide acceptance, performance, and free-of-charge license. Further, PHP is used both in the server layer and for managing the communications between the clients and the web server (mainly dominated by the transmission of the satellite/airborne imagery to be processed), and between the web server and the compute server (intended for the processing of satellite/airborne images).
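One of the web server's tasks (detailed in the interaction steps later in this section) is composing the full image from the individual map-tile URLs sent by the client. A minimal sketch of that composition step is given below; it assumes 256x256 tiles and a list of tile URLs ordered row by row, and it is written in Python as an illustration only, not the system's actual PHP code.

```python
# Sketch (not the system's actual PHP implementation): the web server stitches
# the tile URLs received from the client into one full image for processing.
from io import BytesIO
from urllib.request import urlopen

from PIL import Image  # Pillow

TILE_SIZE = 256  # assumed tile size in pixels

def compose_full_image(tile_urls, columns):
    """tile_urls are assumed to be ordered row by row, left to right."""
    rows = len(tile_urls) // columns
    canvas = Image.new("RGB", (columns * TILE_SIZE, rows * TILE_SIZE))
    for index, url in enumerate(tile_urls):
        tile = Image.open(BytesIO(urlopen(url).read()))
        x = (index % columns) * TILE_SIZE
        y = (index // columns) * TILE_SIZE
        canvas.paste(tile, (x, y))
    return canvas

# Example (hypothetical URL list received from the client):
# full_image = compose_full_image(urls_from_client, columns=4)
# full_image.save("composed_map.png")
```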


C. Client Layer
The client layer defines the interactions between the user (through an internet browser) and our system. Only one web page is needed as the user interface, thanks to the adopted AJAX and JavaScript technologies, which allow the web interface to be updated without interactions with the web server. At this point, it is important to emphasize that AJAX is a programming method (not a piece of software) and that it is built on JavaScript (not a standalone programming language).

Fig. 2. Interactions between the three main layers (map, client and server) of our system.

In order to understand the interactions between the different layers of our system, an example is provided next of the flow of a processing request started by the client and the different steps needed until a processing result is received by the end-user. The following steps are identified:
1) First, the client starts the use of the system from the local internet browser by requesting a web page. This results in an HTTP
request to the web server.
2) The web server receives this request from client and provides the client with an HTML web page and all necessary references
(JavaScript libraries, CSS, etc.)
3) At this point, the client requests from the map server the information needed to perform the map modification locally (i.e.,
zooming). This operation is transparent to the system, and the requests are performed via messages from the client to the map server.
4) The map layer sends the information requested by the client in the form of updated maps that will be locally managed by the end-user.
5) A capture of all the URL addresses associated with each portion that composes the full map is performed in order to send this information to the web server. This process is locally managed at the client by means of JavaScript functions. We emphasize that the end-user can decide the image view (street, satellite, hybrid, etc.) and the zoom level of the map image to be processed.
6) Now, the Uniform Resource Locator (URL) addresses associated with each portion of the full image are sent to the web server by
means of AJAX functions and asynchronous requests. In this way, the interaction with the application at the client layer can continue
when the packet is being transferred to the server.
7) The web server composes the full image by accessing the Google Maps repository.
8) The web server provides the image to be processed to the compute server. Our system thus delegates the processing task to an independent remote server that carries it out independently from the other layers of the system (a sketch of this classification step follows this list).
9) Once the image has been processed, the compute server returns the obtained result to the web server. In our current implementation both the web server and the compute server run on the same machine, hence in this case the communications are

minimized. 10) Finally, the processing result is returned to the client so that it can be saved to disk as the final outcome of the adopted
processing chain.
11) The client can save processed image to local disk.
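The processing delegated to the compute server in steps 8 and 9 is an unsupervised classification of the composed image. The original chain is implemented in C; the sketch below is only an illustration of the idea in Python, using k-means clustering of pixel values (a standard unsupervised classifier comparable to the ISODATA/k-means techniques cited in the references).

```python
# Illustrative sketch only: unsupervised classification of an image by
# k-means clustering of pixel colours (the actual chain is written in C).
import numpy as np

def kmeans_classify(image, k=5, iterations=10, seed=0):
    """image: HxWx3 uint8 array; returns an HxW array of class labels."""
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, 3).astype(float)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iterations):
        # Assign every pixel to its nearest cluster centre.
        distances = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Recompute each centre as the mean of its assigned pixels.
        for c in range(k):
            members = pixels[labels == c]
            if len(members) > 0:
                centers[c] = members.mean(axis=0)
    return labels.reshape(image.shape[:2])

# Hypothetical usage with the composed image from the previous sketch:
# labels = kmeans_classify(np.asarray(full_image), k=5)
```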

IV. CONCLUSION
In this paper we proposed a web-based system for classification of remote sensing data. It demonstrates that there is a large potential opportunity for web-based mapping services and image analysis tools applied to natural habitat preservation and environmental resource management, and it illustrates possible approaches to implementing web-based remote sensing applications. Two important lessons were learned from the implementation experience of this paper. First, web-based mapping facilities are capable of combining remotely sensed imagery, GPS data, and GIS databases to provide an integrated framework. Second, data security and system stability will be the major considerations in the design of web-based remote sensing applications; many GIS data sets and remotely sensed images are very sensitive and require protection mechanisms. The combined power of data collection through remote sensing and of online geo-spatial analysis tools through the Internet can significantly reduce the high cost and labor associated with field monitoring.

REFERENCES
[1] R. A. Schowengerdt, Remote Sensing: Models and Methods for Image Processing, 2nd ed. New York: Academic Press, 1997.
[2] D. A. Landgrebe, Signal Theory Methods in Multispectral Remote Sensing. New York: Wiley, 2003.
[3] J. A. Richards and X. Jia, Remote Sensing Digital Image Analysis: An Introduction. New York: Springer, 2006.
[4] M. Fauvel, J. A. Benediktsson, J. Chanussot, and J. Sveinsson, "Recent advances in techniques for hyperspectral image processing," Remote Sens. Environ., vol. 113, pp. 110-122, 2009.
[5] A. Plaza, J. A. Benediktsson, J. Boardman, J. Brazile, L. Bruzzone, G. Camps-Valls, J. Chanussot, M. Fauvel, P. Gamba, J. Gualtieri, M. Marconcini, J. Tilton, and G. Trianni, "Spectral and spatial classification of hyperspectral data using SVMs and morphological profiles," IEEE Trans. Geosci. Remote Sens., vol. 46, no. 11, pp. 3804-3814, 2008.
[6] E. Quirós, A. M. Felicísimo, and A. Cuartero, "Testing multivariate adaptive regression splines (MARS) as a method of land cover classification of Terra-Aster satellite images," Sensors, vol. 9, no. 11, pp. 9011-9028, 2009.
[7] A. Cuartero, A. M. Felicísimo, M. E. Polo, A. Caro, and P. G. Rodriguez, "Positional accuracy analysis of satellite imagery by circular statistics," Photogramm. Eng. Remote Sens., vol. 76, no. 11, pp. 1275-1286, 2010.
[8] D. Tuia, F. Ratle, F. Pacifici, M. F. Kanevski, and W. J. Emery, "Active learning methods for remote sensing image classification," IEEE Trans. Geosci. Remote Sens., vol. 47, no. 7, pp. 2218-2232, 2009.
[9] A. Plaza, J. Plaza, and A. Paz, "Parallel heterogeneous CBIR system for efficient hyperspectral image retrieval using spectral mixture analysis," Concurrency and Computation: Practice and Experience, vol. 22, no. 9, pp. 1138-1159, 2010.
[10] S. Bernabé, A. Plaza, P. R. Marpu, and J. A. Benediktsson, "A new parallel tool for classification of remotely sensed imagery," Comput. Geosci., vol. 46, pp. 208-218, 2012.
[11] G. Ball and D. Hall, "ISODATA: A Novel Method of Data Analysis and Classification," Stanford University, Technical Report AD-699616, 1965.
[12] J. A. Hartigan and M. A. Wong, "Algorithm AS 136: A k-means clustering algorithm," J. of the Royal Statistical Soc., Series C (Appl. Statist.), vol. 28, pp. 100-108, 1979.
[13] P. Gamba, F. Dell'Acqua, F. Ferrari, J. A. Palmason, and J. A. Benediktsson, "Exploiting spectral and spatial information in hyperspectral urban data with high resolution," IEEE Geosci. Remote Sensing Lett., vol. 1, pp. 322-326, 2005.

[14] A. Plaza and C.-I. Chang, High Performance Computing in Remote Sensing. Boca Raton, FL: Taylor & Francis.
[15] A. Plaza, D. Valencia, J. Plaza, and P. Martinez, "Commodity cluster based parallel processing of hyperspectral imagery," J. Parallel and Distributed Computing, vol. 66, no. 3, pp. 345-358, 2006.
[16] A. Plaza, J. Plaza, A. Paz, and S. Sanchez, "Parallel hyperspectral image and signal processing," IEEE Signal Process. Mag., vol. 28, no. 3, pp. 119-126, 2011.
[17] S. Kalluri, Z. Zhang, J. JaJa, S. Liang, and J. Townshend, "Characterizing land surface anisotropy from AVHRR data at a global scale using high performance computing," Int. J. Remote Sens., vol. 22, pp. 2171-2191, 2001.


A Review on Frequent Pattern Mining


Vivek B. Satpute
M.E. Computer-II
Vidya Pratisthan's College of Engineering, Baramati.

Abstract - Data mining is the technique of discovering hidden patterns in data sets and associations between those patterns. Association rule mining is one of the techniques used to achieve this objective. These rules are effectively used to uncover unknown relationships, producing results that can give a basis for forecasting and decision making. To discover these rules we have to find the frequent itemsets, because these itemsets are the building blocks for obtaining association rules with a given confidence and support.

Keywords - Data mining, association rule mining, frequent itemsets, pattern, support, closed pattern, representative patterns.

INTRODUCTION
Frequent itemsets play an essential role in many data mining tasks that try to find interesting patterns in databases, such as association rules, correlations, sequences, episodes, classifiers and clusters; among these, the mining of association rules is one of the best-known problems. The original motivation for investigating association rules came from the need to analyze so-called supermarket transaction data, i.e., to study customer behaviour in terms of the purchased products. Association rules describe how frequently items are bought together. For example, an association rule beer => chips (80%) states that four out of five customers who bought beer also bought chips. Such rules can be useful for decisions regarding product pricing, promotions, store layout and many others. Since their introduction in 1993 by Agrawal et al. [1], the frequent itemset and association rule mining problems have received a great deal of attention. In the past two decades, hundreds of research papers have been published presenting new algorithms or extensions of existing algorithms to solve these mining problems more efficiently. The agenda here is to explain some facets of frequent itemset mining and give a comprehensive survey of the most influential algorithms proposed during the last two decades.
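As a concrete illustration of support and confidence, the following is a minimal sketch; the toy transactions are invented for the example. An itemset is frequent when its support reaches a minimum threshold, and a rule such as beer => chips is reported with the confidence computed from those supports.

```python
# Minimal sketch: support and confidence over a toy transaction database.
transactions = [
    {"beer", "chips", "milk"},
    {"beer", "chips"},
    {"beer", "chips", "bread"},
    {"beer", "bread"},
    {"beer", "chips", "eggs"},
]

def support(itemset):
    """Fraction of transactions containing every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

antecedent, consequent = {"beer"}, {"chips"}
confidence = support(antecedent | consequent) / support(antecedent)
print(support(antecedent | consequent))  # 0.8
print(confidence)                        # 0.8 -> "beer => chips (80%)"
```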

High Utility Itemset Mining


Vincent S. Tseng et al. [2] focused on mining high utility itemsets from a transactional database, i.e., on detecting itemsets with high utility such as profit. They utilize a compact tree structure, named UP-Tree, to facilitate the mining performance, to avoid scanning the original database repeatedly, and to retain the information about transactions and high utility itemsets. To reduce the overestimated utilities stored in the nodes of the global UP-Tree, two approaches are used: UP-Growth (Utility Pattern Growth) and UP-Growth+.
After building the UP-Tree, a straightforward way to produce PHUIs (potential high utility itemsets) is to mine the UP-Tree with FP-Growth; however, a lot of candidates are generated that way. Therefore they proposed the UP-Growth algorithm, which adds two further strategies, DLU (Discarding Local Unpromising items) and DLN (Decreasing Local Node utilities), to the FP-Growth framework. With these strategies, the overestimated utilities of itemsets can be reduced, and therefore the number of PHUIs can be further reduced.
The performance of UP-Growth is better than that of FP-Growth because DLU and DLN decrease the overestimated utilities of itemsets. The authors then proposed an enhanced method, named UP-Growth+, to reduce the overestimated utilities more effectively. In UP-Growth, a minimum item utility table is used to reduce the overestimated utilities; in UP-Growth+, the minimal node utilities in each path are used to make the estimated pruning values closer to the real utility values of the pruned items in the database.


Normally, UP-Growth+ does better than UP-Growth, even though there are trade-offs in memory utilization. The reason is that UP-Growth+ makes use of the minimal node utilities to further lessen the overestimated utilities of itemsets. Even though it spends time and memory to compute and store the minimal node utilities, it is more efficient especially when there are many long transactions in the database. In contrast, UP-Growth does better only when min_util is small, because when the number of candidates of the two algorithms is similar, UP-Growth+ performs additional calculations and is therefore slower. Finally, high utility itemsets are efficiently identified from the set of PHUIs, which is much smaller than the set of HTWUIs (high transaction-weighted utility itemsets) produced by IHUP (Incremental High Utility Pattern mining). As a result, UP-Growth and UP-Growth+ achieve better performance than the IHUP algorithm.

Figure 1. An IHUP-Tree when min_util=40.


Advantages:
1. More effective especially when there are many long transactions in the database.
2. Better performance than IHUP.
Disadvantages:
1. UP-Growth and UP-Growth+ generate far fewer candidates than FP-Growth.
2. Requires more time and memory.
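The "overestimated utilities" referred to above can be made concrete with a small sketch. The numbers below are invented for illustration: each item has an external utility (e.g., unit profit), the transaction utility TU is the total utility of a transaction, and the transaction-weighted utility TWU of an itemset, the standard upper bound pruned by this family of algorithms, sums the TU of every transaction containing the itemset.

```python
# Illustrative sketch of utility, transaction utility (TU) and
# transaction-weighted utility (TWU); all numbers are made-up examples.
profit = {"A": 5, "B": 2, "C": 1}                      # external utility per unit
transactions = [                                        # item -> purchased quantity
    {"A": 1, "B": 2},
    {"A": 2, "C": 6},
    {"B": 4, "C": 3},
]

def transaction_utility(t):
    return sum(profit[item] * qty for item, qty in t.items())

def twu(itemset):
    """Upper bound on the utility of an itemset: sum of TU of the
    transactions that contain all of its items."""
    return sum(transaction_utility(t) for t in transactions
               if itemset <= set(t))

print([transaction_utility(t) for t in transactions])  # [9, 16, 11]
print(twu({"A"}))        # 25  (TU of transactions 1 and 2)
print(twu({"A", "C"}))   # 16  (only transaction 2 contains both)
```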

Top-k Patterns
Han et al. [3] (Han, Wang, Lu & Tzvetkov, 2002) proposed the earliest algorithm of this type. Their top-k patterns are the k most frequent closed patterns with a user-specified minimum length, min_l. The minimum-length restriction is necessary because, without it, only length-1 patterns (or their corresponding closed super-patterns) would be reported, as they often have the highest frequency. They presented an implementation based on the FP-Tree. This method is support-aware summarization, since support is the measure used for choosing summary patterns.
Afrati et al. [4] (Afrati, Gionis & Mannila, 2004) proposed another top-k summarization algorithm. For this algorithm, pattern support is not a summarization criterion; rather, high compressibility with maximal coverage is its main objective. To achieve this, it reports maximal patterns and also allows a few false positives in the summary pattern set. For example, if the frequent patterns contain the sets ABC, ABD, ACD, AE, BE, and all their subsets, a specific setting of their algorithm may report ABCD and ABE as the summarized frequent patterns. The point is that this covers all the original sets, and there are only two false positives (BCD and ABCD). Afrati et al. proved that the problem of finding the best approximation of a collection of frequent itemsets that can be spanned by k sets is NP-hard. They also used a greedy algorithm to keep the approximation ratio low. The interesting thing about this algorithm is that it gives high compressibility; the authors reported that they could cover 70% of the whole frequent set collection by using only 20 sets (7.5% of the maximal patterns), with only 10% false positives. At such a high compression there is not much redundancy in the reported patterns, but they certainly are very dissimilar from each other.
Disadvantages:

[1] It is a post-processing algorithm, i.e., all maximal patterns first need to be obtained to be used as input to this algorithm.
[2] The algorithm will not generalize well to complex patterns, such as trees or graphs, as the number of false positives will be much higher and the coverage will be very low, so the approximation ratio mentioned above will not be achieved.
[3] It will not work if the application scenario does not allow false positives.
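The greedy strategy used for maximal coverage can be sketched as follows. The itemsets and candidate covers are toy examples taken from the discussion above, and the selection criterion (number of still-uncovered frequent sets that a candidate spans) is a simplified reading of the approach, not the authors' exact objective function.

```python
# Simplified sketch of greedy selection of k covering sets: at each step pick
# the candidate that spans the most still-uncovered frequent itemsets.
frequent = [frozenset(s) for s in ("ABC", "ABD", "ACD", "AE", "BE")]
candidates = [frozenset(s) for s in ("ABCD", "ABE", "ACDE", "BCD")]

def greedy_cover(frequent, candidates, k):
    uncovered, chosen = set(frequent), []
    for _ in range(k):
        best = max(candidates, key=lambda c: sum(f <= c for f in uncovered))
        chosen.append(best)
        uncovered -= {f for f in uncovered if f <= best}
    return chosen

print([''.join(sorted(c)) for c in greedy_cover(frequent, candidates, 2)])
# ['ABCD', 'ABE'] -> covers all five frequent sets, as in the example above
```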

Xin et al. [5] (Xin, Cheng, Yan & Han, 2006) proposed another top-k algorithm. To be most effective, the algorithm strives for the set of patterns that collectively offer the best significance with minimal redundancy. The significance of a pattern is measured by a real value that encodes its level of usefulness, and the redundancy between a pair of patterns is measured by incorporating pattern similarity. Xin et al. proved that finding a set of top-k patterns is NP-hard even for itemsets, which are the simplest kind of pattern. They also proposed a greedy algorithm that approximates the optimal solution with an O(ln k) performance bound.

Figure 2: Finding k patterns from a (u, l)-pair


Disadvantages:
[1] Users need to find all the frequent patterns and evaluate their significance before the process of finding the top-k patterns is initiated.
[2] While the distance calculation is straightforward for itemset patterns, it might be very costly for complex patterns.

A Profile Based Approach


Yan et al. [6] (Yan, Cheng, Han & Xin, 2005) proposed profile-based pattern summarization for itemset patterns. A profile-based summarization discovers k representative patterns, here called Master patterns, and builds a model under which the support of the remaining patterns can easily be recovered. A Master pattern is the combination of a set of very similar patterns, and, in order to cover the span of all the frequent patterns, each Master pattern is very different from the others. Similarity considers both the pattern space and the supports, so that the Master patterns are representative in the sense of frequent pattern mining. The authors also build a profile for each Master pattern. Using the profile, the support of each frequent pattern that the corresponding Master pattern represents can be approximated without consulting the original dataset. Profile-based summarization is elegant due to its use of a probabilistic model.
Advantages:
[1] It is effective in solving the interpretability problem caused by the vast number of frequent patterns.
[2] The profile model is capable of recovering frequent patterns together with their supports.
Disadvantages:
[1] It makes opposing assumptions. On one side, the patterns represented by the same profile are supposed to be similar; on the other side, based on how the supports of patterns are estimated from a profile, the items in the same profile

are expected to be independent. It is hard to balance these two opposing requirements.

[2] The suggested algorithm for generating profiles is very time-consuming, as it requires scanning the original dataset repeatedly.
[3] The boundary between frequent patterns and infrequent patterns cannot be determined from the profiles.

FP-Growth
Han et al. (2004) [7] invented the FP-growth method, which mines the complete set of frequent itemsets without candidate generation. FP-growth is based on a divide-and-conquer strategy. The first scan of the database produces a list of frequent items, in which the items are arranged in descending order of frequency. According to this frequency-descending list, the database is compressed into a frequent-pattern tree, or FP-tree, which preserves the itemset association information. Starting from each frequent length-1 pattern (as an initial suffix pattern), the FP-tree is mined by building its conditional pattern base (a sub-database containing the set of prefix paths in the FP-tree co-occurring with the suffix pattern), then building its conditional FP-tree and mining it recursively. Pattern growth is achieved by concatenating the suffix pattern with the frequent patterns produced from its conditional FP-tree. The major difficulty of the FP-tree approach is that the construction of the frequent-pattern tree is a lengthy, time-consuming process; moreover, the FP-tree based approach offers no flexibility or reusability of computation during the mining process. The FP-growth algorithm transforms the problem of finding long frequent patterns into searching for shorter ones recursively and then concatenating the suffix. It uses the least frequent items as suffixes, offering excellent selectivity, and performance studies show that the technique significantly reduces search time. Many alternatives and extensions to the FP-growth approach exist, including depth-first generation of frequent itemsets by Agarwal et al. (2001); H-Mine by Pei et al. (2001a), which explores a hyper-structure mining of frequent patterns; constructing alternative trees and exploring top-down and bottom-up traversals of such trees in pattern-growth mining by Liu et al. (2002, 2003); and an array-based implementation of a prefix-tree structure for efficient pattern growth mining by Grahne and Zhu (2003).

Figure 3: A Sample FP-Tree


Advantages:
1. It requires only two passes over the dataset.
2. It compresses the dataset efficiently.
3. There is no candidate generation.
Disadvantages:
1. Sometimes the tree structure becomes so big that the FP-Tree may not fit in memory.

2. The FP-Tree is expensive to build.
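The first step of the FP-growth procedure described above, scanning the database once to obtain the frequency-descending item order and reordering every transaction accordingly before inserting it into the FP-tree, can be sketched as follows. The transactions are illustrative, and the tree construction and the recursive conditional-pattern-base mining are omitted.

```python
# Sketch of FP-growth's first database scan: derive the frequency-descending
# item list and reorder each transaction before it is inserted into the FP-tree.
from collections import Counter

transactions = [
    ["f", "a", "c", "d", "g", "i", "m", "p"],
    ["a", "b", "c", "f", "l", "m", "o"],
    ["b", "f", "h", "j", "o"],
    ["b", "c", "k", "s", "p"],
    ["a", "f", "c", "e", "l", "p", "m", "n"],
]
min_support = 3

counts = Counter(item for t in transactions for item in t)
frequent = {i for i, c in counts.items() if c >= min_support}
order = {i: rank for rank, (i, _) in
         enumerate(sorted(((i, counts[i]) for i in frequent),
                          key=lambda x: (-x[1], x[0])))}

ordered = [sorted((i for i in t if i in frequent), key=order.get)
           for t in transactions]
print(ordered[0])  # ['c', 'f', 'a', 'm', 'p'] (ties broken alphabetically)
```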

Cluster-Based Representative Pattern-Set


Xin et al. [8] put forward the idea of δ-covering to generalize the concept of the frequent closed pattern. The aim is to discover a minimum set of representative patterns that can δ-cover all frequent patterns. They show that this can be related to the set-cover problem, and they develop two algorithms, RPglobal and RPlocal. RPglobal first generates, for each pattern, the set of patterns that it can δ-cover, and then employs the well-known greedy algorithm for the set-cover problem to discover the representative patterns. Both RPglobal and RPlocal are able to discover a subset of representative patterns, and although RPlocal outputs more patterns than RPglobal, its quality is very close to that of RPglobal; nearly all outputs of RPlocal are within twice the size of RPglobal's. The results of RPglobal are partial because, as the minimum support becomes low, the number of closed patterns grows very fast and the running time of RPglobal exceeds the time limit (30 minutes). RPglobal does not scale well with respect to the number of patterns and is much slower than RPlocal; it is also very space-consuming, and is only practicable when the number of frequent patterns is not large. RPlocal is developed on top of FPclose [12]; it integrates frequent pattern mining with representative pattern finding. RPlocal is very efficient, but it produces more representative patterns than RPglobal.
Advantage:
1. Combining both algorithms (RPcombine) gives balanced quality and efficiency.
Disadvantage:
1. Neither algorithm alone gives adequate performance.

Probabilistic Model
Wang et al. [9] formulated a generalization of another concise representation of frequent patterns, the non-derivable patterns. In their work, probabilistic graphical models are used for the summarization task. More specifically, items are treated as random variables, and a Markov Random Field (MRF) model over these variables is built from the frequent itemset patterns and their support information. The summarization proceeds in a level-wise fashion: the statistics of smaller itemsets are used to build an MRF model, and the supports of larger itemsets are then inferred from the model. If the estimation error is within a user-specified tolerance, these itemsets can be skipped; otherwise they are used to refine the MRF model. At the end of the process, the resulting model provides a concise summary of the original set of itemset patterns. An extensive empirical study on real datasets showed that this approach can effectively summarize a large number of frequent itemset patterns.

Figure 4: An MRF Example



Advantage:
1. When the dataset is dense and largely satisfies the conditional independence assumption, there typically exists a great amount of redundancy in the resulting itemset patterns, in which case this approach is extremely efficient and effective.
Disadvantages:
1. A Markov Random Field model is not as simple as profiles; moreover, it is also hard to interpret.
2. When the dataset is sparser and does not satisfy the conditional independence assumption well, the summarization task becomes harder, and more space and time are required to summarize the resulting itemset patterns.

CONCLUSION
Pattern mining has recently gained major importance in the data mining community because of its ability to serve as an important tool for knowledge discovery and its applicability to other data mining tasks such as classification and clustering. Association rules are of continuing interest to both the database community and data mining users. Here a survey has been provided of previous studies in this area, and some important gaps in the current knowledge have been identified.

REFERENCES:
[1] R. Agrawal, T. Imielinski, and A. N. Swami, "Mining association rules between sets of items in large databases," in Proc. SIGMOD, Washington, DC, USA, 1993, pp. 207-216.
[2] Vincent S. Tseng, Bai-En Shie, Cheng-Wei Wu, and Philip S. Yu, "Efficient Algorithms for Mining High Utility Itemsets from Transactional Databases," IEEE Transactions on Knowledge and Data Engineering, vol. 25, no. 8, August 2013.
[3] J. Han, J. Wang, Y. Lu, and P. Tzvetkov, "Mining top-k frequent closed patterns without minimum support," in Proc. ICDM, 2002, pp. 211-218.
[4] F. N. Afrati, A. Gionis, and H. Mannila, "Approximating a collection of frequent sets," in Proc. ACM SIGKDD Int. Conf. Knowledge Discovery in Databases, Seattle, WA, USA, Aug. 22-25, 2004, pp. 12-19.
[5] D. Xin, H. Cheng, X. Yan, and J. Han, "Extracting redundancy-aware top-k patterns," in Proc. KDD, 2006, pp. 444-453.
[6] X. Yan, H. Cheng, J. Han, and D. Xin, "Summarizing itemset patterns: A profile-based approach," in Proc. KDD, Chicago, IL, USA, 2005, pp. 314-323.
[7] J. Han, J. Pei, Y. Yin, and R. Mao, "Mining frequent patterns without candidate generation: A frequent-pattern tree approach," Data Mining and Knowledge Discovery, vol. 8, no. 1, pp. 53-87, 2004.
[8] D. Xin, J. Han, X. Yan, and H. Cheng, "Mining compressed frequent-pattern sets," in Proc. 31st Int. Conf. VLDB, Trondheim, Norway, 2005, pp. 709-720.
[9] C. Wang and S. Parthasarathy, "Summarizing itemset patterns using probabilistic models," in Proc. KDD, Philadelphia, PA, USA, 2006, pp. 730-735.


Generic Visual Perception Processor (GVPP)


Suhas Uttamrao Shinde, Asst Prof. Seema Singh Solanki

Abstract-The Generic Visual Perception Processor (GVPP) is a single chip modeled on the perception capabilities of the human brain; it can detect objects in a motion video signal and then locate and track them in real time. Imitating the neural networks of the human eye and the brain, the chip can handle about 20 billion instructions per second. This electronic eye on a chip can handle tasks ranging from sensing variable parameters, in the form of video signals, to processing them. [1]

Keywords-Generic Visual Perception Processor, Pattern matching, Neuron network.


INTRODUCTION

The Generic Visual Perception Processor (GVPP) has been developed after ten long years of scientific effort. It can automatically detect objects and track their movement in real time. The GVPP, which crunches 20 billion instructions per second (BIPS), models the human perceptual process at the hardware level by mimicking the separate temporal and spatial functions of the eye-to-brain system. The processor sees its environment as a stream of histograms regarding the location and velocity of objects. The GVPP has been demonstrated to be capable of learning in place to solve a variety of pattern recognition problems. It boasts automatic normalization for varying object size, orientation and lighting conditions, and can function in daylight or darkness. This electronic "eye" on a chip can now handle most tasks that a normal human eye can, including driving safely, selecting ripe fruits, reading and recognizing things. Sadly, though modeled on the visual perception capabilities of the human brain, the chip is not a medical marvel poised to cure the blind. The GVPP tracks an "object," defined as a certain set of hue, luminance and saturation values in a specific shape, from frame to frame in a video stream by anticipating where its leading and trailing edges make "differences" with the background. That means it can track an object through varying light sources or changes in size, as when an object gets closer to the viewer or moves farther away. The GVPP's major performance strength over current-day vision systems is its adaptation to varying light conditions. Today's vision systems dictate uniform, shadowless illumination, and even next-generation prototype systems, designed to work under normal lighting conditions, can be used only from dawn to dusk. The GVPP, on the other hand, adapts to real-time changes in lighting without recalibration, day or night. For many decades the field of computing has been trapped by the limitations of traditional processors, and many futuristic technologies have been bound by these limitations, which stem from the basic architecture of those processors. Traditional processors work by slicing each complex program into simple tasks that a processor can execute, which requires the existence of an algorithm for the solution of the particular problem. But there are many situations where no algorithm exists, or where a human is unable to formulate one. Even in these extreme cases the GVPP performs well: it can solve a problem with its neural learning function. Neural networks are extremely fault tolerant; by design, even if a group of neurons fails, the neural network only suffers a smooth degradation of performance and does not abruptly stop working. This is a crucial difference from traditional processors, which fail to work even if only a few components are damaged. The GVPP recognizes, stores, matches and processes patterns. Even if a pattern in the input is not recognizable to a human programmer, the neural network will dig it out. Thus the GVPP becomes an efficient tool for applications like pattern matching and recognition. [3]
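The "stream of histograms" mentioned above can be illustrated with a small sketch. This is only a software analogy of what the chip computes in hardware, with made-up thresholds: pixels matching a hue criterion are accumulated into row and column histograms, whose peaks localize the tracked object in the frame.

```python
# Software analogy (not the GVPP hardware): localize an object in a frame by
# histogramming the rows and columns of pixels that satisfy a hue criterion.
import numpy as np

def locate_by_histograms(hue_frame, hue_min, hue_max):
    """hue_frame: HxW array of hue values; returns (row, col) of the peaks."""
    mask = (hue_frame >= hue_min) & (hue_frame <= hue_max)   # classification
    row_hist = mask.sum(axis=1)       # histogram of matching pixels per row
    col_hist = mask.sum(axis=0)       # histogram of matching pixels per column
    return int(row_hist.argmax()), int(col_hist.argmax())

# Toy frame with a patch of the target hue around rows 10-14, columns 28-32:
frame = np.zeros((64, 64))
frame[10:15, 28:33] = 0.5
print(locate_by_histograms(frame, 0.4, 0.6))   # (10, 28) -> edge of the patch
```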

LITERATURE SURVEY
There were two major ancient Greek schools providing a primitive explanation of how vision is carried out in the body. The first was the "emission theory," which maintained that vision occurs when rays emanate from the eyes and are intercepted by visual objects. If an object was seen directly, it was by means of rays coming out of the eyes and falling on the object. A refracted image was, however, also seen by means of rays which came out of the eyes, traversed the air and, after refraction, fell on the visible object, which was sighted as a result of the movement of the rays from the eye. This theory was championed by scholars like Euclid and Ptolemy and their followers. The second school advocated the so-called "intromission" approach, which sees vision as coming from something entering the eyes that is representative of the object. With its main proponents Aristotle, Galen and their followers, this theory seems to have some contact with modern theories of what vision really is, but it remained only a speculation lacking any experimental foundation. [4]
SYSTEM ARCHITECTURE
A visual perception processor for automatically detecting an event occurring in a multidimensional space (i, j) evolving over time with
respect to at least one digitized parameter in the form of a digital signal on a data bus, said digital signal being in the form of a
succession aijT of binary numbers associated with synchronization signals enabling to define a given instant (T) of the
multidimensional space and the position (i, j) in this space, the visual perception processor comprising: the data bus; a control unit; a
time coincidences bus carrying at least a time coincidence signal; and at least two histogram calculation units for the treatment of the
at least one parameter, the histogram calculation units being configured to form a histogram representative of the parameter as a
function of a validation signal and to determine by classification a binary classification signal resulting from a comparison of the
parameter and a selection criterion C, wherein the classification signal is sent to the time coincidences bus, and wherein the validation
signal is produced from time coincidences signals from the time coincidence bus so that the calculation of the histogram depends on
the classification signals carried by the time coincidence bus. A visual perception processor according to claim 1, further comprising,
to process several parameters, several histogram calculation units organized into a matrix, wherein each of the calculation units is
connected to the data bus and to the time coincidences bus. A visual perception processor, comprising: a data bus; a time coincidences
bus; and two or more histogram calculation units that receive the data DATA(A), DATA(B), . . . DATA(E) via the data bus and
supply classification information to the single time coincidences bus, wherein at least one of said two or more histogram calculation
unit processes data aijT associated with pixels forming together a multidimensional space (i, j) evolving over time and represented at a
succession of instants (T), wherein said data reaches said at least one calculation unit in the form of a digital signal DATA(A) in the
form of a succession aijT of binary numbers of n bits associated with synchronization signals enabling to define the given instant (T)
of the multidimensional space and the position (i, j) of the pixels in this space, to which the signal aijT received at a given instant (T)
is associated, said unit comprising: an analysis memory including a memory with addresses, each address associated with possible
values of the numbers of n bits of the signal DATA(A) and whose writing process is controlled by a WRITE signal; a classifier unit
comprising a memory intended for receiving a selection criterion C of the parameter DATA(A), said classifier unit receiving the signal
DATA(A) at the input and outputting a binary output signal having a value that depends on a result of the comparison of the signal
DATA(A) with the selection criterion C; a time coincidences unit that receives the output signal from the classifier unit and, from
outside the histogram calculation unit, individual binary enabling signals affecting parameters other than DATA(A), wherein said time
coincidences unit outputs a positive global enabling signal when all the individual time coincidences signals are positive; a test unit;
an analysis output unit including an output memory; an address multiplexer; an incrementation enabling unit; and a learning multiplexer;
wherein a counter of each address in the memory corresponds to the value d of aijT at a given instant, which is incremented by one
unit when the time coincidences unit outputs a positive global enabling signal; wherein the test unit is provided for calculating and
storing statistical data processes, after receiving the data aijT corresponding to the space at an instant T, a content of the analysis
memory in order to update the output memory of the analysis output unit, wherein the output memory is deleted before a beginning of

each frame for a space at an instant T by an initialization signal; wherein the learning multiplexer is configured to receive an external
command signal and initiate an operation according to a learning mode in which registers of the classifier unit and of the time
coincidences unit are deleted when starting to process a frame, wherein the analysis output unit supplies values typical of a sequence
of each of these registers. A visual perception processor according to claim 3, wherein the memory of the classifier is an addressable
memory enabling real time updating of the selection criterion C and having a data input register, an address command register and a
writing command register, receiving on its input register the output from the analysis memory and a signal End on its writing
command register, the processor further comprising a data input multiplexer with two inputs and one output, receiving on one of its
inputs a counting signal and on its other input the succession of data aijT to the address command of the memory of the classifier and
an operator OR controlling the address multiplexer and receiving on its inputs an initialization signal and the end signal END. A visual
perception processor according to claim 4, wherein the space (i, j) is two-dimensional and wherein the signal DATA(A) is associated
with the pixels of a succession of images. A visual perception processor according to claim 3, further comprising means for anticipating the value of the classification criterion C.

A visual perception processor according to claim 6, wherein the means for anticipating the value of the classification criterion C
comprises memories intended for containing the values of statistical parameters relating to two successive frames T0 and T1. A visual
perception processor according to claim 7, wherein the statistical parameters are the average values of the data aijT enabled. A visual
perception processor according to claim 3, wherein the analysis output register stores in its memory at least one of the following
values: the minimum 'MIN', the maximum 'MAX', the maximum number of pixels for which the signal Vijt has a particular value
'RMAX', the corresponding particular value 'POSRMAX', and the total number of enabled pixels 'NBPTS'. A visual perception processor
according to claim 3, wherein the statistical comparison parameter used by the classifier is RMAX/2. A visual perception processor
according to claim 3, further comprising a control multiplexer configured to receive at its input several statistical parameters and
wherein the comparison made by the classifier depends on a command issued by the control multiplexer. A visual perception
processor according to claim 3, wherein the memory of the classifier includes a set of independent registers D, each comprising one
input, one output and one writing command register, wherein the number of these registers D is equal to the number n of bits of the
numbers of the succession Vijt, the classifier further comprising a decoder configured to output a command signal corresponding to
the related input value (address) and a multiplexer controlled by this input value, thus enabling to read the chosen register. A visual
perception processor according to claim 12, further comprising register input multiplexers, each being associated with the input of a
register, and combinatory modules connecting the registers to one another, wherein the register input multiplexers are configured to
choose between a sequential writing mode and a writing mode common to all the registers connected together by the combinatory
modules. A visual perception processor according to claim 13, wherein the combinatory modules comprise a morphological expansion
operator including a three-input logic unit 'OR', wherein the first input unit receives the output signal of the 'Q'-order register, wherein
the second input unit is connected to the output of a two-input logic unit 'AND' receiving respectively the output signal of the 'Q+1'-order register and a positive expansion signal, and wherein the third input unit is connected to the output of a two-input logic unit
'AND' receiving respectively the output signal of the 'Q-1'-order register and a negative expansion signal. A visual perception
processor according to claim 14, wherein the combinatory modules comprise a morphological erosion operator including a three-input
logic unit 'AND', wherein the first input unit receives the output signal of the 'Q'-order register, wherein the second input unit is
connected to the output of a four-input logic unit 'AND', whereof one reverse input receives respectively the output signal of the 'Q'-order
register, the output signal of the 'Q-1'-order register, the output signal of the 'Q+1'-order register and a negative erosion signal, and
wherein the third input unit is connected to the output of a four-input logic unit 'AND', whereof one reverse input receives respectively the
output signal of the 'Q'-order register, the output signal of the 'Q-1'-order register, the output signal of the 'Q+1'-order register and a
negative erosion signal. A histogram calculation unit according to claim 14, wherein each combinatory module comprises a
multiplexer associating a morphological expansion operator and a morphological erosion operator. A visual perception processor
according to claim 3, wherein the histogram calculation units are organized into a matrix. A device for detecting one or more events
including aural and/or visual phenomena, the device comprising: a controller coupled to a controller bus and a transfer bus; an input
portal adapted to receive data describing one or more parameters of the event being detected; and a data processing block coupled to
the input portal, the transfer bus and the controller bus, the data processing block including: a histogram unit coupled to the input
portal and configured to calculate a histogram for a selected parameter; a classification unit coupled to the input portal and the
histogram unit, and configured to determine the data in the histogram that satisfy a selected criterion, and to generate an output
accordingly, the classification unit supplying the output to the transfer bus; and a coincidence unit coupled to receive the output of the
classification unit from the transfer bus and to receive selected coincidence criteria from the controller bus, the coincidence unit being
configured to generate an enable signal for the histogram unit when the output of the classification unit satisfies the selected
coincidence criterion, wherein classification is performed automatically by processing statistical information associated with the
calculated histogram. The device of claim 18, wherein the classification unit includes a memory table for storing selection criteria, and
wherein automatic classification involves updating the selection criteria in the memory table based on the processed statistical
information. The device of claim 19, wherein the processed statistical information includes a value RMAX defining the number of
data points at the maximum of the calculated histogram, and wherein automatic classification involves updating the selection criteria
in the memory table based on the value RMAX. The device of claim 18, wherein the classification unit includes a memory table for
storing selection criteria, and wherein automatic classification involves changing an address input to the memory table based on the
processed statistical information. A device for detecting one or more events including aural and/or visual phenomena, the device
comprising: a controller coupled to a controller bus and a transfer bus;
an input multiplexer adapted to receive data describing one or more parameters of the event being detected, and to output data
describing a selected one of the one or more parameters in response to a selection signal; and a data processing block coupled to the
multiplexer, the transfer bus and the controller bus, the data processing block including: a histogram unit coupled to the input portal
and configured to calculate a histogram for the selected parameter; a classification unit coupled to the input portal and the histogram
unit, and configured to determine the data in the histogram that satisfy a selected criterion, and to generate an output accordingly, the
classification unit supplying the output to the transfer bus; and a coincidence unit coupled to receive the output of the classification
unit from the transfer bus and to receive selected coincidence criteria from the controller bus, the coincidence unit being configured to
generate an enable signal for the histogram unit when the output of the classification unit satisfies the selected coincidence criterion. A
device for detecting one or more events including aural and/or visual phenomena, the device comprising: a controller coupled to a
controller bus and a transfer bus; an input portal adapted to receive data sets describing one or more parameters of the event being
detected, each data set being associated with an instant of time; and a data processing block coupled to the input portal, the transfer
bus and the controller bus, the data processing block including: a histogram unit coupled to the input portal and configured to calculate
a histogram for a selected parameter for a particular instant of time T1; a classification unit coupled to the input portal and the
histogram unit, and configured to determine the data in the histogram that satisfy a selected criterion, and to generate an output
accordingly, the classification unit supplying the output to the transfer bus; and a coincidence unit coupled to receive the output of the
classification unit from the transfer bus and to receive selected coincidence criteria from the controller bus, the coincidence unit being
configured to generate an enable signal for the histogram unit when the output of the classification unit satisfies the selected
coincidence criterion, wherein the classification unit automatically anticipates values associated with the selected parameter at a next
instant of time T2 based on statistical information associated with the calculated histograms at time T1 and at a previous time T0. The
device of claim 23, wherein the statistical information at each time T0 and T1 includes a value POSMOY defined as the value, for a
set of parameters, which is greater than or equal to half of the values of the set of parameters. The device of claim 24, wherein
automatic anticipation is based on a function of POSMOY at T0 minus POSMOY at T1 (P0-P1). The device of claim 25, wherein the
function includes one of Y=(P0-P1), Y=a(P0-P1)+b, and Y=a(P0-P1)^2, where a and b are predetermined constants. The device of claim
26, wherein two or more of the functions are multiplexed. A method of analyzing parameters associated with an event by an electronic
device, comprising: receiving data sets representative of one or more parameters of the event being detected, each data set being
associated with an instant of time; calculating, for each instant of time, a statistical distribution, defined as a histogram, of a selected
parameter of the event being detected; classifying the data set by comparing its parameter values to classification criteria stored in a
classification memory; enabling the calculating step when classified data satisfies predetermined time coincidence criteria; and
anticipating values associated with the selected parameter for a next instant of time T2 based on statistical information associated with
the calculated histograms at an instant of time T1 and at a previous instant of time T0. A method of analyzing parameters associated
with an event by an electronic device, comprising: a) receiving data representative of one or more parameters of the event being
detected;
b) calculating, for a given instant of time, a statistical distribution, defined as a histogram, of a selected parameter of the event being
detected;
c) classifying the data by comparing its value to classification criteria stored in classification memory;
d) enabling the calculating step when classified data satisfies predetermined time coincidence criteria; and
e) automatically updating, for each instant of time, the classification criteria stored in the classification memory based on statistical
information associated with the histogram.[7]
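To make the claim language above more concrete, the following Python sketch loosely mirrors the per-frame loop recited in steps a) through e): build a histogram of the enabled parameter values, derive the statistical registers, and update the classification criterion automatically. The 8-bit data width, the RMAX/2 update rule and all identifiers are illustrative assumptions drawn from the claim wording, not an implementation of the actual GVPP hardware.

```python
# Hedged sketch of the per-frame histogram / classification / time-coincidence
# loop recited in the claims (steps a-e). All names and thresholds are assumptions.
class HistogramUnit:
    def __init__(self, bits=8):
        self.size = 1 << bits
        self.criteria = set(range(self.size))      # classification memory: accepted values
        self.reset()

    def reset(self):                               # output/analysis memory cleared each frame
        self.hist = [0] * self.size
        self.stats = {"MIN": None, "MAX": None, "RMAX": 0, "POSRMAX": None, "NBPTS": 0}

    def classify(self, value):                     # step c: compare value to the criterion
        return value in self.criteria

    def process_frame(self, frame, coincidence):
        """frame: iterable of (value, other_enables); coincidence: AND of external enables."""
        self.reset()
        for value, other_enables in frame:
            enable = self.classify(value) and coincidence(other_enables)   # steps c-d
            if enable:                             # step b: increment the address counter
                self.hist[value] += 1
        self.update_stats()
        self.update_criteria()                     # step e: automatic criterion update

    def update_stats(self):
        enabled = [v for v, c in enumerate(self.hist) if c]
        if enabled:
            self.stats["MIN"], self.stats["MAX"] = enabled[0], enabled[-1]
            self.stats["RMAX"] = max(self.hist)
            self.stats["POSRMAX"] = self.hist.index(self.stats["RMAX"])
            self.stats["NBPTS"] = sum(self.hist)

    def update_criteria(self):
        # keep only values whose count exceeds RMAX/2 (the comparison parameter in the claims)
        threshold = self.stats["RMAX"] / 2
        if threshold:
            self.criteria = {v for v, c in enumerate(self.hist) if c > threshold}

unit = HistogramUnit()
frame = [(v, True) for v in [10, 10, 10, 11, 200]]
unit.process_frame(frame, coincidence=lambda e: e)
print(unit.stats, sorted(unit.criteria))
```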

Fig: Architecture of GVPP

CONCLUSION
The generic visual perception processor can handle about 20 billion instructions per second and can manage most tasks performed
by the eye. Modeled on the visual perception capabilities of the human brain, the GVPP is a single chip that can detect objects in a
motion video signal and then locate and track them in real time. It is a generic chip for which more than 100 applications in ten or more
applications in ten or more industries. The chip could be useful across a wide range of industries where visual tracking is important.
[9]

REFERENCES:
[1] G. L. Barrows, J. S. Chahl, and M. V. Srinivasan, "Biomimetic visual sensing and flight control," The Aeronautical Journal, London:
The Royal Aeronautical Society, 107(1069): 159-168, 2003.
[2] P. Buser and M. Imbert, Psychologie sensorielle, 1986, ISBN 2 7056 5944 7.
[3] R. A. Brooks, "A robust layered control system for a mobile robot," IEEE Journal of Robotics and Automation, 2:14-23, 2003.
[4] Barghout, Lauren, and Lawrence W. Lee, "Perceptual information processing system," U.S. Patent Application 10/618,543, filed
July 11, 2003.
[5] Yarbus, A. L. (1967). Eye Movements and Vision, Plenum Press, New York.
[6] Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. MIT
Press.
[7] Bruce, V., Green, P. & Georgeson, M. (1996). Visual Perception: Physiology, Psychology and Ecology (3rd ed.). LEA. p. 110.
[8] Carlson, Neil R. (2013). "6". Physiology of Behaviour (11th ed.). Upper Saddle River, New Jersey, USA: Pearson Education Inc.
p. 170. ISBN 978-0-205-23939-9.
[9] Neil R. Carlson; C. Donald Heth; Harold Miller; John W. Donahoe; William Buskist; G. Neil Martin; Rodney M. Schmaltz
(2010). Psychology: the Science of Behaviour. Toronto, Ontario: Pearson Canada Inc. pp. 140-141.
[10] Carlson, Neil R.; Heth, C. Donald (2010). "5". Psychology: the Science of Behaviour (2nd ed.). Upper Saddle River, New Jersey,
USA: Pearson Education Inc. pp. 138-145. ISBN 978-0-205-64524-4.
[11] Barghout, Lauren, and Lawrence W. Lee, "Perceptual information processing system," U.S. Patent Application 10/618,543, filed
July 11, 2003.
[12] Barghout, Lauren, "System and Method for edge detection in image processing and recognition," WIPO Patent No. 2007044828,
20 Apr. 2007.


Survey on ExFeature: A Feature Modelling and Recommending Technique


For Domain Oriented Product Listing
Kanifnath S. Hirave
M.E. Computer-II,
Vidya Pratisthan's College of Engineering, Baramati, Pune
Prof. Dinesh Bhagwan Hanchate
Assistant Professor (Computer Dept.)
Vidya Pratisthan's College of Engineering, Baramati, Pune
Abstract: The activity of detecting and documenting the similarities and differences in related software products in a domain is
called domain analysis. Effective software reuse requires the efficient detection and utilization of commonality across related
software systems, with domain experts providing information about the domain under analysis. Feature-Oriented Domain Analysis
(FODA) is used for the efficient detection and utilization of commonality across related software systems. Reuse of software products
is one of the most encouraging answers to the software crisis. In the Feature-Oriented Reuse Method (FORM), the model constructed
during the analysis is called a feature model; it captures commonality as an AND/OR graph. The Domain Analysis and Reuse
Environment (DARE) is a CASE tool that supports domain analysts in carrying out a well-defined domain analysis method; DARE is
useful for detecting and documenting domain information from documents and programs. Recommender systems are used to find
affinities among features across products, for which two different techniques are used. The first technique uses association rule
mining, in which two algorithms are applied, namely Apriori and AprioriTID. The other technique, a collaborative filtering
recommender system, analyzes neighborhoods of similar products to identify new features for items.
Keywords: Domain Analysis, FORM, FODA, DARE, Collaborative Filtering, Recommender System, Association Rule Mining.
INTRODUCTION
The activity of detecting and documenting the similarities and differences in related software products in a certain domain is
called Domain Analysis. In the beginning there was no methodology for domain analysis: it was conducted manually, typically with
the help of data flow diagrams, and it can be considered as a process occurring prior to system analysis [1]. Organized detection and
use of commonality across related software systems is required for successful software reuse. By observing related software systems,
domain analysis offers a general description of the requirements of that class of systems and a set of methods for their implementation.
FODA [2] establishes methods for performing a domain analysis and defines the products of the domain analysis process. The
important technical condition for achieving effective software reuse is the efficient detection and use of commonality across related
software systems. FORM [3] inspects a class of related systems and the commonality underlying those systems, from which it is
possible to derive a set of reference models. FORM starts with an analysis of commonality among applications in a particular domain
in terms of services, operating environments and domain technologies. The model constructed during this analysis is called the
feature model (FM) [3]; it captures commonality. The Domain Analysis and Reuse Environment [4] is a CASE tool which helps in the
domain analysis task of finding and recording the similarities and differences of related software systems. DARE [4] helps to capture
domain information from experts in a domain. Captured domain information is stored in a domain catalog, which encloses a generic
architecture for the domain and domain-specific components.
We also studied the problem of discovering association rules among items in a large database of sales transactions. Two
well-known algorithms, Apriori and Apriori-TID [5,6], are used for association rule mining, and the resulting rules capture affinities
among items [5,6]. The process of evaluating items through the opinions of other people is called Collaborative Filtering (CF) [7]. CF
technology brings together the opinions of large interconnected communities on the web, supporting the filtering of large amounts of
data. We studied the most important aspects of collaborative filtering, its key uses, and the principles and practice of CF [7]
algorithms. We also studied the challenges of a CF recommendation system and the evaluation of collaborative filtering [8].
RELATED WORK
G. Arango et al. [1] state that domain analysis is an information-intensive activity for which no theory or any sort of formalization is
available. Domain analysis is conducted informally, and all reported experiences focus on the result rather than on the methodology.
After analyzing two technologies, the authors model the domain analysis process as a set of data flow diagrams in which intermediate
activities and work products are distinguished. The central problem is the generation of reusable components that can be reused in
applications. Previously, it was assumed that reusable components were readily available; in practice, available components are hard
to understand and adapt to new applications, so even when software components are available for reuse, programmers have often
chosen to create their own. Domain
analysis, as an important step in producing truly reusable components, has shown better success in reusability. Components which are
the result of domain analysis are better suited for reuse because they capture the essential functionality required in that particular
domain, so developers find it easier to use them when building new systems. Analysis is therefore a very important factor in the success
of reusability. This work recommends the development and validation of a model of the domain analysis (DA) process.
The output of domain analysis supports the construction of systems, while the output of system analysis supports the designer's
tasks. In the conventional waterfall model of software development, the task of the systems designer is to produce a particular design
from a set of requirements and specifications, and the task of the system analyst is to create a model of a current system and propose
alternatives for automation or improvement. Both activities focus on a specific model for a particular system. In DA, the authors try to
generalize over all systems in an application domain, so DA is at a higher level of abstraction than system analysis. In domain
analysis, similar features of similar systems are generalized, objects and operations common to all systems within the same domain
are identified, and a model is defined to describe their relationships. If this process succeeds in identifying the objects and operations
of a domain-specific language, that language becomes the domain model and is used to describe objects and operations common in
that domain.
Advantages
The domain analysis approach is very simple.
Disadvantages
It is a manual process for which no methodology is available, so it is very time consuming.
K. Kang et al. [2]: the efficient discovery and exploitation of commonality across related software systems is an important
technical requirement for achieving successful software reuse. By inspecting a class of related software systems and the essential
concepts shared by those systems, domain analysis can offer a reference model for describing the class. The main aim of this report is
to establish Feature-Oriented Domain Analysis (FODA) as a method for performing a domain analysis. The feature-oriented idea is
based on the emphasis the method places on finding those features a user commonly expects in applications of a domain. The method
is based on a study of other domain analysis methodologies and defines both the products and the process of domain analysis. The
report is aimed at three groups:
1. Domain experts, who give detailed information about a domain.
2. Domain analysts, who perform the domain analysis.
3. Systems analysts and developers, who are the consumers of domain analysis products.
Domain analysis is a necessary step in establishing requirements for software reuse. The analysis can serve a variety of purposes
toward this end, imparting:
specifications for reusable resources;
tool support such as catalogs;
a process model for reuse.
Overall, the analysis provides a complete overview of the problems solved by software in a given domain.
The Feature-Oriented Domain Analysis (FODA) method is built on the results of other successful domain analysis efforts.
The method establishes three phases of a domain analysis:
Context analysis: to establish the scope.
Domain modeling: to define the problem space.
Architecture modeling: to characterize the solution space.
Advantages
Provides a communication language among participants.
Makes it easy to capture similarities and differences of requirements in feature-oriented models.
Representative of the application.
Disadvantages
There is no organized method for the use of reusable assets.
More manpower is required for domain analysis.
K.C. Kang et al. [3]: reuse of software products is one of the most promising solutions to the software crisis. Previous work on
efficient reuse, namely Feature-Oriented Domain Analysis (FODA), focuses on creating reusable assets. In FORM, the model constructed
during the analysis is called a feature model; it captures commonality as an AND/OR graph, where AND nodes indicate mandatory
features and OR nodes indicate alternative features selectable for different applications. The goals of FORM are as follows:
A method to create reusable software artifacts effectively.
A method to develop new applications from reusable artifacts.
Domain analysis can be defined as examining a class of related systems and extracting the commonalities and differences of these
systems. Feature-oriented domain analysis can be defined as performing domain analysis based on features. Domain engineering can be
defined as using the analysis results to create a set of reference models, reusable software architectures and components. The
purpose of FORM is to create a set of reusable assets and to build new applications from those assets. The feature model [3] is
studied and extended to cover non-functional features and to support the complete architecture.
Advantages
Focuses on the reuse concept and introduces a practical method.


Disadvantages
The meaning of "feature" is not clearly fixed among researchers.
FORM is semi-formal.
It provides a construction language but not a rigorous analysis of the models.
More manpower is required for domain analysis.
W. Frakes, R. Prieto-Diaz, and C. Fox [4]: software engineering has usually been concerned with the development of custom-built
single systems, and software engineering tools and methods have been developed to support single-system engineering. Modern
software engineering practice requires a move from creating systems one at a time to the use of formal engineering principles for
generating systems and families of systems. The practice of planned, systematic software reuse provides a way of accomplishing such
a transition. Systematic software reuse is based on the observation that quality and productivity can be increased meaningfully by
moving the attention of software engineering to a domain-centered view, and by recognizing that most software organizations do not
build totally new software applications. The key to quality and productivity improvements lies in adopting a process for building
multiple related systems in a problem domain. Infrastructure support is required for systematic reuse, and domain engineering is the
activity of creating an infrastructure to support it; domain engineering involves activities comparable to a manufacturing plant
producing software applications. Domain engineering is divided into two phases:
1. Domain analysis.
2. Domain implementation.
Domain analysis is the activity of detecting and documenting the similarities and differences in related software products in a domain.
There are two types of domain analysis methods:
A. Bottom-up analysis
Bottom-up analysis validates the basic architecture and features through analysis of documents and source code.
B. Top-down analysis
Top-down analysis proposes a common architecture and features based on the experience and knowledge of domain experts.
The information gathered in domain analysis is used to develop reusable components for the domain and to create a production
process for systematically reusing those components to produce new systems. The Domain Analysis and Reuse Environment (DARE)
is a CASE tool that supports the domain analyst in carrying out a well-defined domain analysis method. DARE is useful for detecting
and recording domain information from documents and code.
Advantages
DARE overcomes the limitations of previous domain analysis methods such as FODA [2] and FORM [3].
DARE is an automated process.
DARE provides a method focused on the extraction of high-level domain information from domain experts.
Disadvantages
DARE focuses its efforts only on a small set of software requirements specifications.
The extracted features are limited by the scope of the available specifications.
R. Agrawal et al. [5,6] present the association rule mining algorithm to discover affinities among items. The association rule
problem has a formal statement. Definition 1: Let I = {IT1, IT2, ..., ITn} be a set of n distinct attributes (items). Let DB be a database,
where each record TN has a distinctive identifier and contains a set of items such that TN ⊆ I. An association rule is an implication of
the form A ⇒ B, where A, B ⊆ I are sets of items called itemsets, and A ∩ B = ∅. Here, A is called the antecedent and B the
consequent. Two important measures for association rules, support (s) and confidence (c), can be defined as follows.
Definition 2: The support (s) of an association rule is the ratio (expressed as a percentage) of the records that contain A ∪ B to the
total number of records in the database. So, if we say that the support of a rule is 15%, it means that 15% of the total records contain
A ∪ B. Support is the statistical significance of an association rule. Mathematically, support is represented as
support(A ⇒ B) = (number of records containing A ∪ B) / (total number of records in DB)
Definition 3: For a given number of records, confidence (c) is the ratio (expressed as a percentage) of the number of records that
contain A ∪ B to the number of records that contain A. Thus, if we say that a rule has a confidence of 95%, it means that 95% of the
records containing A also contain B. The confidence of a rule indicates the degree of correlation in the dataset between A and B.
Mathematically, confidence is represented as
confidence(A ⇒ B) = (number of records containing A ∪ B) / (number of records containing A)
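For illustration, the short Python sketch below computes the support and confidence measures defined above over a toy transaction database; the data and the example rule are assumptions made only for the demonstration.

```python
# Small sketch of the support and confidence measures defined above.
# The toy database and the rule {bread} => {milk} are illustrative assumptions.
def support(db, itemset):
    """Fraction of records that contain all items of `itemset`."""
    return sum(1 for record in db if itemset <= record) / len(db)

def confidence(db, antecedent, consequent):
    """support(A ∪ B) / support(A) for the rule A => B."""
    return support(db, antecedent | consequent) / support(db, antecedent)

db = [{"bread", "milk"}, {"bread", "butter"}, {"bread", "milk", "butter"}, {"milk"}]
print(support(db, {"bread", "milk"}))        # 0.5
print(confidence(db, {"bread"}, {"milk"}))   # 2 of the 3 records containing bread -> 0.666...
```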

The Apriori algorithm [5,6] is a landmark in the history of association rule mining and is by far the best-known association rule
algorithm. It uses the property that any subset of a large (frequent) itemset must itself be a large itemset, and it assumes that the items
in an itemset are kept in lexicographic order. Earlier approaches extended the itemsets found so far with other individual items in the
transaction to generate candidate itemsets; however, those individual items may not themselves be large, and since any superset of a
small itemset is itself small, such approaches generate far too many candidate itemsets that turn out to be small. The Apriori algorithm
addresses this important issue. Apriori generates the candidate itemsets by joining the large itemsets of the previous pass and deleting
those candidates that have a subset which was small in the previous pass, without considering the transactions in the database. By
considering only the large itemsets of the previous pass, the number of candidate large itemsets is significantly reduced. In the first
pass, the itemsets with a single item are counted. The large itemsets discovered in the first pass are used to generate the candidate sets
of the second pass using the apriori_gen() [5][6] function. After the candidate itemsets are found, their supports are counted by
scanning the database in order to discover the large itemsets of size two. In the third pass, the large itemsets of the second pass are used
as the seed sets to find the large itemsets of that pass. This iterative procedure terminates when no new large itemsets are found. Each
pass i of the algorithm scans the database once and determines the large itemsets of size i. Li denotes the large itemsets of size i, while
Ci denotes the candidates of size i. The apriori_gen() function works as follows:
1. In the first step, Lk-1 is joined with itself to obtain Ck (candidates of size k).
2. In the second step, apriori_gen() deletes all itemsets from the join result that have some (k-1)-subset that is not in Lk-1. It then
returns the remaining candidate k-itemsets. [5,6]
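The level-wise search and the apriori_gen() join-and-prune steps described above can be sketched as follows in Python; the transaction data, the minimum support threshold and all identifiers are illustrative assumptions rather than the authors' original implementation.

```python
# Minimal sketch of the Apriori algorithm described above (level-wise search
# with apriori_gen candidate generation). Data and names are assumptions.
from itertools import combinations

def apriori_gen(prev_large):
    """Join L(k-1) with itself, then prune candidates with an infrequent (k-1)-subset."""
    prev = sorted(prev_large)
    k = len(prev[0]) + 1
    candidates = set()
    for a in prev:
        for b in prev:
            if a[:-1] == b[:-1] and a[-1] < b[-1]:          # join step
                cand = a + (b[-1],)
                # prune step: every (k-1)-subset must be large
                if all(sub in prev_large for sub in combinations(cand, k - 1)):
                    candidates.add(cand)
    return candidates

def apriori(transactions, min_support):
    """Return {itemset: support_count} for all large (frequent) itemsets."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)
    counts = {}
    for t in transactions:                                   # pass 1: single items
        for item in t:
            counts[(item,)] = counts.get((item,), 0) + 1
    large = {c: s for c, s in counts.items() if s / n >= min_support}
    all_large = dict(large)
    while large:                                             # subsequent passes
        candidates = apriori_gen(set(large))
        counts = {c: 0 for c in candidates}
        for t in transactions:                               # one database scan per pass
            for c in candidates:
                if t.issuperset(c):
                    counts[c] += 1
        large = {c: s for c, s in counts.items() if s / n >= min_support}
        all_large.update(large)
    return all_large

if __name__ == "__main__":
    db = [{"bread", "milk"}, {"bread", "butter"}, {"bread", "milk", "butter"}, {"milk"}]
    for itemset, count in sorted(apriori(db, min_support=0.5).items()):
        print(itemset, count)
```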
Advantages of the Apriori Algorithm
It uses the large itemset property.
Apriori is easily parallelized.
It is easy to implement.
The Apriori algorithm implements a level-wise search using the frequent itemset property.
Disadvantages of the Apriori Algorithm
Too much database scanning is needed to find the frequent itemsets, which reduces performance.
It assumes that the transaction database is memory resident.
Generation of candidate itemsets is expensive in both space and time.
Support counting is expensive because of subset checking and multiple database I/O scans.
Apriori-TID Algorithm [5][6]
Apriori scans the complete database in every pass to count support, but scanning the entire database may not be required in all passes.
Based on this observation, another algorithm called Apriori-TID was proposed. Like Apriori, Apriori-TID uses Apriori's candidate
generation function to determine the candidate itemsets before the start of a pass. Apriori-TID, however, does not use the database for
counting support after the first pass. Instead, it uses an encoding of the candidate itemsets used in the previous pass, denoted by Ck.
Each member of the set Ck is of the form <TID, Xk>, where Xk is a potentially large k-itemset present in the transaction with
identifier TID. In the first pass, C1 corresponds to the database, although each item is replaced by the itemset containing it. In later
passes, the member of Ck corresponding to transaction T is <TID, c>, where c is a candidate belonging to Ck contained in T.
Therefore, the size of Ck may be smaller than the number of transactions in the database. Moreover, each entry in Ck may be smaller
than the corresponding transaction for larger values of k, because only a few candidates may be contained in the transaction. It should
be mentioned that each entry in Ck may be larger than the corresponding transaction for smaller values of k. At first, the entire
database is scanned and C1 is obtained in terms of itemsets; that is, each entry of C1 has all items along with the TID. The large
1-itemsets L1 are calculated by counting the entries of C1. Then, apriori_gen() is used to obtain C2. The entries of C2 corresponding
to a transaction T are obtained by considering the members of C2 which are present in T; to perform this task, C1 is scanned rather
than the entire database. Afterwards, L2 is obtained by counting the support in C2. This process continues until the candidate itemsets
are found to be empty.
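A minimal sketch of the Apriori-TID idea described above is given below: after the first pass the database itself is no longer scanned, and an encoding of <TID, candidate itemsets contained in that transaction> is carried from pass to pass. The data structures and names are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of Apriori-TID: support is counted against the shrinking
# encoding Ck instead of the raw database after the first pass.
from itertools import combinations

def apriori_gen(prev_large):
    """Same join-and-prune candidate generation as in Apriori."""
    prev = sorted(prev_large)
    k = len(prev[0]) + 1
    out = set()
    for a in prev:
        for b in prev:
            if a[:-1] == b[:-1] and a[-1] < b[-1]:
                cand = a + (b[-1],)
                if all(s in prev_large for s in combinations(cand, k - 1)):
                    out.add(cand)
    return out

def apriori_tid(transactions, min_support):
    n = len(transactions)
    # Pass 1: the encoding holds, per transaction, the 1-itemsets it contains.
    c_bar = [(tid, {(item,) for item in t}) for tid, t in enumerate(transactions)]
    counts = {}
    for _, itemsets in c_bar:
        for c in itemsets:
            counts[c] = counts.get(c, 0) + 1
    large = {c: s for c, s in counts.items() if s / n >= min_support}
    all_large = dict(large)
    while large:
        candidates = apriori_gen(set(large))
        counts = {c: 0 for c in candidates}
        new_c_bar = []
        for tid, prev_sets in c_bar:            # scan the encoding, not the database
            # c is contained in the transaction iff both of its generating
            # (k-1)-itemsets were contained in it during the previous pass
            contained = {c for c in candidates
                         if c[:-1] in prev_sets and c[:-2] + (c[-1],) in prev_sets}
            for c in contained:
                counts[c] += 1
            if contained:
                new_c_bar.append((tid, contained))
        c_bar = new_c_bar
        large = {c: s for c, s in counts.items() if s / n >= min_support}
        all_large.update(large)
    return all_large

print(apriori_tid([{"a", "b"}, {"a", "c"}, {"a", "b", "c"}], min_support=0.5))
```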
Advantage of the Apriori-TID Algorithm
It uses an encoding of the candidates; in later passes the size of this encoding becomes smaller than the database, thus saving much
reading effort.
Disadvantage of the Apriori-TID Algorithm
The encoding is more complex.
J. Schafer, D. Frankowski, J. Herlocker, and S. Sen [7]: Collaborative Filtering is the process of evaluating items using the
opinions of other people. Collaborative filtering approaches depend upon the ratings of users. The term "user" denotes any individual
who provides ratings to a system. Generally we use this term to refer to the people using a system to receive information such as
recommendations, although it also refers to those who provided the data used in generating this information and in determining the
quality of items. Items can consist of anything for which a human can provide a rating, such as art, books, CDs, journal articles, or
vacation destinations.
Ratings in a collaborative filtering system can take one of the following forms.
Scalar ratings can consist of either numerical ratings, such as the 1-5 stars provided in the MovieLens system [15], a CF system
for movies, or ordinal ratings such as strongly agree, agree, neutral, disagree, strongly disagree.
Binary ratings model choices between agree/disagree or good/bad.
Unary ratings are used to indicate that a user has examined or purchased an item, or otherwise rated the item positively. The
absence of a rating indicates that we have no information relating the user to the item.
Ratings may be collected through explicit means, implicit means, or both. Ratings that a user is asked to provide as an opinion on an
item are called explicit ratings; ratings inferred from a user's actions are called implicit ratings. For example, a user who browses a
product page probably has some interest in that product, while a user who afterwards purchases the product may have a much stronger
interest in it. The vital idea is that the rating of user u for a new item i is expected to be similar to that of another user v, if u
and v have rated other items in a similar way. Likewise, u is likely to rate two items i and j in a similar fashion, if other users have
given similar ratings to these two items. Also, collaborative recommendations are based on the quality of items as assessed by peers.
Finally, collaborative filtering can recommend items with very different content, as long as other users have already shown
interest in these different items.
Collaborative filtering methods can be classified into two general classes:
1. Neighborhood methods
2. Model-based methods
Neighborhood methods
In neighborhood-based (memory-based or heuristic-based) collaborative filtering, the user-item ratings stored in the system are directly
used to predict ratings for new items. This can be done in two ways: user-based or item-based.
User-based recommendation
User-based systems evaluate the interest of a user u for an item using the ratings for this item by other users, called
neighbors, that have similar rating patterns. The neighbors of user u are typically the users v whose ratings on the items rated by both u
and v, i.e. Iuv, are most correlated to those of u.
Item-based recommendation
The item-based approach, on the other hand, predicts the rating of a user u for an item i based on the ratings of u for items similar
to i. In such approaches, two items are similar if several users of the system have rated these items in a similar fashion.
pred(u, i) = Σn∈N sim(u, n) · rn,i / Σn∈N |sim(u, n)|

Where,
u is the user,
i is the item,
rn,i is neighbor n's rating of item i, and sim(u, n) is the similarity between u and neighbor n.
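A small Python sketch of this kind of neighborhood-based prediction is given below; the ratings matrix, the choice of cosine similarity and the weighted-average form are illustrative assumptions rather than the exact formulation used in [7].

```python
# Hedged sketch of neighborhood-based prediction: user u's rating of item i is
# predicted from the ratings given to i by the most similar users (neighbors).
import math

ratings = {                        # user -> {item: rating}
    "alice": {"i1": 5, "i2": 3, "i3": 4},
    "bob":   {"i1": 4, "i2": 2, "i3": 5},
    "carol": {"i1": 1, "i2": 5},
}

def user_sim(u, v):
    """Cosine similarity between two users over the items both have rated."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    dot = sum(ratings[u][j] * ratings[v][j] for j in common)
    nu = math.sqrt(sum(ratings[u][j] ** 2 for j in common))
    nv = math.sqrt(sum(ratings[v][j] ** 2 for j in common))
    return dot / (nu * nv)

def predict(user, item, k=2):
    """Weighted average of the ratings given to `item` by the k most similar users."""
    raters = [v for v in ratings if v != user and item in ratings[v]]
    neighbors = sorted(raters, key=lambda v: user_sim(user, v), reverse=True)[:k]
    num = sum(user_sim(user, v) * ratings[v][item] for v in neighbors)
    den = sum(abs(user_sim(user, v)) for v in neighbors)
    return num / den if den else 0.0

print(round(predict("carol", "i3"), 2))   # carol's predicted rating for item i3
```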

Model-based methods.
In contrast to neighborhood based systems, which use the stored ratings directly in the prediction, model-based approaches
use these ratings to learn a predictive model. The general idea is to model the user-item interactions with factors representing latent
characteristics of the users and items in the system, like the preference class of users and the category class of items. This model is
then trained using the available data, and later used to predict ratings of users for new items.
Advantages of the neighborhood-based collaborative filtering method
1. Simplicity
Neighborhood-based methods are intuitive and, in their simplest form, quite simple to implement.
2. Justifiability
The method also provides a concise and natural justification for the computed predictions.
3. Efficiency
One of the strong points of neighborhood-based systems is their efficiency. They require cheap training phases, which need to be
carried out at frequent intervals in large commercial applications.
4. Stability
Another useful property of recommender systems based on this approach is that they are little affected by the constant addition of
users, items and ratings, which is typically observed in large commercial applications.
Disadvantages of the collaborative filtering method
1. Privacy
In order to provide personalized information to users, a CF system must know things about the user, so privacy is a big challenge.
2. Security
Even a system that maintains the security of users' ratings can be exploited to reveal personal information.
3. Trust
A recommender system may break down when malicious users give ratings that are not representative of their true preferences.
J. Herlocker et al. [8] state that recommender systems have been evaluated in many ways. The main focus of this article is on
the key decisions in evaluating collaborative filtering recommender systems: the user tasks being evaluated, the types of analysis and
datasets being used, the ways in which prediction quality is measured, the evaluation of prediction attributes other than quality, and
the user-based evaluation of the system as a whole.
H. Dumitru et al. [13] state that, in order to model and recommend product features in a particular domain, a
recommender system is used. This approach mines product descriptions from publicly available online specifications, utilizes text
mining [12] and a new IDC algorithm to discover domain-specific features, generates a probabilistic feature model that represents
commonalities, variants, and cross-category features, and then uses association rule mining [5,6] and the k-Nearest-Neighbor (kNN) [7]
learning strategy to design a content-based recommender algorithm. This approach has been shown to perform well in previous
work on forum recommendations [7,9,10]. The kNN algorithm computes a feature-based similarity measure between a new product
and each existing product. The top k most similar products are considered neighbors of the new product. It then uses information from
these neighbors to infer other potentially interesting features and to make recommendations. The product similarity
productSim(p, n) between a new product p and each existing product n is computed using the binary equivalent of the cosine similarity
as follows:
productSim(p, n) = |Fp ∩ Fn| / sqrt(|Fp| · |Fn|)
where Fp denotes the set of features of product p [12]. This metric generates numbers between 0 and 1, where products with identical
features score 1, and products with no common features score 0. productSim(p, n) is computed between the new product and all
previously mined products. A neighborhood is then computed for the new product p by selecting the top k most similar products.
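The following Python sketch illustrates the binary cosine similarity and the kNN neighborhood step described above; the product feature sets, the value of k and the simple "rank candidate features by how many neighbors have them" recommendation heuristic are illustrative assumptions, not the algorithm of [13].

```python
# Hedged sketch of productSim(p, n) and a kNN-style feature recommendation.
import math

products = {                                   # existing, already-mined products
    "p1": {"login", "search", "cart"},
    "p2": {"login", "search", "wishlist"},
    "p3": {"login", "reports"},
}

def product_sim(fp, fn):
    """|Fp ∩ Fn| / sqrt(|Fp| * |Fn|): 1 for identical feature sets, 0 for disjoint ones."""
    if not fp or not fn:
        return 0.0
    return len(fp & fn) / math.sqrt(len(fp) * len(fn))

def recommend(new_features, k=2):
    """Find the k nearest products and suggest their features the new product lacks."""
    neighbors = sorted(products, key=lambda n: product_sim(new_features, products[n]),
                       reverse=True)[:k]
    candidates = {}
    for n in neighbors:
        for f in products[n] - new_features:
            candidates[f] = candidates.get(f, 0) + 1
    return sorted(candidates, key=candidates.get, reverse=True)

print(recommend({"login", "search"}))          # e.g. ['cart', 'wishlist']
```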
Advantages
Potentially increasing opportunities for reuse.
Reducing time-to-market.
Delivering more competitive software products.
Disadvantages
This recommender system supports the relatively labor-intensive task of domain analysis.
J. Sandvig et al. [15] state that the open nature of collaborative recommender systems presents a security problem. Standard
memory-based collaborative filtering algorithms [14], such as k-nearest neighbor, have been shown to be quite vulnerable to such
attacks. Model-based techniques have shown different degrees of improvement over kNN with respect to robustness in the face of
profile injection attacks. The authors examine the robustness of a model-based recommendation algorithm based on the data mining
technique of association rule mining, in particular the Apriori algorithm [5,6], which gives a substantial improvement in stability and
robustness compared to k-nearest neighbor and other model-based techniques [7].
Advantages
It is more robust, i.e. less vulnerable to profile injection attacks.
Chuan Duan and Jane Cleland-Huang [17] state that automated trace tools dynamically generate links among several software
artifacts such as requirements, design elements, code and test cases. Trace algorithms usually make use of information retrieval
methods [11] to calculate similarity scores between pairs of artifacts. The similarity score between any pair of artifacts a1 and a2 is then
computed based on the similarity s(a1, a2) of their vectors as follows:
s(a1, a2) = pr(a2|a1) = [ Σ (i=1..k) pr(a2|ti) · pr(a1, ti) ] / pr(a1)
where k is the number of clusters and i ranges over the terms ti of the artifacts.
pr(a2|ti) is computed as pr(a2|ti) = f2i / Σk f2k and estimates the extent to which the information in ti describes the concept a2 with
respect to the total number of terms in the artifact, while pr(a1, ti) is computed as pr(a1, ti) = f1i / ni, where ni represents the number of
artifacts in the collection containing the term ti. Finally, pr(a1) is computed as pr(a1) = Σ (i=1..k) pr(a1, ti). During the clustering
phase, similarity scores are computed for each potential pair of artifacts.
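The reconstructed similarity score can be sketched as follows in Python; the toy artifacts, the tokenization and all names are illustrative assumptions and not the authors' tool.

```python
# Hedged sketch of s(a1, a2) = pr(a2|a1) computed from term frequencies.
from collections import Counter

corpus = {                                      # artifact id -> token list
    "a1": "system shall trace requirements to test cases".split(),
    "a2": "test cases trace back to requirements".split(),
    "a3": "user interface displays reports".split(),
}

def similarity(a1, a2):
    f1, f2 = Counter(corpus[a1]), Counter(corpus[a2])
    n2 = sum(f2.values())                       # total number of terms in a2
    terms = set(f1) | set(f2)
    def pr_a1_t(t):
        # pr(a1, ti) = f1i / ni, where ni = number of artifacts containing ti
        ni = sum(1 for a in corpus if t in corpus[a])
        return f1[t] / ni if ni else 0.0
    pr_a1 = sum(pr_a1_t(t) for t in terms)
    num = sum((f2[t] / n2) * pr_a1_t(t) for t in terms)   # pr(a2|ti) * pr(a1, ti)
    return num / pr_a1 if pr_a1 else 0.0

print(round(similarity("a1", "a2"), 3))   # related artifacts -> higher score
print(round(similarity("a1", "a3"), 3))   # unrelated artifacts -> 0.0
```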
Advantages
The technique is fully automated in Java, so it is more secure.
Disadvantages
The method still needs to be evaluated for cluster-based tracing over a broader set of artifact types and over larger and more
complicated datasets.
V. Alves et al. [19] state that domain analysis includes not only looking at standard requirements documents such as use case
specifications but also at customer information packets. Looking across all these documents and deriving, in a practical and
scalable way, a feature model that is comprised of coherent abstractions is a fundamental and non-trivial challenge. The authors
conduct an exploratory study to examine the suitability of Information Retrieval (IR) techniques for the scalable identification of
commonalities and variabilities in requirement specifications for software product lines.
M. Acher et al. [20] state that, in product line engineering, domain analysis is the process of analyzing related products to identify
their common and variable features. This procedure is normally carried out by experts on the basis of existing product
descriptions, which are expressed in a more or less structured way. Modeling and reasoning about product descriptions are error-prone
and time-consuming tasks. Feature models [3] constitute a popular means to specify product similarities and differences in a
compact way, and to provide automated support to the domain analysis process. The main focus is on simplifying the transition
from product descriptions stated in a tabular format to feature models accurately representing them. This procedure is parameterized
through a dedicated language and high-level directives (e.g. feature scoping). The authors ensure that the resulting feature model
represents the set of permitted feature combinations supported by the considered products and has a clear tree hierarchy together with
variability information.
S. Apel et al. [21] state that feature-oriented software development (FOSD) is a paradigm for the construction, customization and
composition of large software systems. The main focus of this article is to give an overview and a personal perspective on the roots of
FOSD, its relations to other software development paradigms, and recent developments in this field. The aim is to point out relations
between different lines of research and to identify open issues.
ACKNOWLEDGMENT
I express many thanks to Prof. D. B. Hanchate for his great effort in supervising and guiding me to accomplish this work. Thanks
also to the college and department staff, who were a great source of support and encouragement, and to my friends and family for
their warm and kind encouragement and love. To every person who gave me something to light my pathway, I am thankful for
believing in me.
CONCLUSION
In this paper we studied domain analysis, the process of identifying, organizing, analyzing, and modeling features common
to a particular domain. It is conducted in the early phases of the software development life-cycle to generate ideas for a product and to
discover commonalities and variants within a domain. Most domain analysis techniques, such as feature-oriented domain analysis
(FODA) and the feature-oriented reuse method (FORM), depend upon analysts manually reviewing existing requirement
specifications or product brochures and websites, and are therefore quite labor intensive. The success of these approaches depends
upon the availability of relevant documents and/or access to existing project repositories, as well as the knowledge of the domain
analyst. Other approaches, such as the domain analysis and reuse environment (DARE), use data mining and information retrieval
methods to provide automated support for feature identification and extraction, but tend to focus their efforts on only a small handful
of requirements specifications; the extracted features are therefore limited by the scope of the available specifications. In this paper
we also focused on recommender systems: the first technique uses association rule mining, in which two algorithms, Apriori and
AprioriTID, discover all significant association rules among items in a large database of transactions. The other technique,
collaborative filtering recommender systems, analyzes neighborhoods of similar products to identify new features for items.
REFERENCES
[1] G. Arango and R. Prieto-Diaz, "Domain Analysis: Acquisition of Reusable Information for Software Construction," IEEE CS Press,
May 1989.
[2] K. Kang, S. Cohen, J. Hess, W. Nowak, and S. Peterson, "Feature Oriented Domain Analysis (FODA) Feasibility Study,"
Technical Report CMU/SEI-90-TR-021, Software Eng. Inst., 1990.
[3] K.C. Kang, S. Kim, J. Lee, K. Kim, G.J. Kim, and E. Shin, "FORM: A Feature-Oriented Reuse Method with Domain-Specific
Reference Architectures," Annals of Software Eng., vol. 5, pp. 143-168, 1998.
[4] W. Frakes, R. Prieto-Diaz, and C. Fox, "DARE: Domain Analysis and Reuse Environment," Annals of Software Eng., vol. 5, pp.
125-141, 1998.
[5] R. Agrawal and R. Srikant, "Fast Algorithms for Mining Association Rules," Proc. 20th Int'l Conf. Very Large Data Bases, 1994.
[6] R. Agrawal and R. Srikant, "Fast Algorithms for Mining Association Rules," Proc. 20th Int'l Conf. Very Large Data Bases (VLDB
'94), pp. 487-499, Sept. 1994.
[7] J. Schafer, D. Frankowski, J. Herlocker, and S. Sen, "Collaborative Filtering Recommender Systems," The Adaptive Web, pp.
291-324, Springer, 2007.
[8] J. Herlocker, J. Konstan, L. Terveen, and J. Riedl, "Evaluating Collaborative Filtering Recommender Systems," ACM Trans.
Information Systems, vol. 22, pp. 5-23, 2004.
[9] C. Castro-Herrera, C. Duan, J. Cleland-Huang, and B. Mobasher, "A Recommender System for Requirements Elicitation in
Large-Scale Software Projects," Proc. 2009 ACM Symp. Applied Computing, pp. 1419-1426, 2009.
[10] K. Chen, W. Zhang, H. Zhao, and H. Mei, "An Approach to Constructing Feature Models Based on Requirements Clustering,"
Proc. IEEE Int'l Conf. Requirements Engineering, pp. 31-40, 2005.
[11] C.D. Manning, P. Raghavan, and H. Schutze, Introduction to Information Retrieval. Cambridge Univ. Press, 2008.
[12] E. Spertus, M. Sahami, and O. Buyukkokten, "Evaluating Similarity Measures: A Large-Scale Study in the Orkut Social
Network," pp. 678-684, Chicago, Illinois, USA, 2005. ACM.
[13] H. Dumitru, M. Gibiec, N. Hariri, J. Cleland-Huang, B. Mobasher, C. Castro-Herrera, and M. Mirakhorli, "On-Demand Feature
Recommendations Derived from Mining Public Software Repositories," Proc. 33rd Int'l Conf. Software Eng., p. 10, May 2011.
[14] J.S. Breese, D. Heckerman, and C. Kadie, "Empirical Analysis of Predictive Algorithms for Collaborative Filtering," Proc. 14th
Annual Conf. Uncertainty in Artificial Intelligence, pp. 43-52, Morgan Kaufmann, 1998.
[15] J. Sandvig, B. Mobasher, and R. Burke, "Robustness of Collaborative Recommendation Based on Association Rule Mining,"
Proc. ACM Conf. Recommender Systems, 2007.
[16] B. Sarwar, G. Karypis, J. Konstan, and J. Riedl, "Analysis of Recommendation Algorithms for e-Commerce," Proc. ACM Conf.
Electronic Commerce, pp. 158-167, 2000.
[17] C. Duan and J. Cleland-Huang, "Clustering Support for Automated Tracing," Center for Requirements Engineering, DePaul
University, 243 S. Wabash Avenue, Chicago IL 60604, {duanchuan, jhuang}@cs.depaul.edu.
[18] J. Herlocker, J. Konstan, L. Terveen, and J. Riedl, "Evaluating Collaborative Filtering Recommender Systems," ACM Trans.
Information Systems, vol. 22, pp. 5-23, 2004.
[19] V. Alves, C. Schwanninger, L. Barbosa, A. Rashid, P. Sawyer, P. Rayson, K. Pohl, and A. Rummler, "An Exploratory Study of
Information Retrieval Techniques in Domain Analysis," Proc. 12th Int'l Software Product Line Conf., pp. 67-76, 2008.
[20] M. Acher, A. Cleve, G. Perrouin, P. Heymans, C. Vanbeneden, P. Collet, and P. Lahire, "On Extracting Feature Models from
Product Descriptions," Proc. Sixth Int'l Workshop Variability Modeling of Software-Intensive Systems (VaMoS '12), pp. 45-54, 2012.
[21] S. Apel and C. Kastner, "An Overview of Feature-Oriented Software Development," J. Object Technology, vol. 8, no. 5, pp.
49-84, July/Aug. 2009.


Congruence Control Management for University Governance


Claudia-Georgeta CARSTEA
George Baritiu University of Brasov, Romania
15claudia.carstea@gmail.com

Abstract: With a view to improving U-governance, we created a collaborative network based on an academic partnership aimed at
conceiving an integrated informational system to be implemented and generalized. The informational system was developed in the
SOA context. Reduced funding designated for national education creates significant problems for university management in
Romania. SIMUR is an integrated informational system, designed as an instrument for university management (U-GOV). We
consider the fact that the educational system is not an independent entity and exists within an environment. The system created allows
activating a series of controls on the planned appointments.

Keywords: Congruence Control, University Management, Compatibility, Overlapping, SOA, Integrated System
1. INTRODUCTION
Specific objectives of the SIMUR project are: [1]
a) Designing and implementing a standard configuration in the five universities which should be adapted to the requirements of
Romanian universities to the Romanian legislation in force and to the European priorities;
b) Redesigning and optimizing the main existing informational flows;
c) Increasing the transparency level by providing up to date information at any moment;
d) Increasing the efficiency of the activities of the staff involved in university management;
e) In the context of Romania's integration in the European Union and according to the Lisbon priorities, Romanian universities
are faced with the necessity to meet the new requirements and standards. [12]
The management cycle of the logistics of the university's resources can be divided into three macro phases, not necessarily
sequential, which can thus be considered, also simultaneously, by an operator:
- Planning: the first drafting of the timetable, if possible in diagrammatic form in the case of a recurrent event, or a simple
calendar of single dates.
- Management of variations: once activities have started, the provisional calendar may certainly undergo single variations
over time, which can be managed in such a way as to provide a precise and punctual communication transmitted to the
persons involved and to simultaneously monitor the effective use of the resources.
- Monitoring: defined as a final opportunity to detect the actual course of the events planned and the actual employment of
resources. These recordings, normally registered on a sample basis, can then be the subject of statistical analyses or used
by the decision-making support system.

2. CONGRUENCE CONTROL FOR UNIVERSITY MANAGEMENT


The SIMUR system allows a context management [7]. There are three types of control (a small sketch follows the list below):
- Availability: the actual availability of resources, events and people is controlled by the system, comparing the planned events with the
availability calendar. For fixed resources this is the opening timetable and closing days, for events it is the didactic
period, and for people it is the attendance days agreed.
- Overlapping: the program indicates appointments overlapping on the same resources, events or people. In order to avoid
overlapping between different events, the function for generating and managing links is used.
- Compatibility: the program indicates whether the characteristics of an event match the characteristics of the available fixed
resource.
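As a rough illustration, the sketch below applies the three controls to a planned appointment; the data model, field names and example calendar are assumptions for the example only, not the SIMUR implementation.

```python
# Hedged sketch of the availability / overlapping / compatibility checks.
from dataclasses import dataclass

@dataclass
class Appointment:
    resource: str          # e.g. a room
    day: str
    start: int             # hour, 24h clock
    end: int
    required: set          # characteristics the event needs, e.g. {"projector"}

resources = {
    "Room A": {"open": (8, 20), "closed_days": {"Sun"}, "features": {"projector", "whiteboard"}},
}

def check(appt, planned):
    issues = []
    res = resources[appt.resource]
    # Availability: compare the planned event with the resource's availability calendar
    if appt.day in res["closed_days"] or not (res["open"][0] <= appt.start and appt.end <= res["open"][1]):
        issues.append("availability")
    # Overlapping: the same resource must not host two events at the same time
    if any(a.resource == appt.resource and a.day == appt.day
           and appt.start < a.end and a.start < appt.end for a in planned):
        issues.append("overlapping")
    # Compatibility: the event's characteristics must match the fixed resource
    if not appt.required <= res["features"]:
        issues.append("compatibility")
    return issues

planned = [Appointment("Room A", "Mon", 9, 11, {"projector"})]
print(check(Appointment("Room A", "Mon", 10, 12, {"video conference"}), planned))
# -> ['overlapping', 'compatibility']
```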

The collaborative network includes universities from Italy and Romania. The Romanian universities have been selected in such a way
as to cover different university fields (economic, technical, and pedagogical) as well as different education forms (private and
state universities). (Fig. 1)
[Fig. 1 (diagram): the collaborative network links CINECA Bologna (Italy), S.C. CRYSTAL System and the partner universities (Babes-Bolyai University Cluj, The Bucharest Academy of Economic Studies, Polytechnic University Timisoara, George Baritiu University Brasov, Constanta Maritime University), each with an application platform, user interface, data acquisition server and local devices, integrated through process orchestration, educational services, business objects and a persistence layer built on Oracle Application, SOA and IBM solutions.]
Fig 1: Collaborative network architecture


The advantages of collaborative networks are the following:
It offers the possibility to reduce the duration of development and implementation of the system;
It increases the chances of achieving a high-performing system because it is based on previous achievements, verified in practice, and
on the accumulated experience of the partners over time;
It offers the possibility of cost reduction for the development and implementation of the system because we can use partially or
totally the software already made (software reusability);
It decreases the risk of implementing the system because it is based on achievements already verified in practice.

3. ADAPT TO CHANGES AND DATA INTEGRATION


Starting from the observation that a main characteristic of each level is that it ends with a check-up and a validation meant
to eliminate certain anomalies, it must be underlined that good I.T. security and control of the administration practices of
complex I.S. projects are essential. [4] Under these circumstances, new managerial solutions can be sought in order to integrate:
- time control,
- cost control,
- quality control of the working team,
- control of the obtained results.


There are many different personality and motivational models and theories, and each one offers a different perspective. [9] The more
models you understand, the better your appreciation of motivation and behavior. For organizational change that entails new actions,
objectives and processes for a group or team of people, use workshops to achieve understanding, involvement, plans, measurable
aims, actions and commitment. Encourage your management team to use workshops with their people too if they are helping you to
manage the change. [2,7]
You should even apply these principles to very tough change like making people redundant, closures and integrating merged or
acquired organizations. Bad news needs even more careful management than routine change. Hiding behind memos and middle
managers will make matters worse. Consulting with people, and helping them to understand does not weaken your position - it
strengthens it. Leaders who fail to consult and involve their people in managing bad news are perceived as weak and lacking in
integrity. Treat people with humanity and respect and they will reciprocate.
For complex changes, refer to the process project management and ensure that you augment this with consultative communications to
agree and gain support for the reasons for the change. Involving and informing people also creates opportunities for others to
participate in planning and implementing the changes, which lightens your burden, spreads the organizational load, and creates a sense
of ownership and familiarity among the people affected. [5, 8, 7]
The ethical values of an organization cannot be better than those of the employees (including those in leading positions) who create
them, make them work and supervise them! All this because there are organizational factors that contribute to
unauthorized actions, such as:
- the inefficiency of controls within the company;
- the inefficient decentralization of the reporting system;
- penalizations of some employees that were not announced to the entire company.
But the employees' ethical conduct is not enough. [1, 3, 9] Their abilities are another essential element of the control environment.
Ability, that is, the knowledge and aptitudes necessary in every line of work, must be mentioned by the leaders. It is in the interest of
every company to have the best employees. (Fig. 2)
We focused on a number of different levels of data integration, such as:
- shared files: all tools recognize a single file format;
- shared data structures: the details of the data structure are agreed in advance by all tools and are hard-wired into the tools.

Fig. 2: Activities Diagram


Problems come out in different ways. Feedback can offer information regarding the gap between actual performance and
the desired one; we could say that feedback highlights the problems. External feedback is also extremely important and must not
be ignored.
The three most relevant and key elements in order for the project to be realized are:
- Technical feasibility - additions to the present system and the technology available for the users' requests;
- Economic feasibility - the period of time in which a project is designed, the cost of its planning, the cost of the employees'
study time, and the estimated cost of the hardware and software equipment and of their development;
- Operational feasibility - the proper functioning of the system after it has been installed and the usage of the designed project.


For the approval of each project, needs must be identified first. It is also necessary that the project be feasible from the
technical point of view: does the present technology allow its accomplishment, and if these technologies exist, are
they accessible taking into consideration the knowledge, the abilities, the budgets, and the human and material resources?
4. SYSTEM REQUIREMENTS
The project manager is responsible for the correct investment of the resources, as well as for their usage in order to obtain the desired
result. (Fig. 3)
The first and most immediate mode of operating the controls is to activate the engine and work directly on the open scheduler. Every
time the engine detects a problem with an appointment, a warning signal appears. In this mode, the engine indicates the
problems in sequence, not all at the same time. The default controls that the procedure carries out can be configured and
customized by activating the appropriate panel in the options. [1, 7, 10]

Fig.3: A possible solution for an efficient ERP system


Establishing the system requirements was very difficult because of the system complexity. We concentrated on the following types of
requirements:
- Functional requirements: the basic functions that the system must provide. These were set out at an abstract level and then in
detail; detailed functional requirements specification took place at the sub-system level.
- System properties: the non-functional emergent system properties, such as availability, performance and safety.
These non-functional properties affect the requirements of all sub-systems.
- Characteristics which the system must not exhibit: what the system must not do.
An important part of the requirements definition phase was to establish a set of overall objectives which the designed system should
meet.
Some of the activities involved in the SIMUR design were:
- partition requirements: during this phase we analyzed the requirements and collected them into related groups;
- identify sub-systems: we identified the different sub-systems that can, individually or collectively, meet the requirements.
We also considered the influence of other organizational or environmental factors;
- assign requirements to sub-systems: as we know, in practice there is never a clean match between requirements partitions and
identified sub-systems;
- specify sub-system functionality, as a part of the system design phase;
- define sub-system interfaces.

5. ACKNOWLEDGMENT
Thanks to my Teacher and Mentor Professor Ph.D. Sabau Gheorghe for including me in his European Research Project and to the
Editor of IJERGS Journal and his team for publishing my research work.

6. CONCLUSION
In the current context, analyzing the academic environment, we have drawn a series of conclusions, among which we can mention:
- the activities in the university environment are characterized by dynamism and complexity;
- there are several factors that generate a series of changes; for this reason much more emphasis is placed on strategic
planning and management control;
- it is necessary to develop an efficient informational system for U-GOV;
- the development of an informational system in the SOA context is highly advantageous.
Based on the conclusions reached in fig. 6 we proposed our own solution for U-GOV. Our solution is SOA oriented.
The collaborative network of the U-GOV system has been achieved on the basis of an academic partnership, including several
universities and some companies specialized in developing and implementing software in the university domain.
The role of the universities within the partnership is that of making an inventory of the problems of Higher Education and of
proposing solutions; the specialized companies are supposed to ensure the transfer of the logistic and technical support for the
implementation of the system.
In a partnership of this kind, the numerous entities involved constitute a collaborative network based on synergism. Within this context,
synergism means simultaneous action or collaboration of entities towards the same goal, with a view to achieving a common objective
with economical use of material, financial and human means, which would otherwise have been difficult to achieve.

REFERENCES:
[1] Alexandra Florea, Claudia Carstea, Gheorghe Sabau, "Information System Flexibility", Revista Calitatea - Acces La Succes, Academia Comerciala Din Satu Mare, vol. 12, no. 12, June 2011, http://simpozion2011.academiacomerciala.ro/, indexed in international databases, http://calitatea.srac.ro/arhiva_revista.html
[2] Carstea C., Nicoleta David, "Solutions about Evaluation and Control Data for Complex Information Systems", pp. 165-169, 7th WSEAS International Conference on New Aspects of Telecommunications and Informatics (TELE-INFO '08), Istanbul, Turkey, 27-30 May 2008, ISBN 978-960-6766-64-0, ISSN 1790-5117, www.wseas.org, indexed by ISI, http://www.worldses.org/books/index.html
[3] Claudia Carstea, Gheorghe Sabau, "Project Follow-Up and Change Management Tips", Informatica Economica Journal, Vol. 17, No. 3, 2013, http://www.crossref.org, http://revistaie.ase.ro/, http://revistaie.ase.ro/content/67/08%20%20Carstea,%20Sabau.pdf, Faculty of Cybernetics, Statistics and Economic Informatics, University of Economic Studies, Bucharest, ISSN 1453-1305, EISSN 1842-8088, CNCSIS B+ category
[4] Claudia Carstea, Decision Support for Software Project Management, Ed. Economica, 2010, ISBN 978-973-709-377-6
[5] Dan Woods and Thomas Mattern, Enterprise SOA: Designing IT for Business Innovation, O'Reilly, 2006, pp. 10-79
[6] Sabau Gh., Cretan A., "A Negotiation Approach for Inter-Organizational Alliances", International Conference on Future Networks, Proceedings, 2009
[7] Gheorghe Sabau, Mihaela Muntean, Ana-Ramona Bologa, Razvan Bologa, "Analysis of Integrated Software Solutions Market for Romanian Higher Education", Journal of Economic Computation and Economic Cybernetics Studies and Research, 2009, pp. 197-203, ISSN 0424-267X, indexed ISI Thomson Reuters
[8] Laufer, A. & Hoffman, E., Project Management Success Stories: Lessons of Project Leaders, New York: John Wiley & Sons, 2000, ISBN 0-471-36007-4
[9] Olinger, Charles, "The Issues Behind ERP Acceptance and Implementation", APICS: The Performance Advantage, 2007
[10] Oracle SOA Suite 10g Services Orchestration, 2006, pp. 1-99
[11] Sandberg J., "Understanding Human Competences at Work: An Interpretative Approach", The Academy of Management Journal, Vol. 43, No. 1, 2005
[12] http://www.change-management.com
[13] http://www.businessballs.com/organizationalchange.htm

Modelling and Analysis of Automated Guided Vehicle System (AGVS)


Anilkumar K, Srinivasa Chari.V, Hareesha.M.S
Department of Mechanical Engineering, M.S.Engineering College, Visvesvaraya Technological University, Belgaum-590014
Email: akanilkumarme@gmail.com, charimech123@gmail.com;harishheggere@gmail.com

Abstract- The Automated Guided Vehicle (AGV) is a mobile robot used in industrial applications to move materials around a
manufacturing facility or a warehouse. An AGV can also be called a laser guided vehicle (LGV) or self-guided vehicle (SGV). In
Germany the technology is also called Fahrerloses Transportsystem (FTS).
AGVs are extensively adopted in Flexible Manufacturing Systems (FMS). AGVs navigate on their own with the help of
various guiding mechanisms; broadly these mechanisms are classified as wired and wireless guidance mechanisms. In this project we
mainly concentrate on a wireless guidance mechanism which makes use of a guide tape (colored in this case) for navigation. Basically this
type of AGV acts as a line-follower robot which follows the guide tape laid on the shop floor. The guide tape is sensed
by the onboard sensors present on the AGV; the sensor used for this purpose is an optical sensor which uses infrared LEDs.
AGVs can be controlled by one centralized or onboard computer present in the AGV; in this project we make use of an
onboard microcontroller which acts as a machine control unit (MCU) to control and coordinate all the parameters of the AGV.
Index Terms: Automated Guided Vehicle
1. INTRODUCTION
Automated guided vehicles (AGVs) help to reduce costs of manufacturing and increase efficiency in a manufacturing system.
1.1. Problem Statement
One of the major challenges in industry is internal logistics; the problem arises when the right product does not reach the right
destination at the right time. To overcome this, advanced material handling technologies such as AGVs are used.

Flexibility, agility and dependability are critical to business survival in the 21st century. Stockholders and customers demand value;
companies must offer more for less or risk losing market share. Product models come and go as consumer preferences change in a
single season. In retail operations, change may happen even faster.
Due to the advancements in science, technology and processes in today's business, the only real constant is change. In the face of
unrelenting change, how do factories and distribution centres get the right product to the right place at the right time? In growing numbers they
meet the challenge with internal plant logistics systems based on advanced material handling technology. These systems offer flexibility,
agility and dependability to help manage the pace of change.
1.2 Scope of the Thesis and Literature Review
An AGV is a mechatronic system. Designing a mechatronic system is a challenge for mechanical engineers, as it requires a synergistic
approach involving multiple disciplines such as mechanical engineering, electronics and computer science.
Modern day industries are looking for a Flexible Manufacturing System (FMS), which boosts their production and profit
margin. AGVs are extensively used in FMS.
Future mechanical engineers will have to work with such mechatronic systems, as more and more companies are looking to these kinds of
technologies to meet their internal logistics problems. This thesis is helpful in designing such mechatronic systems and in improving the
knowledge base.
2. THE FUNCTIONAL REQUIREMENTS FOR THIS PROJECT

- Primary Function: an AGV used in a Flexible Manufacturing System (FMS) to carry a load from one point to another
autonomously along the guide tape.
- The Mechanical Structure: an AGV with a fork lifter (double power-screw mechanism), a unit load carrier, a multi-wheel
vehicle system equipped with a belt drive, bumpers, and a gearbox coupled to the motors.
- Power Transmission: gear drive transmission.
- Driving System: electrical actuators (high-torque geared DC motors) were adopted.
- Sensory System: non-contact optical sensors for positioning, path tracking and obstacle detection; contact-type
sensors for fork-lift positioning and obstacle detection.
- Control System: onboard machine control unit (a microcontroller is used as the MCU).
- Other features, such as a dynamic LCD display, buzzer and warning lights, were used to add more features to the AGV. A minimal sketch of the line-following control logic appears below.
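The line-following behaviour (infrared reflectance sensors feeding a differential drive through the MCU) can be sketched as a simple decision function. The three-sensor layout, the speed values and the stop-on-lost-tape behaviour below are assumptions made for illustration, not the project's actual firmware.

def steer_from_sensors(left_on_tape, centre_on_tape, right_on_tape):
    """Map three IR reflectance readings to motor speed commands.

    Returns (left_motor_speed, right_motor_speed) in the range 0..100.
    The thresholds and speeds are illustrative; a real AGV would tune
    them against the guide tape and the drive geometry.
    """
    if centre_on_tape and not (left_on_tape or right_on_tape):
        return 80, 80            # tape centred: drive straight
    if left_on_tape:
        return 40, 80            # tape drifting to the left: slow the left wheel to turn left
    if right_on_tape:
        return 80, 40            # tape drifting to the right: turn right
    return 0, 0                  # tape lost: stop (and raise buzzer/warning lights)

In a real controller this function would run inside the MCU's main loop, with the returned speeds written to the motor driver and the "tape lost" case also triggering the buzzer and warning lights.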
2.1 Functional Design Phase
2.1.1 Data taken for the numerical calculations of the power screw
Data Given:

Diameter (D1) = 10 mm
Pitch (p) = 2 mm
Load to be lifted (W) = 100 N
Coefficient of friction (μ) = 0.173
Angle of thread = 60 deg

2.1.2 Results obtained for the numerical calculations of the power screw:
a) Torque required to raise the load:
The torque required to raise the load is given by

T = (W · Dm / 2) · (μ + tan α) / (1 − μ · tan α)   N-mm   ...(1)

where Dm is the mean diameter and α is the lead (helix) angle.
We know that
Dm = D1 − p/2 = 10 − 2/2 = 9 mm
tan α = p / (π · Dm) = 2 / (π × 9), giving α = 4.046 deg
Now substituting all the values in eqn (1), we get
T = (100 × 9 / 2) × (0.173 + tan α) / (1 − 0.173 · tan α)
  = 450 × (0.25 / 0.986)
T = 114.1 N-mm

b) Tangential force acting on the mean radius:
F = W · (μ − tan α) / (1 + μ · tan α)
  = 100 × (0.173 − 0.077) / (1 + 0.173 × 0.077)
  = 100 × (0.096 / 1.013)
  = 9.477 N

c) Stress in the power screw:
σc = W / Ac, where Ac = (π/4) · D2² and D2 = 2·Dm − D1 = 8 mm
Ac = (π/4) × 8² = 50.26 mm²
σc = 100 / 50.26 = 2.00 N/mm²

d) Torsional shear stress:
τ = 16·T / (π · D2³) = 16 × 114.1 / (π × 8³) = 1.135 N/mm²
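As a cross-check, the relations above can be evaluated numerically. The short Python sketch below simply re-applies formula (1) and the stress expressions to the given data; small differences from the hand-worked figures (about 111 N-mm instead of 114.1 N-mm for the torque) come from the rounded value of tan α used in the hand calculation.

import math

# Given data (section 2.1.1): diameter, pitch, load, coefficient of friction
D1, p, W, mu = 10.0, 2.0, 100.0, 0.173    # mm, mm, N, -

Dm = D1 - p / 2.0                          # mean diameter = 9 mm
alpha = math.atan(p / (math.pi * Dm))      # lead (helix) angle, rad
T = (W * Dm / 2.0) * (mu + math.tan(alpha)) / (1.0 - mu * math.tan(alpha))  # raising torque, N-mm

D2 = 2 * Dm - D1                           # core diameter = 8 mm
Ac = math.pi / 4.0 * D2 ** 2               # core cross-sectional area, mm^2
sigma_c = W / Ac                           # direct (compressive) stress, N/mm^2
tau = 16.0 * T / (math.pi * D2 ** 3)       # torsional shear stress, N/mm^2

print(f"T = {T:.1f} N-mm, sigma_c = {sigma_c:.2f} N/mm^2, tau = {tau:.3f} N/mm^2")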

2.1.3 Fork Stress Distribution Analysis using ANSYS


Fig.: Fork lift model designed using CATIA
Fig.: von Mises (equivalent) stress at a load of 50 N
Fig.: Total deformation at a load of 50 N
Fig.: von Mises stress at a load of 100 N
Fig.: Total deformation at a load of 100 N

Fork Analysis using bending stress (analytical)

[Cross-section sketch: the fork section is made of 3.6 mm thick legs with overall dimensions of 100 mm and 50 mm, referred to the x-x and y-y axes; the applied load is 10000 N.]

Bending moment (M) = Force × perpendicular distance = (100 × 100) × 50 = 5 × 10^5 N-mm
Maximum distance from the neutral axis (y) = h/2 = 100/2 = 50 mm

To find the moment of inertia (I):
A1 = 96.4 × 3.6 = 347.04 mm²
A2 = 100 × 3.6 = 360 mm²

Moment of inertia about the X-X axis:
y1 = 50 mm, y2 = 1.8 mm
ȳ = (A1·y1 + A2·y2) / (A1 + A2) = 25.45 mm
Ixx1 = b·d³/12 + A1·h1² = 4.76 × 10^5 mm⁴, where h1 = y1 − ȳ = 24.45 mm
Ixx2 = b·d³/12 + A2·h2² = 2.02 × 10^5 mm⁴, where h2 = y2 − ȳ = −23.65 mm
Ixx = Ixx1 + Ixx2 = 4.76 × 10^5 + 2.02 × 10^5 = 6.78 × 10^5 mm⁴

Moment of inertia about the Y-Y axis:
x1 = 1.8 mm, x2 = 50 mm
x̄ = (A1·x1 + A2·x2) / (A1 + A2) = 26.34 mm
Iyy1 = d·b³/12 + A1·h1² = 2.09 × 10^5 mm⁴, where h1 = x1 − x̄ = −24.54 mm
Iyy2 = d·b³/12 + A2·h2² = 5.01 × 10^5 mm⁴, where h2 = x2 − x̄ = 23.66 mm
Iyy = Iyy1 + Iyy2 = 2.09 × 10^5 + 5.01 × 10^5 = 7.1 × 10^5 mm⁴

I = Ixx + Iyy = 6.78 × 10^5 + 7.1 × 10^5 = 13.88 × 10^5 mm⁴

Bending stress: σ = M·y / I = (5 × 10^5 × 50) / (13.88 × 10^5) = 18.01 N/mm²

Since both the analytical stress and the von Mises stress in the FEA are less than the yield stress of aluminium (i.e. Ys of Al is 20 MPa), the
design is safe.
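The composite-section calculation above can likewise be scripted so the arithmetic is easy to re-check. The sketch below repeats the parallel-axis steps for the two 3.6 mm thick rectangles; the orientation assumed for each rectangle's own second moment is an assumption consistent with the totals quoted above.

# Fork cross-section idealised as two 3.6 mm thick rectangles (dimensions in mm)
A1, A2 = 96.4 * 3.6, 100.0 * 3.6          # areas of the two rectangles

# Centroid and second moment about the X-X axis
y1, y2 = 50.0, 1.8                        # centroid heights of the two rectangles
y_bar = (A1 * y1 + A2 * y2) / (A1 + A2)
Ixx1 = 3.6 * 96.4 ** 3 / 12 + A1 * (y1 - y_bar) ** 2
Ixx2 = 100.0 * 3.6 ** 3 / 12 + A2 * (y2 - y_bar) ** 2
Ixx = Ixx1 + Ixx2

# Centroid and second moment about the Y-Y axis
x1, x2 = 1.8, 50.0
x_bar = (A1 * x1 + A2 * x2) / (A1 + A2)
Iyy1 = 96.4 * 3.6 ** 3 / 12 + A1 * (x1 - x_bar) ** 2
Iyy2 = 3.6 * 100.0 ** 3 / 12 + A2 * (x2 - x_bar) ** 2
Iyy = Iyy1 + Iyy2

I = Ixx + Iyy                             # combined value used in the paper
M = (100.0 * 100.0) * 50.0                # bending moment, N-mm
y_max = 100.0 / 2.0                       # extreme fibre distance, mm
sigma = M * y_max / I                     # bending stress, N/mm^2
print(f"Ixx={Ixx:.3e}  Iyy={Iyy:.3e}  I={I:.3e} mm^4  sigma={sigma:.2f} N/mm^2")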

3. CONCLUSION
AGVs are a proven technology; a lot of research is being undertaken in this field as there is scope for further improvements. Modern
AGVs are so intelligent that they can recharge their batteries on their own by docking into a recharge station.
This project has given me great exposure in designing such a mechatronic system. Prior to this project venture I did not have any
experience in designing such a system. I approached this project in a learning-by-doing mode, so I learnt a great deal,
right from detailed engineering design to the final design.

ACKNOWLEDGMENT


This project was a great learning experience for me and I will cherish it throughout my life. I would like to thank all the people who
helped me, directly or indirectly, in making this project successful. I thank my parents and friends for their moral support. I thank
God for hearing my prayers and seek His blessings for all my future endeavours.



DETECTION AND CLASSIFICATION OF PLANT LEAF DISEASES


Kshitij Fulsoundar(1), Tushar Kadlag(2), Sanman Bhadale(3), Pratik Bharvirkar(4), Prof. S. P. Godse(5)
(1,2,3,4) Student, (5) Guide, Department of Computer Engineering, Sinhgad Academy of Engineering, Pune,
Maharashtra, India

Abstract- Medicinal plants are used extensively in herbalism to study the medicinal properties of plants. The applications
of Near-Infrared Spectroscopy (NIRS) have expanded widely in the field of agriculture, plants and various other fields,
but its usage for identification of plant variety is still rare. In this project we describe the development of an Android
application that gives users the ability to identify plant species based on photographs of the plant's leaves taken with a
mobile phone. At the heart of this application is an algorithm that acquires morphological features of the leaves, computes
well-documented metrics such as the angle code histogram (ACH), and then classifies the species based on a novel
combination of the computed metrics. The algorithm is first trained against several samples of known plant species and
then used to classify unknown query species. Aided by features designed into the application, such as touch screen image
rotation and contour preview, the algorithm is very successful in properly classifying species contained in the training
library.

Keywords: Android; Java; Oracle Database; Image Processing


1. INTRODUCTION
Plants play an important role in the cycle of nature. The number of plant species is estimated to be
around 400,000; however, there still exist many species which are as yet unclassified or unknown. Therefore, plant
identification is a very important and challenging task. With the rapid progress of information technologies,
many works have been dedicated to applying the technologies of pattern recognition and image processing to
plant identification. Since leaves are an organ of plants and their shapes vary between different species, the leaf
shape provides valuable information for plant identification. The paper aims to describe leaf identification
based on shape. Leaf shape description is the key problem in leaf identification. Up to now, many shape
features have been extracted to describe the leaf shape.
In this project, we describe the development of an Android application that gives users the ability to
identify plant species based on photographs of the plant's leaves taken with a mobile phone. At the heart of this
application is an algorithm that acquires morphological features of the leaves, computes well-documented
metrics such as the angle code histogram (ACH), and then classifies the species based on a novel combination of the
computed metrics. The algorithm is first trained against several samples of known plant species and then used to
classify unknown query species. Aided by features designed into the application, such as touch screen image
rotation and contour preview, the algorithm is very successful in properly classifying species contained in the
training library.
2. ORACLE DATABASE
An Oracle database is a collection of data treated as a unit. The purpose of a database is to store and retrieve related
information. A database server is the key to solving the problems of information management. In general, a server
reliably manages a large amount of data in a multiuser environment so that many users can concurrently access the same
data. All this is accomplished while delivering high performance. A database server also prevents unauthorized access and
provides efficient solutions for failure recovery. The database has logical structures and physical structures. Because the
physical and logical structures are separate, the physical storage of data can be managed without affecting the access to
logical storage structures.
3. PRESENT SYSTEM

Leaf shape description is the key problem in leaf identification. Up to now, many shape features have been
extracted to describe leaf shape, but there is as yet no proper application that classifies a leaf after capturing its
image and distinguishing its attributes.
4. PROPOSED SYSTEM
In the proposed system, the application allows the user to provide the image of the leaf as input. The system applies an
algorithm to derive vital parameters related to the properties of the leaf. It then compares these parameters with the ones
stored against a leaf entry in the database. On a successful match of the parameters, the application displays information
related to that particular leaf to the user for review.
Likewise, the system also allows the user to report false results generated by the application, so as to sharpen its
results in future.


Figure 4.1: System Architecture

4.1 Training
In this module, the necessary input is fed to the system in the form of images of the leaf. The system applies the necessary steps
to extract values for vital parameters from the image. The image, along with these parameters, their values and other
essential information, is stored in the database. These functions are performed by the admin from his login screen.
4.2 Image Acquisition
This module is a part of the app installed on the Android mobile. It is initiated whenever the user intends to
discover details about a leaf. Using this module, the user can submit the image of the leaf for identification by the
system. The module captures the image and sends it to the central server for processing.

4.3 Preprocessing
Using this module, the image captured from the user's mobile is subjected to the necessary preprocessing. In this step, the
image is converted into a standard binary format from other formats like color or grayscale. This preprocessed binary
image is then subjected to identification, wherein the vital parameters of the leaf are extracted for comparison.
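A minimal preprocessing sketch is shown below, assuming OpenCV is available on the central server; Gaussian blurring and Otsu's automatic threshold are illustrative choices rather than the application's exact pipeline.

import cv2

def preprocess(image_path):
    """Convert a colour leaf photograph to a binary silhouette for feature extraction."""
    img = cv2.imread(image_path)                      # colour image from the phone
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # colour -> grayscale
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)       # suppress sensor noise
    # Otsu's method picks the threshold automatically; leaf pixels become white (255)
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return binary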

4.4 Identification
In this process, the vital parameters of the leaf are extracted for comparison with the ones stored in the database. The
algorithm is applied on the preprocessed image for this comparison. The image which has the maximum of its characteristics
matched with the ones stored in the database is displayed to the user for viewing further details about the leaf.
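One simple way to realise this comparison is a nearest-neighbour search over the stored feature vectors; the sketch below illustrates the idea and is an assumption about how the match could be scored, not the application's actual classifier.

import math

def euclidean(a, b):
    # Euclidean distance between two feature vectors of equal length
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(query_features, database):
    """Return the database entry whose feature vector is closest to the query.

    `database` is assumed to be a list of (species_name, feature_vector) pairs
    produced during the training phase.
    """
    best_name, best_dist = None, float("inf")
    for name, features in database:
        d = euclidean(query_features, features)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name, best_dist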
4.5 Improvement by feedback
The system provides the user with an option to improve the results it generates by reporting whether they were as per
expectation and reality. Based on the feedback submitted by the user, the system trains and improves itself further.
4.6 Share
The system also gives the user an option to share the results with other members around him.

Algorithm Steps:
1. Image Acquisition
a. Image capture from phone
2. Pre-processing
a. Color-grayscale image to binary image
3. Morphological Feature Extraction
a. Centroid-contour Distance Curve
b. Aspect Ratio (AR)
c. Rectangularity (R)
d. Convex Area Ratio (CAR)
e. Convex Perimeter Ratio (CPR)
f. Sphericity (S)
g. Circularity (C)
h. Eccentricity (E)
i. Form Factor (FF)
j. Regional Moments of Inertia (RMI)
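
A few of these metrics can be computed directly from the binary silhouette with contour analysis. The sketch below (OpenCV 4.x API assumed) covers only aspect ratio, rectangularity, convex area and perimeter ratios and circularity, as an illustrative approximation of the features listed above.

import cv2
import numpy as np

def shape_features(binary):
    """Compute a handful of morphological features from a binary leaf mask."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    leaf = max(contours, key=cv2.contourArea)          # largest blob = the leaf

    area = cv2.contourArea(leaf)
    perimeter = cv2.arcLength(leaf, True)
    x, y, w, h = cv2.boundingRect(leaf)
    hull = cv2.convexHull(leaf)

    return {
        "aspect_ratio": w / h,                                   # AR
        "rectangularity": area / (w * h),                        # R
        "convex_area_ratio": area / cv2.contourArea(hull),       # CAR
        "convex_perimeter_ratio":
            cv2.arcLength(hull, True) / perimeter,               # CPR
        "circularity": 4 * np.pi * area / perimeter ** 2,        # C
    }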

The application of digital image processing techniques to the problem of automatic leaf classification began two
decades ago and it has been since proceeding in earnest. In 1989, Petry and Kuhbauch were the first to extract
digital morphological features for use with identification models. The technology found some of its earliest
applications in industrial agriculture, where the desire was to separate crop species from weed species, allowing
for decreased use of pesticides. The problem is complicated in this application by complex backgrounds that
make image segmentation difficult, but simplified by foreknowledge of one or two desirable crops amongst
unwanted weed species. Color and texture information is often sufficient to make this distinction, as opposed to
more general applications, where numerous shape features must be acquired. More recently, several groups
have approached the problem of automatic leaf classification. Though the groups often use similar digital
morphological features, e.g. rectangularity, sphericity, eccentricity, etc., there is great variation in how these
measures are combined and used in classification. Wu et al., for instance, used a two-tiered system, eliminating
grossly different samples on the basis of eccentricity alone before making a finer judgment based on the
combination of the centroid-contour distance (CCD) curve, ACH, and eccentricity. Du et al. used roughly a
dozen morphological features and moments and defined a classification method called the move median centers
(MMC) hypersphere classifier, achieving a correct classification rate of over 90%. Wu et al., on the other hand,
achieved similar results employing a probabilistic neural network. Another approach looks at a polygonal
approximation of the leaf's shape.
Strategies of using near infrared spectroscopy (NIR) to classify plant leaves are as follows: firstly, same number
but different types of plant leaves were measured in the laboratory by using the Nexus-870 Fourier transform
infrared spectroscopy, then wavelet analysis and Blind Sources Separation (BSS) were used to process the
sample data, finally, BP neural network algorithm was applied to classify different kinds of plant leaves. Figure
shows the steps taken for the classification of plant leaves. The first step of the experiment is to collect
Camphor and Maple leaves, 75 pieces for each. The total pieces are divided into two sets: training set and test
set. As is shown in Table, Camphor leaf is marked as Leaf A, Maple leaf is marked as Leaf B, and each 50
pieces of both kinds are treated as training samples, and the rest 25 pieces of each are test samples.
Many papers have been presented in international journals and conferences; researchers have worked on
hierarchical, neural network and machine learning methods. The work on classification of leaves started in the
early 20th century. The earliest datasets were laboriously collected sets of leaves, gathered by those who were
working on them. Chrysanthemum, for instance, was a dataset created and used with user interaction to
differentiate 12 varieties of the same kind, including leaves with self-intersections. The Flavia dataset was originally
created by Wu et al. It is a collection of 32 different species, a total of 1907 leaves. They extracted 12 leaf features
and reduced them to 5 principal variables. These were the input vector to the Probabilistic Neural Network
(PNN) that was trained on 1800 leaves. It earned an accuracy of more than 90%. This Flavia dataset, now
available as a standard dataset, is accessible to all researchers. Initially the works were carried
out using neural networks.
Tzionas et al. implemented an artificial vision system that extracted specific geometrical and morphological
features. Using a novel feature selection approach, a subset of significant image features was identified. A feed-forward
neural network was employed to perform the main classification task, invariant to size and
orientation. It could successfully operate even with deformed leaves and achieved a considerably high
classification ratio of 99%. Further, Lin and Peng attempted to realize automatic computer classification for
30 broad-leaved plants in a more convenient, rapid and efficient manner using a PNN, achieving 98.3%
accuracy. Kadir et al. also proposed a method for leaf classification which incorporates shape, vein, color
and texture features and used a PNN as a classifier. The experimental result gives an accuracy of 93.75% when
tested on the Flavia dataset. Later, with the advancement of machine learning, studies were conducted
comparing its techniques with the neural network approaches. Anami et al. used a Support Vector Machine based
on color and texture features and a neural network classifier to identify and classify images of medicinal plants
such as herbs, shrubs and trees. The edge and color descriptors have low dimension and are effective, simple
and rotation-invariant. Singh et al. carried out an experiment on 50 leaf samples of 32 different classes of
varying shapes, using three different techniques - Support Vector Machine (SVM), Fourier Moments and PNN -
based on the leaf shape and achieved accuracies of 96%, 62% and 91% respectively. ArunPriya et al.
compared SVM and k-NN classifiers on Flavia and a real dataset consisting of 15 tree classes and claimed SVM
to be the better classifier, achieving more than 94% accuracy for both datasets. Kumar et al. conducted a
survey on different classification techniques - k-Nearest Neighbor classifier, Neural Network, Genetic
Algorithm, SVM and Principal Component Analysis - and listed their advantages and disadvantages. The
drawbacks of SVM are that it is a binary classifier, training is slow, and it is difficult to understand the structure of the
algorithm. It also has limitations with speed and size, both in training and testing. Owing to the drawbacks of
SVM, Valliammal and Geethalakshmi worked on an automatic recognition system taking a total of 500 plant
and flower images, identifying and recognizing them into their respective categories using Preferential Image
Segmentation, which is invariant to translation, rotation and scale transformations. They further extended their
work towards leaf classification and recognition using hybrid image segmentation that combines threshold and
H-maxima transformations.
This method extracted more accurate values of the leaf and involved minimum computational time in
comparison. A recent paper by Kaur and Kaur used a neural-network-based LM algorithm to train the classifier
and carried out the experiment on 12 kinds of leaves, which yielded an accuracy of 97.9%. Leaf
classification is of great importance in the field of medicine. S. E. Kumar conducted an experimental analysis
with a few medicinal plant species such as Hibiscus, Betel, Ocimum, Leucas, Vinca etc. and showed that the
method of identification based on leaf features such as area, color histogram and edge histogram is an efficient
approach.
The leaves of five different plants, namely Indian borage (Karpoora Valli), Hibiscus rosa-sinensis
(Hibiscus), Ocimum tenuiflorum (Tulasi), Solanum trilobatum (Dhuthuvalai) and Piper betel (Betel), were
collected, since they are used in most of the traditional herbal cough syrups in India. The images of these plant
leaves are shown in Figure 1 and Figure 2 as fresh and dried ones. A large number of samples were
collected and, based on their physical condition, a few samples were taken for analysis.

CONCLUSION
Our project is thus going to be useful for farmers, horticulturists and trekkers by providing a useful plant
leaf identification system, and will eventually identify leaf diseases as well.
ACKNOWLEDGEMENTS
We would like to sincerely thank Prof. S. P. Godse, our guide from Sinhgad Academy of Engineering, for his support and
encouragement.

REFERENCES:
[1] Z. Wang, Z. Chi, and D. Feng, "Shape based leaf image retrieval", IEE Proc. Vis. Image Signal Process., 150 (1), pp. 34-43, 2003.
[2] J.X. Du, D.S. Huang, X.F. Wang, and X. Gu, "Computer-aided plant species identification (CAPSI) based on leaf shape matching technique", Trans. Inst. Measure. Control, 28 (3), pp. 275-284, 2006.
[3] F. Mokhtarian and S. Abbasi, "Matching shapes with self-intersection: application to leaf classification", IEEE Trans. Image Process., 13 (5), pp. 653-661, 2004.
[4] C.L. Lee and S.Y. Chen, "Classification for leaf images", Proc. 16th IPPR Conf. Comput. Vision Graphics Image Process., pp. 355-362, 2003.
[5] J.X. Du, X.F. Wang and G.J. Zhang, "Leaf shape based plant species recognition", Applied Mathematics and Computation, 185, pp. 883-893, 2007.
[6] X.F. Wang, D.S. Huang, J.X. Du, H. Xu and L. Heutte, "Classification of plant leaf images with complicated background", Applied Mathematics and Computation, 205, pp. 916-926, 2008.
[7] S. Abbasi, F. Mokhtarian and J. Kittler, "Reliable classification of chrysanthemum leaves through curvature scale space", ICSSRC97, pp. 284-295, 1997.
[8] F. Mokhtarian, S. Abbasi and J. Kittler, "Efficient and robust retrieval by shape content through curvature scale space", Proceedings of the International Workshop on Image DataBases and MultiMedia Search, pp. 35-42, 1996.
[9] F. Mokhtarian and A.K. Mackworth, "A theory of multiscale curvature based shape representation for planar curves", IEEE Trans. Pattern Anal. Machine Intell., 14, pp. 789-805, 1992.


Analysis of Scheduling Nested Transactions in Distributed Real-Time Environment

Anup A. Dange, Prof. Neha Khatri-Valmik
Department of Computer Science and Engineering
Everest Education Society's Group of Institutions
College of Engineering & Technology
Ohar, Aurangabad, Maharashtra, INDIA
dange.anup88@gmail.com
nehavalmik@gmail.com
Abstract - A lot of research work has been done in the field of real-time database systems seeking to optimize transaction scheduling
so as to ensure global serializability. Nested transactions offer more decomposable execution units and finer-grained control over
concurrency and recovery than flat transactions. For most applications we believe that it is desirable to maintain database consistency.
It is possible to maintain consistency without serializable schedules, but this requires more specific information about the kinds of
transactions being executed. Since we have assumed very little knowledge about transactions, serializability is the best way to achieve
consistency.
Keywords: Serializability, Concurrency Control, Lock Mechanism, Hierarchical and Flat Commit Protocols, Two Phase Locking, Transaction Optimization, Priority Assignment
1. Introduction

Today's Database Management Systems (DBMSs) work in a multiuser environment where users access the database
concurrently. Therefore the DBMSs control the concurrent execution of user transactions, so that the overall correctness of the
database is maintained. A transaction is a user program which accesses the database. Database concurrency control permits
users to access a database in a multiprogrammed fashion while preserving the illusion that each user is executing alone on
a dedicated system. A distributed database system allows applications to access data from local and remote databases.
Distributed applications spread data over multiple databases on any number of machines. Several smaller servers can
be less expensive and more flexible than one large, centrally located server. Distributed configurations take advantage
of small, powerful server machines and less expensive connectivity. A distributed system allows data to be
stored at several sites, and each site can transparently access all the data. The key goals of a distributed database system
are to maintain availability, accuracy, concurrency and recoverability.
When multiple users access the same database simultaneously, their data operations have to be coordinated
so that inconsistent results are avoided and the consistency of the shared data is preserved. This is called concurrency control
and should provide each concurrent user with the illusion that he is referencing an independent, dedicated database. For
concurrent transaction execution, serializability is the major correctness criterion. It is considered the highest level of
isolation between transactions and plays an essential role in concurrency control. Two-phase locking is the most common
method for concurrency control among transactions and has been accepted as a standard solution.
1.1 Commit Protocols
Enough research has been done on commit processing of flat transactions. The protocols include the one-phase commit,
two-phase commit and three-phase commit protocols, the PROMPT real-time commit protocol and many others, according to
their optimizations. These protocols involve transferring many messages in each phase between the participants where
the distributed transaction is executed. During this process many log records are generated, and some are forced to be
flushed to disk immediately, in a synchronous manner. Due to this logging and messaging, commit protocols
significantly increase the execution time of a transaction. This makes it difficult to meet the needs of a real-time context
and ultimately results in violation of the timing constraints imposed on transactions. Therefore, the selection of the commit protocol
is an important design decision for DRTDBS. Many papers have already discussed this issue, giving
solutions such as relaxing the traditional notions of atomicity, strict resource allocation, or the performance
guarantees required from the system, following Haritsa, who proposed the PROMPT real-time commit protocol.

Nested transactions mean a hierarchy of transactions and sub transactions, where a sub transaction may contain further
sub transactions or atomic database operations (read and write). As a whole, a nested transaction is a collection
of sub transactions and atomic database operations that comprise a single atomic execution unit.
In this paper we focus on how to achieve global serializability through concurrency control and transaction commit
protocols. The concurrency control mechanism can be thought of as a policy for resolving conflicts between two or more transactions
that want to lock the same data object. For concurrency control we used a lock mechanism, called 2PL-NT-HP, to solve conflict
problems between nested transactions. For nested real-time transactions we used hierarchical and flat protocols, called 2PC-RT-NT.
1.2 Organization
The remainder of this paper is organized as follows: the next section describes the existing nested transaction system model along with its
characteristics. Section 3 contains a real-time scheduler with priority assignment and concurrency control for nested transactions,
named Two Phase Commit Nested Transaction Hierarchical Protocol. The 4th section contains a study of the traditional hierarchical and flat
two phase commit protocols along with detailed information. The last section consists of the conclusion and future research directions.

2. Real Time Nested Transaction Model


The two main types of nested transaction models are (1) the closed nested transaction and (2) the open nested transaction. In
the closed nested transaction, a sub transaction's effects do not appear outside its parent's view; here the commitment of a sub
transaction depends upon the commitment of its parent. In the open nested transaction model, a sub transaction executes
and commits independently. Due to the semantics of our transactions, we consider only closed nested transactions.
In the nested transaction model, sub transactions appear atomic to the surrounding transactions and may commit
independently. A transaction is not allowed to commit until all of its child transactions have committed. If a child aborts,
its parent transaction need not abort; instead it just performs its own task or recovery. To achieve its
goal, the transaction may perform any of the following tasks: (1) ignore the condition, (2) restart the sub transaction, (3)
initiate new sub transactions.
Some of the characteristics of nested transactions are listed in the following table.
Parameter        Meaning
Ti               A transaction / sub transaction
D(Ti)            Deadline of Ti
P(Ti)            Priority of Ti
ArrTime(Ti)      Arrival time of Ti
SlackTime(Ti)    Slack time of Ti
StartTime(Ti)    Start time of Ti
ResTime(Ti)      Resource time that the transaction requires for its execution
RemExTime(Ti)    Remaining execution time of Ti
ElaExTime(Ti)    Elapsed execution time of Ti

Table 1. Characteristics of Nested Transactions


3. Scheduling Real-Time Nested Transactions

3.1 Priority Assignment Policy



Different types of priority assignment policies exist for flat transactions in real-time database management
systems. Without priorities, two transactions Ti and Tj share the CPU and disk units, and one of them may miss its
deadline before the other completes its work. The Earliest Deadline First (EDF) priority
assignment is the best policy in terms of success ratio. EDF assigns priority on the basis of the deadline: the transaction with the
earliest deadline gets the highest priority. The priority formula is given by:
P(Ti) = 1/D(Ti)
3.2 Sub transaction Priority Assignment
Along with the main transaction priority, there is a need to prioritize subtransactions as well. This helps to ensure that a
transaction does not get delayed due to data conflicts. There is also a need to assign deadlines to subtransactions, derived
from the transaction deadline and the individual workload of each subtransaction; however, this does not improve the success ratio,
as proven in [4]. Another way to assign priority to subtransactions is described in [5]. According to [5], the addition of a small
p-value to the overall priority of the transaction can prioritize subtransactions within a transaction in such a way that it
does not affect the EDF-based priority assignment policy between transactions. Since a child subtransaction must complete before its parent
subtransaction or transaction, a child subtransaction is assigned a higher priority than its parent.
This helps to avoid intra-transaction deadlock. The formula for subtransaction priority is given by:
Subtransaction_priority = transaction_priority + subtransaction_level
where subtransaction_level is 0 for the top level transaction, 1 for the next level below the top, and so on; the
level value increases by 1 at each level down the transaction tree.
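A small sketch of the two formulas is given below. Deadlines are assumed to be positive numbers, and the per-level increment is exposed as a parameter so that it can be kept small (as the p-value idea suggests) without disturbing EDF ordering between different transactions; the value 0.001 used in the example is an assumption.

def edf_priority(deadline):
    """EDF: the earlier the deadline, the higher the priority (P(Ti) = 1/D(Ti))."""
    return 1.0 / deadline

def subtransaction_priority(transaction_deadline, level, level_step=1.0):
    """Priority of a subtransaction at a given nesting depth.

    `level` is 0 for the top-level transaction and grows by 1 per level down,
    so a child always outranks its parent and intra-transaction deadlock is
    avoided.  The text suggests the per-level increment should stay small
    enough not to disturb EDF ordering between different transactions, so
    `level_step` can be set to a small value (an assumption, e.g. 0.001).
    """
    return edf_priority(transaction_deadline) + level * level_step

# Example: a transaction with deadline 200 and two nested levels below it
for lvl in range(3):
    print(lvl, subtransaction_priority(200.0, lvl, level_step=0.001))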
4. Real-Time Concurrency Control
The most important characteristic of a concurrency control protocol is performance. In conventional database systems,
performance is usually measured as the number of transactions per second. In real-time databases, performance depends on
other criteria related to real time, such as the number of transactions that miss their deadlines and the average tardy
time. Due to these new optimization goals, the algorithms used in conventional database systems do not show the best results.
4.1 Data Conflict
Data conflicts between committing and executing transactions are not uncommon compared with data conflicts between
executing transactions. For example, in the two phase locking protocol, if an executing transaction requests a data item which is locked
by another transaction in a conflicting mode, the lock request will be denied and the executing transaction will be blocked until the
lock is released.
4.2 Classical 2PL for Nested Transactions

The locking protocol provides two modes of synchronization [16]: Read, which permits a number of transactions to
access a data item at a time, and Write, which permits a single transaction to access a data item [12, 9]. A transaction can acquire a
lock on a data item in either mode M (Read, Write) and holds it until its termination. Besides holding a lock on a
data item, a transaction can retain a lock when one of its subtransactions has already committed: its parent transaction inherits the
lock and can then retain it. There is a difference between holding and retaining a lock: if a transaction holds a lock, then
it can access the locked data item in the given mode, but this is not true for a retained lock. Suppose a write lock is retained by
transaction Ti; then a subtransaction outside the hierarchy of the retainer is not able to acquire the lock, whereas Ti's
descendants can acquire the lock in read as well as write mode. This is not the case for a read lock: if a subtransaction Ti
retains a lock in read mode, then any non-descendant can hold the lock, but only in read mode, not in write mode. Ti
remains the lock retainer until its termination (i.e. commit or abort). A sketch of these rules is given below.
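The hold/retain rules can be made concrete with a small sketch. It follows the usual nested-locking discipline (a requester may acquire a lock when every transaction that holds or retains it in a conflicting mode is one of the requester's ancestors); the data structures and function names are illustrative assumptions.

class LockEntry:
    """Lock state of a single data item under nested 2PL."""
    def __init__(self):
        self.holders = {}    # transaction -> "read" | "write"
        self.retainers = {}  # transaction -> "read" | "write"

def conflicts(mode_a, mode_b):
    # Two read locks are compatible; anything involving a write conflicts.
    return mode_a == "write" or mode_b == "write"

def can_acquire(entry, requester, mode, ancestors):
    """A (sub)transaction may acquire the lock if every transaction that holds
    or retains it in a conflicting mode is an ancestor of the requester."""
    for other, other_mode in list(entry.holders.items()) + list(entry.retainers.items()):
        if other is requester:
            continue
        if conflicts(mode, other_mode) and other not in ancestors:
            return False
    return True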


4.3 Real-Time 2PL for Nested Transactions


During concurrent execution of transactions it is important to maintain concurrency to order the updates
in databases to maintain serializability. To do so in most of flat and nested transaction models locking data item is
standard method. In real-time environment to maintain serializability a real-time concurrency control model was put
forward by M. Abdouli, B. Sadeg and L.Amanton in their paper termed as 2PL-NT-HP[]. It used to solve data conflict
problem occur between subtransactions by allowing transactions/subtransactions with higher priority to access data item
and blocks or abort lower priority transaction. This 2PL-NT-HP is extended model of classical 2PL-NT, where some extra
characteristics are added to it. These characteristics are combination of priority inheritance, priority abort, conditional
restart and controller of wasted resources. Other than these there are some more characteristics of 2PL-NT-HP.
4.3.1 Priority Inversion
When the priority-driven preemptive scheduling approach and the two-phase locking protocol are simply
combined, a problem called priority inversion occurs. This happens when a higher priority transaction has to wait
for the execution of a lower priority (sub)transaction.
For example, suppose transaction TH has the highest priority, TL has a lower priority than TH, and TH is blocked by TL due to
an access conflict. While TH is waiting for the lock and TL is executing, some transaction TM may arrive whose priority lies
between the priorities of TH and TL; it preempts TL and takes over the CPU even though TM has no data conflict with TL
or TH. This eventually delays the execution of TH.
4.3.2 Priority Inheritance
Under priority inheritance, when priority inversion occurs, the low priority transaction holding the lock executes at the
priority of the highest priority transaction waiting for the lock, until it terminates (aborts or commits). In this way the lock-holding
transaction executes faster and releases the lock more quickly, which reduces the blocking time of the higher
priority transaction.
For example, suppose TH is blocked by TL due to a data access conflict. Then, using priority inheritance, TL executes
at the priority of TH. Now if TM, an intermediate priority transaction, arrives, it cannot preempt TL since its priority is less
than the inherited priority of TL. Thus TH will not be delayed by TM.
4.3.3 Priority Abort
The priority inversion problem can also be overcome by using a priority abort scheme. Here the lower priority transaction
is aborted when a higher priority transaction requires a lock held by the lower priority transaction.
For example, when transaction TH conflicts with a lock-holding transaction TL, TL is aborted if TH's priority is higher than
that of TL; otherwise, TH waits for TL. In this way, a high priority transaction is never blocked by any lower
priority transaction, and priority inversion is completely eliminated.
4.3.4 Conditional Restart and Controller of Waste Resources
Conditional restart is employed to avoid the starvation problem encountered when scheduling with
EDF. The mechanism is as follows: if the slack time of the higher priority (sub)transaction is enough for it to execute after all
the lower priority (sub)transactions, then the lower priority (sub)transactions are allowed to access the data items first,
instead of being aborted. Otherwise these lower priority (sub)transactions must be aborted,
and restarted only if the controller of wasted resources allows it.
The controller of wasted resources is a mechanism that checks whether a restarted (sub)transaction can finish
its work before its deadline; using this mechanism, we reduce the waste of resources as well as of time.
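The way 2PL-NT-HP resolves a conflict between a lock-holding transaction and a higher-priority requester can be summarised as in the sketch below; the slack-time test and the waste-resource check are simplified, the attribute names follow Table 1, and the function names are assumptions, so this is an illustration of the mechanism rather than the protocol's exact specification.

def resolve_conflict(holder, requester, now):
    """Decide what happens when `requester` wants a lock held by `holder`.

    Each transaction is assumed to expose: priority, deadline,
    remaining_exec_time and elapsed_exec_time (see Table 1).
    """
    if requester.priority <= holder.priority:
        return "block requester"                      # ordinary 2PL wait

    # Conditional restart: if the high-priority requester still has enough
    # slack to finish after the holder, let the holder go first and lend it
    # the requester's priority (priority inheritance).
    slack = requester.deadline - now - requester.remaining_exec_time
    if slack >= holder.remaining_exec_time:
        holder.priority = max(holder.priority, requester.priority)
        return "block requester (holder inherits priority)"

    # Otherwise abort the low-priority holder, but only restart it if the
    # waste controller says it can still meet its deadline after a restart.
    if now + holder.elapsed_exec_time + holder.remaining_exec_time <= holder.deadline:
        return "abort holder and restart it"
    return "abort holder (do not restart: deadline cannot be met)"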

5. Distributed Real-Time Nested Transaction Commit Protocol


For the nested transaction model a variety of protocols have already been developed, most based on the classical
2PC protocol. In this paper we focus on a distributed real-time 2PC for nested transactions, but before working directly with
it we first review the classical 2PC protocol.
5.1 The Classical 2PC Protocol
In the classical 2PC protocol there are a coordinator and participants. There is no direct communication between coordinator
and participants apart from the participants informing the coordinator when they join the transaction. Coordinator and participants
communicate using message passing. The client can directly ask the coordinator to commit (or abort) a transaction. If the client
requests abortTransaction, the coordinator informs every participant immediately. If the client requests commitTransaction,
the two-phase commit comes into play. In the first phase of the two-phase commit protocol the coordinator asks all the participants if
they are prepared to commit; this is called the voting phase. In the second, it tells them to commit (or abort) the transaction; this is called
the decision phase. This is illustrated in Fig. 1.

Phase 1 (voting phase):
1. The coordinator sends a canCommit? request to each of the participants in the transaction.
2. When a participant receives a canCommit? request it replies with its vote (Yes or No) to the coordinator. Before voting Yes, it prepares to commit by saving objects in permanent storage. If the vote is No, the participant aborts immediately.

Phase 2 (completion according to outcome of vote):
3. The coordinator collects the votes (including its own).
(a) If there are no failures and all the votes are Yes, the coordinator decides to commit the transaction and sends a doCommit request to each of the participants.
(b) Otherwise, the coordinator decides to abort the transaction and sends doAbort requests to all participants that voted Yes.
4. Participants that voted Yes are waiting for a doCommit or doAbort request from the coordinator. When a participant receives one of these messages it acts accordingly and, in the case of commit, makes a haveCommitted call as confirmation to the coordinator.

Fig. 1 Two Phase Commit Protocol


The above protocol is invoked only after the coordinator has received a WORKDONE message from each participant, which
indicates that the transaction is ready to commit, i.e. the work assigned to it has been completed. After receiving the
WORKDONE messages, the coordinator broadcasts a PREPARE message to the participants. Participants that are ready to
commit reply with a YES vote and enter the prepared state; in this state a participant can no longer unilaterally abort or commit
the transaction on its own and must wait for the final decision from the coordinator. Participants that want
to abort send a NO vote, which acts as a veto: such a participant aborts unilaterally without waiting for the
final outcome from the coordinator. After receiving a vote from every participant, the coordinator starts the second phase. If all
votes are YES, the coordinator commits the transaction by sending a COMMIT message to each participant; if any
participant votes NO, the transaction is aborted by sending an ABORT message to all participants. In both cases the coordinator
receives ACK messages and all the resources are released.
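The message flow just described (WORKDONE, PREPARE/canCommit?, votes, COMMIT/ABORT, ACK) can be captured in a few lines. The sketch below is a simplified, single-process illustration with a hypothetical participant interface; a real implementation would exchange these messages over the network and log every state transition for recovery.

```python
def two_phase_commit(coordinator_vote, participants):
    """Run classical 2PC once all WORKDONE messages have been received.

    participants: objects exposing prepare() -> "YES"/"NO" and
                  commit()/abort() methods (hypothetical interface).
    """
    # Phase 1 (voting): broadcast PREPARE / canCommit? and collect the votes.
    votes = [coordinator_vote] + [p.prepare() for p in participants]

    # Phase 2 (decision): commit only if every vote, including the
    # coordinator's own, is YES; otherwise abort.
    decision = "COMMIT" if all(v == "YES" for v in votes) else "ABORT"
    for p in participants:
        p.commit() if decision == "COMMIT" else p.abort()
    return decision          # participants answer with ACK and release their resources
```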

5.2 Real-Time 2PC for nested transactions


The nested 2PC protocol (also called Tree 2PC or Recursive 2PC) is a common variant of 2PC in a computer network,
which better utilizes the underlying communication infrastructure. The participants in a distributed transaction are typically invoked in
an order which defines a tree structure, the invocation tree, where the participants are the nodes and the edges are the invocations
(communication links). The same tree is commonly utilized to complete the transaction by a 2PC protocol, but also another
communication tree can be utilized for this, in principle. In a tree 2PC the coordinator is considered the root ("top") of a
communication tree (inverted tree), while the cohorts (participants) are the other nodes. The coordinator can be the node that
originated the transaction (invoked recursively (transitively) the other participants), but also another node in the same tree can take the
coordinator role instead. 2PC messages from the coordinator are propagated "down" the tree, while messages to the coordinator are
"collected" by a cohort from all the cohorts below it, before it sends the appropriate message "up" the tree (except an abort message,
which is propagated "up" immediately upon receiving it or if the current cohort initiates the abort).
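A hedged sketch of the tree (nested) 2PC vote collection described above: each cohort gathers the answers of the cohorts below it before replying "up" to its parent (a NO vote is answered upward immediately, mirroring the abort short-circuit), and the final decision is then propagated back "down" the same invocation tree. The node structure is hypothetical and failures are ignored.

```python
class Cohort:
    def __init__(self, name, vote="YES", children=None):
        self.name, self.vote, self.children = name, vote, children or []

    def collect_votes(self):
        # A cohort answers "up" only after hearing from the cohorts below it,
        # except that its own NO vote short-circuits and is sent up at once.
        return self.vote == "YES" and all(c.collect_votes() for c in self.children)

    def propagate(self, decision):
        # The coordinator's decision travels "down" the invocation tree.
        print(f"{self.name}: {decision}")
        for c in self.children:
            c.propagate(decision)

# Coordinator at the root, cohorts as the other nodes of the invocation tree.
root = Cohort("coordinator", children=[Cohort("A", children=[Cohort("A1")]), Cohort("B")])
root.propagate("COMMIT" if root.collect_votes() else "ABORT")
```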
ACKNOWLEDGMENT
There is always a sense of gratitude that one expresses to others for the help rendered at crucial points in
life, help which facilitated the achievement of one's goals. I want to express my deepest gratitude to everyone who has helped me in
completing this seminar report successfully.
I feel extremely honored to have had the opportunity to work under the guidance of Prof. Neha Khatri-Valmik. Her eagerness to
discuss the topic and offer suggestions has been a constant encouragement for this work. I express my indebtedness for her support.
I am also thankful to Prof. R.A. Auti, Head, Computer Science & Engineering Department, Everest Education Society's
College of Engineering and Technology, Aurangabad.
I am also thankful to Prof. Vankatesh Gaddime, Principal, Everest Education Society's College of Engineering and
Technology, Aurangabad, for providing all necessary facilities at the college level and many helpful suggestions.
CONCLUSION AND FUTURE WORK

In this paper we studied the performance of distributed real-time nested transactions. We used a
comprehensive approach: a real-time concurrency control protocol to ensure global serializability, and a real-time two-phase
commit protocol to maintain atomicity. 2PL-NT-HP combines properties such as priority inheritance, priority abort,
conditional restart and a controller of waste resources. Hierarchical 2PC performs better than flat 2PC.
The nesting level size affects the performance of real-time nested transactions: as the level size increases, performance
decreases because of the communication and the number of messages exchanged at each level.
As future work, other protocols such as 3PC and PROPT may be used to enhance the performance of real-time
nested transactions.

REFERENCES:
M. Abdouli, B. Sadeg and L. Amanton, Scheduling Distributed Real-Time Nested Transactions, Eight IEEE International
Symposium on Object-Oriented Real-Time Distributed Computing (ISORC05).
Hong-Ren Chen, and Y.H.Chin, "An Efficient Real-Time Scheduler for Nested Transaction Models", In Proceedings of the Ninth
International Conference on Parallel and Distributed Systems (ICPADS'02),2002, pp.335-340.
K.Y. Lam, T.W. Kuo, and W.H. Tsang, "Concurrency Control in Mobile Distributed Real-Time Database Systems", Information
Systems, 2000, vol.25, no.4, pp.261-286.
S.K. Lee, M. Kitsuregawa, and C.S. Hwang, "Efficient Processing of Wireless Read-Only Transactions in Data Broadcast", In
Proceedings of 12th International Workshop on Research Issues in Data Engineering: Engineering E-Commerce/E-Business
Systems RIDE-2EC, 2002,pp.101-111.
V.C.S. Lee, K.W. Lam, and S.H. Son, "On transaction Processing with Partial Validation and Timestamp Ordering in Mobile
Broadcast Environments", IEEE Transactions on Computers, 2002, vol.51, no. 10, pp.1196-1211.
Lei Xiangdong, Zhao Yuelon, and Yuan Xiaol, "Transaction Processing in Mobile Database Systems", Chinese Journal of
Electronics, 2005, vol.14, no.3, pp.491-494.
V.C.S. Lee, K.W. Lam, and T.W. Kuo, "Efficient Validation of Mobile Transactions in Wireless Environments", The journal of
Systems and Software, 2004, vol.69, no.1, pp.183-193.

Hong-Ren Chen, and Y.H. Chin, "Scheduling Value-Based Nested Transactions in Distributed Real-Time Database Systems", Real-Time Systems, 2004, vol.27, pp.237-269.
M. Abdouli, B. Sadeg, and L. Amanto, "Scheduling Distributed Real-Time Nested Transactions", In Proceedings of the Eighth IEEE
International Symposium on Object-Oriented Real-Time Distributed Computing (ISORC'05), 2005, pp.208-215.
Liao Guoqiong, Liu Yungsheng, and Wang Lina, "Concurrency Control of Real-Time Transactions with Disconnections in Mobile
Computing Environment", In Proceedings of the 2003 International Conference on Computer Networks and Mobile Computing
(ICCNMC'03), 2003, pp.205-212.
G. Weikum, and G. Vossen. Transactional information system: theory, algorithms, and the practice of concurrency control and
recovery, USA:Elsevier Science, 2003.
A. Data, and S.H. Son, "Limitations of Priority Cognizance in Conflict Resolution for Firm Real-time Database Systems", IEEE
Transactions on Computers, 2000,vol.49, no.5, pp.483-501.
E. Pitoura, and P.K. Chrysanthis, "Multiversion Data Broadcast", IEEE Transactions on Computers, 2002, vol. 5, no. 10, pp.1224-1230.


Data sharing in cloud storage with key-aggregate cryptosystem.


Mrs. Komal Kate, Prof. S. D. Potdukhe

PG Scholar, Department of Computer Engineering, ZES COER, Pune, Maharashtra

Assistant Professor, Department of Computer Engineering, ZES COER, Pune, Maharashtra

Abstract – Cloud storage is the storage of data online in the cloud, accessible from multiple, connected resources. Cloud
storage can provide good accessibility and reliability, strong protection, disaster recovery, and low cost. An important
functionality of cloud storage is securely, efficiently and flexibly sharing data with others. A new public-key encryption scheme called the key-aggregate cryptosystem (KAC) is introduced. The key-aggregate cryptosystem produces constant-size ciphertexts such that efficient
delegation of decryption rights for any set of ciphertexts is possible. Any set of secret keys can be aggregated into a single
key which encompasses the power of all the keys being aggregated. This aggregate key can be sent to others for decryption of
the ciphertext set, while the remaining encrypted files outside the set remain confidential.

Keywords Cloud storage, Key-aggregate cryptosystem (KAC), Ciphertext, Encryption, Decryption, secret key.
INTRODUCTION

Cloud storage is nowadays a very popular storage system. It stores data off-site, in physical storage maintained by a third party:
digital data are saved in logical pools, while the physical storage spans multiple servers managed by the third party, which is
responsible for keeping the data available and accessible and for keeping the physical environment protected and running at all
times. Instead of storing data on a hard drive or other local storage, we save data to remote storage that is accessible from
anywhere at any time. This reduces the effort of carrying physical storage everywhere. Using cloud storage, we can access
information from any computer through the Internet, removing the limitation of accessing information only from the computer
where it is stored.
When considering data privacy, we cannot rely on traditional authentication techniques, because an unexpected privilege escalation
would expose all data. The solution is to encrypt data with the user's own key before uploading it to the server. Data sharing is another important
functionality of cloud storage, because a user can share data from anywhere, at any time, with anyone. For example, an organization may
grant its employees permission to access part of some sensitive data. The challenging task is how to share encrypted data.
The traditional way is for the user to download the encrypted data from storage, decrypt it and send it to others, but this loses
the value of cloud storage.
Cryptographic techniques can be applied in two major ways: symmetric-key encryption and asymmetric-key
encryption. In symmetric-key encryption, the same key is used for encryption and decryption. By contrast, in asymmetric-key
encryption different keys are used: a public key for encryption and a private key for decryption. Asymmetric-key encryption is more
flexible for our approach, as the following example illustrates.
Suppose Alice puts all her data on Box.com and does not want to expose it to everyone. Because of the possibility of data leakage she
does not trust the privacy mechanism provided by Box.com, so she encrypts all data before uploading it to the server. If Bob asks her to
share some data, Alice can use the share function of Box.com. The problem now is how to share encrypted data. There are two extreme
ways: 1) Alice encrypts the data with a single secret key and shares that secret key directly with Bob; 2) Alice encrypts the data with
distinct keys and sends Bob the corresponding keys via a secure channel. In the first approach, unwanted data also get exposed to Bob,
which is inadequate. In the second approach, the number of keys is as many as the number of shared files, which may be hundreds or thousands;
transferring these keys requires a secure channel, and storing them requires storage space, which can be expensive.
Therefore the best solution to the above problem is for Alice to encrypt the data with distinct public keys but send a single decryption key of
constant size to Bob. Since the decryption key should be sent via a secure channel and kept secret, a small key size is always desirable. The goal is therefore
to design an efficient public-key encryption scheme which supports flexible delegation, in the sense that any subset of the ciphertexts

(produced by the encryption scheme) is decryptable by a constant-size decryption key (generated by the owner of the master-secret
key) [1].

RELATED WORK
SYMMETRIC-KEY ENCRYPTION WITH COMPACT KEY
Benaloh et al. [2] presented an encryption scheme originally proposed for concisely transmitting a large number of keys in a
broadcast scenario [3]. The construction is simple, and we briefly review its key derivation process here for a concrete description of
the desirable properties we want to achieve. The derivation of the key for a set of classes (which is a subset of all possible
ciphertext classes) is as follows. A composite modulus n = pq is chosen, where p and q are two large random primes. A master secret key is
chosen at random. Each class is associated with a distinct prime, and all these primes can be put in the public system parameter.
(Another way to assign these primes is to apply a hash function to the string denoting the class, and keep hashing repeatedly until a
prime is obtained as the output of the hash function [1].) A constant-size key for a set S can then be generated, from which those who have
been delegated the access rights for S can derive the key of every class in S. However, the scheme is designed for the symmetric-key
setting: the content provider needs to obtain the corresponding secret keys to encrypt data, which is not suitable for many applications.
Because the method is used to generate a secret value rather than a pair of public/secret keys, it is unclear how to apply this idea to a
public-key encryption scheme. Finally, we note that there are schemes which try to reduce the key size for achieving authentication in
symmetric-key encryption, e.g., [4]; however, sharing of decryption power is not a concern in these schemes.
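To make the key-derivation idea concrete, here is a toy numeric sketch in the spirit of the construction reviewed above (one distinct public prime per class, one master secret, a constant-size key per delegated set). The modulus, primes and parameter sizes are illustrative assumptions only; this is not the exact scheme of [2] [3] and is far too small to be secure.

```python
# Toy parameters (insecure): n = p*q with tiny primes, small master secret.
p, q = 61, 53
n = p * q
master = 7                                  # master secret key, chosen at random
class_primes = {1: 3, 2: 5, 3: 11, 4: 13}   # one distinct public prime per ciphertext class

def product(xs):
    r = 1
    for x in xs:
        r *= x
    return r

def per_class_key(i):
    """Key used for class i: master^(product of all other class primes) mod n."""
    return pow(master, product(e for c, e in class_primes.items() if c != i), n)

def set_key(S):
    """Constant-size key delegated for the set S: master^(product of primes outside S) mod n."""
    return pow(master, product(e for c, e in class_primes.items() if c not in S), n)

def derive_from_set_key(kS, S, i):
    """A delegatee holding kS can recover the class-i key for any i in S."""
    assert i in S
    return pow(kS, product(class_primes[j] for j in S if j != i), n)

S = {1, 3}
kS = set_key(S)                             # one small value covers both classes 1 and 3
assert all(derive_from_set_key(kS, S, i) == per_class_key(i) for i in S)
print("constant-size key for classes", S, "->", kS)
```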

IBE WITH COMPACT KEY


Identity-based encryption (IBE) (e.g., [5], [6], [7]) is a public-key encryption scheme in which the public key of a user can be set as an
identity string of the user (e.g., an email address or mobile number). There is a private key generator (PKG) in IBE which holds a
master-secret key and issues a secret key to each user with respect to the user identity. The content provider can take the public
parameter and a user identity to encrypt a message, and the recipient can decrypt this ciphertext with his secret key. Guo et al. [8], [9] tried
to build IBE with key aggregation. In their schemes, key aggregation is constrained in the sense that all keys to be aggregated must
come from different identity divisions. While there are an exponential number of identities and thus secret keys, only a polynomial
number of them can be aggregated [1]. This significantly increases the costs of storing and transmitting ciphertexts, which is
impractical in many situations such as shared cloud storage. As we mentioned, our schemes
feature constant ciphertext size, and their security holds in the standard model. In fuzzy IBE [10], one single compact secret key can
decrypt ciphertexts encrypted under many identities which are close in a certain metric space, but not for an arbitrary set of identities,
and therefore it does not match our idea of key aggregation.

ATTRIBUTE-BASED ENCRYPTION
Attribute-based encryption (ABE) [11], [12] allows each ciphertext to be associated with an attribute, and the master-secret key holder
can extract a secret key for a policy of these attributes so that a ciphertext can be decrypted by this key if its associated attribute
conforms to the policy. For example, with the secret key for the policy (1 ∨ 3 ∨ 6 ∨ 8), one can decrypt a ciphertext tagged with class 1,
3, 6 or 8. However, the major concern in ABE is collusion resistance and not the compactness of secret keys. Indeed, the size of the
key often increases linearly with the number of attributes it encompasses, or the ciphertext size is not constant (e.g., [13]).


Different Schemes                          | Ciphertext size | Decryption key size | Encryption type
-------------------------------------------|-----------------|---------------------|-------------------------
Key assignment schemes                     | Constant        | Non-constant        | Symmetric or public key
Symmetric-key encryption with compact key  | Constant        | Constant            | Symmetric key
IBE with compact key                       | Non-constant    | Constant            | Public key
Attribute-based encryption                 | Constant        | Non-constant        | Public key
KAC                                        | Constant        | Constant            | Public key

Table 1. Comparison between the KAC scheme and other related schemes

KEY-AGGREGATE CRYPTOSYSTEM
In the key-aggregate cryptosystem (KAC), users encrypt a message not only under a public key, but also under an identifier of the
ciphertext called its class; that is, the ciphertexts are further categorized into different classes. The key owner holds a master secret
called the master-secret key, which can be used to extract secret keys for different classes. More importantly, the extracted key can
be an aggregate key which is as compact as a secret key for a single class, but aggregates the power of many such keys, i.e., the
decryption power for any subset of ciphertext classes [1].
In our example, Alice can send Bob a single aggregate key through a secure e-mail. Bob can download the encrypted photos from
Alice's Box.com space and then use this aggregate key to decrypt the encrypted data. The sizes of the ciphertext, public key, master-secret key
and aggregate key in KAC schemes are all constant. The public system parameter has size linear in the number of ciphertext classes,
but only a small part of it is needed each time and it can be fetched on demand from large (but non-confidential) cloud storage.

FRAMEWORK
The data owner establishes the public system parameter through Setup and generates a public/master-secret key pair through
KeyGen. Data can be encrypted via Encrypt by anyone, who also decides what ciphertext class is associated with the plaintext message
to be encrypted. The data owner can use the master-secret key to generate an aggregate decryption key for a set of ciphertext
classes through Extract. The generated keys can be passed to delegatees securely through secure e-mails or secure devices. Finally, any
user with an aggregate key can decrypt any ciphertext via Decrypt, provided that the ciphertext's class is contained in the aggregate key.
A key-aggregate encryption scheme consists of five polynomial-time algorithms, as follows:

1. Setup(1^λ, n): The data owner establishes the public system parameter via Setup. On input of a security-level parameter 1^λ and the number of ciphertext classes n, it outputs the public system parameter param.

2. KeyGen: Executed by the data owner to randomly generate a public/master-secret key pair (pk, msk).

3. Encrypt(pk, i, m): Executed by the data owner; for a message m and index i, it computes the ciphertext C.

4. Extract(msk, S): Executed by the data owner to delegate the decrypting power for a certain set S of ciphertext classes; it outputs the aggregate key for the set S, denoted KS.

5. Decrypt(KS, S, i, C): Executed by a delegatee who received an aggregate key KS generated by Extract. On input KS, the set S, an index i denoting the ciphertext class that ciphertext C belongs to, and C itself, it outputs the decrypted result m.
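As a reading aid, the five algorithms can be pictured as a small API. The skeleton below only mirrors the interfaces and data flow of Setup/KeyGen/Encrypt/Extract/Decrypt; the bodies are placeholders of my own, since the actual pairing-based construction of [1] is outside the scope of this summary.

```python
from dataclasses import dataclass, field

@dataclass
class Param:
    n: int                      # number of ciphertext classes fixed at setup
    data: dict = field(default_factory=dict)

def setup(security_level, n):
    """Setup(1^lambda, n): establish the public system parameter param."""
    return Param(n=n, data={"lambda": security_level})

def keygen(param):
    """KeyGen: the data owner's public / master-secret key pair (placeholders)."""
    return "pk", "msk"

def encrypt(pk, i, m, param):
    """Encrypt(pk, i, m): ciphertext tagged with its class index i."""
    return {"class": i, "body": f"Enc_{pk}[{m}]"}            # placeholder ciphertext

def extract(msk, S, param):
    """Extract(msk, S): one constant-size aggregate key for the whole set S."""
    return {"classes": frozenset(S), "key": f"K_{sorted(S)}"}

def decrypt(Ks, S, i, C, param):
    """Decrypt(Ks, S, i, C): succeeds only if C's class is covered by the aggregate key."""
    if i not in Ks["classes"] or C["class"] != i:
        return None
    return C["body"].split("[", 1)[1].rstrip("]")            # recover the placeholder plaintext
```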


SHARING ENCRYPTED DATA

[Figure 1. Using KAC for data sharing in cloud storage: Alice runs Setup and KeyGen, encrypts each file under its class with Encrypt(pk, i, ·) and uploads the ciphertexts to network storage, extracts an aggregate key such as K{2,3,5} with Extract(msk, {2,3,5}), and Bob uses it in Decrypt to recover the shared files.]

A canonical application of KAC is data sharing. The key aggregation property is especially useful when we expect delegation to be
efficient and flexible. The KAC schemes enable a content provider to share her data in a confidential and selective way, with a fixed
and small ciphertext expansion, by distributing to each authorized user a single, small aggregate key.
Data sharing in cloud storage using KAC is illustrated in Figure 1. Suppose Alice wants to share her data m1, m2, ..., mn on the server.
She first performs Setup(1^λ, n) to get param and executes KeyGen to get the public/master-secret key pair (pk, msk). The system
parameter param and the public key pk can be made public, while the master-secret key msk should be kept secret by Alice. Anyone can then
encrypt each mi as Ci = Encrypt(pk, i, mi), and the encrypted data are uploaded to the server. With param and pk, people who cooperate
with Alice can update Alice's data on the server. Once Alice is willing to share a set S of her data with a friend Bob, she can compute
the aggregate key KS for Bob by performing Extract(msk, S). Since KS is just a constant-size key, it is easy to send to Bob through
a secure e-mail. After obtaining the aggregate key, Bob can download the data he is authorized to access: for each i ∈ S, Bob
downloads Ci from the server and, with the aggregate key KS, decrypts each Ci by Decrypt(KS, S, i, Ci).
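Using the schematic functions sketched in the Framework section above, Alice's sharing flow from Figure 1 reads roughly as follows. File contents, class indices and key sizes are made-up illustrations, not values from the paper.

```python
param = setup(security_level=128, n=5)
pk, msk = keygen(param)

# Alice encrypts each file under its own ciphertext class and uploads the result.
server = {i: encrypt(pk, i, f"photo-{i}", param) for i in range(1, 6)}

# She delegates classes {2, 3, 5} to Bob with a single constant-size aggregate key.
S = {2, 3, 5}
K_S = extract(msk, S, param)

# Bob downloads only the ciphertexts he is authorized for and decrypts them.
shared = {i: decrypt(K_S, S, i, server[i], param) for i in S}
print(shared)                                          # {2: 'photo-2', 3: 'photo-3', 5: 'photo-5'}
assert decrypt(K_S, S, 1, server[1], param) is None    # class 1 stays confidential
```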

CONCLUSION
Users' data privacy is a central question of cloud storage. This work compresses secret keys in public-key cryptosystems that support delegation of
secret keys for different ciphertext classes in cloud storage: no matter which subset of classes is chosen from the power set, the delegatee always
gets an aggregate key of constant size. In cloud storage, the number of ciphertexts usually grows rapidly without any restriction, so we have
to reserve enough ciphertext classes for future extension; otherwise, we need to expand the public key. Although the parameter can be
downloaded with the ciphertexts, it would be better if its size were independent of the maximum number of ciphertext classes.

REFERENCES:
[1] Cheng-Kang Chu, S. S. M. Chow, Wen-Guey Tzeng, Jianying Zhou, and Robert H. Deng, "Key-Aggregate Cryptosystem for
Scalable Data Sharing in Cloud Storage", IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 2, 2014.
[2] J. Benaloh, M. Chase, E. Horvitz, and K. Lauter, Patient Controlled Encryption: Ensuring Privacy of Electronic Medical
Records, in Proceedings of ACM Workshop on Cloud Computing Security (CCSW 09). ACM, 2009, pp. 103114.

[3] J. Benaloh, Key Compression and Its Application to Digital Fingerprinting, Microsoft Research, Tech. Rep., 2009.
[4] B. Alomair and R. Poovendran, Information Theoretically Secure Encryption with Almost Free Authentication, J. UCS, vol.
15, no. 15, pp. 29372956, 2009.
[5] D. Boneh and M. K. Franklin, Identity-Based Encryption from the Weil Pairing, in Proceedings of Advances in Cryptology
CRYPTO 01, ser. LNCS, vol. 2139. Springer, 2001, pp. 213229.
[6] A. Sahai and B. Waters, Fuzzy Identity-Based Encryption, in Proceedings of Advances in Cryptology - EUROCRYPT 05, ser.
LNCS, vol. 3494. Springer, 2005, pp. 457473.
[7] S. S. M. Chow, Y. Dodis, Y. Rouselakis, and B. Waters, Practical Leakage-Resilient Identity-Based Encryption from Simple
Assumptions, in ACM Conference on Computer and Communications Security, 2010, pp. 152161.
[8] F. Guo, Y. Mu, and Z. Chen, Identity-Based Encryption: How to Decrypt Multiple Ciphertexts Using a Single Decryption Key,
in Proceedings of Pairing-Based Cryptography (Pairing 07), ser. LNCS, vol. 4575. Springer, 2007, pp. 392406.
[9] F. Guo, Y. Mu, Z. Chen, and L. Xu, Multi-Identity Single-Key Decryption without Random Oracles, in Proceedings of
Information Security and Cryptology (Inscrypt 07), ser. LNCS, vol. 4990. Springer, 2007, pp. 384398.
[10] S. S. M. Chow, Y. Dodis, Y. Rouselakis, and B. Waters, Practical Leakage-Resilient Identity-Based Encryption from Simple
Assumptions, in ACM Conference on Computer and Communications Security, 2010, pp. 152161.
[11] V. Goyal, O. Pandey, A. Sahai, and B. Waters, Attribute-Based Encryption for Fine-Grained Access Control of Encrypted data,
in Proceedings of the 13th ACM Conference on Computer and Communications Security (CCS 06). ACM, 2006, pp. 8998.
[12] M. Chase and S. S. M. Chow, Improving Privacy and Security in Multi-Authority Attribute-Based Encryption, in ACM
Conference on Computer and Communications Security, 2009, pp. 121130.
[13] T. Okamoto and K. Takashima, Achieving Short Ciphertexts or Short Secret-Keys for Adaptively Secure General Inner-Product
Encryption, in Cryptology and Network Security (CANS 11), 2011, pp. 138159.
[14] S. S. M. Chow, Y. J. He, L. C. K. Hui, and S.-M. Yiu, "SPICE -Simple Privacy-Preserving Identity-Management for Cloud
Environment," in Applied Cryptography and Network Security - ACNS2012, ser. LNCS, vol. 7341. Springer, 2012, pp. 526543.
[15] L. Hardesty, "Secure computers aren't so secure," MIT press, 2009, http://www.physorg.com/news1761073.
[16] C.Wang, S. S. M. Chow, Q.Wang, K. Ren, and W. Lou, "Privacy-Preserving Public Auditing for Secure Cloud Storage," IEEE
Trans. Computers, vol. 62, no. 2, pp. 362375, 2013.
[17] B. Wang, S. S. M. Chow, M. Li, and H. Li, "Storing Shared Data on the Cloud via Security-Mediator," in International
Conference on Distributed Computing Systems - ICDCS 2013. IEEE, 2013.
[18] S. S. M. Chow, C.-K. Chu, X. Huang, J. Zhou, and R. H. Deng, "Dynamic Secure Cloud Storage with Provenance," in
Cryptography and Security: From Theory to Applications Essays Dedicated to Jean-Jacques Quisquater on the Occasion of His
65th Birthday, ser. LNCS, vol. 6805. Springer, 2012, pp. 442464.
[19] D. Boneh, C. Gentry, B. Lynn, and H. Shacham, "Aggregate and Variably Encrypted Signatures from Bilinear Maps," in
Proceedings of Advances in Cryptology EUROCRYPT 03, ser. LNCS, vol. 2656. Springer, 2003, pp. 416432.
[20] M. J. Atallah, M. Blanton, N. Fazio, and K. B. Frikken, "Dynamic and E_cient Key Man- agement for Access Hierarchies,"
ACM Transactions on Information and System Security (TISSEC), vol. 12, no. 3, 2009


A Survey Paper on Security Issues in Satellite Communication Network Infrastructure


Syed Muhammad Jamil Shah, Ammar Nasir, Hafeez Ahmed
Department of Electrical Engineering,
Institute of Space Technology, Islamabad
engineerjamz@gmail.com, Contact No. 0092-312-6840511

Abstract – Satellite communication is one of the most popular next-generation communication technologies for global
communication networks, operating in parallel to terrestrial communication networks. In the modern age, military intelligence, navigation and
positioning, weather forecasting, digital video broadcasting (DVB), and broadband internet services are a few demanding
applications of satellite communication. Although satellite communication is a cost-effective solution for such long-distance
communication applications, security over the link is still a major concern. Due to limitations such
as high bit error rates, power control, the large distance between end nodes, high link delays because of large round-trip times, and link
availability, common security techniques face many issues when implementing proper secure communication over
satellite links. In this survey paper, we explore the importance of security, the existing and currently deployed security tools, and the
limitations to be considered while deploying such security techniques and protocols for securing satellite communication. Finally, we
report some open research directions for further optimizing the security tools and measures towards a proper security framework
over the satellite communication infrastructure.

Keywords Satellite Communication, Security, PEP, IPsec, VPNs, Encryption, SSL, AKE.
INTRODUCTION

With the advent of satellites, communication has been revolutionized. A satellite is no longer viewed as a simple bent pipe but as an
important component of a large global communications networking system, requiring interoperability between satellite and terrestrial
communication components and thus compatible protocols and standards. Satellite networks provide global reach and wide-area
coverage to remote, rural and inaccessible regions. A few of their common applications include weather prediction, telephony services,
telemedicine, multimedia services, internet connectivity, navigation through GPS, imaging through remote sensing satellites for
resource monitoring, and many important military applications. It is becoming critically important that satellite networks should be
able to offer convincing networking solutions in the rapidly changing field of communications, dominated by mobility, personalization
and high capacity demands [1].
However, with increasing utility and demand, the need for security in satellite communication increases. The users of this
mode of communication, along with all sorts of benefits in terms of quality of service (QoS) and reduced cost of services, also demand
confidentiality and integrity of their data. This is particularly important in the case of military applications. Eavesdropping and active
intrusion are serious concerns, especially for people who transfer sensitive data over a satellite link, and these actions are much easier in
satellite networks than in terrestrial fixed or mobile networks because of the broadcast nature of satellites.
Moreover, it is reported that satellite channels experience long delays and high bit error rates, which may cause the loss of
security synchronization. This demands a careful evaluation of encryption systems to prevent quality-of-service degradation because
of security processing [2]. It is also seen that in satellite communication the protection of the satellites and links is not the only key
concern for researchers and industry: the integrity and confidentiality of the downlink earth stations and the
information systems of the command and control systems must also be ensured.
In this survey paper, we provide an overview of the security and related issues of satellite communication for commercial and
strategic defense communication. The paper focuses on the need for security, key issues in satellite communication security, the
limitations of a satellite link, and currently deployed encryption techniques that may be applied to secure a satellite link.

We also give a brief overview of current security measures such as PEPs, IPsec, IPsec anti-replay, VPN-based security techniques, key exchange
methods, and other advanced security measures, and discuss the side effects of such security measures on the
communication system and the issues in implementing them in satellite networks. Finally, we explore the future challenges and research
frontiers in secure satellite communication that address the basic issues still to be resolved.

TAXONOMY OF WORK
For the time being it is often considered that a lot of advancement has been made in securing information over
communication links. For wired networks this view is, to some extent, justified, but for satellite communication links many issues still
have to be sorted out fully. The long distances, delays, strict power control requirements, and other peculiarities of space links
make it difficult to apply current security measures, which are usually designed for wired networks, wireless LANs or wireless
sensor networks (WSNs); they are not specifically designed for long-distance space communication links such as satellite uplinks and
downlinks or inter-satellite links, which also suffer from different and sometimes unique forms of interference. At the same time, the
need for satellite communication over long distances is an ever-increasing demand of the time: a lot of communication traffic has been
shifted towards wireless networks, especially satellite links, which provide a cost-effective, low-infrastructure and more global solution for
commercial and military applications.
In this survey paper, in Section II we describe the need for security and then provide a comprehensive list of currently cited
security issues and their impact on satellite communication networks that need to be considered when looking at secure satellite
communication. In Section III we survey the issues with satellite link security protocols.
In Section IV, a detailed survey of security concerns with satellite communication network infrastructure is provided, since a lot
of literature has been published on satellite information system security concerns when implementing transport- and application-layer security
protocols such as IPsec, VPNs, and PEPs. In Section V, the literature on DDoS attacks and attack tracing mechanisms
is reviewed, and ASC (American Satellite Company) work regarding security issues in satellite communication ground-based
command and control stations is overviewed to emphasize the security concerns of command stations.
Finally, in Section VI of the survey paper, we focus on some future research areas related to more secure commercial
satellite communication. In the concluding part of our work we emphasize the security concerns which are still under
consideration and need to be worked out for better security solutions in satellite communication, to maximize reliability,
confidentiality, and integrity over satellite links and to meet the next-generation space and terrestrial hybrid communication trends
using secure satellite links.

SECURITY ISSUES IN SATELLITE COMMUNICATION


A well-designed implementation of satellite communication networks is usually the prime concern in order to allow their usage for
sensitive scenarios such as secure defense and strategic purposes, and for reliable communication through public media. However,
while deploying security in satellite communication it is common to face issues caused by the characteristics of the
satellite link, such as the long end-to-end delay, higher bit error rates, and the requirement for a high carrier-to-noise power ratio (C/N) over
the uplink and downlink. Protocols typically and frequently used in satellite networks, such as Performance Enhancing Proxies (PEPs)
or IP multicast IPsec [1] [2], which improve the efficiency in time and bandwidth of data transmission over the costly and scarce frequency
bands, also contribute to the above issues.
In this section of the survey paper we first discuss why we need security and then examine a few important issues related to
satellite communication link and payload security.


A - Why need security?


As the world of internetworks expanded, the need for security over communication links as well as over the endpoints (source and
destination) became one of the major concerns for service providers, operators, and researchers.
In 2003, a research team at Los Alamos National Laboratory, USA, demonstrated how a spoofing attack can be carried out on GPS. A
GPS satellite simulator was used to broadcast a fake signal that caused the GPS receiver to operate as though it were located at a
position different from its actual location [3].
Similarly, in 2007 and 2009, a research team at Stanford University decoded the Galileo in-orbit validation element A (GIOVE-A) and
Compass-M1 civilian codes in all available frequency bands [4]. These events emphasize the need to secure communication in
satellite networks along with the coding techniques.
Communication security means integrity and confidentiality in the delivery of information. Researchers describe security in terms of key
features such as confidentiality, authentication, integrity, access control, and key management, so that information remains error-free and
reaches only the intended target receiver [3]. When dealing with security in satellite communication, these key points always remain a
major concern for the security measures. In general, we can define these security properties as follows [4] [5]:
Confidentiality means that only the appropriate users have access to the information.
Authentication requires verification of a user's identity and right to access. It is achieved using a public key interchange protocol that
ensures not only authentication but also the establishment of encryption keys [6].
Integrity means that the information has not been corrupted.
Access control ensures that the system cannot be compromised by unauthorized access (e.g. pirating a satellite).
Key management is how the security keys are managed: how they are dynamically generated, concealed and distributed, and finally how
well they are kept by the intended users. Key management is a key issue with respect to IPsec over multicast satellite communication.
Security over the satellite link has always been a key requirement for military and state departments. Issues in the security of
the link constantly arise due to a number of attacks from intruders aiming to disrupt the link as well as to tarnish data integrity and confidentiality. It is a
matter of key concern for the success of military operations that the communication facility they acquire from satellite
networks ensures a high level of confidentiality and protection; the satellite security system must deny an adversary any sort of access
to the information system, which is always considered mission-critical data for national security.
On the other hand, commercial communication satellites, which are also used from time to time for military communication
objectives on a temporary basis, mostly act as repeaters in the sky; unlike military-owned satellites, commercial
satellites do not necessarily carry bulk encryption capability in their space trajectories for the downlink telemetry,
command and control systems and the earth station data and information systems. For commercial applications, satellite operators
mostly keep the overhead on the in-orbit satellite transponder and links low to ensure the maximum cost-effectiveness of the satellite
link for business gains. So, for trusted and protected communication over a leased satellite link, most of the security measures
are deployed at the ground stations.
Moreover, in the literature we find that, as satellite networks are usually wireless broadcast networks over the downlink,
eavesdropping [7] is one of the prime concerns, so appropriate security measures become vital there.

B - List of Security Issues in satellite communication network infrastructure

1. Scanning/attacking
2. Jamming
3. Mispositioning/control
4. Transponder spoofing/direct commanding
5. Security key management issues
6. Satellite-based hybrid networks security implementation issues
7. Satellite-based broadband multicast networks security implementation issues
8. TCP-based security implementation issues in satellite network infrastructure
9. VPN-implementation-based security issues in satellite communication infrastructure
10. Ground stations, telemetry and command systems protection issues
11. Denial of service attack (DoS and DDoS attacks) issues
12. Miscellaneous command and infrastructure security issues

ISSUES WITH SATELLITE LINK-SECURITY PROTOCOLS


To establish a secure channel between end nodes, a protocol called authenticated key exchange (AKE) is used, in which two
parties communicate after they have shared a common key between them. AKE protocols are based on either symmetric or
asymmetric cryptography. The symmetric approach requires a large number of pairwise keys, since the long-term symmetric key
must be generated before executing the key-exchange protocol. In the case of satellite networks, generating and managing these keys is highly
problematic and thus inappropriate for scalable satellite systems [2]. In the certificate-based asymmetric approach, each node obtains a public key
of the other and verifies it with a trusted third party. Unfortunately, due to the long delay in data transmission over a satellite
link, certificate-based AKE protocols perform inefficiently. Furthermore, the method demands that a node acquire the certificate of the other
node from a certificate authority and then access the certificate revocation list so that the two nodes can extract each other's valid public
keys, imposing an extra burden on the communication channel [2]. To eliminate this burden, identity-based cryptosystems were
introduced, in which a node's public key can be an arbitrary string that serves as its identity string, and the intervention of a third-party
certificate authority is avoided [25].
The early AKE schemes, such as the identity-based scheme providing zero-knowledge authentication and authenticated key
exchange [26] or the identity-based key exchange protocol [27], never took into account the possibility of active intrusion and
hence provided little security. Most of the more recent identity-based AKE protocols, such as the two-party identity-based
authenticated key agreement [28] and the identity-based authenticated key agreement protocol based on the Weil pairing [29],
establish a session key by employing pairing techniques to secure communication against attacks. These protocols involve a great number
of multiplications and hence are computationally inefficient. In addition, in most of these schemes a participant's identity must be mapped
to a point on an elliptic curve, which is computationally expensive [2].
A modular approach was proposed [30] which applies an identity-based signature scheme. The Diffie-Hellman (DH) protocol
is constructed using the same technique of identity-based signatures [31]. The AKE protocol built on the DH platform is known as
authenticated DH (ADH) or the Canetti-Krawczyk (CK) protocol [32]. The downside of this protocol is that it uses three rounds of
message transmission, whereas most AKE protocols use only two rounds [2].
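For orientation, the unauthenticated core that these AKE protocols build on is the Diffie-Hellman exchange. The toy sketch below uses tiny, insecure parameters chosen purely for illustration; it shows the two-round key agreement that identity-based signatures or pairings then authenticate.

```python
import secrets

# Toy public parameters: a small prime p and generator g (insecure, illustration only).
p, g = 2087, 5

# Round 1: each party picks an ephemeral secret and sends g^secret mod p.
a = secrets.randbelow(p - 2) + 1
b = secrets.randbelow(p - 2) + 1
A, B = pow(g, a, p), pow(g, b, p)

# Round 2: each side combines its own secret with the other's public value.
key_alice = pow(B, a, p)
key_bob = pow(A, b, p)
assert key_alice == key_bob   # shared session key; an AKE additionally authenticates A and B
print("shared key:", key_alice)
```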
These protocols needed to be analyzed for security, and it was Bellare and Rogaway who presented the first security model, the BR
model, in 1993 [33]. Since then many extensions have been made in order to increase the level of security along with computational
efficiency. The most famous extended BR model is the CK model [32], which was proposed in 2001. However, all these models
assume that the attacker is not allowed to obtain certain secret information about the session that is being attacked,
hence leaving the AKE protocols stated above still prone to leakage of the short-lived secret or the private key [3].
This problem has since been addressed: a model that is resistant to the
above-mentioned attack has been reported, known as the extended CK (eCK) model [34]. According to this model, the only attacks that are
not allowed are those that would trivially break any AKE protocol, in other words an attack in which the adversary reveals both the
ephemeral secret and the static private key of one node in the protocol so that this node can be impersonated. Thus, the eCK model is
currently regarded as the strongest security model for AKE protocols [3]. eCK security also implies key generation center (KGC)
forward secrecy [35], which helps ensure the security of previously established session keys after the master key of the KGC has been
compromised.

SECURITY ISSUES BASED ON SATELLITE COMMUNICATION NETWORK INFRASTRUCTURE


In addition to satellite link security issues, there is always a substantial set of issues in ground-station-based network
infrastructure and communication protocols. In this section of the survey paper we visit a few key communication infrastructure
issues which are encountered equally in wired and wireless environments, including satellite communication networks.

Moreover, it is repeatedly noted in the satellite communication literature that, due to long delays, power limitations and
free-space interference, these security issues become more hazardous if proper precautions are not adopted in satellite communication.

A - Security Issues with Satellite Based Hybrid Networks

There are several security challenges with hybrid communication systems which use a satellite as part of the backhaul or as an intermediate link;
these have been reported from time to time, keeping in view the following considerations about satellite systems:
1. As already discussed, the broadcast nature of satellite channels does not prevent unauthorized users from receiving the signal, so eavesdropping on an unencrypted or poorly secured satellite communication link is easily possible.
2. In case of poor link security, a well-equipped adversary can easily jam or intrude on the communication link by sending false commands to the satellite.
3. In bad weather conditions such as rain or cloud, satellite channels face bursts of errors and resulting packet losses, which in turn also cause loss of data integrity.
4. Satellite communication experiences long propagation delays, of the order of about 0.56 seconds for geostationary satellites [20] (at a distance of about 35,785 km or more from earth). Security systems should therefore cause minimal additional delay over the link and have an efficient mechanism for recovery from such errors.
5. Loss of message integrity or message modification is another commonly reported security threat. It may be countered by a proper integrity checksum such as a MAC [24] on the satellite receiver (see the sketch after Fig. 1 below), but in commercial satellites such a facility is limited and cannot be provided for every message.
6. Denial of service (DoS) attacks are possible against the satellite transponder due to the limitations of its power and processing capability, and it may become a single point of failure that endangers the whole network infrastructure.
7. Encryption methods are deployed to provide some level of security in such scenarios, but this creates a twofold issue: the selection of the encryption technique and then the management of the encryption keys [24].

Fig. 1 Hybrid satellite based communication network [14]
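Item 5 in the list above mentions message authentication codes as an integrity safeguard on the receiver side. A minimal illustration using Python's standard hmac module follows; the key and command string are, of course, made-up examples.

```python
import hmac
import hashlib

key = b"shared-ground-station-key"          # hypothetical pre-shared key
message = b"SET TRANSPONDER GAIN 7"         # hypothetical command

# Sender appends a MAC tag computed over the message.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the tag and compares in constant time; a mismatch
# indicates the command was corrupted or modified in transit.
def verify(key, message, tag):
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

print(verify(key, message, tag))                       # True
print(verify(key, b"SET TRANSPONDER GAIN 9", tag))     # False
```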

B - Issues with IPsec and SSL in Satellite communication


Internet Protocol Security (IPsec) must provide authentication, access control, integrity, confidentiality and key management
[1]. IPsec is regarded as the IETF-standard end-to-end secure tunneling protocol for data transfer, so it is also
used in satellite communication networks.
The literature on satellite communication security in unicast environments also reports that protocols originally designed for terrestrial
networks, such as the Internet Security Protocol (IPsec) or the Secure Sockets Layer (SSL) [14], cause severe performance degradation
and link overloading when applied to satellite networks.

Fig. 2 Traditional IPsec and SSL encryptions

Moreover, it is also reported that IPsec carries a large byte overhead for its authentication and integrity-check services using ESP
[24]. Detailed study of the IPsec protocol further shows that it originated for point-to-point communication security, so it is not suited to
multipoint or broadcast communication environments, as it does not provide dynamic key generation [24]. IPsec therefore has certain
security implementation limitations.
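To give a feel for the byte-overhead argument, the short calculation below compares the roughly 34-byte per-packet cost quoted for IPsec/ESP against a few payload sizes. The packet sizes are arbitrary examples, and real ESP overhead varies with the cipher, padding, and tunnel versus transport mode.

```python
ESP_OVERHEAD_BYTES = 34   # approximate figure cited above; the actual value depends on the SA

for payload in (64, 256, 512, 1400):   # example payload sizes in bytes
    overhead_pct = 100 * ESP_OVERHEAD_BYTES / (payload + ESP_OVERHEAD_BYTES)
    print(f"{payload:5d}-byte payload -> {overhead_pct:4.1f}% of link capacity spent on ESP overhead")
```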
It is also observed that, because IPsec does not allow intermediate nodes to authenticate or decrypt traffic (keys are exchanged
only between the end nodes), a PEP cannot read the encrypted TCP header, so IPsec does not work with PEP protocols.
Similarly, an HTTP proxy cannot perform its prefetching function because it cannot read the TCP header, which leads to a serious degradation of
performance in satellite-based network infrastructure.
Further, during the survey we found that, to address the performance limitations of SSL and IPsec, researchers have developed
transitional protocol variants such as HTTP over an IPsec tunnel [24] [49] [50] and dual-mode SSL (DSSL) for HTTP optimization [24].
In the effort to optimize the performance and compatibility of IPsec and SSL, changing the mechanism causes other
adverse effects on performance: layered IPsec increases the byte overhead by almost three times compared with the original
version, and the DSSL variant of SSL brings its own performance costs, such as design complexity and larger overheads due to the
deployment of two different keys. It can therefore be concluded that the IPsec and SSL improvements are
considerable but still require more work to achieve lower authentication and encryption overheads and simpler designs that work
with HTTP proxy servers, for better system performance in terrestrial as well as satellite communication networks.
The following table provides an overview of the comparative study of the IPsec and SSL security protocols.

Parameters                  | SSL                                                   | DSSL                                                  | IPsec                      | Layered IPsec
----------------------------|-------------------------------------------------------|-------------------------------------------------------|----------------------------|------------------------------------------
Security implementation     | Unicast                                               | Unicast                                               | Unicast                    | Unicast
Implementation layer        | End-to-end (between application and transport layer) | End-to-end (between application and transport layer) | End-to-end (network layer) | End-to-end (network layer)
Byte overhead               | Less overhead than IPsec                              | Heavy byte overhead (increased)                       | Heavy (about 34 bytes)     | About three times heavier than IPsec
Design complexity           | Less complex                                          | More complex                                          | Less                       | Medium
Group communication         | No                                                    | No                                                    | No                         | No
Performance with PEP        | Yes                                                   | Yes                                                   | Severe degradation         | Yes (supports PEP, better performance)
Performance with HTTP proxy | Yes (HTTPS is the main example)                       | Yes (supported, less performance degradation)         | Severe degradation         | Not supported, so performance degradation remains

(SSL and DSSL form the SSL protocol family; IPsec and Layered IPsec form the IPsec protocol family.)

Table 1. SSL vs. IPsec comparison

C - Security Issues with Broadband Multicast Satellite Communication

In multicast satellite communication, because of the broadcast nature of satellite links, appropriate addressing and security measures are
essential to provide service only to authorized users. A highly optimized and efficient key generation and update mechanism is key
for multicast applications, which are dynamic in nature.
Although satellite communication is a key technology for global communication, satellite systems are typically resource-limited:
links are constrained in channel capacity, power control, processing speed, switching capability, throughput and link availability,
which in turn bounds the link budgeting and network design of a satellite network.
Security issues arise with the use of protocol functionalities such as IP multicast, mobility support functions for end systems, and
path MTU discovery, used in broadband satellite communication for proper broadcast link management and mobility
management. Similarly, when using techniques such as PEPs (Performance Enhancing Proxies) and IPsec anti-replay [1] [2]
protection, used to ensure some degree of QoS over satellite communication links, some decrease in the performance of these protocols
is also incurred because of the link characteristics. A few such link characteristics are:
1. High link latency (satellite links more specifically).
2. Large error rates (mostly for wireless and satellite links, and under rain conditions due to the effect of rain on system noise temperature and the degraded link availability).
3. Links with unequal transmission rates, mostly high data rates on the downlink and lower rates on the uplink (again mostly satellite links, which are asymmetric in nature).
PEP features are briefly given in the IETF documents (RFC 3135) [2]. Some of the notable features of PEPs are the following [2]:
1. Asymmetric/symmetric PEPs.
2. Distributed (used on satellite links)/integrated (in a peer relationship) PEPs.
3. Layered-approach PEPs (transport/application layer PEPs).
4. Variable degree of transparency (with respect to the layer in which they operate).

D - TCP based Security implementation Issues in Satellite Communication Networks

The Transmission Control Protocol (TCP) is generally well suited to end-to-end connections such as wired Internet services. However,
performance issues over satellite links arise from multiple factors, among which propagation delay and channel noise are
the most compelling. Like other wireless links, satellite links are a major victim of noise, which causes a drop in
Eb/No; as a result the bit error rate (BER) is proportionally higher than on wired networks, and TCP, with its
retransmission feature for reliability, is very vulnerable to high BER. TCP has no way of knowing whether a loss over the link is caused by
congestion (buffer overflow) or by corruption (noise, jamming). Moreover, in satellite networks TCP (which acts as the
transport layer for application-layer protocols such as telnet, HTTP and FTP) performance declines due to the high link latency, large error
rates and asymmetric links. Transport-layer PEPs enhance TCP functionality, which in turn alleviates the issues with:
1. TCP bandwidth-delay limitations over satellite links (quantified in the short sketch after this list).
2. The TCP slow-start problem, by limiting the acknowledgements and link traffic in congestion situations.
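The bandwidth-delay limitation in item 1 can be quantified with a quick calculation: the TCP window needed to keep a GEO link busy is the product of the link rate and the round-trip time. The figures below (10 Mbit/s forward link, 560 ms round trip, in line with the GEO delay quoted earlier) are illustrative assumptions, not measurements from the paper.

```python
link_rate_bps = 10e6        # assumed 10 Mbit/s satellite forward link
rtt_s = 0.56                # typical GEO round-trip time quoted earlier in the paper

bdp_bytes = link_rate_bps * rtt_s / 8
print(f"Bandwidth-delay product: {bdp_bytes / 1024:.0f} KiB")   # ~684 KiB

# A classic 64 KiB TCP window (without window scaling) can therefore use only
# a small fraction of the link, which is why PEPs or tuned TCP variants are used.
print(f"Utilisation with a 64 KiB window: {100 * 64 * 1024 / bdp_bytes:.0f}%")
```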


There are many enhanced TCP approaches to increase performance, especially over satellite links, such as TCP-Scalable, TCP-BOC, TCP-HighSpeed or TCP-Hybla [8][9]. However, they all require complementary support at the end systems, which is again a concern for researchers. A newer approach uses selective retransmissions over UDP as the transport protocol, optimized for satellite links [13]. Application layer PEPs are used as a solution for enhancing application layer protocols, for example a web cache (an HTTP proxy with caching functionality). Moreover, cache-enabled DNS servers can be used in satellite communication to provide a low-latency solution for DNS queries by avoiding the several round trips normally needed. Similarly, an application layer proxy can provide compression, which reduces the amount of data that travels over the satellite link; for compression and decompression, however, a distributed PEP implementation is essential [10][12][13]. Since for this purpose PEPs need access to the protocol headers and payload, encryption or protection by a network layer VPN such as IPsec usually does not allow this [10][11][12]. PEP deployment therefore causes security issues such as authentication and integrity disruption, because the PEP is not able to terminate the protected TCP connection [13].
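To make the split-connection idea behind transport layer PEPs concrete, the sketch below shows a minimal TCP relay that terminates the client's connection locally and opens a second connection towards the satellite gateway, so that losses on the satellite leg no longer shrink the end host's congestion window. The listen and gateway addresses are hypothetical, and real PEPs such as PEPsal additionally replace the congestion control on the satellite leg, which is omitted here.

```python
# Minimal sketch of a split-connection transport-layer PEP (illustrative only).
# It terminates the client's TCP connection locally and opens a second TCP
# connection towards the satellite-side gateway, relaying bytes between the two.
# LISTEN_ADDR and REMOTE_ADDR are hypothetical; real PEPs such as PEPsal also
# tune congestion control on the satellite leg, which is omitted here.
import socket
import threading

LISTEN_ADDR = ("0.0.0.0", 9000)        # where terrestrial clients connect
REMOTE_ADDR = ("gateway.example", 80)  # next hop across the satellite link

def pipe(src, dst):
    """Copy bytes from one socket to the other until EOF."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        src.close()
        dst.close()

def handle(client):
    # The PEP terminates the client's connection here and starts a new one,
    # so losses on the satellite leg stay local to the second connection.
    remote = socket.create_connection(REMOTE_ADDR)
    threading.Thread(target=pipe, args=(client, remote), daemon=True).start()
    threading.Thread(target=pipe, args=(remote, client), daemon=True).start()

def main():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(LISTEN_ADDR)
    srv.listen(16)
    while True:
        conn, _ = srv.accept()
        handle(conn)

if __name__ == "__main__":
    main()
```

As the discussion above notes, this splitting is exactly what end-to-end protection such as IPsec prevents, since the PEP can no longer read or terminate the encrypted TCP flow.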

INFORMATION SYSTEM BASED SECURITY ISSUES

So far we have reviewed the security issues encountered in the information transmission systems. The literature similarly reports issues with the protection of the ground-based information systems behind the satellite downlink infrastructure, such as command and control systems, information systems and the related downlink data centres, in both military and commercial satellite network infrastructures. Military satellites generally deploy well-planned security techniques over their own transponders and links; in commercial satellite communication, however, the transponders act mainly as repeaters in the sky and the operator always seeks cost effectiveness in its business plan. In an effort to keep the business profitable, commercial satellites are therefore mostly less equipped with security tools for the downlink telemetry, the command and control systems and the related ground stations. Consequently, for trusted and protected communication over a leased satellite link, most of the security measures are deployed at the ground stations.
The most common issues in securing satellite information systems and telemetry & command and control stations include (attacks listed in Table 1) [36] information system access control attacks, injection and execution of malicious software, masquerading attacks, sniffing, snooping, denial-of-service (DoS) attacks and object reusability attacks.
A. Distributed Denial of Service Attacks
In this survey paper we also focus on the DDoS (Distributed Denial of Service) attack [36], one of the most alarming of all the security attacks mentioned above for the satellite communication network infrastructure. The literature notes, for a number of security attacks, that a hacker or intruder acting as an antagonist typically launches attacks in a flooded manner so as to overcome security redundancies; the same holds for DDoS attacks.
According to the definition repeatedly found in the literature [36], a DoS (Denial of Service) attack prevents authorized users from accessing the available resources and services. In a DDoS attack, one to many information systems become affected by a collaborative, large-scale attack on service access in the satellite-based network infrastructure. Owing to the power and processing limitations of satellite transponders, such DDoS attacks can be more violent on the satellite itself, which may become a single point of failure in the communication network [14].
Recent studies reveal that DoS attacks have become a major security concern for host machines in the ground-based telemetry and command & control systems and for information systems connected to internetworks. With the advancement of hardware- and software-based equipment in the communication industry, the communication infrastructure has also become more vulnerable to DDoS attacks. Studies over the years show that, as DDoS attacks progress, tracing them and curing them become an increasingly complicated process.
According to a detailed study of DDoS attacks in collaborative environments [36], a survey by Arbor Networks [37] from November 2008 reports that the scale of DDoS attacks has been growing steadily since 2001. It further reports that in 2008 the largest recorded DDoS attack against a single target reached 40 gigabits per second, exceeding the 24 Gbps reported in 2007. Hence DDoS attacks are one of the alarming security issues for collaborative networks and, in turn, a major concern for the security of the commercial satellite-based communication infrastructure, which is often poorly equipped with measures to counter serious and advanced security threats over the links. A flooded DDoS attack can be propagated over poorly secured satellite links by a jammer or intruders. Similarly, the probability of such attacks on the command and control systems at ground stations has increased over time, owing to sophisticated attack tools and weak security measures. Security threats from malicious intrusion into the information and command systems, such as DDoS over the commercial satellite infrastructure, are well understood and are now reported as a major concern for ground station security in the satellite communication network infrastructure.

a) Classification of DDoS Attacks
In the classified study of DDoS attacks, we found the following two major categories of DDoS attacks [38], classified on the basis of the nature of the denial of service, in which the victim becomes unable to access resources in the usual way:
1. Bandwidth depletion attacks: services and resource access are denied due to excessive flooding of junk traffic from the attackers.
2. Resource depletion attacks: semantically incorrect IP packets are sent, abusing the TCP protocol, to crash the victim's system and deplete critical (processor or memory) resources [36].

b) DDoS Attack Tools

DDoS attack tools are also becoming more accessible and easier to use for launching such hazardous attacks on information systems. A few common DDoS attack tools [36] are listed in the following table:
Sr. No. | DDoS Attack Tool | DDoS Attack Class
1 | Trino [39] | Bandwidth depletion
2 | TFN (Tribe Flood Network) [40] | Bandwidth & resource depletion
3 | TFN2K [41] | TARGA and MIX
4 | Stacheldraht [42] | Bandwidth & resource depletion
5 | Mstream [43] | Bandwidth depletion
6 | Shaft [44] | Bandwidth & resource depletion
7 | Trinity [45] | Bandwidth & resource depletion
8 | Knight [46] | Bandwidth & resource depletion

Table 2. DDoS attack tools

c) DDoS Attacks Detection
Once DDoS attacks occur against the information systems and command & control stations in the satellite communication network infrastructure, a detection mechanism for such attacks is needed. Detecting these attacks in satellite networks is another major challenge because of the long-haul communication links and the delay, bandwidth and power-control limitations of satellite links. Moreover, with the advancement of computer systems and software products, the severity of such attacks demands a dynamic tracking mechanism for proper mitigation, particularly in the satellite infrastructure.
Over time, researchers have worked on devising proper tracing mechanisms and on the related issues, the most frequently reported of which is the definition of the IP trace-back mechanism.
The following table presents a comparison of existing DDoS trace-back mechanisms and also notes some issues that still need to be worked on as communication systems and attack tools advance.
Sr. No. | Trace-back Mechanism | Mechanism Principle | Advantage | Disadvantage
1 | Hash based IP traceback | Hashing over 28 bytes of packet data | Low storage requirements; avoids eavesdropping | Overhead in generating the 28-byte hash
2 | Algebraic back tracing | Polynomial-based trace-back data generation method | Improved robustness, noise removal, multiple path construction | Increased risk of incurring false positives
3 | Enhanced ICMP trace back | Back tracing based on ICMP messages from intermediate routers | Less time required for full attack path tracing | Changes in full path back tracing; poor scaling
4 | Advanced hash based IP trace back | 32-bit IP address hashing | Less computational, processing, routing and network overhead | Routers require changes; router packet space and processing issues
5 | Deterministic packet marking | Tracing information field in packet headers | Small number of packets required for tracing | Requires time synchronization and shared keys between routers that may be compromised; large packet headers
6 | Probabilistic packet marking | Probabilistically router-marked packets | More efficient than the deterministic marking approach | No overload management; intermediate processing overhead; large number of tracing packets; the probability-based approach does not guarantee tracing; less overload mitigation; more complexity
7 | Flexible deterministic packet marking | Tracing by mark recognition and address recovery mechanism | Fewer tracing packets, less computational overhead, high tracing success rates | Huge consumption of critical resources such as memory for tracing

Table 3. Comparison study of current IP tracing mechanisms
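As a concrete illustration of the first row of the table, the sketch below captures the packet-digest idea behind hash-based IP traceback: each router stores a short hash of the invariant part of every packet it forwards, and the victim can later ask routers whether they saw a given attack packet, reconstructing the path hop by hop. The digest width, the chosen fields and the use of a plain set instead of a Bloom filter are simplifying assumptions.

```python
# Sketch of hash-based IP traceback (packet digests); illustrative only.
# Each router hashes the invariant bytes of a forwarded packet (modelled here
# as a few header fields plus the first 8 payload bytes) and remembers the
# digest. A victim can later ask each router "did you forward this packet?"
# and walk the attack path hop by hop. Digest width and fields are assumptions.
import hashlib

def packet_digest(src, dst, proto, ident, payload):
    # Mutable fields (TTL, checksum) are excluded so the digest is path-invariant.
    invariant = f"{src}|{dst}|{proto}|{ident}|".encode() + payload[:8]
    return hashlib.sha256(invariant).digest()[:4]   # short 32-bit digest

class Router:
    def __init__(self, name):
        self.name = name
        self.digests = set()          # stand-in for a Bloom filter

    def forward(self, pkt):
        self.digests.add(packet_digest(*pkt))

    def saw(self, pkt):
        return packet_digest(*pkt) in self.digests

# Attack path r1 -> r2 -> r3 towards the victim.
r1, r2, r3 = Router("r1"), Router("r2"), Router("r3")
attack_pkt = ("10.0.0.7", "192.0.2.1", 17, 4242, b"JUNKJUNKJUNK")
for r in (r1, r2, r3):
    r.forward(attack_pkt)

# Victim-side traceback query: which routers forwarded this packet?
path = [r.name for r in (r3, r2, r1) if r.saw(attack_pkt)]
print("reconstructed path (victim outward):", path)
```

Replacing the set with a real Bloom filter gives the low storage requirement noted in the table, at the cost of a small false-positive probability.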


The latest studies of DDoS attack tools and their tracing mechanisms clearly report that the currently deployed tracing mechanisms suffer from the following issues [36]:
1. Scalability limitations.
2. Poor efficiency in the presence of legacy routers.
3. Large computational cost.
4. Large memory cost.
5. Poor efficiency in terms of resource consumption.

B. Miscellaneous Security Issues with Satellite Command and Control Stations
In addition to the above security issues with satellite communication networks, links and information systems, there is some documentation on other, parallel issues with satellite networks, particularly the security of satellite command systems and their countermeasures.
American Satellite Company (ASC) [47], in collaboration with the IES (Industry Executive Subcommittee) and other forums, worked on the issues with ground station security infrastructures, which previously had not been given enough importance to develop a sound security mechanism protecting the command and control stations, information systems and TT&C stations. To mitigate such threats to commercial satellite networks, ASC deployed a command link protection system, with the help of the Commercial Satellite Survivability (CSS) Task Force, based on the DES algorithm [47]. As an overview of the ASC security measures, the key measures included:
a) Command link protection system.
b) ASC satellite control systems.
c) ASC satellite command standardization.
d) ASC satellite command authorization.
e) Ground-based authentication system.
f) Robust key management mechanism.
g) A two-tier key (Master and Operation) approach to enhance the cryptographic security of keys.
h) Redundant, operationally featured authentication system.
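Item g), the two-tier key approach, can be illustrated with a small key-derivation sketch: a long-lived master key never travels over the link, while short-lived operation keys derived from it authenticate individual command uplinks. The use of HMAC-SHA-256 and the label format are assumptions made for illustration, not the ASC/CSS design.

```python
# Sketch of a two-tier (Master / Operation) key hierarchy; illustrative only.
# The long-lived master key stays in the ground station's key store, while
# short-lived operation keys, derived from it, are used to authenticate
# individual command uplinks. HMAC-SHA-256 as the derivation function and the
# label format are assumptions, not the ASC/CSS design.
import hmac
import hashlib
import os

MASTER_KEY = os.urandom(32)   # provisioned once, kept offline in practice

def derive_operation_key(master_key, spacecraft_id, epoch):
    """Derive a per-epoch operation key bound to one spacecraft."""
    label = f"opkey|{spacecraft_id}|{epoch}".encode()
    return hmac.new(master_key, label, hashlib.sha256).digest()

def authenticate_command(op_key, command):
    """Ground-based authentication tag appended to an uplinked command."""
    return hmac.new(op_key, command, hashlib.sha256).hexdigest()

op_key = derive_operation_key(MASTER_KEY, "SAT-1", epoch="2014-11")
tag = authenticate_command(op_key, b"SET_TRANSPONDER_GAIN 3")
print("command MAC:", tag)
```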

CONCLUSION
Satellite communication is becoming more and more popular in modern communication systems. From commercial usage to military intelligence purposes, satellite-based hybrid communication network architecture is becoming the ultimate future of communication engineering. But with advancements in electronics and communication technologies, the limitations of this system, such as long delays, the sensitivity of the link and the system to atmospheric effects, and long link spans, are making it more vulnerable to hacking and rogue communication over legitimate links. Hence, over time a number of security issues have arisen in the satellite communication network infrastructure, and these issues need more detailed study, as this survey paper has attempted with a brief literature review and a pinpoint study of a few vital security loopholes in satellite communication systems.

Future Work & Recommendations
"As the cyber security landscape continues to change with each new wave of attacks, DoS and DDoS attacks are changing as well and will continue to target organizations with more gusto than ever before," said Avi Chesla, chief technology officer of Radware. Radware's 2011 Global Application and Network Security Report states that in the last two years major security attacks on network infrastructure (both wired and wireless, including satellite communication) were not only volume based; the most threatening ones also included low-volume, slow attacks. It recommends that, to gauge the security attacks on the information systems and network infrastructure of satellite command and control stations, such as DoS and DDoS attacks, precise measures of attack size, attack type and attack frequency must be collected in order to devise a correct countermeasure.
More research on such topics is therefore required to optimize security solutions for the rapidly mounting denial-of-service attacks, especially for delay-, power- and resource-limited communication networks.
The issues with the IPsec and SSL security protocols, which are the IETF-adopted protocols, need to be optimized further, removing the design complexity of DSSL and the increased byte overhead of ML-IPsec, to make them ideal security protocols for the upcoming challenges in both satellite and terrestrial network infrastructures.
Similarly, a robust key exchange mechanism over the satellite link is still an open frontier for developers and security experts, since a recent study reveals a vulnerability of RSA security: RSA key extraction has been performed using acoustic cryptanalysis [48], putting one of the advanced key encryption mechanisms under threat of being breached. So new security issues, in addition to previous ones, are born from such sophisticated attacks on encryption mechanisms.

REFERENCES:
[1] Mihael Mohorcic, Ales Svigelj, Gorazd Kandus and Markus Werner, "Adaptive Routing for Packet Oriented Intersatellite Link Networks: Performance in Various Traffic Scenarios", IEEE Transactions on Wireless Communications, vol. 1, no. 4, pp. 808-818, October 2002.
[2]Zhong Yantao and Ma Jianfeng, A Highly Secure Identity-Based Authenticated Key-Exchange Protocol for Satellite
Communication, Journal Of Communications And Networks, Vol. 12, No. 6, December 2010.
[3] ETS TR 102 676 V1.1.1: Satellite Earth Stations and Systems (SES);Broadband SAtellite Multimedia (BSM); Performance
Enhancing Proxies (PEPs), November 2009

[4]Carlo Caini, Rosario Firrincieli, Daniele Lacamera: PEPsal: a Performance Enhancing Proxy for TCP satellite connections,
Paper, July 2006
[5]Cruickshank, H, Iyengar, S, Howarth, MP and Sun. Z, Securing satellite communications, IEE Military Satellite Communications
Seminar, IEE Savoy Place, London, October 2002
[6] J. Warner and R. Johnston, "A simple demonstration that the global positioning system (GPS) is vulnerable to spoofing," Journal of Security Administration, pp. 19-28, 2002.
[7] A. Noubir and L. von Allman, "Security Issues in Internet Protocols over Satellite," Proc. IEEE VTC '99 Fall, Amsterdam, The Netherlands, Sept. 19-22, 1999.
[8] Y. Challal, H. Bettahar, A. Bouabdallah, A Taxanomy of Multicast Data Origin Authentication: Issues and Solutions, IEEE
Communications Surveys and Tutorials, Vol. 6, No.3, pp. 34-57, Oct. 2004.
[9] A. Perrig, D. Song, J.D. Tygar, ELK , a New Protocol for Efficient Large-Group Key Distribution, Proc. IEEE Symp. on
Security and Privacy, pp. 247-262, May 2001.
[10] M. Arslan and F. Alagoz, "Security issues and performance study of key management techniques over satellite links," in 11th International Workshop on Computer-Aided Modeling, Analysis and Design of Communication Links and Networks, 2006.
[11] Solutions for securing broadband satellite communication Wolfgang Fritsche Internet Competence Centre IABG Ottobrunn,
Germany Fritsche@iabg.de
[12] C. Caini and R. Firrincieli, A New Transport Protocol Proposal for Internet via Satellite: the TCP Hybla, in Proc. ESA ASMS
2003, Frascati, Italy, Jul. 2003, .vol. SP-54.
[13] Carlo Caini and Rosario Firrincieli, "TCP Hybla: a TCP enhancement for heterogeneous networks," Int. J. Satell. Commun. Network., 2004; 22:547-566 (DOI: 10.1002/sat.799).
[14] T. Dierks, E. Rescorla, The Transport Layer Security (TLS) Protocol Version 1.2, IETF RFC 5246, August 2008
[15] S. Kent, K. Seo, Security Architecture for the Internet Protocol, ietf RFC 4301, December 2005
[16]R. Fox, TCP Big Window and Nak Options, IETF RFC 1106, June 1989
[17] M. Mathis, J. Mahdavi, S. Floyd, A. Romanow, TCP Selective Acknowledgment Options, IETF RFC 2018, October 1996
[18] S. Kent, IP Encapsulating Security Payload (ESP), IETF RFC 4303, December 2005
[19] S. Kent, Extended Sequence Number (ESN) Addendum to IPsec Domain of Interpretation (DOI) for Internet Security
Association and Key Management Protocol (ISAKMP), IETF RFC 4304, December 2005
[20] K. Nichols, S. Blake, F. Baker, and D. Black, Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6
Headers, RFC 2474, December 1998
[21] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, An Architecture for Differentiated Services, IETF RFC
2475, December 1998
[22] V. Jacobson, K. Nichols, and K. Poduri, An Expedited Forwarding PHB,IETF RFC 2598, June 1999
[23] J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski, Assured Forwarding PHB Group, IETF RFC 2597, June 1999
[24] Roy-Chowdhury A., Baras J. S., Hadjitheodosiou M., Papademetriou S., "Security Issues In Hybrid Networks With a Satellite
Component", IEEE Wireless Communications, December 2005
[25] A. Shamir, "Identity-based cryptosystems and signature schemes," in Proc. Advances in Cryptology-Crypto, Berlin: Springer-Verlag, pp. 47-53, 1984.
[26] M. Girault and J. C. Pailles, "An identity-based scheme providing zero knowledge authentication and authenticated key exchange," in Proc. European Symposium on Research in Computer Security, pp. 173-184, Oct. 1990.
[27] C. Gunther, "An identity-based key exchange protocol," in Proc. EUROCRYPT, pp. 29-37, 1989.
[28] N. McCullagh and P. S. L. M. Barreto, "A new two-party identity-based authenticated key agreement," in Proc. CT-RSA, pp. 262-274, 2005.
[29] N. P. Smart, "Identity-based authenticated key agreement protocol based on Weil pairing," IET Electron. Lett., vol. 38, no. 13, pp. 630-632, 2002.
[30] M. Bellare, R. Canetti, and H. Krawczyk, "A modular approach to the design and analysis of authentication and key exchange protocols," in Proc. ACM Symposium on Theory of Computing, 1998, pp. 419-428.
[31] W. Diffie and M. Hellman, "New directions in cryptography," IEEE Transactions on Information Theory, vol. 22, no. 6, pp. 644-654, 1976.
[32] R. Canetti and H. Krawczyk, "Analysis of key-exchange protocols and their use for building secure channels," Lecture Notes in Computer Science, Springer-Verlag, vol. 2045, pp. 453-474, 2001.
[33] M. Bellare and P. Rogaway, "Entity authentication and key distribution," in Proc. CRYPTO, 1993, pp. 232-249.
[34] B. LaMacchia, K. Lauter, and A. Mityagin, "Stronger security of authenticated key exchange," Lecture Notes in Computer Science, vol. 4784, Heidelberg: Springer, pp. 1-16, 2007.
[35] L. Chen and C. Kudla, "Identity based authenticated key agreement protocols from pairings," in Proc. IEEE Computer Security Foundations Workshop, 2003, pp. 219-233.
[36] Arun Raj Kumar, P. and S. Selvakumar,Distributed Denial-of-Service (DDoS) Threat in Collaborative Environment- A Survey
on DDoS Attack Tools and Traceback Mechanisms, 2009 IEEE International Advance Computing Conference (IACC 2009) Patiala,
India, 6-7 March 2009
[37]Arbor Networks, "Worldwide Infrastructure Security Report", Volume IV, October 2008.
[38]Stephen M. Specht and Ruby B. Lee, "Distributed Denial of Service: Taxonomies of Attacks, Tools, and Countermeasures",
Proceedings of 7th International Conference on parallel and Distributed computing Systems, 2004 International Workshop on Security
in Parallel and Distributed Systems, pp.543-550, September 2004.
[39] P. J. Criscuolo, "Distributed Denial of Service: TrinOO, Tribe Flood Network, Tribe Flood Network 2000, and Stacheldraht," CIAC-2319, Department of Energy Computer Incident Advisory Capability (CIAC), UCRL-ID-136939, Rev. 1, Lawrence Livermore National Laboratory, February 14, 2000.
[40] D. Dittrich, The Tribe Flood Network Distributed Denial of Service attack tool, University of Washington, October 21, 1999.
[41] J. Barlow, W. Thrower, TFN2K an analysis, 2000, Available from http://security.royans.net/info/posts/bugtraqddos2.shtml>.
[42] D. Dittrich, "The 'Stacheldraht' Distributed Denial of Service attack tool," University of Washington, December 1999.
[43] D. Dittrich, G. Weaver, S. Dietrich, N. Long, "The 'mstream' Distributed Denial of Service attack tool," May 2000.
[44] S. Dietrich, N. Long, D. Dittrich, Analyzing Distributed Denial of Service tools: the Shaft Case, in: Proceedings of the 14th
Systems Administration Conference (LISA 2000), New Orleans, LA, USA, December 3-8, 2000, pp. 329-339.
[45] B. Hancock, Trinity v3, a DDoS tool, hits the streets, Computers Security 19 (7) (2000) 574.
[46] CERT Coordination Center, Carnegie Mellon Software Engineering Institute, CERT Advisory CA-2001- 20 Continuing threats to
home users, 23, 2001
[47] Mr. Otto W. Hoernig, Jr. and Dr. Des R. Sood, "Command system protection for commercial communication satellites," American Satellite Company, 1801 Research Boulevard, Rockville, Maryland 20850-3186.
[48] Daniel Genkin, Technion and Tel Aviv University, Adi Shamir, Weizmann Institute of Science, Eran Tromer, Tel Aviv
University, RSA Key Extraction via Low-Bandwidth Acoustic Cryptanalysis December 18, 2013
[49] Y. Zhang, "A Multilayer IP Security Protocol for TCP Performance Enhancement in Wireless Networks," IEEE JSAC, vol. 22, no. 4, 2004, pp. 767-776.
[50] M. Karir and J. Baras, LES: Layered Encryption Security, Proc. ICN04, Guadeloupe (French Caribbean), Mar. 2004

Adaptive Multicast for Delay Tolerant Networks using the AOMDV Protocol

Shilpa Gangele, Prof. Nilesh Bodne
Scholar, Rastrasant Tukodoji Maharaj Nagpur University
Faculty, Rastrasant Tukodoji Maharaj Nagpur University
Email: shilpagangele@gmail.com

Abstract: Many Delay Tolerant Network (DTN) applications require multicast services, making efficient multicast communication support important. In this paper we propose adaptive multicast in delay tolerant networks to cope with the intermittent link connectivity and frequent partitioning in DTNs. Our main aim is to minimize the packet drop problem, which we analyse using NS-2. To minimize packet drops we use a low duty cycle: packets are addressed to a set of potential receivers and forwarded by the neighbour that wakes up first and receives the packet successfully, which reduces delay and energy consumption by utilizing the potential of all neighbours.

Keywords: Delay tolerant network, multicast, low duty cycle, adaptive, AOMDV, NS-2.

INTRODUCTION
Delay tolerant networks (DTNs) are a class of emerging networks that experience frequent and long-duration partitions [1,2]. There is no end-to-end path between some or all nodes in a DTN. These networks have a variety of applications in situations that include crisis environments like emergency response and military battlefields, deep-space communication, vehicular communication, and non-interactive Internet access in rural areas [3,4,5,6,7,8,9,10,11]. Multicast service supports the distribution of data to a group of users. Many potential DTN applications operate in a group-based manner and require efficient network support for group communication. For example, in a disaster recovery scene it is vital to disseminate information about victims and potential hazards among rescue workers. In a battlefield, soldiers in a squad need to inform each other about their surrounding environment. Although group communication can be implemented by sending a separate unicast packet to each user, this approach suffers from poor performance. The situation is especially acute in DTNs, where resources such as connectivity among nodes, available bandwidth and storage are generally severely limited. Thus efficient multicast services are necessary for supporting these applications. Multicasting in the Internet and mobile ad hoc networks (MANETs) has been studied extensively in the past. However, due to the unique characteristic of frequent partitioning in DTNs, multicasting in DTNs is a considerably different and challenging problem. First, it is difficult to maintain a connected multicast structure (mesh or tree) during the lifetime of a multicast session. Second, data transmissions suffer from many failures and large delays due to the disruptions caused by intermittent and opportunistic links among nodes. Third, the traditional approaches assume that membership changes during the multicast session are rare and can be ignored, whereas in DTN environments such changes are the norm rather than the exception.
In this paper we aim to minimize the problem of packet dropping. In a DTN there are many dynamic nodes and intermittently connected scenarios; the major feature is that there is no end-to-end path. To increase the delivery ratio when there is no end-to-end path, the most common method is the SCF (Store-Carry-Forward) mechanism [12], which means that each node should have a buffer queue to store messages. Many routing methods have been developed, such as single-copy [13], multi-copy [13] and grid-based routing [14].
For longer delay tolerance, a node may have to wait a long time until a message can be transferred to a more suitable node or directly to the destination node. The size of the buffer queue is often not enough to store all the messages; therefore, some buffered messages must be discarded.
Fig.1 Three types of packet drop problem


ISSUES
Ad hoc On-demand Distance Vector routing (AODV):
Ad hoc On-demand Distance Vector routing (AODV) is a stateless on-demand routing protocol classified among the reactive protocols. The operation of the protocol is divided into two functions: route discovery and route maintenance. In ad hoc routing, when a route to some destination is needed, the protocol starts route discovery. The source node sends a route request message to its neighbours; if those nodes do not have any information about the destination node, they forward the message to all of their neighbours, and so on. If any neighbouring node has information about the destination node, it sends a route reply message to the initiator of the route request. Through this process a path is recorded at the intermediate nodes; this path identifies the route and is called the reverse path. Since each node forwards the route request message to all of its neighbours, more than one copy of the original route request can arrive at a node. A unique id is assigned when a route request message is created; when a node receives a request, it checks this id and the address of the initiator and discards the message if it has already processed that request. A node that has information about the path to the destination sends a route reply message to the neighbour from which it received the route request, and that neighbour does the same, which is possible because of the reverse path. The route reply message thus travels back along the reverse path, and when it reaches the initiator the route is ready and the initiator can start sending data packets.
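A minimal sketch of this discovery process is shown below: the route request floods outward, each node records the neighbour it first heard the request from, and the reply walks that reverse path back to the source. Sequence numbers, timers and route maintenance are deliberately omitted, so this illustrates the idea rather than the full AODV protocol.

```python
# Simplified illustration of AODV-style route discovery: a route request (RREQ)
# floods outward from the source, every node remembers which neighbour it first
# heard the request from (the reverse path), and the route reply (RREP) walks
# that reverse path back to the source. Sequence numbers, RREQ ids and route
# maintenance are omitted; the `reverse` table doubles as the duplicate filter.
from collections import deque

def discover_route(graph, source, destination):
    reverse = {source: None}              # node -> neighbour towards the source
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == destination:
            break
        for neighbour in graph[node]:
            if neighbour not in reverse:  # first copy of the RREQ seen here
                reverse[neighbour] = node
                queue.append(neighbour)
    if destination not in reverse:
        return None                       # destination unreachable
    # The RREP travels back along the recorded reverse path.
    path, node = [], destination
    while node is not None:
        path.append(node)
        node = reverse[node]
    return list(reversed(path))           # forward route: source -> destination

topology = {                              # hypothetical ad hoc topology
    "S": ["A", "B"], "A": ["S", "C"], "B": ["S", "C"],
    "C": ["A", "B", "D"], "D": ["C"],
}
print(discover_route(topology, "S", "D"))  # e.g. ['S', 'A', 'C', 'D']
```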
Dynamic Source Routing (DSR):
Dynamic Source Routing is a pure on-demand routing protocol [17], where the route is calculated only when it is required. It is designed for use in multihop ad hoc networks of mobile nodes. DSR allows the network to be self-organized and self-configured without any central administration or network infrastructure. Unlike AODV it uses no periodic routing messages, thus reducing bandwidth overhead, conserving battery power and avoiding large routing updates. It needs only the support of the MAC layer to identify link failure. DSR uses source routing, where the whole route is carried with each packet as overhead [16].
Destination-Sequenced Distance Vector routing (DSDV):
DSDV is a table-driven routing scheme for ad hoc mobile networks based on the Bellman-Ford algorithm. The improvement made to the Bellman-Ford algorithm is freedom from loops in the routing table, achieved by using sequence numbers [18]. Each node acts as a router, maintains a routing table and exchanges periodic routing updates, even if the routes are not needed. A sequence number is associated with each route or path to the destination to prevent routing loops. Routing updates are exchanged even when the network is idle, which uses up battery and network bandwidth; thus DSDV is not preferable for highly dynamic networks. In DSR the whole route is carried with the message as overhead, whereas in AODV a routing table is maintained, so it is not necessary to send the whole route with the message during the route discovery process.

Proposed Method for Implementation of Adaptive Multicast for DTN using a Low Duty Cycle
Ad hoc On-demand Multipath Distance Vector Routing (AOMDV)
Ad hoc On-demand Multipath Distance Vector Routing is an extension of AODV. AOMDV employs the multiple loop-free and link-disjoint path technique: only disjoint nodes are considered in all the paths, thereby achieving path disjointness. For route discovery, route request packets are propagated throughout the network, thereby establishing multiple paths at the destination node and at the intermediate nodes. Multiple loop-free paths are achieved using the advertised hop count method at each node. This advertised hop count has to be maintained at each node in the route table entry, and the route table entry at each node also contains a list of next hops along with the corresponding hop counts. Every node maintains an advertised hop count for the destination, defined as the maximum hop count over all the paths; route advertisements for the destination are sent using this hop count. An alternate path to the destination is accepted by a node only if its hop count is less than the advertised hop count for the destination.
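The acceptance rule can be sketched as a small route-table update: a node keeps several next hops per destination and accepts an alternate path only when its hop count is below the hop count the node has already advertised. This is a simplified illustration under those assumptions, not the complete AOMDV update procedure (sequence numbers and route timeouts are ignored).

```python
# Minimal sketch of AOMDV's advertised-hop-count acceptance rule (not the full
# protocol). A node keeps multiple next hops per destination and accepts an
# alternate path only if its hop count is below the hop count the node has
# already advertised for that destination, which preserves loop freedom.
class RouteEntry:
    def __init__(self):
        self.advertised_hop_count = None   # max hop count over accepted paths
        self.paths = []                    # list of (next_hop, hop_count)

    def offer_path(self, next_hop, hop_count):
        if self.advertised_hop_count is None:
            # First path learned: it fixes the advertised hop count.
            self.advertised_hop_count = hop_count
            self.paths.append((next_hop, hop_count))
            return True
        if hop_count < self.advertised_hop_count and \
           next_hop not in (nh for nh, _ in self.paths):
            # Alternate loop-free path through a new next hop is accepted.
            self.paths.append((next_hop, hop_count))
            return True
        return False                       # would risk a routing loop

entry = RouteEntry()
print(entry.offer_path("B", 4))   # True  - first path, advertised count = 4
print(entry.offer_path("C", 3))   # True  - shorter alternate path accepted
print(entry.offer_path("D", 5))   # False - exceeds the advertised hop count
```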
In this approach our aim is to minimize the dropping of packets as they are delivered from the source to the destination. First we perform multicasting in the network, using both multicast and broadcast for packet delivery. We consider one node as a broadcasting node and all other nodes as receiver nodes, and use ad hoc on-demand multipath distance vector routing, which serves both multicast and broadcast, together with a reduced duty cycle for packets so as to achieve fewer packet drops. Because of this we also minimize cost: since the delay tolerant network (DTN) now has less delay, we do not have to add extra devices to reduce delay, so cost is automatically reduced.
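The benefit of letting the first neighbour that wakes up forward the packet can be illustrated with a toy calculation: with a single fixed relay the sender waits for that one node's next wake-up, whereas with a set of candidate receivers the wait is the minimum over all their wake-up times. The uniform wake-up model below is an assumption made for illustration and is not an NS-2 result.

```python
# Toy illustration of the low-duty-cycle forwarding idea: a packet is addressed
# to a set of potential receivers, and the neighbour that wakes up first takes
# over forwarding, which shortens the wait compared with a single fixed relay.
# Wake-up times are random assumptions, not NS-2 results.
import random

random.seed(1)

def next_wakeup(period, now):
    """Time of a node's next wake-up under a periodic duty cycle."""
    return now + random.uniform(0.0, period)

def forward_delay(candidate_relays, duty_period, now=0.0):
    # Single-relay DTN: we must wait for one specific neighbour to wake.
    single = next_wakeup(duty_period, now) - now
    # Adaptive multicast: any of the candidate relays may take the packet,
    # so the delay is the minimum over their wake-up times.
    adaptive = min(next_wakeup(duty_period, now) for _ in candidate_relays) - now
    return single, adaptive

trials = [forward_delay(candidate_relays=range(5), duty_period=1.0)
          for _ in range(10000)]
avg_single = sum(s for s, _ in trials) / len(trials)
avg_adaptive = sum(a for _, a in trials) / len(trials)
print(f"average per-hop wait, single relay:  {avg_single:.3f} s")
print(f"average per-hop wait, first-to-wake: {avg_adaptive:.3f} s")
```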

Fig.2 Flow diagram for adaptive multicast for DTN (start, normal DTN, apply the low duty cycle broadcast algorithm, multicast and broadcast DTN, stop)

In this section we describe the tools and methodology used in our paper for analysing performance, i.e. the simulation tool, the simulation setup (traffic scenario, mobility model) and the performance metrics used; finally, the performance of the protocols is represented using Excel graphs. The simulation tool used for analysis is NS-2, which is highly preferred by research communities. NS is a discrete event simulator targeted at networking research; it provides substantial support for simulation of TCP, routing and multicast protocols over wired and wireless (local and satellite) networks [15]. NS2 is an object-oriented simulator written in C++, with an OTcl interpreter as a front end. This means that most of the simulation scripts are created in Tcl (Tool Command Language); if components have to be developed for NS2, then both Tcl and C++ have to be used. The flow diagram given in figure 4 shows the complete working of NS2 for the analysis.
Simulation Setup

Parameter | Value
NS version | ns-allinone-2.34
Number of nodes | 10 wireless nodes
Traffic | CBR (Constant Bit Rate)
CBR packet size | 512 bytes
Simulation area size | 300 x 300 m
Mobility model | Random Waypoint mobility
Packet size | 1000
Packet interval | 0.01 s
Antenna model | Omni antenna
Radio propagation model | Two-ray ground
Fig. 3 shows the data transfer from source to destination; the dropping packets in the figure indicate the packets lost. Figs. 4 and 5 show the graphs of delay and throughput when the packet size is 1000 bytes and the interval between packet transmissions is 0.01 s. From the graphs we clearly see that the delay becomes larger when no duty cycle is applied; when the low duty cycle is applied the delay is lower and the throughput is also nearly equal to one hundred percent.

Fig.3 Simulation showing packet transfer

Fig.4 Graph of Delay and Throughput

Fig.5 Graph of Optimized Delay and Throughput

CONCLUSION
This paper describes the design of adaptive multicast for delay tolerant networks and evaluates the performance of a normal DTN against a DTN with a low duty cycle. The comparison is based on delay, throughput and energy. The design techniques used in this paper achieve low delay, low cost and better throughput. For simulation we used the network simulator NS-2.

REFERENCES:
[1] A. A. Hasson, R. Fletcher, and A. Pentland. DakNet:A road to universal broadband connectivity.
Wireless Internet UN ICT Conference Case Study, 2003.
[2] D. Johnson and D. Maltz. Dynamic source routing in ad-hoc wireless networks. In ACM SIGCOMM, August 1996.
[3] L. Aguilar. Datagram routing for internet multicasting. In Acmsigcomm84, 1984.
[4] A. Chaintreau, P. Hui, J. Crowcroft, C. Diot, R. Gass, and J. Scott.Pocket switched networks: Real-world
mobility and its consequences for opportunistic forwarding. Technical Report UCAM-CL-TR-617,
University of Cambridge, Computer Lab, February 2005.
905

www.ijergs.org

International Journal of Engineering Research and General Science Volume 2, Issue 6, October-November, 2014
ISSN 2091-2730

[5] Q. Huang, C. Lu, and G.-C. Roman. Spatiotemporal multicast in sensor networks. In ACM SenSys 2003,
Pages 205-217, 2003.
[6] Lee, M. Gerla, and C. Chiang. On-demand multicast routing protocol (ODMRP) for ad hoc networks. Internet draft, June 1999.
[7] H. Wu, R. Fujimoto, and G. Riley. Analytical models for data dissemination in vehicle-to-vehicle networks. In IEEE
VTC2004/Fall.
[8] W. Zhao, M. Ammar, and E. Zegura. A message ferrying approachfor data delivery in sparse mobile ad hoc networks.
In ACM MobiHoc2004, Tokyo Japan, 2004.
[9] W. Zhao, M. Ammar, and E. Zegura. Multicasting in delay tolerant networks: Semantic models and routing algorithms.Technical
report, College of Computing, Georgia Institute of Technology, 2005.
[10] DARPA Disruption Tolerant Networking Program:http://www.darpa.mil/ato/solicit/dtn.
[11] J.Moy.Multicast extensions to OSPF. RFC 1584, Mar. 1994. [17] Network simulator http://www.isi.edu/nsnam/ns/.
[12] Kevin Fall, "A delay-tolerant network architecture for challenged internets," ACM, SIGCOMM,pp.27-34, August 2003.
[13] T. Spyropoulos, K. Psounis, and C. S. Raghavendra, "Efficient routing in intermittently connected mobile networks: The single-copy case," IEEE/ACM Transactions on Networking (TON), vol. 16, no. 1, pp. 63-76, Feb. 2008.
[14] Zhou Tao, Xu Hong-bing, Liu Ming, "Grid-Based Selective Replication Adaptive Data Delivery Scheme for Delay Tolerant
Sensor Networks," JDCTA, Vol. 6, No. 1, pp. 57-66, 2012.
[15] Nsnam web pages: http://www.isi.edu/nsnam/ns/
[16] Murizah Kassim Ruhani Ab Rahman, Mariamah Ismail Cik Ku Haroswati Che Ku Yahaya, Performance Analysis of Routing
Protocol in WiMAX Network, IEEE International Conference on System Engineering and Technology (ICSET), 2011.
[17] Yuan Xue, Baochun Li, and Klara Nahrstedt, "Optimal Resource Allocation in Wireless Ad Hoc Networks: A Price-Based Approach," 2006.
[18] C.E. Perkins and P.Bhagwat,Highly Dynamic Destination Sequenced Distance-Vector Routing(DSDV) for Mobile
Computers,IGCOMM,London,UK,August 1994

Survey of Social Tagging with Assistance of Geo-tags and Personalization

Prasanna S. Wadekar
M.E. Computer-II, Vidya Pratishthan's College of Engineering, Baramati
prasannawadekar90@gmail.com
Abstract - Social websites such as Flickr, Zoomer and Picasa are photo sharing websites that permit users to share their multimedia data over social networks. Flickr encourages sharing photos with tags, joining groups of interest, contacting other users with similar interests as friends, as well as expressing preferences on photos by tagging, commenting, sharing and annotating. A user's social tagging includes metadata in the form of keywords that reflect the user's preference over those photos, and this metadata can be utilized to mine user preferences. Social tags are used in various ways by many recommender systems to predict a searcher's preference on returned photos for personalized search. Geotagging of a photo is the process in which a photo is marked with the geographical identification of the place where it was taken. Geotagging can help users find a wide variety of location-specific information. In personalized tag recommendation, tags that are relevant to the user's query are retrieved based upon the user's interest.
Keywords: Personalization, Geo-tags, Co-occurrence pairs, Subspace learning.

INTRODUCTION
Social tagging is the process of assigning the important words that are relevant to multimedia data. Humans can assign tags to photos, but it takes time. Tag recommendation encourages users to add more tags while bridging the semantic gap between human perception and the features of a media entity, which offers an achievable solution for Content Based Image Retrieval (CBIR). Many tag recommendation strategies have worked on the connection between tags and photos. Users like to create photo albums with respect to the places they have visited; this task can be achieved by adding geo-tags to photos. Geo-tagging is the process of adding information to various media objects in the form of metadata such as longitude, latitude, city name, etc. The same tags can be recommended to visually similar photos of a user, but if the geo preference of the user is considered then photos that are relevant to the location will be recommended. Social tagging describes multimedia data by its meaning, from its metadata, and by the syntax of that multimedia data, such as feature patterns. In traditional systems tagging can be done in two ways, manually and automatically.
The quality of tagging is reduced with human-based tag assignment. M. Wang, B. Ni et al. [1] proposed three techniques for tagging that improve manual and automatic tagging:
1) Tagging with data selection and organization: a manual process for tag selection from data.
2) Tag recommendation.
3) Tag processing: the process of refining tags or adding new tags.
Only about half of the tags offered by Flickr users are truly related to the photos. Second, there is a deficiency of an optimal ranking strategy. Take Flickr as an example: there are two ranking preferences for tag-based social image exploration, time-based ranking and interestingness-based ranking. The time-based ranking method ranks images based on the uploading time of each image, and the interestingness-based ranking method ranks images based on each image's interestingness in Flickr. These two methods do not take the visual content and tags of images into consideration, so they give irrelevant results. Therefore, tag-oriented image retrieval is used to provide highly relevant results.

LITERATURE SURVEY
A] Motivation for annotation in mobile and online media
M. Ames et al. [2], in "Why we tag: motivations for annotation in mobile and online media", note that Flickr is a popular web-based photo sharing system. It is useful when organization and retrieval of photos is an important task, and it allows a user to discover other users by sharing photos with other community members or groups. The authors motivated users to assign tags and described the features provided by Flickr, such as the privacy setting, which allows a user to make a photo available to others as either public or private. The authors explained how tags, title, comments and description can be assigned through the interface provided by Flickr.
The authors also explained the tag-specific retrieval mechanism and discussed motivations to tag with respect to function and sociality; a table describes the motivations for tagging.
ZoneTag is a camera phone application used to upload photos taken by the phone. It can capture, annotate, store and share photos from the phone. Tags are suggested based on contexts that are pre-fetched from the ZoneTag server. It has a feature to provide a photo with a privacy setting, i.e. public or private access. It considers location, time and context while suggesting tags.
Disadvantages:
1) Users organize their memory geographically by reporting photos with tags related to the places where those photos were taken.
2) Suggested non-obvious tags may be confusing to users.
3) Users are inclined to attach a tag even if it is irrelevant.
4) User preference is not considered.
B] Tag Recommendation by Concept Matching
A. Sun et al. [3], in "Social Image Tag Recommendation by Concept Matching", describe tag recommendation as a three-step process: 1) tag relationship graph construction, 2) concept detection, and 3) actual tag recommendation.
1) Tag relationship graph construction:
- First, a set of candidate tags is selected.
- The tag relationship graph (TRG) is constructed from tag co-occurrence pairs; the co-occurrence of a pair of tags denotes the number of images annotated by both tags.
- A central node is required to create the TRG.
- The tags that co-occur most with the central node are included as first-iteration tags of the central node. Tags that are related to at least two first-iteration tags are then included in the TRG as second-iteration tags of the central node.
2) Concept detection: removal of the central node is the key step in detecting concepts.
- If the central node is removed, the relation between the central node and the first-iteration tags should remain as it is.
- Concepts are detected by solving a graph cut problem.
3) Actual tag recommendation:
- Compute the recommendation score of each candidate tag with respect to the user-given tags of an image.
- Sort the candidate tags in descending order of their scores.
- Suggest the top-ranked tags to the Flickr user.
- Calculating the score per concept (Tc) and including cosine similarity leads to the matching of tags.
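A small sketch of this scoring step is given below: each candidate tag is scored by its co-occurrence with the user-given tags, normalised cosine-style by tag frequencies, and the candidates are then sorted by score. The toy counts and the exact normalisation are illustrative assumptions; the concept-restricted weighting of [3] is not reproduced here.

```python
# Sketch of the candidate-tag scoring step: each candidate tag is scored by how
# strongly it co-occurs with the user-given tags, normalised cosine-style by
# tag frequencies. The exact weighting in [3] differs; this is illustrative.
import math

# co_occurrence[a][b]: number of photos tagged with both a and b (toy counts).
co_occurrence = {
    "beach": {"sea": 40, "sunset": 25, "car": 1},
    "sunset": {"sea": 30, "beach": 25, "sky": 20},
}
tag_frequency = {"beach": 120, "sunset": 90, "sea": 150, "sky": 200, "car": 80}

def score(candidate, user_tags):
    total = 0.0
    for t in user_tags:
        co = co_occurrence.get(t, {}).get(candidate, 0)
        denom = math.sqrt(tag_frequency[t] * tag_frequency[candidate])
        total += co / denom if denom else 0.0
    return total

user_tags = ["beach", "sunset"]
candidates = {"sea", "sky", "car"}
ranked = sorted(candidates, key=lambda c: score(c, user_tags), reverse=True)
print(ranked)   # candidate tags sorted by descending recommendation score
```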

Advantages:
1) It enables customized matching score computation.
2) It boosts the scalability and efficiency of the tag recommendation process.
Disadvantages:
1) Geo-specific information is not considered.
C] Tag recommendation based on collective knowledge
B. Sigurbjörnsson et al. [4], in "Flickr tag recommendation based on collective knowledge", start from a photo specified with user-defined tags. An ordered list of m candidate tags based on tag co-occurrence is derived for each user-defined tag. The lists of candidate tags serve as input for tag aggregation and ranking, which produces a ranked list of n recommended tags.
It is a two-step process: 1) tag co-occurrence, and 2) tag aggregation and promotion.
1) Tag co-occurrence:
- Collective knowledge is generated from user-provided tags.
- Two measures are used to calculate the co-occurrence between two tags: a symmetric measure and an asymmetric measure.
2) Tag aggregation and promotion: in tag aggregation the candidate lists are merged into a unified ranking. There are two methods, voting and summing.
- Vote: a list of suggested tags is obtained by sorting the candidate tags on the number of votes they receive.
- Sum: the union of all tag lists is taken and the co-occurrence values of each tag are added up.
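The sketch below illustrates these two measures and the two aggregation strategies on toy data: the symmetric measure is a Jaccard coefficient, the asymmetric measure a conditional co-occurrence, and the per-tag candidate lists are merged either by voting or by summing. The promotion and damping factors of [4] are omitted.

```python
# Sketch of collective-knowledge tag recommendation [4]: co-occurrence between
# tags is measured symmetrically (Jaccard) or asymmetrically (conditional), the
# top co-occurring candidates per user tag are collected, and the lists are
# aggregated by voting or summing. Promotion factors from [4] are omitted.
from collections import Counter

# photos: each photo is the set of tags attached to it (toy data).
photos = [
    {"beach", "sea", "sunset"}, {"beach", "sea"}, {"sunset", "sky"},
    {"beach", "sunset", "sky"}, {"sea", "boat"},
]

def photos_with(tag):
    return [p for p in photos if tag in p]

def symmetric(a, b):        # Jaccard: |a and b| / |a or b|
    both = sum(1 for p in photos if a in p and b in p)
    either = sum(1 for p in photos if a in p or b in p)
    return both / either if either else 0.0

def asymmetric(a, b):       # P(b | a): |a and b| / |a|
    with_a = photos_with(a)
    return sum(1 for p in with_a if b in p) / len(with_a) if with_a else 0.0

def recommend(user_tags, k=3, aggregate="sum"):
    votes, sums = Counter(), Counter()
    for t in user_tags:
        candidates = {c for p in photos_with(t) for c in p} - set(user_tags)
        ranked = sorted(candidates, key=lambda c: asymmetric(t, c), reverse=True)[:k]
        for c in ranked:
            votes[c] += 1                   # "vote": one point per list
            sums[c] += asymmetric(t, c)     # "sum": add co-occurrence values
    table = sums if aggregate == "sum" else votes
    return [tag for tag, _ in table.most_common(k)]

print(recommend(["beach"], aggregate="vote"))
print(recommend(["beach", "sunset"], aggregate="sum"))
```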
Advantages:
1) It can handle the evolution of vocabulary.
2) It can recommend locations, objects and things.
Disadvantages:
1) User-provided tags are usually limited; one reason is that it is difficult, and often requires high mental focus, to come up with many words describing image or video content in a short moment.
2) It ignores user preferences, i.e. it is not personalized.
3) There is no evaluation setup for studying users.
4) Being less interactive, performance calculation is not accurate.
5) It requires crucial tuning parameters.
6) The system is expensive.
D] Georeferenced tag recommendation
A. Silva and B. Martins [5], in "Tag recommendation for georeferenced photos", annotate georeferenced photos with descriptive tags. The method exploits the redundancy over the huge number of annotations accessible at online sources for other georeferenced photos. Previous methods used heuristic approaches for integrating geospatial contextual information, but here supervised learning-to-rank methods are used to combine different estimators of tag relevance. Various estimators are used, such as adjacent images, different users, the number of visits made by different users to a particular photo, and the geospatial distance between the image and the users. Ranking techniques such as RankBoost, AdaRank, Coordinate, CombSUM and CombMNZ are employed.
Disadvantages:
1) It does not consider the concept of the image to improve visual search.
2) It ignores user preferences.
E] Personalized tag recommendation
N. Garg and I. Weber [6], in "Personalized, interactive tag recommendation for Flickr", propose a personalized tag recommendation idea that exploits the tagging history in a user's profile. The system suggests tags dynamically based on the user's previous tagging history; the user may select from the suggested list or simply ignore it when tagging a photo. It is a personalized approach that learns behaviour: for example, a user may tag "apple" as a fruit or as an electronic device such as a laptop, phone or tablet. It uses various algorithms such as Naive Bayes (local), TF-IDF (global), and "the better of two worlds" (a combination of the two).
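The "TF-IDF global" idea can be illustrated in a few lines: a tag is promoted when the current user applies it often but it is rare across all users, so each user's personal vocabulary outranks globally common tags. This is a generic TF-IDF sketch on toy tagging histories, not the exact scoring used in [6].

```python
# Generic TF-IDF sketch for personalized tag suggestion (the "local vs. global"
# idea discussed for [6], not its exact scoring): a tag is promoted when this
# user applies it often (term frequency) but it is rare across all users
# (inverse document frequency), so personal vocabulary outranks common tags.
import math
from collections import Counter

user_histories = {                       # toy tagging histories per user
    "alice": ["apple", "fruit", "apple", "macro", "garden"],
    "bob":   ["apple", "laptop", "keyboard", "apple", "macbook"],
    "carol": ["garden", "flower", "macro"],
}

def tfidf_scores(user):
    history = Counter(user_histories[user])
    n_users = len(user_histories)
    scores = {}
    for tag, tf in history.items():
        df = sum(1 for h in user_histories.values() if tag in h)
        scores[tag] = (tf / len(user_histories[user])) * math.log(n_users / df)
    return scores

# "apple" means fruit to alice and hardware to bob; the personalized scores
# rank each user's own, less common tags accordingly.
for user in ("alice", "bob"):
    ranked = sorted(tfidf_scores(user).items(), key=lambda kv: -kv[1])
    print(user, ranked[:3])
```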

Advantages:
1) It is a clean methodology that leads to conservative performance.
2) It shows the use of classical classification algorithms that are appropriate to this problem.
3) It introduces a new cost measure, which captures the effort of the whole tagging process.
4) It clearly identifies when purely local schemes can or cannot be improved by a global scheme.
5) It is less computationally complex than the collective knowledge approach.
6) It recommends tags dynamically.
Disadvantages:
Disadvantages: 909

www.ijergs.org

International Journal of Engineering Research and General Science Volume 2, Issue 6, October-November, 2014
ISSN 2091-2730

1) It concentrates only on the tags of the user and not on metadata, contextual information and content.
2) It does not consider geo-specific information.
3) It uses groups of images, so it does not use a classifier.
F] Tensor factorization and tag clustering for item recommendation
D. Rafailidis et al. [7], in "The TFC model: Tensor factorization and tag clustering for item recommendation in social tagging systems", propose a method that can handle very sparse data. It works on <user, image, tag, weight> tuples using low-order polynomials.
Steps of this approach:
1) Tag propagation by exploiting content: a relevance feedback mechanism is used to propagate tags between similar items.
2) Tag clustering is performed to find topics in the social network and the interests of users. It has three types: tripartite, adapted k-means, and innovative TF-IDF.
3) TF-based HOSVD: the cubic complexity of HOSVD is reduced by moving from the number of tags to the number of tag clusters. This step reveals latent associations between users, tags, topics and images. It requires tensor modelling of the dataset and HOSVD to produce the reconstructed tensor.
Steps:
- Initial construction of the 3-order tensor.
- Matrix unfolding of tensor A to create three new matrices.
- Apply SVD to each unfolded matrix.
- Construction of the core tensor S.
- Reconstruction of the tensor.
- Generation of item recommendations.
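These steps can be sketched with NumPy on a toy (user x item x tag-cluster) tensor: unfold along each mode, keep the leading left singular vectors, form the core tensor, reconstruct, and read recommendations from the reconstructed entries. The tensor size and the truncation ranks are toy assumptions, not the settings of the TFC model.

```python
# NumPy sketch of the HOSVD steps above on a toy (user x item x tag-cluster)
# tensor. Sizes and truncation ranks are toy assumptions, not the TFC model's.
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    moved = np.moveaxis(T, mode, 0)
    result = np.tensordot(M, moved, axes=(1, 0))
    return np.moveaxis(result, 0, mode)

rng = np.random.default_rng(0)
A = rng.random((4, 5, 3))                  # users x items x tag clusters
ranks = (3, 3, 2)                          # truncation ranks per mode

# Unfold along each mode and keep the leading left singular vectors.
U = [np.linalg.svd(unfold(A, m), full_matrices=False)[0][:, :ranks[m]]
     for m in range(3)]

# Core tensor S = A x1 U1^T x2 U2^T x3 U3^T.
S = A
for m in range(3):
    S = mode_dot(S, U[m].T, m)

# Reconstructed tensor reveals latent user-item-cluster associations.
A_hat = S
for m in range(3):
    A_hat = mode_dot(A_hat, U[m], m)

# Recommend, for user 0, the items with the highest reconstructed weight
# in their preferred tag cluster.
user, cluster = 0, 0
print(np.argsort(A_hat[user, :, cluster])[::-1])
```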

Advantages: 1) It handles Learning tag relevance problem, cold start problem, Sparsity problem.
2) It is collaborative-based approach that improves accuracy.
3) It uses relevance feedback mechanism.
Disadvantages: 1) It ignores geo-specific information.
2) It requires high space and time complexity.
G] Subspace learning with matrix factorization
J. Liu et al. [8], in "Personalized Geo-Specific Tag Recommendation for Photos on Social Websites", generate a latent subspace that combines the visual space and the textual space. Tags are recommended based on this embedded space, and content based image retrieval techniques are used to find and retrieve similar images, which are then recommended to the user. Tag recommendation uses both visual and textual similarity. The method recommends tags and photos to users based on their interest, profile history, image features and geo-specific interest.
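As a much simpler stand-in for the retrieval step (and explicitly not the subspace learning method of [8]), the sketch below concatenates normalised visual and textual feature vectors, filters candidate photos by the query's geo region, and recommends the tags of the nearest photos. All feature values and regions are toy assumptions.

```python
# Much simpler stand-in for the retrieval step of [8] (not its subspace
# learning): normalise and concatenate visual and textual feature vectors,
# filter by the query's geo region, and recommend the tags of the nearest
# photos. All feature values and regions below are toy assumptions.
import numpy as np

photos = [
    {"visual": [0.9, 0.1], "textual": [1, 0, 0], "region": "paris",
     "tags": ["eiffel", "night"]},
    {"visual": [0.8, 0.2], "textual": [1, 1, 0], "region": "paris",
     "tags": ["eiffel", "sunset"]},
    {"visual": [0.1, 0.9], "textual": [0, 0, 1], "region": "goa",
     "tags": ["beach", "sea"]},
]

def embed(p):
    v = np.asarray(p["visual"], dtype=float)
    t = np.asarray(p["textual"], dtype=float)
    v /= np.linalg.norm(v) or 1.0
    t /= np.linalg.norm(t) or 1.0
    return np.concatenate([v, t])          # crude joint feature vector

def recommend_tags(query, k=2):
    pool = [p for p in photos if p["region"] == query["region"]]  # geo filter
    scored = sorted(pool, key=lambda p: np.linalg.norm(embed(p) - embed(query)))
    tags = []
    for p in scored[:k]:
        tags.extend(t for t in p["tags"] if t not in tags)
    return tags

query = {"visual": [0.85, 0.15], "textual": [1, 0, 0], "region": "paris",
         "tags": []}
print(recommend_tags(query))               # e.g. ['eiffel', 'night', 'sunset']
```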
COMPARISON OF ALL METHODS:

Authors | Title (work and its type) | Contents | Tags | User Preference | Geo Preference
Morgan Ames et al. [1] | Motivation for annotation in mobile and online media | Yes | Yes | No | Yes
Aixin Sun et al. [2] | Tag recommendation by concept matching | Yes | Yes | No | No
Sigurbjornsson et al. [3] | Tag recommendation based on collective knowledge | No | Yes | No | No
Ana Silva et al. [4] | Georeferenced tag recommendation | Yes | No | No | Yes
N. Garg and I. Weber [5] | Personalized tag recommendation | No | Yes | Yes | No
D. Rafailidis et al. [7] | Tensor factorization and tag clustering for item recommendation | Yes | Yes | Yes | No

CONCLUSION
This survey includes a study of social tagging and its types. It has also covered various tagging techniques and their pros and cons relative to each other, and concludes that the subspace learning approach can solve the tag recommendation problem using contents, tags, user preferences and geo-preferences.

REFERENCES:
[1] M. Wang, B. Ni, X. Hua and T. Chua, "Assistive Tagging: A Survey of Multimedia Tagging With Human-Computer Joint Exploration," ACM Computing Surveys, Vol. 44, No. 4, Article 25, 2012.
[2] M. Ames and M. Naaman, "Why we tag: motivations for annotation in mobile and online media," in Proc. ACM CHI, 2007.
[3] A. Sun, S. Bhowmick and J. Chong, "Social Image Tag Recommendation by Concept Matching," in Proc. ACM Multimedia, 2011.
[4] B. Sigurbjörnsson and R. van Zwol, "Flickr tag recommendation based on collective knowledge," in Proc. ACM WWW, 2008.
[5] A. Silva and B. Martins, "Tag recommendation for georeferenced photos," in Proc. ACM SIGSPATIAL Int. Workshop on Location-Based Social Networks, 2011.
[6] N. Garg and I. Weber, "Personalized, interactive tag recommendation for Flickr," in Proc. ACM Recommender Systems, 2008.
[7] D. Rafailidis and P. Daras, "The TFC model: Tensor factorization and tag clustering for item recommendation in social tagging systems," IEEE Trans. Syst., Man, Cybern.: Syst., vol. 43, no. 3, pp. 673-688, 2013.
[8] J. Liu, Z. Li, J. Tang, Y. Jiang, and H. Lu, "Personalized Geo-Specific Tag Recommendation for Photos on Social Websites," IEEE Trans. Multimedia, vol. 16, no. 3, Apr. 2014.

Review on Collaborative Filtering and Web Services Recommendation

Pradip N. Shendage
M.E. Computer-II, Vidya Pratishthan's College of Engineering, Baramati
pradip51091@gmail.com
Abstract - Web services are software components designed to support interoperable interaction between machines over a network. They are used in industry and academia. A lot of research has concentrated on QoS-based selection and recommendation. Previous systems did not give good performance on web service recommendation and provided limited information about the performance of the service candidates. In this paper, a novel collaborative filtering algorithm designed for large-scale web service recommendation is considered, and the characteristics of QoS are studied by clustering users into different regions. A recommendation visualization technique demonstrates how a recommendation is grouped with other choices.
Keywords: Service recommendation, QoS, collaborative filtering, self-organizing map, visualization.

Introduction
A web service is a method of communication between two electronic devices over a network. Most web service applications are used in business and large-scale enterprises. Nowadays these applications have moved from huge monolithic applications to the dynamic setup of business processes. In the current scenario, web services are widely used in industry and academia. A web service is built on a collection of open protocols and is used for exchanging data between applications or systems. Software applications written in different programming languages and running on different platforms can use web services to exchange data over computer networks like the Internet. Web services have significant characteristics: they are self-contained, modular, distributed, dynamic applications that are platform independent, language independent, highly interoperable and portable, with well-defined interfaces.
In an SOA (service-oriented architecture), users first request a service from the server, and the request passes through service brokers before it reaches the server. When implementing SOA, service users usually get a list of web services from service brokers or search engines that meet the specific functional requirements. The user then needs to identify the ideal one from the functionally equivalent candidates. It is hard to select the best-performing one, since service users have only partial knowledge of their performance. This raises the problem of service selection and recommendation, for which solutions are urgently needed.
Quality of service (QoS) represents the non-functional performance of a web service and drives service selection. QoS is defined as a set of user-aware properties including response time, availability, reputation, etc. It is not easy for users to acquire QoS information by evaluating all the service candidates, since conducting real-world web service invocations takes a long time and is resource-consuming. Some QoS properties, like reputation and reliability, are difficult to measure because they need long-term observation.

Literature Survey
This section introduces the related work on collaborative filtering, web service recommendation and self-organizing maps.

A. Collaborative Filtering
Z. Zheng, H. Ma, M.R. Lyu, and I. King[1] have worked on a user-contribution mechanism for Web service QoS information
gathering. Web service QoS value prediction is generated by novel hybrid collaborative filtering algorithm. They have proven that
WSRec get the well expectation accuracy as compare to other methods.
There are some service user-perspective has the following difficulties:
1. It needs service calls; it executed the prices of the service users. Also, it consumes properties of the service providers.
2. It may estimate too many service applicants. It may not expose some suitable Web services to the service users.
3. Most service users are not experts in web service evaluation.


A hybrid collaborative filtering method is used to reduce the above difficulties. The novel hybrid collaborative filtering algorithm
for web service recommendation improves recommendation quality compared with traditional collaborative filtering methods.
The method is divided into two parts (a minimal similarity sketch is given after this list):
A] In user-based collaborative filtering for web services, the Pearson correlation coefficient (PCC) is used to measure the similarity
between two service users based on the web service items they have both invoked.
B] In item-based collaborative filtering, PCC is used to measure the similarity between web service items instead of between
service users.
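The following sketch illustrates how a PCC-based user similarity could be computed from a QoS matrix; the function name and the plain NumPy matrix layout are assumptions made for illustration rather than the exact formulation of [1].

```python
import numpy as np

def pcc_user_similarity(qos, u, v):
    """Pearson correlation between users u and v over co-invoked services.

    qos is a (users x services) matrix of observed QoS values (e.g. response
    time), with np.nan where a user has not invoked a service.
    """
    both = ~np.isnan(qos[u]) & ~np.isnan(qos[v])   # services invoked by both users
    if both.sum() < 2:
        return 0.0                                 # not enough overlap to compare
    x, y = qos[u, both], qos[v, both]
    dx, dy = x - x.mean(), y - y.mean()
    denom = np.sqrt((dx ** 2).sum()) * np.sqrt((dy ** 2).sum())
    return float((dx * dy).sum() / denom) if denom > 0 else 0.0

# Item-based similarity is computed the same way on the transposed matrix,
# comparing two service columns instead of two user rows.
```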
The limitation of this work is that collaborative filtering methods for web service recommendation are difficult to evaluate
thoroughly: without an extensive web service QoS dataset covering a considerable and adequate number of web services, the review
of the QoS value prediction outcomes remains incomplete.
J.S. Breese, D. Heckerman, and C. Kadie [2] have worked on collaborative filtering, in which recommender systems use a database
of user preferences to predict the topics or products a new user might like. They compare techniques based on correlation
coefficients, vector-based similarity calculations, and statistical Bayesian methods. Collaborative filtering algorithms fall into two
classes:
A] Memory-based algorithms operate over the entire user database: the votes of the active user on unseen items are predicted from a
database of votes collected from other users.
Advantage:
1. It is easy to implement.
2. It requires little or no training cost.
3. It can easily take new users' ratings into account.
Disadvantage:
1. It cannot cope well with a large number of users and items, since its online performance is often slow.
B] Model-based collaborative filtering first learns a model from the user database, which is then used for prediction. From a
probabilistic viewpoint, the task can be viewed as computing the expected value of a vote given what is known about the user, in
order to predict votes on unobserved items for the active user.
Advantages:
1. It can quickly generate recommendations and achieve good online performance.
2. It has lower memory requirements than a memory-based technique.
3. It permits quicker predictions than a memory-based technique such as correlation, once the model has been built.
Disadvantage:
1. The model must be rebuilt when new users or items are added to the matrix.
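As a concrete illustration of the memory-based class, the sketch below predicts the active user's vote as the mean-adjusted, similarity-weighted average of the other users' votes; the function names are illustrative, and any similarity function with the signature of the PCC sketch above can be supplied, so this is an approximation of the general scheme rather than the exact formulas of [2].

```python
import numpy as np

def predict_vote(qos, active, item, similarity):
    """Memory-based prediction of qos[active, item] from similar users.

    `similarity(qos, u, v)` is any user-user similarity, e.g. the PCC sketch above.
    """
    base = np.nanmean(qos[active])                 # active user's mean observed value
    num, den = 0.0, 0.0
    for other in range(qos.shape[0]):
        if other == active or np.isnan(qos[other, item]):
            continue
        w = similarity(qos, active, other)
        num += w * (qos[other, item] - np.nanmean(qos[other]))
        den += abs(w)
    return base + num / den if den > 0 else base
```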

M.R. McLaughlin and J.L. Herlocker [3] have shown that two of the most commonly recommended CF algorithms have flaws that
result in a distinctly undesirable user experience. When nearest-neighbour algorithms were used to make movie recommendations,
many of the top movies recommended turned out to be incorrect, highly doubtful, or unverifiable; the algorithms perform poorly
because it is difficult to pick the best movie from such recommendations.
Nearest-neighbour algorithms are divided into two variants. The user nearest-neighbour (user-user) algorithm calculates the
similarities between each pair of users.
Advantages are as follows:
1. It is easy to implement.
2. It shows high accuracy when measured with mean absolute error.

The item nearest-neighbour (item-item) algorithm finds items rated by the active user that are similar to the item whose rating is
being predicted.
This algorithm suffers from two sources of error:
1. The active user may have too few neighbours who have rated an item.
2. Neighbours with very little connection to the active user may have rated the item; this fault is demonstrated quantitatively by low
modified precision scores.
A belief distribution algorithm is proposed to solve the above problems and delivers a better user experience.
A remaining limitation is the need to carry out a user study in which a complete set of rankings is collected, which would allow an
estimate of how accurately modified precision measures the user experience.
Songjie Gong [4] notes that personalized recommendation systems help users discover interesting items and have grown with the
rise of electronic commerce. Many recommendation methods work with collaborative filtering technology, which has been shown to
be one of the most important techniques in recommender systems. With the growth of customers and products in electronic
commerce systems, the time-consuming nearest-neighbour search for a target customer over the whole customer space suffers from
poor quality, and as more records accumulate in the user database, the data set becomes increasingly sparse.
The previous methods therefore contain the following drawbacks:
1. Scalability problems in collaborative filtering.
2. Sparsity problems in collaborative filtering.
The proposed recommendation method combines user clustering technology and item clustering technology. Users are grouped
based on their ratings on items, and each user group has a group centre. Based on the comparison between the target user and the
group centres, the nearest neighbours of the target user can be identified where a prediction is required. The method then applies
item-clustering collaborative filtering to create the recommendations.
Advantages are as follows:
1. This method is more scalable.
2. This method is more accurate than the older ones.
3. This method addresses both scalability and sparsity in collaborative filtering.

B. Web Services Recommendation


Z. Maamar, S.K. Mostefaoui, and Q.H. Mahmoud [5] have worked on context. Web service personalization is achieved by using
contexts, where context is the information that characterizes the interactions among people, applications, and locations. Web services
are personalized so that users can make them their own; preferences are of different types and are evaluated based on where and
when a web service starts and ends its execution. Personalization is of two types, explicit and implicit.
Explicit personalization: the users participate directly in the adaptation of applications, explicitly defining the data that need to be
retained or rejected.
Implicit personalization: this does not call for the users' participation and is built upon learning schemes that track users' behaviours
and interests. Personalization is based on features that can be connected to users, such as whether the user is stationary or mobile
and the user's location. Three kinds of context are used: U-context, W-context, and R-context.
U-context represents the status of a user and reflects the user's individual preferences in terms of execution location and execution
time of services.
R-context represents the status of a resource.
W-context represents the status of a web service as well as the execution constraints on that web service. Policies such as
consistency, feasibility and inspection are also provided.
Advantages are as follows:
1. The interaction of the web services is well grounded in context.
2. It highlights all the resources on which the web service is performed.
M.B. Blake and M.F. Nowlan [6] have worked on a web service recommender system that proactively discovers and ranks web
services. The main objective is to develop discovery and ranking algorithms that enable recommendations over actual, fully
operational web services. The system uses naming tendencies combined with enhanced syntactical matching, relates services by
their messages, and proactively suggests candidate services to users as part of their daily routines. The different tendencies are as
follows:
Tendency 1 (Subsumption Relationships): there is a strong tendency for web service developers to build part names from common
base names, so messages that share base names tend to have strong subsumption relationships.
Tendency 2 (Common Subsets): similar to subsumption relationships, services are related when their messages share common
subsets of terms.
Tendency 3 (Abbreviations in Naming): names tend to be shortened into abbreviations, so additional techniques such as the
Levenshtein distance (LD) and letter pairing (LP) are used. The Levenshtein distance determines the similarity between two strings,
and the letter pairing approach is an algorithm that matches strings that share the same letter pairs.
Advantage:
1. With the help of these tendencies, users can easily find the relevant web service names.
The limitation is that the ability to characterize recommendations varies with the real-time estimation of the current situation.
E.M. Maximilien and M.P. Singh [7] work in the setting of service-oriented architectures (SOAs), whose aim is to provide multiple
services to users; users then dynamically choose the best services from the list that the SOA provides. They have developed a
multiagent framework based on an ontology for QoS and a new model of trust (self-adjusting trust). The ontology provides a basis
for providers to describe their offerings, for consumers to express their preferences, and for evaluations of services to be collected.
There are three levels of ontology: the upper QoS ontology covers basic definitions common to all qualities, including modelling the
relations between qualities; the middle QoS ontology extends the upper ontology and describes qualities that are valid across
dissimilar domains; lower QoS ontologies are defined for specific domains by adding qualities to the middle ontology or generating
new ones from the upper ontology.
Self-adjusting trust is autonomic: the evaluations are quality-specific and are obtained via automatic monitoring, and the agents
support each other. Service agents activate quality monitoring, which may be attached to the ontology and dynamically bootstrapped
into the agents. Evaluating the resulting system via simulation, they show that the agents are able to dynamically manage their trust
assignments and thus repeatedly choose the best available services for their users. A direction for future work is to improve the trust
model to take the provider's responsibility into account.
X. Dong, A. Halevy, J. Madhavan, E. Nemes, and J. Zhang [8] have described the algorithms underlying the Woogle search engine
for web services. Woogle supports similarity search for web services, such as finding similar web-service operations and finding
operations that compose well with a given one, and the paper describes novel techniques for supporting these types of searches. The
main contributions of this work are a simple set of search functionalities that a web-service search engine should provide, and a way
of combining several sources of evidence to determine similarity among web-service operations. The core of the approach is a novel
clustering algorithm that groups the names of the parameters of web-service operations into semantically meaningful concepts; these
concepts are leveraged to measure the similarity of the inputs (or outputs) of web-service operations. The results of the paper
considerably improve precision and recall compared with existing methods. A disadvantage of the system is that its design does not
yet include automatic web-service invocation: Woogle is expected to fill in the input parameters and invoke the operations
automatically for the customer after determining the good operations.
C. Self-Organizing Map
K. Tasdemir and E. Merenyi [9] study the self-organizing map (SOM), which is used for visualization, cluster mining, and data
mining and is particularly effective when the data are high dimensional and complicated. To capture the data structure and extract
the cluster boundaries from the SOM, the best approach is to characterize the SOM's knowledge by visualization methods. Existing
methods give poor performance because they do not include the data topology in the SOM visualization. The authors show how data
topology can be integrated into the visualization of the SOM, delivering a more detailed view of the cluster structure than existing
schemes. They achieve this by constructing a weighted Delaunay triangulation and overlaying it on the SOM. The resulting novel
visualization, CONNvis,
also displays both forward and backward topology violations along with the severity of the forward ones, which indicate the quality
of the SOM learning and the complexity of the data.
Forward topology violations link neural units that are not direct neighbours in the visualization; backward topology violations are
unlinked neural units that are direct neighbours in the visualization.
CONNvis thereby contributes to detailed identification of cluster boundaries. The authors demonstrate these capabilities on synthetic
data sets and on a real 8-D remote sensing spectral image. Finally, the connectivity matrix CONN applies to prototypes obtained by
any quantization process, so the knowledge characterized by CONN is independent of the visualization; it can be incorporated as a
similarity measure into any prototype-based clustering algorithm in addition to the more customary distance-based similarity.
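For readers unfamiliar with SOM training itself, the following is a minimal sketch of the classical online SOM update rather than the CONNvis visualization of [9]; the grid size, learning-rate schedule and Gaussian neighbourhood are illustrative assumptions.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0):
    """Train a rectangular SOM on `data` (n_samples x n_features)."""
    rng = np.random.default_rng(0)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = step / n_steps
            lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
            # best matching unit (BMU): prototype closest to the sample
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(dists.argmin(), dists.shape)
            # Gaussian neighbourhood around the BMU on the 2-D grid
            grid_d2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
            influence = np.exp(-grid_d2 / (2 * sigma ** 2))
            weights += lr * influence[..., None] * (x - weights)
            step += 1
    return weights
```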
CONCLUSION
This paper has aimed to give an overview of recent progress in automatic web service recommendation and collaborative filtering.
The proposed system first exploits the characteristics of QoS by clustering users into different regions, whereas previous
recommendation systems consist of separate steps for service preferences, resources, evaluation and execution, each requiring
different languages, platforms and methods. A nearest-neighbour algorithm is then used to generate QoS predictions based on the
region feature, so the recommendation system captures the correlation between QoS records while taking regions into consideration.
REFERENCES:
[1] Z. Zheng, H. Ma, M.R. Lyu, and I. King, "WSRec: A Collaborative Filtering Based Web Service Recommendation System," Proc. Int'l Conf. Web Services, pp. 437-444, 2009.
[2] J.S. Breese, D. Heckerman, and C. Kadie, "Empirical Analysis of Predictive Algorithms for Collaborative Filtering," Proc. 14th Conf. Uncertainty in Artificial Intelligence (UAI '98), pp. 43-52, 1998.
[3] M.R. McLaughlin and J.L. Herlocker, "A Collaborative Filtering Algorithm and Evaluation Metric That Accurately Model the User Experience," Proc. Ann. Int'l ACM SIGIR Conf., pp. 329-336, 2004.
[4] L.H. Ungar and D.P. Foster, "Clustering Methods for Collaborative Filtering," Proc. AAAI Workshop Recommendation Systems, 1998.
[5] Z. Maamar, S.K. Mostefaoui, and Q.H. Mahmoud, "Context for Personalized Web Services," Proc. 38th Ann. Hawaii Int'l Conf., pp. 166b-166b, 2005.
[6] M.B. Blake and M.F. Nowlan, "A Web Service Recommender System Using Enhanced Syntactical Matching," Proc. Int'l Conf. Web Services, pp. 575-582, 2007.
[7] X. Dong, A. Halevy, J. Madhavan, E. Nemes, and J. Zhang, "Similarity Search for Web Services," Proc. 30th Int'l Conf. Very Large Data Bases, pp. 372-383, 2004.
[8] X. Liu, G. Huang, and H. Mei, "Discovering Homogeneous Web Service Community in the User-Centric Web Environment," IEEE Trans. Services Computing, vol. 2, no. 2, pp. 167-181, Apr.-June 2009.
[9] K. Tasdemir and E. Merenyi, "Exploiting Data Topology in Visualization and Clustering of Self-Organizing Maps," IEEE Trans. Neural Networks, vol. 20, no. 4, pp. 549-562, Apr. 2009.

Survey of Large-scale Hierarchical Classification


Ms. Ankita A. Burungale1, Prof. Dinesh A. Zende2
1 Student, M. E. Computer Engineering, VPCOE, Baramati, Savitribai Phule Pune University, India
ankitaburungale@gmail.com

2 Assistant Professor, Information Technology Department, VPCOE, Baramati, Savitribai Phule Pune University, India
dineshzende@gmail.com

Abstract: Large-scale classification taxonomies have thousands of classes, deep hierarchies and a skewed category distribution over
documents. Hierarchical classification can speed up the classification process because the problem is subdivided into smaller
sub-problems, each of which can be managed efficiently and effectively. The most commonly used method for multiclass
classification is the one-versus-rest method, which becomes impractical because of its computational complexity. The top-down
method is widely accepted, but it has an error propagation problem; the metaclassification method solves the error propagation
problem. In this paper, several challenges for hierarchical document classification, such as scalability, complexity, and
misclassification, are reviewed, along with questions concerning the learning and classification processes.

Keywords: Large-scale hierarchical classification, Top-down method, Metalearning, Metaclassification, Text classification,
Ensemble learning, Feature selection.

INTRODUCTION
Multiclass classification is one of the most important tasks in data mining. It is applicable in various areas such as text
categorization, patent classification, protein function prediction, music genre classification and so on. Information on the Internet is
growing day by day, so it is very difficult to search for the required information and to make use of such a large amount of
information. The solution to this problem is to classify the information into topics, where the topics are arranged hierarchically.
Three techniques are used for multiclass classification: (1) the one-versus-rest method, (2) the top-down method, and (3) the
metaclassification method.
In the one-versus-rest method, a single classifier is trained per class to distinguish that class from all other classes. It does not
consider the structural relationships among the classes, and the decision about assigning a document to a category is based on the
score of only one classifier. This method has very high time complexity.
The top-down method builds classifiers for every level of the category tree, where every classifier acts as a flat classifier at that
level [1-4]. A document is first classified at the root level of the hierarchy into one or more sub-categories, and the classification
process is repeated within each of these sub-categories until the document reaches leaf categories or cannot be classified into any
further category. A minimal sketch of this traversal is given below.
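The following sketch shows one plausible way to implement top-down classification with one flat classifier per internal node of the taxonomy; the dictionary-based tree representation, the use of scikit-learn TF-IDF features with logistic regression, and all names are assumptions for illustration, not the specific systems of [1-4].

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def leaves_under(tree, node):
    kids = tree.get(node, [])
    return [node] if not kids else [l for k in kids for l in leaves_under(tree, k)]

def train_top_down(tree, docs_by_class):
    """tree: {node: [children]}; docs_by_class: {leaf_class: [doc, ...]}."""
    models = {}
    for node, children in tree.items():
        if not children:
            continue
        texts, labels = [], []
        for child in children:                       # one local class per child branch
            for leaf in leaves_under(tree, child):
                texts += docs_by_class.get(leaf, [])
                labels += [child] * len(docs_by_class.get(leaf, []))
        vec = TfidfVectorizer()
        clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(texts), labels)
        models[node] = (vec, clf)
    return models

def classify_top_down(models, tree, doc, root="root"):
    node = root
    while tree.get(node):                            # descend until a leaf is reached
        vec, clf = models[node]
        node = clf.predict(vec.transform([doc]))[0]
    return node
```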
The metaclassification method solves the error propagation problem of the top-down method [6-8]. To do so, it uses the predictions
of all the base classifiers to train a metaclassifier, and the metaclassifier then reclassifies each sample based on the scores of all the
base classifiers.

LITERATURE SURVEY
In this section, top-down methods and the metaclassification method are reviewed.

1. TOP-DOWN METHOD


A] Dumais and Chen [1] work on hierarchies of classes for classifying web content. The proposed approach uses support vector
machines for learning and classification and is applicable to huge, dynamic collections. The hierarchical structure is used for two
purposes. First, second-level category models are trained using different contrast sets (either against categories in the flat
non-hierarchical structure or against categories under the same top-level category in the hierarchical structure). Then scores from the
top-level and second-level models are combined using different combination rules. These techniques combine the probabilities of
the first and second levels by using the Boolean
and multiplicative rules. In the Boolean rule, a threshold is first set at the highest level and only the next-level categories that pass
this test are matched, i.e. the decision requires both P(L1) and P(L2) to pass their thresholds, and both constraints must be satisfied
in order to classify a test instance. This method is very efficient, since huge numbers of next-level categories do not need to be
tested. The multiplicative rule is computed as P(L1) * P(L2) and allows a next-level category to be matched even when its individual
scores are lower than the threshold.
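A small sketch of how these two combination rules could be applied to per-level probability scores is given below; the threshold values and function names are illustrative assumptions, not the exact settings used in [1].

```python
def boolean_rule(p_l1, p_l2, t1=0.5, t2=0.5):
    """Accept a second-level category only if both levels pass their thresholds."""
    return p_l1 > t1 and p_l2 > t2

def multiplicative_rule(p_l1, p_l2, t=0.25):
    """Accept if the product of the two level scores passes a single threshold,
    so a strong top-level score can compensate for a weaker second-level score."""
    return p_l1 * p_l2 > t

# Example: top-level score 0.9, second-level score 0.4
print(boolean_rule(0.9, 0.4))         # second level fails its own threshold
print(multiplicative_rule(0.9, 0.4))  # the product 0.36 still passes
```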
Advantages
1. Many search results are confusable at the top level; to tackle this issue, these methods concentrate only on the top levels of the hierarchy.
2. The performance for automatically categorised search results is improved by using an interface that tightly couples search results and category structure.
3. The negative sample set is smaller at the top level, because it includes items only from the same top level; as a result, training is faster at the top level.
4. This classification method organizes test samples into an existing hierarchical structure.
5. The classifiers are trained offline using a human-labelled training set of documents and web categories, so run-time classification is very efficient and the human-labelled categories are easy to understand.
6. The technique is conceptually simple and scalable for hierarchical training and classification, and it is applicable to large text categorization.
7. The method uses the taxonomy structure, which improves classification efficiency and accuracy.

Disadvantages
1. This is a tree-based method, so it has problems with multiple taxonomies, evolving taxonomies, and unnecessary intermediate categories on the path from the root to deeper categories.
2. It is applicable only to three levels of hierarchy.
3. There is a problem of information organization, because a huge collection of heterogeneous web content is considered for training.

B] Sun et al. [2] provide a solution to the blocking problem of the top-down method by using restricted voting, threshold reduction,
and extended multiplicative methods. The threshold reduction method is based on the principle of lowering the thresholds of the
sub-tree classifiers, so that more documents can be passed to the classifiers at lower levels. All classifiers at the same level use the
same threshold to minimize the number of threshold combinations. Even though threshold reduction is able to pass more documents
to the classifiers at the lower levels, there is still a possibility that documents are mistakenly rejected by the higher-level sub-tree
classifiers.
The restricted voting method addresses the error propagation problem by giving low-level classifiers a chance to access documents
before the sub-tree classifiers of their parent nodes reject them. This method generates secondary channels, so that the local or
sub-tree classifier of a node can receive documents from the sub-tree classifier of its grandparent node. Since the hierarchy is
modified to pass samples down to grandchild nodes, this results in increased computational complexity.
The extended multiplicative method, as its name suggests, is derived from the multiplicative method proposed by Dumais and Chen
[1]. While the multiplicative method works only for three levels of hierarchy, the extended multiplicative method handles category
trees with more than three levels: it passes a sample down to the next level if the product of the two classifier probabilities is
accepted by the threshold strategy.
Advantages
1. It solves the error propagation problem.
2. All sub-tree classifiers at the same level use the same threshold value, so the technique reduces the number of threshold combinations.
3. The restricted voting method gives low-level classifiers a chance to access documents before the sub-tree classifiers of their grandparent nodes reject them.
4. The restricted voting method works extremely well for classes with a small number of positive test documents (and these classes have a smaller number of training documents as well).
5. The restricted voting method provides good classification performance.

Disadvantages
1. The use of the threshold reduction method can still result in the blocking problem.
2. If the thresholds of all ancestor sub-tree classifiers are zero, the threshold reduction method degenerates into a flat classification method.
3. In the threshold reduction method, the challenge is to determine the thresholds for the sub-tree classifiers.
4. In the threshold reduction method, documents may still be erroneously rejected by the higher-level sub-tree classifiers.
5. In the restricted voting method, a second-level channel is added, so the complexity is increased.

C] Bennett and Nguyen proposed a technique called refined experts [3]. The tree-based approach has two main problems: first,
documents are wrongly rejected at a higher level (false negatives), and second, documents wrongly arrive at a lower level (false
positives). To pass the correct documents to lower-level nodes, a stronger indicator is required. The refined experts method uses the
predictions from the lower nodes and cousin nodes as meta-attributes for the higher levels. To do this, it trains classifiers at the leaf
nodes using cross-validation on the training data and then uses the predictions collected during cross-validation as meta-features at
the next higher level. The method uses bottom-up training followed by top-down training: the bottom-up training addresses the false
negative problem, and the top-down training halts the transmission of false positive documents into an incorrect branch. The
predictions from cousins are included as meta-attributes because a high probability at a cousin node indicates that the document
belongs to a sibling, so it should not be passed down to the next level. A sketch of the cross-validated meta-feature idea follows.
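The following sketch shows one way to generate such cross-validated lower-level predictions and append them as meta-features for a higher-level classifier; the use of scikit-learn's cross_val_predict with logistic regression on a dense feature matrix is an illustrative assumption, not the exact setup of [3].

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def add_meta_features(X, y_lower):
    """Append cross-validated lower-level class probabilities to the features.

    Cross-validation keeps the meta-features honest: each sample's lower-level
    prediction comes from a model that never saw that sample during training.
    """
    lower_clf = LogisticRegression(max_iter=1000)
    lower_probs = cross_val_predict(lower_clf, X, y_lower, cv=5, method="predict_proba")
    return np.hstack([X, lower_probs])

# A higher-level classifier can then be trained on the augmented matrix, e.g.:
# higher_clf = LogisticRegression(max_iter=1000).fit(add_meta_features(X, y_lower), y_higher)
```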
Advantages
1. Accuracy is improved as it uses the complete set of features for training.
2. It utilizes additional features that are specific to a particular domain, so the classification performance is improved.
3. It enhances the top-down method.

Disadvantages
1. It has a deficiency in classification accuracy, i.e. its accuracy is lower than that of the one-versus-rest method, caused by error propagation in the deep levels of the hierarchy.
2. The training of the root classifier is performed on the whole training set, which is very time consuming.
3. It requires complex decisions at the top level of the hierarchy.
4. It does not use the structure of the hierarchy for the task of feature extraction.

D] H. Malik combines the benefits of the flat and hierarchical schemes [4]. This technique flattens the original hierarchy to the k-th
level prior to training the hierarchical classifiers (where k is a user-defined parameter). Flattening replaces some categories by their
descendant nodes, so the flattened hierarchy resembles a flat structure with fewer levels and the error propagation problem is
reduced; the flattening is a pre-processing step. A novel lazy classification approach is used for selecting the most promising classes
for a test sample. It uses primary and secondary classifiers but does not depend on a confusion graph to find the classes used for the
secondary classifier. Instead of training the secondary and primary classifiers ahead of time, it defers the training of the secondary
classifier to the classification phase. In the training phase, this method trains a top-down hierarchical classifier in the normal way. To
classify a document, the method first identifies the most likely classes using the hierarchical classifier, then trains a multi-class
classifier based only on the selected classes, and this new classifier is used to make the final prediction. A small sketch of the
flattening step is given below.
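As a simple illustration of the flattening pre-processing step, the sketch below collapses a taxonomy so that no node lies deeper than k levels below the root; the dictionary-based tree representation and the policy of re-attaching deep nodes under their level-(k-1) ancestor are assumptions for illustration, not the exact flattening rule of [4].

```python
def flatten_to_level(tree, root, k):
    """Return a copy of `tree` (a {node: [children]} mapping) whose depth is at most k.

    Every node originally at depth k or deeper is re-attached directly under its
    ancestor at depth k-1.
    """
    flat = {root: []}

    def descend(node, ancestor, child_depth):
        for child in tree.get(node, []):
            if child_depth < k:
                flat.setdefault(node, []).append(child)      # keep the edge as-is
                flat.setdefault(child, [])
                descend(child, child, child_depth + 1)
            else:
                flat.setdefault(ancestor, []).append(child)   # promote under the level-(k-1) ancestor
                flat.setdefault(child, [])
                descend(child, ancestor, child_depth + 1)

    descend(root, root, 1)
    return flat
```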
Advantages
1. The hierarchical structure does not always provide better classification quality than the flat structure, because errors accumulate at each level; to tackle this issue, this method flattens the hierarchy up to the k-th level.
2. With the k-th-level hierarchy, the hierarchical classifiers need to work with considerably fewer classes, so the hierarchical scheme uses substantially fewer computational resources.
3. This method combines the benefits of the flat and hierarchical schemes.
4. This method organizes the set of categories hierarchically, so a reasonable classification time is required.
5. It uses fewer levels, which reduces the risk of making an error in the top-down classification process.
Disadvantages
1. Its complexity is higher because the technique has to flatten the hierarchy.
2. It is difficult to decide which levels in the hierarchy should be flattened.
3. Excessive flattening increases the time for training and prediction.

E] Koller and Sahami proposed an approach for classification [5] that uses the structured hierarchy of topics, instead of ignoring the
categorical structure and constructing a single large classifier for the entire task. This method breaks the classification problem into
manageable sub-problems by using the structure of the hierarchy. The basic insight supporting this approach is that subjects close to
each other in the hierarchy logically have much more in common with each other than subjects that are far apart.
Each sub-problem is simpler than the original problem, as the classifier at a node in the hierarchy needs to differentiate between only
a small number of categories. Therefore, it is possible to make decisions based only on a small set of features, and this reduced
feature set avoids the overfitting problem. A probabilistic framework is used for feature selection.
Advantages
1. It reduces the computational complexity, because it uses a reduced feature set.
2. It arranges the predefined categories into a hierarchy.
3. A vocabulary is built for the categories at each node, which permits the use of a probabilistic model.
4. The accuracy is improved because feature selection removes irrelevant features.
5. It provides few advantages when the focus is placed on a single classifier.

Disadvantages
1. It assigns documents only to leaf nodes.
2. This technique works effectively only for small feature sets.
3. It uses a greedy method for selecting branches, so it is error prone.
4. It has the blocking problem.

2. METACLASSIFICATION METHOD
A] Todorovski and Dzeroski developed the meta decision tree [6]. This method combines the predictions of base classifiers induced
by different learning algorithms. It uses the probability distributions of the classes predicted by the base-level classifiers, and the
predicted class values are used to identify the set of meta-level attributes. The meta decision tree (MDT) decides which base
classifier should be used to classify a test sample. The structure of a meta decision tree is similar to the structure of an ordinary
decision tree: a decision (inner) node specifies a test to be performed on a single attribute value, and each test outcome has its own
branch leading to the appropriate sub-tree. The leaf node of a meta decision tree, however, specifies which classifier should be used
for classification instead of predicting the class value directly. The meta decision tree is domain independent, because it uses
properties of the class distributions as meta-level attributes and does not use the class attribute at internal nodes. A stacking-style
sketch of this kind of metaclassification is given below.
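The sketch below illustrates the general metaclassification idea in its simplest stacking form, where a meta-learner is trained on the base classifiers' predicted class probabilities; using a decision tree as the meta-level learner only approximates the MDT of [6], whose leaves select a base classifier rather than a class, and all names, models and the dense feature matrix are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_predict

def train_meta(X, y):
    """Train heterogeneous base classifiers plus a decision-tree meta-classifier."""
    bases = [GaussianNB(), RandomForestClassifier(n_estimators=100, random_state=0)]
    # cross-validated probabilities keep the meta-level training data unbiased
    meta_X = np.hstack([cross_val_predict(b, X, y, cv=5, method="predict_proba")
                        for b in bases])
    meta = DecisionTreeClassifier(max_depth=3).fit(meta_X, y)
    bases = [b.fit(X, y) for b in bases]          # refit the base models on all data
    return bases, meta

def predict_meta(bases, meta, X_new):
    meta_X = np.hstack([b.predict_proba(X_new) for b in bases])
    return meta.predict(meta_X)
```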
Advantages
1. It uses only ordinary attributes at internal nodes.
2. The split goodness for internal nodes is calculated differently.
3. The MDT gives better performance than an ordinary decision tree.
4. It reduces the size of the tree, which improves the comprehensibility of meta decision trees.
5. MDTs are more accurate than ordinary decision trees because of the expressive power of meta decision tree leaves.
6. MDTs are generally small, so they are easy to understand.
7. It is useful when the data include instances from heterogeneous subdomains.

Disadvantages
1. It has high complexity.
2. It is not applicable to large-scale data sets.

B] Kong, Zhao and Luan proposed an adaptive ensemble learning strategy using an assistant classifier [7]. The proposed scheme
divides a large, imbalanced binary classification problem into independent balanced binary sub-problems. In the training phase, the
large imbalanced training data set is segmented into many balanced training subsets and processed in parallel, and base classifiers
are trained on all these subsets separately. For every known sample in the original training set, the outputs of the base classifiers are
assembled into vectors, and an assistant classifier learns from these vectors to discover an effective ensemble way to output a class
label for each sample. In the classification phase, an unknown sample is given to all the base classifiers, and the outputs of all the
base classifiers are integrated into a solution to the original problem by the assistant classifier.
Advantages
1. An imbalanced, complex classification problem is divided into several smaller independent binary classification problems, which improves the efficiency and performance of patent classification.
2. This technique takes advantage of the outputs of all the base classifiers, so the error propagation problem is solved.
3. This method uses an assistant classifier together with the base classifiers, so accuracy is improved.
4. The assistant classifier is based on module selection strategies for better adaptability and strong generalization, so any classification algorithm can be used as the assistant classifier.

Disadvantages
1. Its time complexity is too high.

C] Kittler et al. [8] observe that the dissimilar classifiers within a combination should ideally never support the same
misclassification, i.e. the same incorrect class should not be assigned to a test instance by two or more voting classifiers. Dissimilar
classifiers can be obtained either by using different input representations of the data, or by using different parameters for the same
type of classifier (e.g. different values of k for a KNN classifier, or different weights for an MLP classifier), or by using entirely
different classifiers (e.g. Naive Bayes and decision trees). A set of rules is defined for combining the outputs of the dissimilar
classifiers within a combination. The most common rule is the majority vote rule, which assigns to a test instance the category that
receives the most votes; the product, sum, min, max and median rules are based on the corresponding mathematical functions
applied to the classifiers' outputs.
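A compact sketch of these fixed combination rules over per-classifier class-probability vectors is shown below; the NumPy layout and function names are illustrative assumptions rather than the formulation of [8].

```python
import numpy as np

def combine(probs, rule="sum"):
    """Combine classifier outputs for one test instance.

    probs: (n_classifiers x n_classes) array of posterior estimates.
    Returns the index of the winning class under the chosen rule.
    """
    if rule == "majority":
        votes = probs.argmax(axis=1)                       # each classifier votes once
        return np.bincount(votes, minlength=probs.shape[1]).argmax()
    reducers = {"sum": np.sum, "product": np.prod,
                "min": np.min, "max": np.max, "median": np.median}
    return reducers[rule](probs, axis=0).argmax()

# Example with three classifiers and three classes
probs = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3],
                  [0.4, 0.4, 0.2]])
print(combine(probs, "majority"), combine(probs, "product"))
```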
Advantages
1. The combined classifiers improve efficiency and accuracy.
2. This method can integrate different types of features, so it can work with heterogeneous classifiers.

Disadvantages
1. One problem with a fixed set of rules is that it is challenging to predict which rule will achieve the best result.
2. The rules assume reliable, noise-free confidence estimates, and they can fail if these estimates are accidentally zero or very small.

CONCLUSION
In this survey, we have studied hierarchical classification methods with their pros and cons. We conclude that by combining the
top-down and metaclassification methods, the problem of hierarchical classification can be solved very effectively.

REFERENCES:
[1] S. Dumais and H. Chen, "Hierarchical Classification of Web Content," Proc. 23rd Ann. Int'l ACM SIGIR Conf. Research and Development in Information Retrieval (SIGIR '00), pp. 256-263, 2000.
[2] A. Sun, E.P. Lim, W.K. Ng, and J. Srivastava, "Blocking Reduction Strategies in Hierarchical Text Classification," IEEE Trans. Knowledge and Data Engineering, vol. 16, no. 10, pp. 1305-1308, Oct. 2004.
[3] P.N. Bennett and N. Nguyen, "Refined Experts: Improving Classification in Large Taxonomies," Proc. 32nd Int'l ACM SIGIR Conf. (SIGIR '09), pp. 11-18, 2009.
[4] H. Malik, "Improving Hierarchical SVMs by Hierarchy Flattening and Lazy Classification," Proc. ECIR Large-Scale Hierarchical Classification Workshop, 2010.
[5] D. Koller and M. Sahami, "Hierarchically Classifying Documents Using Very Few Words," Proc. Int'l Conf. Machine Learning (ICML '97), pp. 170-178, 1997.
[6] L. Todorovski and S. Dzeroski, "Combining Classifiers with Meta Decision Trees," Machine Learning, vol. 50, no. 3, pp. 223-249, 2003.
[7] Q. Kong, H. Zhao, and B.L. Lu, "Adaptive Ensemble Learning Strategy Using an Assistant Classifier for Large-Scale Imbalanced Patent Categorization," Proc. 17th Int'l Conf. Neural Information Processing: Theory and Algorithms, pp. 601-608, 2010.
[8] J. Kittler, M. Hatef, R.P.W. Duin, and J. Matas, "On Combining Classifiers," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 3, pp. 226-239, Mar. 1998.

Complete Bug Report Summarization using Task-Based Evaluation: A Survey

1Miss. R. K. Taware, 2Prof. S. A. Shinde

1 ME II Computer, VPCOE, Baramati, Pune University (MH), India
E-mail: rututaware11@gmail.com

2 Assistant Professor, VPCOE, Baramati, Pune University (MH), India
E-mail: Meetsan_shinde@yahoo.com

Abstract: Automatic text summarization is based on statistical, linguistic and empirical methods in which the summarization system
calculates how often certain key words appear. The key words belong to the so-called open-class words. The summarization system
computes the frequency of the key words in the text, which sentences they occur in, and where these sentences are located in the
text. In short, summaries save time in our daily work. Writing a summary of a text is a non-trivial process: on the one hand, one has
to extract the most central information from the original text, and on the other hand, one has to consider the reader of the text and
her previous knowledge and possible special interests. Automatic text summarization is a technique in which a computer
summarizes a text: a text is given to the computer and the computer returns a shorter, less redundant extract of the original text. So
far, automatic text summarization has not reached the quality possible with manual summarization, where a human interprets the
text and writes a completely new, shorter text with new lexical and syntactic choices. However, automatic text summarization is
untiring, consistent and always available. Evaluating summaries and automatic text summarization systems is not a straightforward
process. Generally speaking, one can also perform task-based evaluations, where one tries to discern to what degree the resulting
summaries are beneficial for the completion of a specific task. This survey focuses on different aspects of creating an environment
for evaluating information extraction systems, with a centre of interest in automatic text summarization.

Keywords: Empirical software engineering, Summarization of software artifacts, Bug report duplicate detection, Extractive system,
Abstractive system, Text summarization, Email threads, State-of-the-art systems.
INTRODUCTION

Those from outside the profession of software development at times mistakenly believe that the profession is all about
programming. Those involved in software development know that the profession has a strong component of information
management. Any successful, large and complex software system requires the creation and management of many artifacts:
requirements, designs, bug reports, and source code with embedded documentation, to name just a few. To perform work on the
system, a software developer must often read and understand artifacts related to the system's development. For example, a developer
trying to fix a performance bug in a system may be told that a similar bug was solved several days before.
Finding the bug report that captured the knowledge about what was fixed will likely require the developer to perform searches and
read several bug reports in search of the report of interest. Each report read may contain several sentences of description as well as
tens of sentences representing discussion among team members. Sometimes the amount of information may be overwhelming,
causing searches to get out of control and duplicate or non-optimized work to be performed, all because the previous history of the
project has gone unnoticed. One way to reduce the time a developer spends getting to the right artifacts is to provide a summary of
each artifact.
Generally, there are two approaches to automatic summarization: extraction and abstraction.
This survey investigates the possibility of automatic summary generation, focusing on one kind of project artifact, bug reports, to
make the investigation manageable; these reports are targeted because there are a number of cases in which developers make use of
existing bug reports, such as when triaging bugs or when performing change tasks, and these reports can often be lengthy, involving
discussions between multiple team members.

Need of Automatic Summarization



A. Key word extraction:


a. Task description: The task is to take a piece of text, such as the textual part of a document, and produce a list of key words;
most texts, however, have no pre-existing key words. An extractive system selects key words directly from the document,
pulling them verbatim from the text. An abstractive system, by comparison, somehow internalizes the content and generates
key words that might be more descriptive and more like what a human would produce. Key words have many applications,
such as improving document browsing by providing a short summary.
b. Key word extraction as supervised learning: Known key words are available for a set of training documents. Using the
known key words, positive or negative labels are assigned to candidate phrases in the text, and a classifier is trained to
distinguish positive from negative examples based on their features. After training, the classifier is used to select key words
for new text documents.
c. Unsupervised key word extraction (TextRank): Unsupervised methods require no labelled training documents. TextRank
builds a graph whose vertices are candidate words and whose edges connect words that co-occur within a window of the
text, and then ranks the vertices with a PageRank-style algorithm; the top-ranked words are selected as key words, possibly
using a threshold on the scores. Key word extractors are generally evaluated using precision and recall. TextRank is the
standard example of this class of method (a minimal sketch follows).
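The sketch below is a minimal TextRank-style keyword extractor using a co-occurrence window and PageRank via networkx; the window size, the crude tokenisation and the absence of part-of-speech filtering are simplifying assumptions rather than the full TextRank algorithm.

```python
import re
import networkx as nx

def textrank_keywords(text, window=4, top_k=5):
    """Rank words by PageRank over a co-occurrence graph and return the top ones."""
    words = re.findall(r"[a-z]+", text.lower())
    graph = nx.Graph()
    for i, w in enumerate(words):
        for other in words[i + 1:i + window]:      # link words within the window
            if w != other:
                graph.add_edge(w, other)
    scores = nx.pagerank(graph)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(textrank_keywords(
    "Automatic text summarization selects the most important sentences of a "
    "text so that the summary of the text preserves its important content."))
```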

B. Document summarization
Like key word extraction, document summarization aims to identify the core of a text. Summarization systems are typically
evaluated automatically, most commonly with the so-called ROUGE (Recall-Oriented Understudy for Gisting Evaluation) measure.
This is a recall-based measure that determines how well a system-generated summary covers the content present in one or more
human-generated model summaries, known as references.
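As a concrete illustration, the sketch below computes a bare-bones ROUGE-N recall (overlapping n-grams divided by the n-grams in the reference); real ROUGE implementations add stemming, multiple references and further options, so this is only an assumption-laden approximation.

```python
from collections import Counter

def rouge_n_recall(candidate, reference, n=1):
    """Fraction of the reference's n-grams that also appear in the candidate."""
    def ngrams(text):
        toks = text.lower().split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum(min(count, cand[gram]) for gram, count in ref.items())
    return overlap / max(sum(ref.values()), 1)

print(rouge_n_recall("the bug was fixed yesterday", "the bug was fixed last week"))
```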
a. Supervised learning approaches: This technique is similar to supervised key word extraction. Basically, if we have a collection of
documents and human-generated summaries for them, we can learn the features of sentences that make them good candidates for
inclusion in a summary. Features may include the position in the document, the number of words in the sentence, and so on. A
disadvantage of supervised extractive summarization is that the known summaries must be created manually by extracting
sentences, so that the sentences in an original training document can be labelled as "in summary" or "not in summary".
b. Unsupervised approaches (TextRank and LexRank): This approach is quite similar to unsupervised key word extraction and gets
around the issue of costly training data. Certain unsupervised summarization methods are based on finding a centroid sentence,
which is the mean word vector of all the sentences in the text. LexRank is an algorithm essentially identical to TextRank, and both
apply this kind of ranking to document summarization; the same approach can be used for key word extraction or other NLP tasks.

C. Multi-document summarization method

It is an automatic technique aimed at extracting information from multiple texts written about the same topic. The resulting
summary report permits individual users, such as professional information consumers, to quickly familiarize themselves with the
information contained in a large cluster of documents.
This kind of summarization produces information reports that are both concise and comprehensive. The aim of a brief summary is to
simplify the information search and to cut the time needed by pointing to the most relevant source documents, thereby limiting the
need for accessing the original files.

METHOD 1 : EXTRACTION AND ABSTRACTION APPROACH


A. Nenkova and K. McKeown, 2011 [2]: Automatic text summarization techniques are divided into two categories as follows:
1) Extraction technique:
The extraction technique simply extracts important sentences into the final summary, where the importance of a sentence is
calculated from weights assigned to sentences using statistical and linguistic features of the text. Extractive methods work by
selecting a subset of existing words, phrases, or sentences in the original text to form the summary. The main challenge is to
identify the important parts of the document and extract them for the final summary. Most work presented here concerns
single-document summarization using the extraction method.
2) Abstraction technique:
In contrast, abstractive methods build an internal semantic representation and then use natural language generation techniques
to create a summary that is closer to what a human might generate. Such a summary might contain words not explicitly
present in the original. The abstraction technique involves semantics-based text summarization and uses purely linguistic
features, which are very hard to compute.
Advantages: In order to make the summary more reliable, accurate, complete and less redundant, the following process is applied
(a small sketch follows the list):
- The final weight of every sentence is computed using a weight-ranking equation.
- The final weights are sorted in descending order.
- The top-weighted sentences are selected for the summary according to the required compression ratio.
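A bare-bones frequency-weighted extractive summarizer following this select-by-weight process might look as follows; the word-frequency scoring and the compression-ratio handling are simplifying assumptions rather than a specific published weighting equation.

```python
import re
from collections import Counter

def extractive_summary(text, ratio=0.3):
    """Score sentences by the frequency of their words and keep the top `ratio`."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z]+", text.lower()))
    scores = [sum(freq[w] for w in re.findall(r"[a-z]+", s.lower())) for s in sentences]
    keep = max(1, int(len(sentences) * ratio))
    top = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:keep]
    return " ".join(sentences[i] for i in sorted(top))  # preserve original order
```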
Disadvantages: There are many challenges which need to be resolved, such as:
- A stop-word corpus is not available for every language.
- Features for different languages are diverse and very difficult to process.
- Pronoun-level ambiguity is very difficult to resolve.

METHOD 2: SUMMARIZING SPOKEN AND WRITTEN CONVERSATIONS


G. Murray and G. Carenini, 2008 [3] report research on summarizing conversations in the meeting and email domains. They present
a conversation summarization system that works in multiple domains by utilizing general conversational features, and they compare
their results with domain-dependent systems for meeting and email data. The method uses an extractive approach to summarization,
presenting a novel set of conversational features for locating the most significant sentences in meeting speech and emails. The
approach demonstrates that using these conversational features in a machine-learning sentence classification framework yields
performance that is competitive with or superior to more restricted domain-specific systems, while having the advantage of being
portable across conversational modalities. The robust performance of the conversation-based system is attested via several
summarization evaluation techniques, and an in-depth analysis of the effectiveness of the individual features and feature subclasses
is given.

Conversation Summarization System

In the conversation summarization approach, emails and meetings are treated as conversations comprised of turns between multiple
participants. The method follows Carenini et al. (2007) in working at the finer granularity of email fragments, so that for an email
thread a turn consists of a single email fragment in the exchange. The features derived for summarization are based on this view of
the conversational structure and are listed below; a small feature-extraction sketch follows the list.

- Length features: for each sentence, a word-count feature normalized by the longest sentence in the conversation (SLEN) and a word-count feature normalized by the longest sentence in the turn (SLEN2) are derived.
- Structural features: these include the position of the sentence in the turn (TLOC) and the position of the sentence in the conversation (CLOC), as well as the time from the beginning of the conversation to the current turn (TPOS1) and from the current turn to the end of the conversation (TPOS2).
- Pause-style features: these include the time between the following turn and the current turn (SPAU) and the time between the current turn and the previous turn (PPAU), both normalized by the overall length of the conversation. These features are based on the email and meeting transcript timestamps.
- Conversation participant features: these include two types of features; one measures how dominant the current participant is in terms of words in the conversation (DOM), and the other is a binary feature indicating whether the current participant initiated the conversation (BEGAUTH), based simply on whether they were the first contributor.
- Lexical features: for each unique word, two conditional probabilities are determined. For each conversation participant, the system computes the probability of the participant given the word, estimating the probability from the actual term counts, and takes the maximum of these conditional probabilities as the first term score, called Sprob.
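The sketch below computes a few of these features (SLEN, SLEN2, TLOC, CLOC, DOM, BEGAUTH) for one sentence; the conversation is represented as a simple list of (speaker, sentences) turns, which is an illustrative simplification of the fragment- and timestamp-based structure described in [3].

```python
def sentence_features(conversation, turn_idx, sent_idx):
    """conversation: list of (speaker, [sentence, ...]) turns, in order."""
    speaker, sents = conversation[turn_idx]
    sent = sents[sent_idx]
    all_sents = [s for _, ss in conversation for s in ss]
    words = lambda s: len(s.split())
    total_words = sum(words(s) for s in all_sents)
    speaker_words = sum(words(s) for sp, ss in conversation if sp == speaker for s in ss)
    flat_idx = sum(len(ss) for _, ss in conversation[:turn_idx]) + sent_idx
    return {
        "SLEN":  words(sent) / max(words(s) for s in all_sents),  # vs longest in conversation
        "SLEN2": words(sent) / max(words(s) for s in sents),      # vs longest in turn
        "TLOC":  sent_idx / max(len(sents), 1),                   # position within the turn
        "CLOC":  flat_idx / max(len(all_sents), 1),               # position within the conversation
        "DOM":   speaker_words / max(total_words, 1),             # speaker dominance
        "BEGAUTH": int(speaker == conversation[0][0]),            # did this speaker start it?
    }
```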

Comparison summarization systems: In order to compare the conversation summary system with state-of-the-art systems for
meeting and email summarization, respectively, the method also presents results using the features described by Murray and Renals
(2008) for meetings and the features described by Rambow (2004) for email. The work by Murray and Renals used the same dataset;
however, Rambow carried out the summarization work on a different, unavailable email corpus, so that summarization system was
re-implemented for the current email data.
Advantages:
- The conversation feature set is similarly effective in both the meeting and email domains.
- A general conversation summarization approach can achieve results on par with state-of-the-art systems that rely on features specific to more focused domains.
- A general conversation summarization system is valuable in that it may save the time and effort required to implement unique systems in a variety of conversational domains.

Disadvantages:
- It covers only particular domains; extending the system to other conversation domains such as chats, blogs and telephone speech could give better results.

METHOD 3: COPING WITH AN OPEN BUG REPOSITORY


J. Anvik, L. Hiew, and G. C. Murphy, 2005 [4] provide an initial characterization of two open bug repositories from the Eclipse and
Firefox projects, describe the duplicate bug and bug triage problems that arise with these open bug repositories, and discuss how
machine learning technology can be applied to help automate these processes. The method presents data to fill this gap, providing a
characterization of the data in, and the use of parts of, the bug repositories for two open source projects: Eclipse (V3.0) and Firefox
(V1.0). The data confirm two problems that arise with open bug repositories and that open source developers had communicated to
the authors previously: the difficulty of detecting which bug reports are duplicates of those already in the repository, and the
difficulty of assigning new bug reports to the appropriate developer. At this time, the approaches taken to these problems are
human-oriented: humans must read the bugs and decide whether they are duplicates and to whom they should be assigned. These
processes can, at least in part, be automated by using the historical information about the bug processes stored in the bug repository.
The data and ideas presented provide a basis for considering the kind of support that should be integrated into a development
environment to enable better bug reporting and tracking.
Examples of heuristics used by such systems are as follows:
- If a report is resolved as FIXED, it was fixed by whoever submitted the last patch that was approved. (Firefox)
- If a report is resolved as FIXED, it was fixed by whoever marked the report as resolved. (Eclipse)
- If a report is resolved as a DUPLICATE, it was resolved by whoever resolved the report of which this one is a duplicate. (Eclipse and Firefox)
- If a report is resolved as WORKSFORME, it was marked as such by the person doing the bug assignment, so it is unclear who the developer would have been; the report is then labelled as unclassifiable. (Firefox)
Similar to the analysis performed for who submits bugs, the authors determine the top five domains of developers who resolved bugs.
Advantages:
- The duplicate detection uses a statistical model built from the knowledge of past reports using machine learning techniques.
- An incremental approach makes it possible to detect duplicates of new bug reports and to adapt to the changing composition of bugs in the repository.
- By using cosine similarity [9], the model classifies new bug reports as either unique or duplicate.


METHOD 4: SUMMARIZING EMAIL THREADS


O. Rambow, L. Shrestha, J. Chen, and C. Lauridsen [8] introduced summarizing email threads, i.e., coherent exchanges of email
messages among several participants. Summarizing threads of email is different from summarizing other types of written
communication, because an email thread has an inherent dialogue structure. The method reports initial research which shows that
sentence extraction techniques can work for email threads as well, but that they profit from email-specific features; in addition, the
presentation of the summary should take into account the dialogic structure of email communication. Email is a written medium of
asynchronous multi-party communication. This means that, unlike for example news stories but as in face-to-face spoken dialogue,
the email thread as a whole is a collaborative effort with interaction among the discourse participants.
However, unlike spoken dialogue, the discourse participants are not physically co-present, so the written word is the only channel of
communication. Furthermore, replies do not always happen immediately, so responses need to take special precautions to identify
the relevant elements of the discourse context. Thus, email is a distinct linguistic genre that poses its own challenges to
summarization. The approach follows the paradigm used for other genres of summarization, namely sentence extraction: important
sentences are extracted from the thread and composed into a summary. Given the special characteristics of email, the authors predict
that certain email-specific features can help in identifying relevant sentences for extraction. In addition, in presenting the extracted
summary, special wrappers ensure that the reader can reconstruct the interactional aspect of the thread, which they assume is crucial
for understanding the summary. The authors acknowledge that other techniques should also be explored for email summarization,
but leave that to separate work.
Disadvantages:
- A qualitative error analysis and more detailed investigation are needed.
- The automatic extraction needs to be improved.

METHOD 5: IDENTIFYING DUPLICATE DEFECT REPORTS WITH NATURAL LANGUAGE PROCESSING

Various processes that are currently used in software engineering require or produce reports written in natural language. Defect
reports generated by a number of testing and development activities are such objects: they are expressed in natural language, which
makes it very hard to compare two reports for similarity. This method makes use of Natural Language Processing (NLP) techniques
to identify duplicate defect reports; a situational example that shows the different steps of the method is given, and related literature
is also described.
When complex software products are being developed, it is very common for defects to slip into the product. These defects can lead
to software failures or unexpected behaviour. Testing procedures will likely find these defects and report them in so-called defect
reports that are put into a defect management system. If the same architecture is used in different products or development runs in
parallel, the same defects may be reported by the testing activities of different products, so the same error might be reported several
times in the defect management system. The method described here can be used to identify these duplicates in the system; it is
largely based on natural language processing.
There are five stages in the NLP pipeline (Manning and Schütze, 1999); a minimal sketch of these stages appears after the list:
Tokenization
Stemming
Stop words removal
Vector space representation
Similarity calculation
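The sketch below walks through the five stages on two toy reports; the tiny stop-word list and the crude suffix-stripping stemmer are stand-ins for real resources such as a full stop-word list and the Porter stemmer.

```python
import math
from collections import Counter

STOP_WORDS = {"the", "a", "an", "is", "on", "when", "and", "of", "to", "in"}  # tiny illustrative list

def naive_stem(token):
    """Very crude suffix stripping; a real system would use e.g. the Porter stemmer."""
    for suffix in ("ing", "es", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(report):
    tokens = report.lower().split()                      # 1. tokenization
    tokens = [naive_stem(t) for t in tokens]             # 2. stemming
    tokens = [t for t in tokens if t not in STOP_WORDS]  # 3. stop-word removal
    return Counter(tokens)                               # 4. vector space (term frequencies)

def cosine(v1, v2):                                      # 5. similarity calculation
    dot = sum(v1[t] * v2[t] for t in set(v1) & set(v2))
    norm = math.sqrt(sum(x * x for x in v1.values())) * math.sqrt(sum(x * x for x in v2.values()))
    return dot / norm if norm else 0.0

similarity = cosine(preprocess("Crashes when opening settings"),
                    preprocess("Application crashed on opening the settings page"))
print(similarity)   # high value suggests the two defect reports describe the same issue
```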

ACKNOWLEDGMENT
Thanks to Prof. Santosh A. Shinde for useful discussions on summarization and for his great efforts in supervising and leading me

to accomplish this satisfactory work. Thanks also to the college and department staff, who were a great source of support and encouragement, and to my friends and family for their warm wishes and love. Thanks to every person who gave something to light my pathway.

CONCLUSION
In this review paper, we discussed methods for bug report summarization based on the text summarization concept (keyword extraction, document summarization, etc.) together with their advantages and disadvantages. The performance of extraction-based summarization is better than that of the abstractive approach. Based on the results of the different bug report summarization and duplicate detection tasks, we conclude that there is no single method for generating summaries or detecting duplicates across all documents; several methods are available for these tasks. We have discussed the need for and challenges of document summarization and the duplicate detection task, along with their applications, and have tried to present almost all available techniques of bug report summarization.

REFERENCES:
[1] Sarah Rastkar, Gail C. Murphy and Gabriel Murray, "Automatic Summarization of Bug Reports," IEEE Transactions on Software Engineering, 2013.
[2] A. Nenkova and K. McKeown, "Automatic summarization," Foundations and Trends in Information Retrieval, vol. 5, no. 2-3, pp. 103-233, 2011.
[3] G. Murray and G. Carenini, "Summarizing spoken and written conversations," in EMNLP '08: Proc. of the 2008 Conference on Empirical Methods in Natural Language Processing, 2008.
[4] J. Anvik, L. Hiew, and G. C. Murphy, "Coping with an open bug repository," in Proc. of the 2005 OOPSLA Workshop on Eclipse Technology eXchange, 2005, pp. 35-39.
[5] J. Anvik, L. Hiew, and G. C. Murphy, "Who should fix this bug?" in ICSE '06: Proc. of the 28th International Conference on Software Engineering, 2006, pp. 361-370.
[6] J. Anvik, L. Hiew, and G. C. Murphy, "Who should fix this bug?" in ICSE '06: Proc. of the 28th International Conference on Software Engineering, 2006, pp. 361-370.
[7] J. Davidson, N. Mohan, and C. Jensen, "Coping with duplicate bug reports in free/open source software projects," in VL/HCC '11: Proc. of the 2011 IEEE Symposium on Visual Languages and Human-Centric Computing, 2011, pp. 101-108.
[8] O. Rambow, L. Shrestha, J. Chen, and C. Lauridsen, "Summarizing email threads," in HLT-NAACL '04: Proc. of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, 2004.
[9] X. Wang, L. Zhang, T. Xie, J. Anvik, and J. Sun, "An approach to detecting duplicate bug reports using natural language and execution information," in ICSE '08: Proc. of the 30th International Conference on Software Engineering, 2008, pp. 461-470.
[10] P. Runeson, M. Alexandersson, and O. Nyholm, "Detection of duplicate defect reports using natural language processing," in ICSE '07: Proc. of the 29th International Conference on Software Engineering, 2007, pp. 499-510.


Review on Techniques of Collaborative Tagging


Ms. Benazeer S. Inamdar1, Mrs. Gyankamal J. Chhajed2
1 Student, M. E. Computer Engineering, VPCOE Baramati, Savitribai Phule Pune University, India
benazeer.inamdar@gmail.com
2 Assistant Professor, Computer Engineering Department, VPCOE Baramati, Savitribai Phule Pune University, India
gjchhajed@gmail.com

Abstract - Collaborative tagging is one of the most diffused and prominent services available on the web. It has emerged as one of the best ways of associating metadata (tags) with web resources like images, bookmarks, posts, etc. With the increase in the kinds of web objects becoming available, collaborative tagging of such resources is also developing along new dimensions. In this survey paper, we have reviewed and analyzed different privacy-enhancing technologies (PET) for collaborative tagging, such as tag suppression, tag perturbation, tag recommendation and tag prediction.

Keywords - Social bookmarking, Collaborative tagging, Tag prediction, Tag recommendation, Data perturbation, Granularity, Filtering.

INTRODUCTION
Collaborative tagging became popular with the launch of sites like Delicious and Flickr. Since then, different social systems have
been built that support tagging of a variety of resources. Given a particular web object or resource, tagging is a process where a user
assigns a tag to an object. On Delicious, a user can assign tags to a particular bookmarked URL. On Flickr, users can tag photos
uploaded by them or by others. Whereas Delicious allows each user to have her personal set of tags per URL, Flickr has a single set of
tags for any photo. On blogging sites like Blogger, Wordpress, Livejournal, blog authors can add tags to their posts.
The main purpose of collaborative tagging is to classify resources based on user feedback, expressed in the form of tags. It is used
to annotate any kind of online and offline resources, such as Web pages, images, videos, movies, music, and even blog posts.
Nowadays collaborative tagging is mainly used to support tag-based resource discovery and browsing. Consequently, collaborative
tagging would require the enforcement of mechanisms that enable users to protect their privacy by allowing them to hide certain user
generated contents, without making them useless for the purposes they have been provided in a given online service. This means that
privacy preserving mechanisms must not negatively affect the accuracy and effectiveness of the service, e.g., tag-based browsing,
filtering, or personalization. Tag suppression is a privacy-enhancing technology (PET) used to protect end-user privacy. It is a technique whose purpose is to prevent privacy attackers from profiling users' interests on the basis of the tags they specify, but it can affect the effectiveness of policy-based collaborative tagging systems.

TECHNIQUES OF PRIVACY PRESERVATION


1. Collaborative Filtering Using Data Perturbation
Collaborative filtering techniques are becoming increasingly popular in E-commerce recommender systems, as data filtering is the most demanding way to reduce the cost of searching in E-commerce applications. Such techniques suggest items to users by employing similar users' preference data. People use recommender systems to deal with information overload. Although collaborative filtering systems are widely used by E-commerce sites, they fail to preserve users' privacy because the data is exposed to the filter engine in unencrypted form. Since many users might decide to give wrong information because of privacy concerns, collecting high-quality data from users is a very tough task, and collaborative filtering systems using such data might produce inaccurate recommendations.
1.1 Randomized Perturbation Techniques
In this paper, H. Polat and W. Du propose a randomized perturbation technique to protect individual privacy while still producing accurate recommendation results. Although randomized perturbation adds randomness to the original data to prevent the data collector from learning the private user data, the method can still provide recommendations with decent accuracy.

These approaches basically suggest perturbing the information provided by users: users add random values to their ratings and then submit these perturbed ratings to the recommender system. After receiving these ratings, the system performs an algorithm and sends the users some information that allows them to compute the prediction [1].
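A minimal sketch of the idea, assuming zero-mean Gaussian noise: each user perturbs the ratings before submission, and the server can still estimate aggregate quantities such as the mean rating because the noise averages out. The noise scale and data below are illustrative.

```python
import random

def perturb_ratings(ratings, noise_scale=1.0):
    """User side: add zero-mean random noise to each rating before submission."""
    return [r + random.gauss(0.0, noise_scale) for r in ratings]

def estimate_mean(perturbed_ratings):
    """Server side: because the noise has zero mean, the average of the perturbed
    ratings is still an unbiased estimate of the true average rating."""
    return sum(perturbed_ratings) / len(perturbed_ratings)

true_ratings = [4, 5, 3, 4, 2, 5, 4, 3]
disguised = perturb_ratings(true_ratings, noise_scale=1.0)
print(estimate_mean(disguised))   # close to the true mean 3.75 for large samples
```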
Advantage:
This approach makes it possible for servers to collect private data from users for collaborative filtering purposes without compromising users' privacy requirements. The solution achieves predictions nearly as accurate as those based on the original data.
Limitations:
The scheme can provide more accurate results only if more aggregate information is disclosed along with the concealed data, especially aggregate information whose disclosure does not compromise much of the users' privacy. This kind of information includes the distribution, mean, standard deviation, true data in a permuted manner, etc.
1.2 SVD (Singular Value Decomposition) Based Collaborative Filtering
In this paper, H. Polat and W. Du proposed an SVD-based collaborative filtering technique to preserve privacy. The method uses a randomized perturbation-based system to protect users' privacy while still providing recommendations with decent accuracy; the same perturbative technique is applied to collaborative filtering algorithms based on singular value decomposition [2].
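A small sketch of the SVD step on a toy user-item rating matrix (NumPy assumed): a rank-k reconstruction of the matrix supplies the predicted ratings, and in the privacy-preserving variant the same steps would run on perturbed ratings rather than the raw ones.

```python
import numpy as np

# Toy user-item rating matrix (rows: users, columns: items; 0 means "not rated").
R = np.array([[5.0, 3.0, 0.0, 1.0],
              [4.0, 0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0, 5.0],
              [0.0, 1.0, 5.0, 4.0]])

k = 2                                             # number of latent factors (illustrative)
U, s, Vt = np.linalg.svd(R, full_matrices=False)
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]     # rank-k reconstruction

# R_hat[u, i] serves as the predicted rating of user u for item i.
print(np.round(R_hat, 2))
```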
Limitations:
Even though a user disguises all his/her ratings, it is evident that the items themselves may uncover sensitive information. The
simple fact of showing interest in a certain item may be more revealing than the ratings assigned to that item.
1.3 Random Data Perturbation Techniques
In this paper, H. Kargupta, S. Datta, Q. Wang, and K. Sivakumar studied techniques that preserve data privacy by adding random noise, while making sure that the added noise still preserves the signal in the data so that patterns can be accurately estimated. Randomization-based techniques are used to generate random matrices [3].
The following information could lead to the disclosure of private information from the perturbed data.
a) Attribute Correlation: Real data often has strongly correlated attributes, and this correlation can be used to filter off additive white noise.
b) Known Sample: The attacker sometimes has specific background knowledge about the data, or a collection of independent samples which may overlap with the original data; such samples may not always be available from background knowledge.
c) Known Inputs/Outputs: There is a large probability that the attacker knows a small set of private data and their perturbed counterparts. This correspondence can help the attacker estimate other private data.
d) Data Mining Results: The particular pattern discovered by data mining also provides a certain level of knowledge which can be
used to guess the private data to a higher level of accuracy.
e) Sample Dependency: Most of the attacks assume the data as independent samples from some unknown distribution. This
consideration may not hold true for all real applications.
Limitations:
The work highlights several challenges that these techniques face in preserving data privacy. It showed that under certain conditions it is relatively easy to breach the privacy protection offered by random perturbation based techniques.
1.4 Deriving Private Information from Randomized Data
In this paper, Z. Huang, W. Du, and B. Chen examined how correlation affects the privacy of a data set disguised using the random perturbation scheme. There are two methods to reconstruct original data from a disguised data set: one scheme is based on PCA (Principal Component Analysis), and the other is based on the Bayes estimate. Results have shown

that both the PCA-based schemes and the Bayes estimate (BE) based scheme can reconstruct more accurate data when the correlation
of data increases [4].
The BE-based scheme is always better than the PCA-based schemes. To defeat data reconstruction methods that exploit the data correlation, the authors proposed a modified random perturbation in which the random noises are correlated. The experiments show that the more the correlation of the noise resembles that of the original data, the better the privacy preservation that can be achieved.
2. Tag Prediction
Tag prediction concerns the possibility of identifying the most probable tags to be associated with a non-tagged resource. Tags are predicted based on the resource's content and its similarity with already tagged resources.
2.1 Social Tag Prediction
In this paper, P. Heymann, D. Ramage, and H. Garcia-Molina proposed a tag prediction technique. Tags are predicted based on anchor text, page text, surrounding hosts, and other tags applied to the URL. An entropy-based metric captures the generality of a particular tag and informs an analysis of how well the tag can be predicted. Tag-based association rules can produce very high-precision predictions as well as giving a deeper understanding of the relationships between tags [5].
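One plausible reading of the entropy-based generality metric, shown here only as a sketch (the exact distribution used in [5] may differ): compute the Shannon entropy of the tags that co-occur with a given tag, so that tags spread over many contexts score higher.

```python
import math
from collections import Counter

def tag_entropy(cooccurring_tags):
    """Shannon entropy of the distribution of tags observed alongside a given tag.
    Higher entropy suggests the tag co-occurs with many different tags, i.e. it is
    more general; this is one possible reading of the entropy metric in [5]."""
    counts = Counter(cooccurring_tags)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(tag_entropy(["photo", "travel", "beach", "travel", "sunset"]))  # fairly general
print(tag_entropy(["python", "python", "python", "code"]))            # more specific
```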
Limitations:
The predictability of a tag when the classifiers are given balanced training data is negatively correlated with its occurrence rate and with its entropy: more popular tags are harder to predict, and higher-entropy tags are harder to predict. When considering tags in their natural (skewed) distributions, data sparsity issues come to dominate. This method performs poorly for popular tags, and overall performance degrades as the distribution becomes more skewed.
2.2 Granularity of User Modeling
In this paper, E. Frias-Martinez, M. Cebrian, and A. Jaimes proposed a tag prediction technique based on granularity. One of the characteristics of existing tag prediction mechanisms is that all user models are constructed with the same granularity. In order to increase tag prediction accuracy, the granularity of each user model has to be adapted to the level of usage of each particular user. Canonical, stereotypical and individual are the three granularity levels used to improve accuracy. Prediction accuracy improves if the level of granularity matches the level of participation of the user in the community [6].
Limitations:
This approach does not investigate the following two areas: (1) how to identify the scope of information used in the construction of the models (i.e., the size and shape of clusters in the stereotypical case), and (2) how and when user models evolve from one granularity to the next.
3. Recommendation Approach
In this paper, G. Adomavicius and A. Tuzhilin proposed a tag recommendation approach. It suggests to users the tags to be used to describe resources they are bookmarking. It is realized by computing tag-based user profiles and by suggesting tags specified on a given resource by users having similar characteristics/interests [7].
3.1 Content-based Recommendation Approach
Content-based recommendation systems try to recommend items similar to those a given user has preferred in the past. The basic
process performed by a content-based recommender consists in matching up the attributes of a user profile in which preferences and
interests are stored, with the attributes of a content object (item), in order to recommend to the user new interesting items.
a) Heuristic-based
In this approach, the item profile (in keyword format) is represented using TF-IDF weights; the user profile (weights of keywords for each user) is built and the cosine similarity between the user and item profiles is calculated.
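A small sketch of this heuristic step under the usual TF-IDF and cosine-similarity definitions; the use of scikit-learn here and the toy data are assumptions about tooling, not part of the surveyed work.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Keyword descriptions of candidate items and of what the user liked in the past.
items = ["action spy thriller car chase",
         "romantic comedy wedding paris",
         "spy action gadgets thriller"]
user_profile = "spy thriller action"          # aggregated keywords of liked items

vectorizer = TfidfVectorizer()
item_vectors = vectorizer.fit_transform(items)   # item profiles as TF-IDF vectors
user_vector = vectorizer.transform([user_profile])

scores = cosine_similarity(user_vector, item_vectors)[0]
ranked = sorted(zip(scores, items), reverse=True)   # recommend the top-scoring items
print(ranked)
```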

b) Model-based
In this approach, Bayesian classifiers and probability measures are used within the content-based framework. Some of the model-based approaches provide rigorous rating estimation methods utilizing various statistical and machine learning techniques.
Limitations:
1. Limited Content Analysis (insufficient set of features).
2. Overspecialization (recommend too similar items).
3. New User Problem (not enough information to build user profile).
3.2 Collaborative based
In this approach, the user is recommended items that people with similar tastes and preferences liked in the past. Collaborative recommender systems (or collaborative filtering systems) try to predict the utility of items for a particular user based on the items previously rated by other users. The utility u(c, s) of item s for user c is calculated based on the utilities u(cj, s) assigned to item s by those users cj ∈ C who are similar to user c.
a) Heuristic-based
In this approach, correlation-coefficient and cosine-based similarity measures are used. Heuristic-based methods are also known as memory-based methods: memory-based algorithms essentially are heuristics that make rating predictions based on the entire collection of items previously rated by the users.
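A minimal memory-based sketch under these definitions: the prediction for u(c, s) is a similarity-weighted average of the ratings other users gave to item s. The toy data and the choice of cosine similarity are illustrative.

```python
import math

def cosine_sim(r1, r2):
    """Similarity between two users over the items both of them have rated."""
    common = set(r1) & set(r2)
    if not common:
        return 0.0
    dot = sum(r1[i] * r2[i] for i in common)
    norm = math.sqrt(sum(r1[i] ** 2 for i in common)) * math.sqrt(sum(r2[i] ** 2 for i in common))
    return dot / norm if norm else 0.0

def predict(user, item, all_ratings):
    """Memory-based prediction: similarity-weighted average of the ratings
    that other users gave to `item`."""
    num = den = 0.0
    for other, ratings in all_ratings.items():
        if other == user or item not in ratings:
            continue
        w = cosine_sim(all_ratings[user], ratings)
        num += w * ratings[item]
        den += abs(w)
    return num / den if den else None

ratings = {"alice": {"i1": 5, "i2": 3},
           "bob":   {"i1": 4, "i2": 3, "i3": 4},
           "carol": {"i1": 5, "i3": 5}}
print(predict("alice", "i3", ratings))   # predicted rating of item i3 for alice
```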
b) Model-based
In this approach, cluster models and Bayesian networks are used. Some of the model-based approaches provide rating estimation methods utilizing various statistical and machine learning techniques.
Limitations:
1. New User Problem (not enough information to build user profile).
2. New Item Problem (too few have rated on new items).
3. Sparsity (too few pairs of users have sufficient both-rated items to form a similar group among them).
3.3 Hybrid based
Hybrid methods combine collaborative and content-based methods. They predict the absolute values of the ratings that individual users would give to yet-unseen items.
a) Heuristic-based
i) Adding content-based characteristics to collaborative models: a content-based profile is used to calculate similarity between users, so a user can be recommended an item not only when that item is rated highly by users with similar profiles.
ii) Adding collaborative characteristics to content-based models, or developing a single unifying recommendation model.
b) Model-based
In this approach, content-based and collaborative components are combined by:
i) incorporating one component as a part of the model for the other, or
ii) building one unifying model.

CONCLUSION


In this paper we have reviewed and analyzed different methods to preserve privacy in collaborative tagging, covering techniques such as tag perturbation, tag prediction and tag recommendation. We can conclude that each approach has its own significance and importance in preserving the privacy of the end user, and each tag-based approach uses a different algorithm and evaluation technique for preserving privacy.

REFERENCES:
1) H. Polat and W. Du, "Privacy-Preserving Collaborative Filtering Using Randomized Perturbation Techniques," Proc. SIAM Int'l Conf. Data Mining (SDM), 2003.
2) H. Polat and W. Du, "SVD-Based Collaborative Filtering with Privacy," Proc. ACM Int'l Symp. Applied Computing (SASC), pp. 791-795, 2005.
3) H. Kargupta, S. Datta, Q. Wang, and K. Sivakumar, "On the Privacy Preserving Properties of Random Data Perturbation Techniques," Proc. IEEE Int'l Conf. Data Mining (ICDM), pp. 99-106, 2003.
4) Z. Huang, W. Du, and B. Chen, "Deriving Private Information from Randomized Data," Proc. ACM SIGMOD Int'l Conf. Management of Data, pp. 37-48, 2005.
5) P. Heymann, D. Ramage, and H. Garcia-Molina, "Social Tag Prediction," Proc. 31st Ann. Int'l ACM SIGIR Conf. Research and Development in Information Retrieval, pp. 531-538, 2008.
6) E. Frias-Martinez, M. Cebrian, and A. Jaimes, "A Study on the Granularity of User Modeling for Tag Prediction," Proc. IEEE/WIC/ACM Int'l Conf. Web Intelligence and Intelligent Agent Technology (WI-IAT), pp. 828-831, 2008.
7) G. Adomavicius and A. Tuzhilin, "Toward the Next Generation of Recommender Systems: A Survey of the State-of-the-Art and Possible Extensions," IEEE Trans. Knowledge and Data Eng., vol. 17, no. 6, pp. 734-749, June 2005.


Survey on Graph-Based Video Sequence Matching & BoFs Method for Video Copy Detection

Avinash H. Kakade
M.E. Computer Network-II,
G.H. Raisoni College of Engineering, Ahmednagar
Prof. Jadhav Meenal
Assistant Professor (Computer Engineering Dept.)
G.H. Raisoni College of Engineering & Management, Wagholi, Pune
Abstract - A number of videos are uploaded daily on different web servers like YouTube. Among such videos there is a chance of duplicate or modified videos being uploaded. There are copyright videos uploaded by reputed institutes and industries which may be downloaded by unauthorized parties and re-uploaded to a server, which may affect their standards. To avoid such misuse of copyright videos, different techniques have been invented which basically detect copied videos with respect to the content of the videos. Such techniques use different methods and algorithms like SVD, SIFT features and BoF techniques. These techniques work over the contents, i.e. the images of the videos, and detect a copy of a video after comparing binary values (features) extracted from the frames/images of the videos. Thanks to such techniques it is possible to detect a copied video based on a particular motion or activity.
Keywords - Dual threshold, SIFT features, visual cues, CBCD, facial detection, SVD, inverted files, LSH.
INTRODUCTION
Content-Based Video Copy Detection (CBCD) is a challenging problem in computer vision for the following reasons. First of all, the problem domain is exceptionally wide. Depending on the purpose of a video copy detection system, different solutions can be applied. For example, facial detection and activity-based detection can detect an exact copy of a particular face and also detect a particular activity in the video [1]. On the other hand, matching news stories across different channels (camera viewpoints) is a totally different problem, and will probably require interest point matching techniques. In such situations, extracting binary SIFT features [2] from the key frames and comparing them using methods like SVD (Singular Value Decomposition) [2] will detect the copied video robustly. To reduce the time complexity and to avoid searching a large database, LSH (Locality-Sensitive Hashing) [3] provides a better solution; from the same point of view, different storage structures such as the inverted file structure have been invented to reduce large database searching. Therefore, no general solution can be proposed to the video copy detection problem. Secondly, the problem space is extremely large, which often requires real-time solutions.
RELATED WORK
O. Kucuktunc et al. [1] state that most video copy detection techniques need to extract frames and feature values from the video, and different methods are used to extract key features from different types of videos. Video copy detection using multiple visual cues detects copied videos by fusing the results of three different techniques. The first technique is facial shot matching, in which a high-level facial detector identifies facial frames/shots in video clips; matching a face with extended body regions gives the flexibility to discriminate the same person in different scenes. The second technique is activity sequence matching, in which a spatio-temporal sequence matching technique is employed to match video clips that are similar in terms of activity. Lastly, the non-facial shots are matched using low-level MPEG-7 descriptors and dynamic-weighted feature similarity calculation. The proposed framework is tested on the query and reference dataset of the CBCD task of TRECVID 2008.
Advantages:
Detects exact matches of facial and activity-based videos.
Non-facial shots are also detected by using MPEG-7 descriptors.
Disadvantages:
Not able to detect changes in rotation and scale in a video.
M. Douze et al. [2] state that this technique introduces a video copy detection system which efficiently matches individual frames and then verifies their spatio-temporal consistency. The approach for matching frames relies on a recent local feature indexing method, which is at the same time robust to significant video transformations and efficient in terms of memory usage and computation time. Either key frames or uniformly sampled frames are matched. This system addresses the problem of searching for strongly deformed videos in relatively small datasets. The first step consists in extracting local signatures for a subsample of frames from the video; a query is then performed using a structure derived from text retrieval: the inverted file.

Advantages:
Works well for finding spatio-temporal results.
Query retrieval is fast thanks to the inverted file structure.
Disadvantages:
Only matches individual frames with respect to spatial movement.
Cannot work with large videos and databases.
Limited to spatio-temporal image results only.
G. Willems et al. [3] state that this technique performs content-based video copy detection based on spatio-temporal feature values. The use of local spatio-temporal features instead of purely spatial ones brings additional robustness and discriminative power. The system begins by splitting the video into small frames; after dividing the video into frames, local features are extracted from each frame. The orientation of the video as well as the speed is almost never changed, so invariance to in-plane rotation and changes in the temporal scale are not valuable traits. For this reason, the technique chooses the smallest temporal scale possible, which has a kernel width of 9 frames, and a typical magnification radius of 3, which means that each descriptor is computed over 27 frames. Indexing and retrieving high-dimensional data on a large scale is far from trivial, as exact nearest-neighbor searches suffer from the curse of dimensionality. LSH (Locality-Sensitive Hashing) is an approximate high-dimensional similarity search scheme which is able to find matches in sub-linear time. It avoids the curse of dimensionality by hashing the descriptors through a series of projections onto random lines and concatenating the results into a single hash.
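A generic sign-random-projection LSH sketch (not necessarily the exact scheme of [3]): each descriptor is projected onto random lines and the sign bits are concatenated into a single hash key, so similar descriptors tend to fall into the same bucket and can be looked up in sub-linear time. The dimensions and data are illustrative.

```python
import numpy as np

class RandomProjectionLSH:
    """Hash a descriptor by projecting it onto random lines and concatenating
    the sign bits; similar descriptors tend to collide in the same bucket."""
    def __init__(self, dim, n_projections=16, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_projections, dim))

    def hash(self, descriptor):
        bits = (self.planes @ descriptor) > 0
        return "".join("1" if b else "0" for b in bits)

lsh = RandomProjectionLSH(dim=128)                 # e.g. 128-D SIFT-like descriptors
buckets = {}
rng = np.random.default_rng(1)
for i in range(1000):                              # index a toy database of descriptors
    d = rng.standard_normal(128)
    buckets.setdefault(lsh.hash(d), []).append(i)

query = rng.standard_normal(128)
candidates = buckets.get(lsh.hash(query), [])      # candidate frames to verify further
```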
Advantages:
Extracts a unique binary fingerprint from each frame.
Retrieval is fast by using the LSH method.
High-dimensional similarities are detected.
Disadvantages:
Similarity between two different videos is not detected; only identical videos are detected as copies.
Video within video is not detected.
E. Delponte et al. [4] present a version of the SVD-matching proposed by Scott and Longuet-Higgins, and later modified by Pilu, elaborated in order to cope with large scale variations. To this end, the feature detection phase is augmented with a keypoint descriptor that is robust to large scale and viewpoint changes, and this descriptor is included in the equations of the proximity matrix that is central to SVD-matching. According to the length of the video, a number of segments are extracted from it. Because any video is made up of a large number of frames, key frames need to be selected from the extracted segments; a key frame is selected on the basis of similarity between its next and previous frames. This technique extracts features from the key frames, called SIFT features, and then computes singular values to match the SIFT feature point sets of images. The technique works on the basis of the timestamps of the video.
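A rough sketch of Scott/Longuet-Higgins-style SVD matching between two descriptor sets, based only on the description above (an illustration, not the exact formulation of [4]): the proximity matrix is orthogonalized via the SVD and mutually dominant entries are accepted as matches. Parameter values are illustrative.

```python
import numpy as np

def svd_matching(features_a, features_b, sigma=1.0):
    """Build a Gaussian proximity matrix between two descriptor sets,
    orthogonalize it with the SVD, and accept pairs that dominate both
    their row and their column."""
    dists = np.linalg.norm(features_a[:, None, :] - features_b[None, :, :], axis=2)
    G = np.exp(-dists ** 2 / (2 * sigma ** 2))        # proximity matrix
    U, _, Vt = np.linalg.svd(G, full_matrices=False)
    P = U @ Vt                                         # singular values replaced by ones
    matches = []
    for i in range(P.shape[0]):
        j = int(np.argmax(P[i]))
        if i == int(np.argmax(P[:, j])):               # mutual row/column maximum
            matches.append((i, j))
    return matches

rng = np.random.default_rng(0)
a = rng.standard_normal((5, 128))                      # descriptors of key frames in video A
b = a[[2, 0, 4]] + 0.01 * rng.standard_normal((3, 128))   # near-copies of some of them
print(svd_matching(a, b))                              # recovers pairs like (2, 0), (0, 1), (4, 2)
```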
Advantages:
Frame count is reduced due to key frame selection.
The SVD value remains unique for each key frame.
Detects variation between frames.
Detects changes in rotation and scale.
Disadvantages:
Video within video is not detected.
Calculation of the SVD value is a complex task.
ACKNOWLEDGMENT
I express many thanks to Prof. Meenal Jadhav for the great effort of supervising and leading me to accomplish this fine work. Thanks also to the college and department staff, who were a great source of support and encouragement, and to my friends and family for their warm and kind encouragement and love. To every person who gave something to light my pathway, thank you for believing in me.
CONCLUSION
In this paper we studied different techniques used for content-based video copy detection. Most of them work on the contents of the video, i.e. frames/images. Detecting a copied video based on its contents involves complex algorithms, which is a disadvantage of such techniques because they need more time to execute: they must work on segments, and the selected key frames are then used to extract feature values. There is a need to develop new techniques which can detect copied videos in less time and provide better protection.

REFERENCES:
[1] O. Kucuktunc, M. Bastan, U. Gudukbay, and O. Ulusoy, "Video Copy Detection Using Multiple Visual Cues and MPEG-7 Descriptors," J. Visual Comm. Image Representation, vol. 21, pp. 838-849, 2010.
[2] M. Douze, H. Jegou, and C. Schmid, "An Image-Based Approach to Video Copy Detection with Spatio-Temporal Post-Filtering," IEEE Trans. Multimedia, vol. 12, no. 4, pp. 257-266, June 2010.
[3] G. Willems, T. Tuytelaars, and L.V. Gool, "Spatio-Temporal Features for Robust Content-Based Video Copy Detection," Proc. ACM Int'l Conf. Multimedia Information Retrieval (MIR), pp. 283-290, 2008.
[4] E. Delponte, F. Isgro, F. Odone, and A. Verri, "SVD-Matching Using SIFT Features," Graphical Models, vol. 68, no. 5, pp. 415-431, 2006.
[5] J. Yuan, L.-Y. Duan, Q. Tian, S. Ranganath, and C. Xu, "Fast and Robust Short Video Clip Search for Copy Detection," Proc. Pacific Rim Conf. Multimedia (PCM), 2004.
[6] F. Dufaux, "Key Frame Selection to Represent a Video," Proc. IEEE Int'l Conf. Image Processing, vol. 2, pp. 275-278, 2000.


Survey of Social Influence Analysis


Swati A. Adhav
M.E. Computer-II,
Vidya Pratishthan's College of Engineering, Baramati
swati.adhav10@gmail.com

Abstract - Social media networks are becoming popular these days, where users interact with each other to form social networks. Social media has revolutionized the way people share and access information. Photo sharing websites such as Flickr, Picasa and YouTube support users in creating, annotating, sharing, and commenting on media data. Users' social tagging adds metadata in the form of keywords that reflect users' preferences over photos, and this tagging can be better utilized to mine user preferences. The approaches surveyed here explicitly model user interaction and social relations in the tag generation process and propose a regularized hypergraph learning solution to refine the correlations among user, image, tag and other metadata. While traditional social tag analysis work focuses on analyzing the binary image-tag correlation, taking the user factor and metadata into consideration shows superior performance in the image tag refinement task.
Keywords - Hypergraph learning, social image search, tag refinement.

INTRODUCTION
In social multimedia computing, current social multimedia platforms allow users to interact with multimedia by uploading, annotating and commenting on content, and to interact with each other through social media networks. There are many social sharing websites which allow users to share photos, web links, songs, pictures, etc. Photo sharing websites such as Flickr, Picasa, and YouTube support users in creating, annotating, sharing and commenting on media data. What makes Flickr special among social media networks is that it motivates users to perform various actions such as sharing photos with tags, joining groups of interest, contacting other users with similar interests as friends, and expressing their preferences on photos by tagging, commenting and sharing. Users' social tagging adds metadata in the form of keywords that reflect their preferences over photos, and this tagging can be better utilized to mine user preferences. However, in the resulting huge collection of social images with user-contributed tags, noisy and missing tags are to be expected, which limits the performance of social tag-based retrieval systems. Therefore, tag refinement, which denoises and enriches the tags of an image, is desired to tackle this problem.
Only about fifty percent of the tags provided by Flickr users are really correlated with the images. Moreover, there is no optimal ranking approach. Consider the Flickr photo sharing website as an example. There are two ranking options for tag-based social image search:
1. Time-based ranking - ranks images according to the uploading time of each image.
2. Interestingness-based ranking - ranks images according to each image's interestingness in Flickr.
The disadvantage of these options is that they do not take the visual content and tags of images into consideration. Thus, neither ranking strategy is based on a relevance measure, and the search results are not sufficiently good in terms of significance. Therefore, well-organized tag-based social image search methods are highly preferred.
In large social networks, users are influenced by others for various reasons. Social influence is an important topic in social networks and looks at how individual thoughts, actions and feelings are influenced by social groups. For example, colleagues on LinkedIn will mostly impact one's choices at work, while friends on Facebook have a strong influence on one's preferences in daily life. Social influence mining in social media networks has critical importance in real-world applications such as friend suggestion and photo recommendation. Much effort has been devoted to social network analysis and a large number of works have been done; analyzing social influence in social media networks has received significant interest. Existing works on tag refinement exploited the semantic association between tags and the visual relationships of images to address the noisy and missing tag problems. Therefore, social influence is one of the important issues in social media networks.

Literature Survey
A] Multi-correlation Regularized Tensor Factorization (MRTF)
J. Sang et al. [2] observed, in user-centric social multimedia computing, that user-created tags not only help users in sharing and organizing images, but also provide a large amount of meaningful data for image retrieval. General studies on user-generated tags for tag-based applications have focused on exploiting the photo-tag, image-image and tag-tag relationships. Considering that the user is the creator of the tagging activity and that the user interacts with images and tags in many ways, the problem of tag refinement is tackled by incorporating user information.

The tensor decomposition consists of:
1. Jointly modeling the ternary user-image-tag interrelation and the respective intra-relations.
2. Representing the users, images and tags in the corresponding latent subspaces; for a given image, the tags with the highest cross-space associations are retained as the final annotation.

These Web 2.0 websites allow users, as owners, taggers, or commenters of their contributed images, to interact and cooperate with each other in social media networks. Typically, in a photo sharing website, three types of interrelated entities are involved, i.e., image, tag and user. From this view, user-contributed tagging data can be seen as the product of the ternary interactions among images, tags and users. Given such a large-scale web dataset, noisy and missing tags are unavoidable, which limits the performance of social tag-based retrieval systems; therefore, tag refinement to denoise and enrich tags for images is desired to tackle this problem.
Previous works on tag refinement exploited the semantic association between tags and the visual similarity of images to tackle the noisy and missing tag issues, while user interaction, one of the important entities in social tagging data, was neglected. Users are the originators of the tagging action and interact with images and tags in many ways; it is assumed that the integration of user information contributes to an improved understanding and explanation of the tagging data. Consider an example: two images are labeled with "apple" by two different users, but they have different image content, i.e. apple as a fruit and Apple as an iPhone, respectively. Because they share the same semantic space, conventional work on image content understanding cannot solve the problem, whereas user-related and contextual information can help disambiguate the image semantics: an iPhone fan will likely use "apple" to tag a phone image, while a fruit specialist will use "apple" to tag a fruit.
It is not necessary to explicitly know the users' interests or profiles. The aim of this work is to improve the associations between images and tags, given the unprocessed tagging data from photo sharing websites. The problem is therefore solved from a factor analysis perspective, aiming at the construction of user-sensitive image and tag factor representations.
A new method named Multi-correlation Regularized Tensor Factorization (MRTF) is used to deal with the tag refinement task:
1. Tensor factorization is utilized to jointly model the multiple factors.
2. To alleviate the sparsity problem, the various intra-relations are used as smoothness constraints, so that factor inference becomes a regularized tensor factorization problem.
3. Having encoded compact user, visual and tag representations over their latent subspaces, tag refinement is performed by computing the cross-space image-tag associations.

Advantages of this method:
1. User information is introduced into social tag processing, and the multiple factors of user, image and tag are jointly modeled by tensor factorization.
2. The MRTF model is used to extract the latent factor representations, and the sparsity problem is alleviated by imposing the smoothness constraints.
Limitation: there exist other forms of metadata, such as descriptions, comments, and ratings. Since the focus was mainly on tags, how to exploit this other metadata for an overall understanding is one of the drawbacks of this method.

B) Separated Method
Li et al. [4] describe, in "Tag based social image search with visual-text joint hypergraph learning," a separated method in which only the visual content is used to calculate the relevance score for each image.
In the separated method:
The tags or the visual contents are utilized separately to calculate the relevance of the social image.
The relevance score of the social image is calculated using only the visual or only the textual content of the image.
Liu et al. introduced a tag ranking method, which is able to rank the tags associated with an image according to their relevance levels. Li et al. introduce an approach that learns the relevance scores of tags by a neighborhood voting approach. Xu et al. propose a

tag refinement algorithm from the topic modeling point of view.
Disadvantages of this method:
1. Only the image content or the tags are used for image search, so useful information is lost.
2. The search results may not be satisfactorily good.

C) Sequential Method
Li et al. [4] also describe, in "Tag based social image search with visual-text joint hypergraph learning," a sequential approach in which visual contents and tags are used sequentially for social image search. In most of the existing methods, textual-content-based analysis is performed first and visual-content-based analysis is performed next. It is a relevance-based ranking method for social image search: first the relevance scores are calculated based on the tags of the images, and then these relevance scores are refined using the visual content of the images. Though more than half of the tags are noisy, there are also meaningful tags that are useful for image search. Liu et al. propose a significance-based ranking technique for social image search, which first learns significance scores based on the tags of photos and then refines the significance scores by exploring the image content.

D) Hypergraph Learning
In many real-world problems, the relationships among the objects of our interest are more complex than pairwise. Reducing complex relationships to pairwise ones will unavoidably lead to a loss of information that can be valuable for our learning tasks, so representing a set of complex relational objects as undirected or directed graphs is not complete. For example, one may construct an undirected graph in which two vertices are joined by an edge if there is at least one common user of their corresponding articles, and then apply an undirected graph based clustering approach.
In undirected graph based clustering:
1. The information on whether the same user participated in writing three or more articles is ignored.
2. Such information loss is undesirable because the articles by the same user likely belong to the same topic, and hence the information is useful for the grouping task.

Therefore a hypergraph is used instead to fully represent the complex relationships among the objects of interest. Yu et al. [5] proposed higher order learning with graphs for image classification. A hypergraph is a graph in which an edge can connect more than two vertices; in other words, an edge is a subset of vertices. Tag-based social image search cannot achieve suitable results because of the large amount of noise present in the user-provided tags; many tags are misspelled and can be considered irrelevant. The hypergraph learning method is used to tackle this problem, since it can fully represent the complex relationships among objects.
Hypergraphs have been used in many data mining and information retrieval tasks because of their effectiveness for higher-order relationship modeling. The hypergraph learning technique simultaneously utilizes the visual content and the tags: each image is represented by a vertex in the constructed hypergraph, and the visual clustering results are employed to construct the hyperedges. Hypergraph learning is then used to rank images according to their relevance levels. This mechanism established the usefulness of the hypergraph structure in capturing higher-order relationships. In the hypergraph, the vertices represent users and images, and the hyperedges are utilized to capture the visual and textual content relations among images and the social links between users and images; the hypergraph thus models users, images, and various types of relations in the Flickr network.
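A tiny sketch of the hypergraph representation described above, with made-up vertices and hyperedges: the incidence matrix and the vertex/edge degrees are the quantities from which a hypergraph learning method builds its normalized Laplacian for propagating relevance scores.

```python
import numpy as np

# Toy hypergraph: vertices are images, hyperedges group images that share a
# visual cluster, a tag, or an uploading user (all of these are assumptions
# made for illustration only).
vertices = ["img1", "img2", "img3", "img4"]
hyperedges = {
    "visual_cluster_beach": {"img1", "img2", "img3"},
    "tag_sunset":           {"img2", "img3"},
    "user_alice":           {"img1", "img4"},
}

# Incidence matrix H: H[v, e] = 1 if vertex v belongs to hyperedge e.
H = np.zeros((len(vertices), len(hyperedges)))
for j, members in enumerate(hyperedges.values()):
    for i, v in enumerate(vertices):
        if v in members:
            H[i, j] = 1.0

vertex_degree = H.sum(axis=1)   # how many hyperedges each image participates in
edge_degree = H.sum(axis=0)     # how many images each hyperedge connects
print(H, vertex_degree, edge_degree, sep="\n")
```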
Advantages of this method:
1. Noisy textual and image features can be reduced, which makes the approach more robust than the previous methods.
2. It can effectively model users, images, and various types of relations.


Comparison of all methods:

Author             | Method                                                     | User | Image | Tag | Metadata | Results
J. Sang et al. [2] | Multi-correlation Regularized Tensor Factorization (MRTF)  | Yes  | Yes   | Yes | No       | The sparsity problem is improved by imposing the smoothness constraints.
Li et al. [4]      | Separated method                                           | No   | Yes   | Yes | No       | Only visual information or the tags are used for image search, so useful information is missing.
Li et al. [4]      | Sequential method                                          | No   | Yes   | Yes | No       | The lack of an optimal ranking approach is the reason for the insufficient search results.
Yu et al. [5]      | Hypergraph learning method                                 | Yes  | Yes   | Yes | Yes      | Noisy textual and visual features can be reduced, which makes the approach more resilient.

(The User, Image, Tag and Metadata columns indicate which entities each method represents.)

CONCLUSION
This survey explored related research efforts that generally focus on information retrieval tasks. Our intention is to recognize the trends in the surveyed area and categorize them in a new way that integrates and adds understanding to the work in the field with respect to the Flickr social media network. Users with similar feature representations can be recommended to each other, connecting people with common interests and motivating them to contribute and share more content.

REFERENCES:
[1] J. Sang, Q. Fang, "Topic-Sensitive Influencer Mining in Interest-Based Social Media Networks via Hypergraph Learning," IEEE Trans. Multimedia, vol. 16, no. 3, Apr. 2014.
[2] J. Sang, User-Perceptive Multimedia Content Analysis, Springer, 2014.
[3] J. Sang, Changsheng Xu, "Right Buddy Makes the Difference: an Early Exploration of Social Relation Analysis in Multimedia Applications," ACM Multimedia, Nov. 2012.
[4] Y. Gao, M. Wang, H.-B. Luan, J. Shen, S. Yan, and D. Tao, "Tag based social image search with visual-text joint hypergraph learning," in Proc. ACM Multimedia, 2011, pp. 1517-1520.
[5] S. Agarwal, K. Branson, and S. Belongie, "Higher order learning with graphs," in Proc. ICML, 2006, pp. 17-24.


Survey of Searching Nearest Neighbor Based on Keywords using Spatial Inverted Index

Shilpa B. Patil
M.E. Computer-II,
Vidya Pratishthan's College of Engineering, Baramati
patil.shilpa962@gmail.com
Abstract - Many search engines are used to search for anything from anywhere; this system performs fast nearest neighbor search using keywords. Existing works mainly focus on finding the top-k nearest neighbors, where each node has to match the whole set of query keywords. They do not consider the density of data objects in the spatial space, and these methods are inefficient for incremental queries. In the intended system, for example when searching for the nearest restaurant, instead of considering all restaurants, a nearest neighbor query would ask for the restaurant that is closest among those whose menus contain, say, spicy and brandy at the same time. The solution to such queries is based on the IR2-tree, but the IR2-tree has some drawbacks that badly impact its efficiency. A solution for overcoming this problem is needed, and the spatial inverted index is the technique that provides it.
Keywords - Nearest neighbor search, IR2-tree, range search, spatial inverted index.

Introduction
Nearest neighbor search (NNS), also known as closest point search, similarity search. It is an optimization problem for nding
closest (or most similar) points. Nearest neighbor search which returns the nearest neighbor of a query point in a set of points, is an
important and widely studied problem in many elds, and it has wide range of applications. We can search closest point by giving
keywords as input; it can be spatial or textual. A spatial database use to manage multidimensional objects i.e. points, rectangles, etc.
Some spatial databases handle more complex structures such as 3D objects, topological coverages, linear networks. While typical
databases are designed to manage various NUMERICS and character types of data, additional functionality needs to be added for
databases to process spatial data types eciently and it provides fast access to those objects based on dierent selection criteria.
Keyword search is the most popular information discovery method because the user does not need to know either a query language or the underlying structure of the data. The search engines available today provide keyword search on top of sets of documents: when a set of query keywords is provided by the user, the search engine returns all documents that are associated with these query keywords. The solution to such queries is based on the IR2-tree, but the IR2-tree has some drawbacks that badly impact its efficiency. A solution for overcoming this problem is needed; the spatial inverted index is the technique that provides it. A spatial database manages multidimensional data, that is, points, rectangles, etc.
This paper gives importance to spatial queries with keywords [5] [6] [9] [10]. Spatial queries with keywords take arguments like a location and specified keywords, and return web objects that are ranked depending upon spatial proximity and text relevancy. Some other approaches take keywords as Boolean predicates [1] [2], searching out web objects that contain the keywords and re-ranking objects based on their spatial proximity. Some approaches use a linear ranking function [7] [8] to combine spatial proximity and textual relevance. The earlier study of keyword search in relational databases gained importance, and recently this attention has shifted to multidimensional data [3] [4] [11]. N. Rishe, V. Hristidis and D. Felipe [12] proposed a well-known method for nearest neighbor search with keywords: for keyword-based retrieval, they integrated the R-tree [14], a spatial index, with the signature file [12]. By combining the R-tree and signatures they developed a structure called the IR2-tree [12], which has the merits of both R-trees and signature files. The IR2-tree preserves objects' spatial proximity, which is important for solving spatial queries.
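As a baseline to contrast with the index structures surveyed below, the keyword-constrained nearest neighbor query can always be answered by a brute-force scan; the IR2-tree and the spatial inverted index exist precisely to avoid this linear pass. The data and function names below are illustrative.

```python
def nearest_neighbor_with_keywords(query, points, keywords, required):
    """Brute-force baseline: among points whose keyword set covers all required
    keywords, return the one spatially closest to the query point."""
    best, best_dist = None, float("inf")
    for p in points:
        if not required <= keywords[p]:       # keyword filter (Boolean containment)
            continue
        dist = sum((a - b) ** 2 for a, b in zip(p, query)) ** 0.5
        if dist < best_dist:
            best, best_dist = p, dist
    return best

restaurants = {(2.0, 3.0): {"spicy", "brandy"},
               (1.0, 1.0): {"sushi"},
               (5.0, 4.0): {"spicy", "brandy", "vegan"}}
print(nearest_neighbor_with_keywords((1.5, 1.5),
                                     list(restaurants),
                                     restaurants,
                                     {"spicy", "brandy"}))   # -> (2.0, 3.0)
```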
Literature Survey
The literature review covers the IR2-tree, the drawbacks of the IR2-tree, and previous methods.

IR2-Tree
The IR2-tree [12] combines the R-tree and the signature file. We first review signature files and then discuss IR2-trees, assuming knowledge of R-trees and of the best-first algorithm [12] for nearest neighbor search. A signature file is a hashing-based framework; the specific framework used here is known as superimposed coding (SC) [12].
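A minimal sketch of superimposed coding with made-up parameters: each word sets a few pseudo-random bits, a document signature is the OR of its word signatures, and a query matches when all of its bits are present in the document signature, which allows false hits but never false misses.

```python
import hashlib

SIGNATURE_BITS = 64   # illustrative signature length
BITS_PER_WORD = 3     # bits set per word (superimposed coding parameter)

def word_signature(word):
    """Set a few pseudo-random bit positions for one word."""
    sig = 0
    for k in range(BITS_PER_WORD):
        h = int(hashlib.md5(f"{word}-{k}".encode()).hexdigest(), 16)
        sig |= 1 << (h % SIGNATURE_BITS)
    return sig

def document_signature(words):
    """Superimposed coding: OR together the signatures of all words."""
    sig = 0
    for w in words:
        sig |= word_signature(w)
    return sig

def may_contain(doc_sig, query_words):
    """True if all query bits are set; may yield false hits, never false misses."""
    q = document_signature(query_words)
    return doc_sig & q == q

doc = document_signature(["restaurant", "spicy", "brandy"])
print(may_contain(doc, ["spicy", "brandy"]))   # True
print(may_contain(doc, ["sushi"]))             # usually False, occasionally a false hit
```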
Drawbacks of the IR2-Tree
The IR2-tree was the first access method for answering nearest neighbour queries with keywords. It is a popular technique for indexing data, but it has some drawbacks which impact its efficiency; the disadvantage known as the false hit affects it seriously. The

false positive ratio is large when the final result is far away from the query point and also when the result is simply empty. In these cases, the query algorithm loads the documents of many objects; as each loading necessitates a random access, it incurs costly overhead [12].
Keyword search on spatial databases
This work mainly focuses on finding the top-k nearest neighbors; in this method each node has to match the whole set of query keywords. Because it matches the whole query at each node, it does not consider the density of data objects in the spatial space, and when the number of queries increases, efficiency and speed drop. The authors present an efficient method to answer top-k spatial keyword queries. The work has the following contributions: 1) the problem of top-k spatial keyword search is defined; 2) the IR2-tree is proposed as an efficient indexing structure to store spatial and textual information for a set of objects, together with efficient algorithms to maintain the IR2-tree, that is, to insert and delete objects; 3) an efficient incremental algorithm is presented to answer top-k spatial keyword queries using the IR2-tree. Its performance is evaluated and compared to current approaches; real datasets are used in the experiments, showing a significant improvement in execution times.
Disadvantages:
1. Each node has to match the query keywords, which affects performance, is time consuming and enlarges the search space.
2. The IR2-tree has some drawbacks.
Processing Spatial-Keyword (SK) Queries in Geographic Information Retrieval (GIR) Systems
Location-based information is stored in a GIS database. The information entities of such databases have both spatial and textual descriptions. This paper proposes a framework for a GIR system and focuses on indexing strategies that can process spatial keyword queries. It makes the following contributions: 1) it gives a framework for query processing in Geographic Information Retrieval (GIR) systems; 2) it develops a novel indexing structure called the KR*-tree that captures the joint distribution of keywords in space and significantly improves performance over existing index structures; 3) it reports experiments on real GIS datasets showing the effectiveness of the techniques compared to existing solutions. It introduces two index structures to store spatial and textual information.
A) Separate index for spatial and text attributes:
Advantages:
1. Ease of maintaining two separate indices.
Disadvantages:
1. The performance bottleneck lies in the number of candidate objects generated during the filtering stage.
2. If spatial filtering is done first, many objects may lie within the query's spatial extent, but very few of them are relevant to the query keywords. This increases the disk access cost by generating a large number of candidate objects, and the subsequent stage of keyword filtering becomes expensive.

B) Hybrid index
Advantages and limitations:
1. When the query contains keywords that are closely correlated in space, this approach suffers from the extra disk cost of accessing the R*-tree and high overhead in the subsequent merging process.
Hybrid Index Structures for Location-based Web Search
There is more and more research interest in location-based web search, i.e. searching web content whose topic is related to a particular place or region. This type of search involves location information, which should be indexed as well as the text information. A text search engine is set-oriented, whereas location information is two-dimensional and lives in Euclidean space. In the previous paper we saw two separate indexes for spatial and text information; this creates a new problem, i.e. how to combine the two types of indexes. This paper uses hybrid index structures to handle textual and location-based queries with the help of inverted files and R*-trees. It considers three strategies to combine these indexes, namely: 1) inverted file and R*-tree double index; 2) first inverted file then R*-tree; 3) first R*-tree then inverted file. A search engine is implemented to check the performance of the hybrid structure; it contains four parts: (1) an extractor which detects the geographical scopes of web pages and represents them as multiple MBRs based on geographical coordinates; (2) an indexer which builds the hybrid index structures that integrate text and location information; (3) a ranker which ranks

the results by geographical relevance as well as non-geographical relevance; and (4) an interface which makes it easy for users to input location-based search queries and to obtain geographically and textually relevant results.
Advantages:
1. Instead of using two separate indexes for textual and spatial information, this paper gives hybrid index structures that integrate text indexes and spatial indexes for location-based web search.
Disadvantages:
1. The indexer has to build hybrid index structures that integrate the text and location information of web pages. To textually index web pages, inverted files are a good choice; to spatially index web pages, two-dimensional spatial indexes are used. The two involve different approaches, which degrades the performance of the indexer.
2. In the ranking phase, geographical ranking and non-geographical ranking are combined; the combination of the two rankings and the computation of geographical relevance may affect the performance of ranking.

Conclusion
In this report, we have surveyed searching for nearest neighbors based on keywords using a spatial inverted index, and evaluated the needs and challenges present in nearest neighbor search. The report covers existing techniques as well as new improvements over the current techniques. In particular, we have surveyed topics such as the IR2-tree, the drawbacks of the IR2-tree, spatial keyword search, and solutions based on inverted indexes.

REFERENCES:
[1] I. De Felipe, V. Hristidis, and N. Rishe, Keyword search on spatial databases, in ICDE, pp. 656-665, 2008.
[2] D. Zhang, Y. M. Chee, A. Mondal, A. K. H. Tung, and M. Kitsuregawa, Keyword search in spatial databases: Towards searching by document, in ICDE, pp. 688-699, 2009.
[3] R. Hariharan, B. Hore, C. Li, and S. Mehrotra, Processing Spatial-Keyword (SK) Queries in Geographic Information Retrieval (GIR) Systems, Proc. Scientific and Statistical Database Management (SSDBM), 2007.
[4] X. Cao, G. Cong, and C. S. Jensen, Retrieving top-k prestige-based relevant spatial web objects, PVLDB, 3(1):373-384, 2010.
[5] Y.-Y. Chen, T. Suel, and A. Markowetz, Efficient query processing in geographic web search engines, in SIGMOD, pp. 277-288, 2006.
[6] G. Cong, C. S. Jensen, and D. Wu, Efficient retrieval of the top-k most relevant spatial web objects, PVLDB, 2(1):337-348, 2009.
[7] I. De Felipe, V. Hristidis, and N. Rishe, Keyword search on spatial databases, in ICDE, pp. 656-665, 2008.
[8] Y. Zhou, X. Xie, C. Wang, Y. Gong, and W.-Y. Ma, Hybrid Index Structures for Location-Based Web Search, Proc. Conf. Information and Knowledge Management (CIKM), pp. 155-162, 2005.
[9] I. D. Felipe, V. Hristidis, and N. Rishe, Keyword Search on Spatial Databases, Proc. Int'l Conf. Data Eng. (ICDE), pp. 656-665, 2008.
[10] C. Faloutsos and S. Christodoulakis, Signature Files: An Access Method for Documents and Its Analytical Performance Evaluation, ACM Trans. Information Systems, vol. 2, no. 4, pp. 267-288, 1984.
[11] N. Beckmann, H. Kriegel, R. Schneider, and B. Seeger, The R*-tree: An Efficient and Robust Access Method for Points and Rectangles, Proc. ACM SIGMOD Int'l Conf. Management of Data, pp. 322-331, 1990.
[12] G. R. Hjaltason and H. Samet, Distance Browsing in Spatial Databases, ACM Trans. Database Systems, vol. 24, no. 2, pp. 265-318, 1999.


Survey of Public Sentiment Interpretation on Twitter


Ms. Devaki V. Ingule1, Prof. Gyankamal J. Chhajed2
1 Student, M. E. Computer Engineering, VPCOE, Baramati, Savitribai Phule Pune University, India, devaki.ingule@gmail.com
2 Assistant Professor, Computer Engineering Department, VPCOE, Baramati, Savitribai Phule Pune University, India, gjchhajed@gmail.com

Abstract: Opinion mining and sentiment analysis are Natural Language Processing tasks. A large number of users share what they think on microblogging services. Twitter is an important platform for following public sentiment, which is a very challenging problem. Public sentiment analysis is essential to explore, analyze and organize users' views for better decision making. Sentiment analysis is the process of identifying positive and negative opinions, emotions and evaluations in text. It is useful for consumers who want to research the sentiment of products before they actually purchase them, and for companies that want to monitor the public sentiment of their brands. In this paper we review and analyze a number of techniques for public sentiment analysis and their classification.

Keywords: Twitter, public sentiment, emerging topic mining, event tracking, sentiment classification, supervised machine learning methods, correlation between tweets and events.
INTRODUCTION

Mining and analyzing public sentiment on Twitter data provides an easy way to expose public opinion, which helps decision making in various domains. Twitter is an important and popular platform for people's interaction; through it, a large number of users share their views and opinions. For making important decisions it is necessary to mine public opinion, and finding the reasons behind variations of sentiment is valuable. For example, a company can analyze public opinion to obtain users' feedback about its products from tweets. In general, opinion mining helps to collect information about the positive and negative aspects of a particular topic. Finally, the positive and highly scored opinions obtained about a particular product are recommended to the user. In this paper a number of different methods for the analysis of public sentiment are reviewed; opinions are classified with various approaches on text using supervised machine learning algorithms such as Maximum Entropy classification, Support Vector Machines and Naive Bayes. A combination of two state-of-the-art sentiment analysis tools is used for obtaining sentiment information.
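As a minimal illustration of the supervised approach mentioned above, the following Python sketch trains a Naive Bayes sentiment classifier with scikit-learn; the tiny labelled tweet set is invented purely for illustration, and a real system would be trained on a large annotated corpus.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_tweets = ["i love this phone", "great camera and battery",
                "worst service ever", "i hate the new update"]
train_labels = ["positive", "positive", "negative", "negative"]

vectorizer = CountVectorizer()                     # bag-of-words features
X_train = vectorizer.fit_transform(train_tweets)

classifier = MultinomialNB()                       # supervised training step
classifier.fit(X_train, train_labels)

test_tweets = ["the battery is great", "i hate this service"]
print(classifier.predict(vectorizer.transform(test_tweets)))
# expected on this toy data: ['positive' 'negative']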

1. SENTIMENT ANALYSIS
Sentiment analysis examines opinions extracted from various sources such as comments on forums, reviews about products, different policies or topics on social networking sites, and tweets.
O'Connor et al. studied the analysis of public sentiment shared on Twitter; this was the first work on microblogging services to interpret variations in sentiment.
A] Pang et al. [1] studied the existing methods for sentiment analysis in detail, i.e. supervised machine learning methods.
Advantages:
1. Machine learning methods minimize the structural risks.
2. Supervised machine learning approaches are used to predict the sentiment of documents.
Disadvantages:
1. They cannot analyze the possible reasons behind public sentiments.
2. Supervised methods demand large amounts of labeled training data, which are very expensive.
3. They may fail when training data are insufficient.


B] M. Hu and B. Liu [2] also worked on mining and summarizing customer reviews based on data mining and natural language processing methods. This paper develops several novel techniques to summarize customer reviews. It summarizes reviews in the following three ways:
1. It mines the product features on which customers have commented.
2. It identifies opinions and decides whether each one is positive or negative in each review.
3. It also summarizes the results of the opinions.
Advantages:
1. It predicts movie sales and elections, so it is easy to make decisions.
2. It provides a feature-based summary of a large number of customer reviews of a product sold online.
3. Summarizing the reviews is useful to common shoppers as well as to product manufacturers.
Disadvantages:
1. It cannot determine the strength of opinions.
2. It does not handle opinions expressed with adverbs, verbs and nouns.

C] W. Zhang et al. [3] conducted a detailed study of opinion retrieval from blogs. In this paper, they presented a three-component opinion retrieval algorithm. The first component is an information retrieval module. The second classifies documents into opinionative and non-opinionative documents and keeps the former. The third component ensures that the opinions are related to the query, and ranks the documents in a certain order.
Meng et al. collected opinions in Twitter for entities by mining hashtags to conclude the presence of entities and sentiments in each tweet.
Advantages:
1. It mines hashtags for opinions.
2. It has higher performance than a state-of-the-art opinion retrieval system.
Disadvantages:
1. It cannot handle more general writing or cross domains.
2. It cannot select detailed features.
3. It cannot implement a better NEAR operator.
4. Because not all tweets contain hashtags, it is difficult to gain sufficient coverage for events.

Sentiment classification is one of the main applications of sentiment analysis; it classifies text into a number of categories.
D] L. Jiang et al. [4] described target-dependent Twitter sentiment classification. The earlier state-of-the-art techniques adopt a target-independent strategy, which may assign sentiments to a given target that are actually irrelevant to it; this work instead solves the problem of target-dependent classification. For a given query, it classifies the sentiments of the tweets according to whether they contain positive, negative or neutral sentiments about the query. Here the query is considered as the target of the sentiments.
Advantages:
1. This technique gives high performance for target-dependent Twitter sentiment classification.
2. It incorporates syntactic features generated from words that are syntactically connected to the given target in order to decide whether the sentiment is about that target or not.
Disadvantages:
1. It does not find the relations between a target and its extended targets.
2. It is not able to explore relations between the Twitter accounts that express the sentiments.
3. Its performance decreases on very short and ambiguous tweets.


2. EVENT DETECTION AND TRACKING


Events are often the underlying reasons behind variations of sentiment related to a target. This task is completed by tracking events, performing sentiment analysis about them, tracking variations in sentiment, and finding the reasons for the changes in sentiment.
A] Leskovec et al. [5] proposed work on tracking memes, for example quoted phrases and sentences. This work offers an analysis of the global news cycle and the dynamics of information propagation between mainstream and social media. It identifies short distinctive phrases that travel relatively intact through online text as it evolves over time.
Advantages:
1. It exposes temporal relationships, such as the possibility of employing a type of two-species predator-prey model with blogs and the news media as the two interacting participants.
Disadvantages:
1. It is applicable only to representative events, such as the biggest events in the whole Twitter message stream.
2. Fine-grained events can be detected only with great difficulty.

B] B.-S. Lee and J. Weng [6] concentrate on the detection of events through analysis of the content published on Twitter. This paper proposes EDCoW (Event Detection with Clustering of Wavelet-based Signals) for detecting events.
Advantages:
1. EDCoW treats each word independently as a signal.
2. EDCoW achieves good performance.
3. It could possibly contribute to the study of the temporal evolution of events.
Disadvantages:
1. It does not incorporate the relationships among users into event detection.
2. The design of EDCoW does not account for the time lag when computing the cross correlation between a pair of words.

3. DATA VISUALIZATION
A] D. Tao et al. [7] deeply studied subspace learning algorithms and ranking. Retrieving images from large databases is a very active research field today. The content-based image retrieval (CBIR) technique is used to retrieve images that are semantically related to the user's query from a collected database of images. The SVM classifier is unstable on a small training set; SVM-based relevance feedback (SVMRF) becomes poor when the number of positive feedback samples is small, and SVM also has an overfitting problem.
Advantages:
1. It improves the performance of relevance feedback.
2. To address the instability of the SVM classifier on a small training set, it develops an asymmetric bagging-based SVM (AB-SVM).
3. For the overfitting problem it uses the random subspace method together with SVM for relevance feedback.
Disadvantages:
1. It does not use a tested tuning method and does not select the parameters of the kernel-based algorithms.
2. These works are not useful for text data, especially noisy text data.
3. Because explicit queries are not present in the task, ranking methods cannot solve the reason mining task.


4. CORRELATION BETWEEN TWEETS AND EVENTS


A] T. Sakaki et al. [8] developed novel models to map tweets into a public segmentation. They detect real-time events in Twitter, such as earthquakes, and also propose an algorithm for monitoring tweets to detect events. Each Twitter user is considered as a sensor. Kalman filtering and particle filtering are used for location estimation.
Advantages:
1. Earthquakes are detected by this system and e-mails are sent to registered users.
2. Kalman filtering and particle filtering detect events and provide location estimation.
Disadvantages:
1. It cannot detect multiple event occurrences.
2. It does not provide advanced algorithms for query expansion.
3. Only one target event is handled at a time.

B] Y. Hu et al. [9] proposed a joint statistical model, ET-LDA, that characterizes the topical effects between an event and its related Twitter feeds. This model enables topic modeling of the event and segmentation of each event or tweet.
Advantages:
1. It extracts a variety of dimensions such as sentiment and polarity.
2. It describes the temporal correlation among the overall tweets.
Disadvantages:
1. It models each tweet as a multinomial mixture of all events, which is unreasonable given the short length of tweets.

C] Chakrabarti and Punera [10] described a variant of Hidden Markov Models for event summarization from tweets. It obtains an intermediate representation for a sequence of tweets relevant to an event. In this paper, sophisticated techniques are used to summarize the relevant tweets for some highly structured and recurring events. The Hidden Markov Model gives the hidden events.
Advantages:
1. It provides benefits for existing query matching technologies.
2. It works well for one-shot events such as earthquakes.
3. It tackles the problem of constructing real-time summaries of events.
4. It learns an underlying hidden-state representation of an event.
Disadvantages:
1. It does not use the continuous timestamps present in tweets.
2. It is not possible to obtain a minimal set of tweets relevant to an event.
3. It cannot provide a summary of long-running and unpredictable events.
4. In this model, noise and background topics cannot be eliminated.

D] Shulong Tan et al. [11] propose LDA-based models for interpreting public sentiment variations on Twitter. Two models based on Latent Dirichlet Allocation (LDA) are proposed: 1. Foreground and Background LDA (FB-LDA) and 2. Reason Candidate and Background LDA (RCB-LDA). The FB-LDA model removes background topics and then extracts foreground topics to reveal possible reasons, while the RCB-LDA model ranks reason candidates with respect to their popularity within the variation period. The RCB-LDA model also finds the correlation between tweets and their events.
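To make the idea of mining reason candidates concrete, the following Python sketch applies standard LDA from scikit-learn to tweets collected from a sentiment-variation period; it is only a conceptual stand-in for FB-LDA/RCB-LDA, which additionally separate background topics and rank the candidates, and the toy tweets are invented for illustration.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = ["battery drains fast after update", "update broke the battery life",
          "love the new camera filters", "camera filters look amazing"]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(tweets)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_words = [terms[i] for i in topic.argsort()[-3:]]
    print("topic", k, ":", top_words)   # each topic is a candidate reason behind the variation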


CONCLUSION
This survey discusses various approaches to opinion mining and sentiment classification. It provides a detailed view of different applications and potential challenges of sentiment classification. Some machine learning techniques such as Naive Bayes, Maximum Entropy and Support Vector Machines, along with their pros and cons, have been discussed. Emerging topics related to the actual or genuine reasons behind the variations are very important, so emerging topics are considered as possible reasons behind the variations. To overcome the above limitations, it is necessary to interpret sentiment variation and find the reasons behind it. One such technique is Latent Dirichlet Allocation (LDA), from which two models, Foreground and Background LDA (FB-LDA) and Reason Candidate and Background LDA (RCB-LDA), are proposed to interpret sentiment variations. It is concluded that these models can mine a number of reasons behind variations of public sentiment.

REFERENCES:
[1] B. Pang and L. Lee, Opinion mining and sentiment analysis, Found. Trends Inform. Retrieval, vol. 2, no. 1-2, pp. 1-135, 2008.
[2] M. Hu and B. Liu, Mining and summarizing customer reviews,in Proc. 10th ACM SIGKDD, Washington, DC, USA, 2004.
[3] W. Zhang, C. Yu, and W. Meng, Opinion retrieval from blogs,in Proc. 16th ACM CIKM, Lisbon, Portugal, 2007.
[4] L. Jiang, M. Yu, M. Zhou, X. Liu, and T. Zhao, Target-dependent twitter sentiment classification, in Proc. 49th HLT,
Portland, OR,USA, 2011.
[5] J. Leskovec, L. Backstrom, and J. Kleinberg, Meme-tracking and the dynamics of the news cycle, in Proc. 15th ACM
SIGKDD,Paris, France, 2009.
[6] J. Weng and B.-S. Lee, Event detection in twitter, in Proc. 5thInt. AAAI Conf. Weblogs Social Media, Barcelona, Spain, 2011
[7] D. Tao, X. Tang, X. Li, and X. Wu, Asymmetric bagging and random subspace for support vector machines-based relevance feedback in image retrieval, IEEE Trans. Patt. Anal. Mach. Intell., vol. 28, no. 7, pp. 1088-1099, Jul. 2006.
[8] T. Sakaki, M. Okazaki, and Y. Matsuo, Earthquake shakes twitter users: Real-time event detection by social sensors, in Proc.
19thInt. Conf. WWW, Raleigh, NC, USA, 2010.
[9] Y. Hu, A. John, F. Wang, and D. D. Seligmann, Et-lda: Joint topic modeling for aligning events andtheir twitter feedback, in
Proc.26th AAAI Conf. Artif. Intell., Vancouver, BC, Canada, 2012.
[10] D. Chakrabarti and K. Punera, Event summarization using tweets, in Proc. 5th Int. AAAI Conf. Weblogs Social Media,
Barcelona, Spain, 2011.
[11] Shulong Tan, Yang Li, Huan Sun, Ziyu Guan, Xifeng Yan, Interpreting the Public Sentiment Variations on Twitter, IEEE
Transactions on Knowledge and Data Engineering, VOL. 26, NO.5, MAY 2014.
[12]L. Zhuang, F. Jing, X. Zhu, and L. Zhang, Movie review mining and summarization, in Proc. 15th ACM Int. Conf.
Inform.Knowl.Manage., Arlington, TX, USA, 2006.
[13]X.Wang, F.Wei, X. Liu, M. Zhou, and M. Zhang, Topic sentiment analysis in twitter: A graph-based hashtag sentiment
classification approach, in Proc. 20th ACM CIKM, Glasgow, Scotland, 2011.
[14]J. Bollen, H. Mao, and A. Pepe, Modeling public mood and emotion: Twitter sentiment and socio-economic phenomena, in
Proc.5th Int. AAAI Conf. Weblogs Social Media, Barcelona, Spain, 2011


Hollow Nano Spheres for Lithium-ion Batteries


Venkatesh Yepuri*
venkatesh.yepuri91@gmail.com
* Assistant Professor, Department of Science and Humanities, Malineni Lakshmaiah Womens Engineering College, Pulladigunta, Guntur, Andhra Pradesh, India-522017

Abstract: Technology in the twentieth century requires miniaturization of devices to nanometer sizes while their ultimate performance is dramatically enhanced. This raises many issues regarding new materials for achieving specific functionality and selectivity. Nanomaterials, a new branch of materials research, are attracting a great deal of attention because of their potential applications in areas such as electronics. There are two ways of approaching the properties of nanoscale objects: the bottom-up approach and the top-down approach. In the first, one assembles atoms and molecules into objects whose properties vary discretely with the number of constituent entities, and then increases the size of the object until this discretisation gives way in the limit to continuous variation. The relevant parameter becomes the size rather than the exact number of atoms contained in the object. In the second case, one considers the evolution of the properties of a sample as its size is whittled down from macroscopic toward nanometric lengths. It is this approach that we shall examine here, whilst mentioning zones of overlap and exclusion between the two approaches. Nanomaterials play a very important role in various applications, and one of their best applications is energy storage. Since the surface reactivity increases at nano dimensions, their structures play a very important role at these dimensions. The shape and morphology of nanoparticles can also help to improve the performance of devices. This paper mainly deals with the importance of nanoparticles, particularly hollow nanospheres, whose hollow spaces help ions to circulate and act as a transport medium for the ions. Various methods are adopted for the synthesis of hollow nanospheres; the best among these are the hydrothermal sol-gel method, the soft micelle template method and the hard micelle template method. One of these, a soft micelle template method, is reported in this paper.

Keywords: Nanotechnology, Nanomaterials, Hollow Nanospheres, Soft micelle template method.


1. Introduction
Nanomaterials are classified into nanostructured materials and nanophase/nanoparticle materials. The former refer to condensed bulk materials that are made of grains with grain size in the nanometer range, while the latter are usually dispersive nanoparticles. The nanometer range here covers a wide span, which can be as large as 100-200 nm. To distinguish nanomaterials from the bulk, it is vitally important to identify the unique properties of nanomaterials and their prospective impact in the field of science and technology. Nanomaterials play a very important role in various applications, and one of their best applications is energy storage. Since the surface reactivity increases at nano dimensions, their structures play a very important role at these dimensions. The shape and morphology of nanoparticles can also help to improve the performance of devices. This paper mainly deals with the importance of nanoparticles, particularly hollow nanospheres, whose hollow spaces help ions to circulate and act as a transport medium for the ions. Hollow colloidal particles are exceptionally promising materials in diverse fields of technology, including catalysis, drug delivery, photonics, biotechnology, and electrochemical cells [1-7]. This promise has motivated intense research efforts by groups worldwide, seeking to develop both specific and general strategies for synthesizing hollow nanostructures [8-13] and for functionalizing their interior and exterior spaces with desirable chemistries [13-15].
Various methods are adopted for the synthesis of hollow nanospheres; the best among these are the hydrothermal sol-gel method, the soft micelle template method and the hard micelle template method. One soft micelle template method is reported in this paper. A common difficulty in template-based syntheses arises from the well-known challenge of creating uniform coatings of the desired materials (or their precursors) on the corresponding templates. In many cases, incompatibility between the substrate and coating material requires prior surface modification/functionalization before the coating step can be performed. Preparation of hollow particles with nonspherical shapes introduces additional challenges. These range from the difficulty of forming a uniform coating around surfaces with large variation in curvature to the paucity of nonspherical templates available for the synthesis [15-17].

Generally, it is believed that when material sizes are reduced down to the nanometer scale, materials usually exhibit significantly enhanced functionalities in their properties. For instance, hollow nanoparticles with spherical morphology are unique candidates with high mechanical strength, surface permeability and high surface area. Hollow nanospheres with a thin shell domain not only enhance the fast lithium insertion/deinsertion kinetics, but their hollow void can also serve as an effective buffering medium [18]. Recently, we have reported the fabrication of TiO2, La2O3 and V2O5 hollow nanospheres using anionic polymeric micelles with core-shell-corona architecture [19-21]. Herein, we report the fabrication of ZnO hollow nanospheres using anionic micelles through electrostatic interaction of Zn2+ ions with the anionic micelles, followed by precipitation under mild alkaline conditions. The ABC triblock copolymer poly(styrene-b-acrylic acid-b-ethylene oxide) (PS-PAA-PEO) with anionic COO- groups was used as the template, and Zn(OAc)2 was used as the metal source. The hollow nanospheres were characterized by TEM, XRD, FTIR, SEM/EDX, thermal analysis with TG/DTA, and nitrogen sorption analyses. The electrochemical characteristics of the ZnO hollow nanospheres were further investigated as anode materials for rechargeable lithium-ion batteries.

2. Experimental
The fabrication of hollow zinc oxide nanospheres with micelles of poly(styrene-b-acrylic acid-b-ethylene oxide) (PS-b-PAA-b-PEO), using zinc acetate as the metal precursor, was carried out as follows. A polymeric micelle solution was prepared by dissolving the required amount of the above polymer in distilled water and then transferring it to a volumetric flask to obtain a stock solution with a concentration of 0.5 g L-1. The micelle solution was adjusted to pH 9 using dilute NaOH solution. On addition of the zinc salt solution, the clear micelle solution slowly turned turbid due to the formation of Zn(OH)2 in the shell domain of the micelles, and the contents were gently stirred overnight at room temperature using a magnetic stirrer. The synthesis protocol was tuned by changing the Zn2+/PAA ratio between 5 and 10 to obtain monodispersed hollow nanospheres. The composite particles were repeatedly washed with distilled water and ethanol and dried at 60 °C to remove moisture. In order to remove the polymeric template as well as to crystallize the ZnO hollow particles, the composite particles were calcined at 500 °C for 4 h in a muffle furnace under air.
The hydrodynamic diameter (Dh) of the template micelles of PS-PAA-PEO was measured with an Otsuka ELS-800 instrument. Wide-angle X-ray diffraction (WXRD) patterns were recorded using Cu Kα radiation with a Shimadzu XRD-7000 diffractometer. The textural properties such as BET surface area and mesopore size distribution were obtained using nitrogen adsorption/desorption isotherms with a BELSORB instrument. The morphology of the samples was observed with JEOL TEM-1210 (acceleration voltage: 80 kV) and JEOL TEM-2100 (acceleration voltage: 200 kV) electron microscopes. FTIR spectra were recorded on a Jasco FTIR-7300 spectrometer. TG and DTA analyses were carried out using a MAC Science TG-DTA 2100. Energy-dispersive X-ray analysis was carried out with a Hitachi S-3000N.
For lithium insertion studies, the zinc oxide hollow nanospheres (5 mg) were mixed mechanically with teflonized acetylene black (TAB-2, 3 mg), and the mixture was then pressed onto a stainless steel mesh current collector under a pressure of 500 kg/cm2 and dried at 160 °C for 4 hours under vacuum. The electrochemical characterizations were carried out using CR2032 coin-type cells with lithium as the anode. The electrolyte used was 1 M LiPF6 in EC:DMC (1:2 volume ratio, Ube Chemicals Co. Ltd.). The coin cell assembly was performed in a glove box filled with argon (dew point lower than -80 °C). The galvanostatic charge-discharge tests of the coin cell were performed at a constant current density of 0.5 mA cm-2. Cyclic voltammograms (CV) were recorded with a Hokuto Denko HSV-100 in a beaker-type cell containing the zinc oxide hollow nanospheres as the working electrode and lithium foils as the counter and reference electrodes. The charge-discharge performance was measured in the voltage range of 3.0-0.05 V.

3. Results and Discussion


The hydrodynamic diameter Dh (67 nm) and the zeta potential (-56 mV) of the micelles (pH 9) were obtained from dynamic light scattering (DLS) and electrophoretic light scattering (ELS) experiments, respectively. Nearly monodispersed spherical micelles with an average diameter of ca. 46 ± 1 nm, estimated from the TEM image (not shown), were used as the template for fabricating hollow nanospheres. The difference in micelle particle size between DLS and TEM is due to the fact that the latter accounts for only the core-shell part and excludes the corona part of the micelle [22-25]. The TG/DTA analyses of the ZnO/polymer composite particles revealed that all the organic contents were decomposed between 200 and 420 °C (not shown) and the total organic content was found to be 16.2%. Thus the calcination step is essential to create a hollow void space by removing the core domain of the polymeric micelles. The absence of C-H, C=C, and COOH bond stretching vibrations of the phenyl groups of the polymer backbone in the FTIR spectrum of the calcined sample (not shown) is also consistent with the thermal analyses [26]. The phase purity and crystallinity of the zinc oxide
hollow nanospheres was investigated by powder X-ray diffraction (XRD). Comparison of wide-angle X-ray diffraction
(WXRD) patterns of zinc oxide nanospheres (after calcination) and composite particles (before calcination) (Figure 1)
suggested highly pure zincite crystalline phase. All diffraction peaks can be indexed as the hexagonal phase of ZnO with the
lattice constants a = 0.325 nm and c = 0.521 nm, which is in good agreement with the JCPDS data, No. 36-1451.

Fig. 1: Powder X-ray diffraction patterns (intensity, a.u., vs. 2θ/degree) of: (A) ZnO composites and (B) ZnO hollow nanospheres.

Fig. 2: TEM image of ZnO hollow nanospheres (Zn2+/PAA = 8); scale bar, 200 nm.


Figure 2 exhibits the TEM image of the material with a Zn2+/PAA mole ratio of 8. The average particle size and void space diameter were found to be 32 ± 2 and 18 ± 1 nm, respectively. The wall thickness estimated by TEM (Figure 2) was approximately 7 ± 1 nm. However, at lower Zn2+/PAA ratios (Zn2+/PAA = 3 and 5), the degree of aggregation of the nanospheres is relatively low compared to Zn2+/PAA = 10, due to deposition of zinc precursor species outside the micelle domain. Furthermore, the PS block core size estimated from the TEM observation was found to be 23 ± 1 nm; after calcination, however, the void space diameter was approximately 18 ± 1 nm due to shrinkage. In addition, for hollow particles with a high precursor concentration (Zn2+/PAA = 10), the shell thickness increased marginally to 8 ± 1 nm. The pore size distribution curves based on the BJH model showed disordered mesopores. The total pore volume and BET surface area were found to be 0.59 cm3 g-1 and 119 m2 g-1, respectively. High-resolution transmission electron microscopy (HRTEM, not shown) allows the lattice fringes of the crystals to be resolved and correlated to the (110) planes of the zincite lattice structure. Furthermore, the energy-dispersive X-ray spectroscopy (EDX) spectrum (not shown) shows strong peaks for zinc and oxygen, confirming the highly pure zinc oxide phase of the hollow nanospheres.
Figure 3A exhibits the cyclic voltammograms (CV) of the ZnO hollow nanospheres measured between 0 and 3 V at a scan rate of 3 mV/min. In the first cathodic scan, there is a major strong peak at 0.23 V and a minor peak at 0.57 V, which are related to the electrochemical process of the ZnO material.

Fig. 3: (A) CV curves of ZnO hollow nanospheres in 1.0 M LiPF6 (EC/DMC = 1/2 (v/v)) at a 3 mV/min sweep rate and (B) charge/discharge profiles of ZnO hollow nanospheres at 0.25 C rate in the voltage region of 0.005-3.0 V.

This process contains
the reduction of ZnO into Zn, the formation of the Li-Zn alloy, and the growth of the gel-like solid electrolyte interphase (SEI) layer. The potentials of these reactions are very close, so the curve shows one major broad peak and a minor shoulder. In the first anodic scan, the four peaks located at 0.38 (broad), 0.51, 0.68, and 1.37 V are attributed to the multi-step dealloying process of the Li-Zn alloy [27]. The second cathodic sweep differs from the first one; the major peak at 0.23 V vanishes and new peaks at 0.4 and 0.72 V (broad) appear in the second cycle. The anodic peaks of the second cycle, however, are more similar to those of the first cycle, in accordance with the literature [28].
Figure 3B shows the discharge-charge curves in the voltage window of 0.005-3.0 V (vs. Li) at a rate of 0.25 C up to 50 cycles; for clarity, only selected cycles are shown in the voltage versus capacity profiles. It is worth noting that the plateaus on the voltage profiles coincide with the cathodic and anodic peaks in the CV curves (Fig. 3A). A very obvious long plateau located at about 0.5 V appears in the first discharge curve. However, in the first charge curve the plateaus are not so obvious; four slopes can be seen, located at 0.28, 0.45, 1.33, and 2.5 V, respectively. After the first cycle, the slopes in the discharge curves are around 0.8 and 0.4 V, and the curves are similar in shape, indicating that the reactions become more reversible. The ZnO hollow nanospheres deliver a first discharge capacity of 1304 mAh g-1 and a first charge capacity of 730 mAh g-1. It is important to mention that the very high discharge capacity observed in the first cycle must originate from electrolyte decomposition in the low-potential region and subsequent formation of the solid electrolyte interphase (SEI) on the hollow nanospheres [29]. The discharge capacities of the ZnO
hollow particle based electrode (Fig. 4) in the 1st, 2nd, 5th, 25th, and 50th cycles are 1304, 804, 494, 252 and 249 mAh g-1, respectively. The corresponding charge capacity values are 730, 554, 434, 251, and 248 mAh g-1 for the 1st, 2nd, 5th, 25th, and 50th cycles, respectively. The coulombic efficiency of the first cycle was found to be 62.5%. However, dense ZnO powder exhibited significantly lower charge-discharge capacities (not shown) compared to the ZnO hollow nanospheres under similar experimental conditions. For instance, the discharge capacities of the dense ZnO particles are 471, 352, 214, 151, and 144 mAh g-1 for the 1st, 2nd, 5th, 25th, and 50th cycles, whereas the corresponding charge capacities were found to be 275, 210, 162, 141, and 139 mAh g-1 for the 1st, 2nd, 5th, 25th, and 50th cycles, respectively.
Fig. 4A shows the capacity versus cycle number plot for the ZnO hollow nanospheres. As can be seen, both the discharge and charge capacities decrease up to 10 cycles, and thereafter the capacity stabilizes. After 100 cycles with 100% depth of discharge and charge at a rate of 0.1 C, the electrode capacity decreased to 246 mAh g-1. After about 10 cycles, the coulombic efficiency, the ratio of discharge to charge capacity, is nearly 100%. However, the cycle performance of the dense ZnO particles was poor, as their discharge capacities dropped from 1090 to 118 mAh g-1 within a few cycles of repeated charge/discharge (not shown). Fig. 4B shows the rate performance of the ZnO hollow nanospheres. At a low rate of 0.1 C, the hollow particles show a discharge capacity of about 1414 mAh g-1. However, the discharge capacities gradually decrease to 306, 195, and 116 mAh g-1 at 1 C, 5 C and 10 C, respectively. The ZnO hollow particle based electrodes almost regain their original high capacities when the rate is lowered back to 0.1 C after being exposed to high current loads (10 C), which indicates the high stability of the hollow nanosphere based electrodes. The improved electrochemical performance is attributed to the unique hollow spherical morphology. More importantly, the void space not only effectively buffers against charge storage and local volume change but also provides better electrical contact and a shorter diffusion path length, providing better rate capability.

Fig. 4: (A) Charge-discharge cycling performance of ZnO hollow nanospheres at 0.1 C rate in the voltage region of 0.005-3.0 V vs. Li/Li+ and (B) rate performance in the voltage region of 0.005-3.0 V (capacity, mAh g-1, vs. cycle number).

4. Conclusions
Core-shell-corona micelles obtained from poly(styrene-b-acrylic acid-b-ethylene oxide) successfully produced ZnO hollow nanospheres of size about 32 ± 2 nm. TEM, SEM/EDX, and XRD analyses confirmed the formation of the hollow nanospheres and the purity and crystallinity of the ZnO nanoparticles. The ZnO hollow nanospheres exhibited good cycling performance even after 100 cycles of repeated charge/discharge, and the discharge capacity at the 100th cycle was found to be 246 mAh g-1. The ZnO hollow nanosphere based electrode exhibited higher rate capability and cycling performance than the dense ZnO particles. The void space not only acts as a buffer medium against charge storage and local volume change but also provides better electrical contact and a shorter diffusion path length, and therefore provides better rate capability.


REFERENCES:
[1] F. Caruso, R. A. Caruso, H. Mo hwald, Science 1998, 282, 111
[2] Y. D. Yin, R. M. Rioux, C. K. Erdonmez, S. Hughes, G. A. Somorijai, A. P. Alivisatos, Science 2004, 304, 711.
[3] Y. Zhu, J. Shi, W. Shen, X. Dong, J. Feng, M. Ruan, Y. Li, Angew. Chem. Int. Ed. 2005, 44, 5083.
[4] A. N. Zelikin, O. Li, F. Caruso, Angew. Chem. Int. Ed. 2006, 45, 7743.
[5] W. H. Suh, A. R. Jang, Y. H. Suh, K. S. Suslick, Adv. Mater. 2006, 18, 1832.
[6] a) H. P. Liang, H. M. Zhang, J. S. Hu, Y. G. Guo, L. J. Wan, C. L. Bai, Angew. Chem. Int. Ed. 2004, 43, 1540. b) S. Ikeda, S.
Ishino, T. Harada, N. Okamoto, T. Sakata, H. Mori, S. Kuwabata, T. Torimoto, M. Matsumura, Angew. Chem. Int. Ed. 2006, 45,
7063. c) S. W. Kim, M. Kim, W. Y. Lee, T. Hyeon, J. Am. Chem. Soc. 2002, 124, 7642.
[7] X. W. Lou, Y. Wong, C. Yuan, J. Y. Lee, L. A. Archer, Adv. Mater. 2006, 18, 2325.
[8] R. K. Rana, V. S. Murthy, J. Yu, M. S. Wong, Adv. Mater. 2005, 17,
1145.
[9] J. Yu, H. Guo, S. A. Davis, S. Mann, Adv. Funct. Mater. 2006, 16, 2035.
[10] a) Q. Wang, Y. Liu, H. Yan, Chem. Commun. 2007, 2339. b) Y. X. Hu, J. P. Ge, Y. G. Sun, T. R. Zhang, Y. D. Yin, Nano Lett.
2007, 7, 1832. c) C. I. Zoldesi, A. Imhof, Adv. Mater. 2005, 17, 924.
[11] a) Q. R. Zhao, Y. Gao, X. Bai, C. Z. Wu, Y. Xie, Eur. J. Inorg. Chem. 2006, 1643. b) H. L. Xu, W. Z. Wang, Angew. Chem. Int.
Ed. 2007, 46, 1489.
[12] H. G. Yang, H. C. Zeng, Angew. Chem. Int. Ed. 2004, 43, 5206
[13] X. W. Lou, C. Yuan, Q. Zhang, L. A. Archer, Angew. Chem. Int. Ed. 2006, 45, 3825.
[14] W. S. Choi, H. Y. Koo, D. Y. Kim, Adv. Mater. 2007, 19, 451.
[15] X. W. Lou, C. Yuan, Q. Zhang, L. A. Archer, Angew. Chem. Int. Ed. 2006, 45, 3825.
[16] F. Caruso, Adv. Mater. 2001, 13, 11
[17] Y. Lu, Y. Yin, Y. N. Xia, Adv. Mater. 2001, 13, 271.
[18] M. Sasidharan, K. Nakashima, N. Gunawardhana, T. Yokoi, M. Inoue, S. Yusa, M. Yoshio, T. Tatsumi, Chem. Commun., 2011,
47, 6921
[19] M. Sasidharan, N. Gunawardhana, M. Inoue, S. Yusa, M. Yoshio, K. Nakashima, Chem. Commun. 2011, 48,3200.
[20] M. Sasidharan, N. Gunawardhana, H. N. Luitel, T. Yokoi, M. Inoue, S. Yusa, T. Watari, M. Yoshio, T. Tatsumi, K. Nakashima,
J. Colloid and Interface Science, 2011, 370, 51.
[21] M. Sasidharan, N. Gunawardhana, M. Yoshio, K. Nakashima, J Electrochemical Soc. 2012, 159, A618.
[22] M. Sasidharan, N. Gunawardhana, M. Inoue, S. Yusa, M. Yoshio, K. Nakashima, Chem. Commun. 2011, 48, 3200.
[23] M. Sasidharan, N. Gunawardhana, H. N. Luitel, T. Yokoi, M. Inoue, S. Yusa, T. Watari, M. Yoshio, T. Tatsumi,
[24] K. Nakashima, J. Colloid and Interface Science, 2011, 370, 51.
[25] M. Sasidharan, N. Gunawardhana, M. Yoshio, K. Nakashima, J Electrochemical Soc. 2012, 159, A618.
[26] M. Sasidharan, K. Nakashima, N. Gunawardhana, T. Yokoi, M. Ito, M. Inoue, S. Yusa, M. Yoshio, T. Tatsumi, NanoScale,
2011, 3, 4768.
[27]F. Belliard, J.T.S. Irvine, J. Power Sources, 2001, 97-98, 219.
[28]X.Z. Huang, X.H. Xia, Y.F. Yuan, F. Zhou, Electrochimica Acta, 2011, 56, 4960.
[29] W. B. Xing, J. R. Dahn, J. Electrochem. Soc. 1997, 144, 1195

Design of efficient Linear Feedback Shift Register for BCH Encoder


S.Aiswarya, S.Aravinth, S.Uma
M.E Student, Embedded System and Technologies, Velalar College of Engineering and Technology,
Erode, Tamilnadu. E-Mail : aishu1309@gmail.com

Abstract: The previously designed sequential circuit was a Look-Ahead Transformation based LFSR, whose hardware complexity may limit its use in many applications. The design of an efficient LFSR for a BCH encoder using TePLAT (Term Preserving Look-Ahead Transformation) overcomes this limitation by minimizing the iteration bound and the hardware complexity over a wide range of applications. A TePLAT-transformed LFSR formulation behaves in the same way as the original while achieving much higher throughput than both a native implementation and a Look-Ahead Transformation based one.

Keywords: Linear Feedback Shift Register (LFSR), Term Preserving Look-Ahead Transformation (TePLAT), iteration bound, loop unrolling, look-ahead transformation (LAT), Longest Path Matrix Algorithm (LPM), Minimum Cycle Mean (MCM), Bit Manipulation Unit (BMU), Code Composer Studio (CCS).

I INTRODUCTION
Linear Feedback Shift Registers are used to generate test vectors. An LFSR uses feedback and modifies itself on every rising edge of the clock. LFSR algorithms have found wide application in wireless communication, including scrambling, error correction coding, encryption, testing, and random number generation. An LFSR is specified by its generator polynomial over the Galois field GF(2). Some generator polynomials used in modern wireless communication applications are summarized in Tables 1 and 2 [7], [8], respectively. Traditionally, LFSRs are implemented in hardware, but due to hardware complexity they can also be implemented in software-defined radios [1]. Due to the mismatch of data types between the bit-serial operations of the LFSR and the word-based data path, it has been reported that 33 percent of the CPU cycles for implementing an OFDM transmitter are dedicated to the scrambler operations. A software implementation of the LFSR algorithm is also too slow to support real-time implementation of the 802.11 standard. This work focuses on the efficient implementation of an LFSR for a BCH encoder. The first approach aims at increasing execution speed at the expense of additional special purpose hardware [2]; these hardware units may interface with the host microprocessor via instruction set extensions or interrupts. The second approach seeks to reformulate the LFSR algorithm so that the inherent bit-level parallelism afforded by a word-based microarchitecture may be fully exploited [4]. Since a word may be regarded as a vector of binary bits, traditional vectorized compilation techniques such as loop unrolling [3] may be applied. The iteration bound is the inverse of the theoretical maximum throughput rate an algorithm may achieve. Many LFSR polynomials, such as those listed in Tables 1 and 2, have rather large loop bounds and hence cannot take full advantage of the benefit of unrolling. Fortunately, the look-ahead transformation (LAT) [3] promises to resolve this difficulty. However, LAT comes with a price: it often introduces additional operations. For the LFSR, this implies that the LAT-transformed LFSR formulation may contain many more terms [5] than the original LFSR. This overhead may offset the potential benefit of applying LAT.
The main contribution of this work is on exploiting the low overhead property of term-preserving look-ahead transformation
(TePLAT) which guarantees the number of terms of the transformed generator polynomial will remain unchanged [8]. This term
preserving property makes it feasible to apply TePLAT aggressively to achieve maximum throughput rate with respect to a particular
micro architecture discussed in the context of parallel recurrent equations. This work provides critical implementation details such as
initial conditions, experimental outcomes as well as applications to specific SDR platforms. The speedup factor varies from 1.5 to 18
depending on the structure of the generator polynomials.

II BACKGROUND AND DEFINITIONS


A. Linear Feedback Shift Register

An LFSR is a shift register whose input bit is a linear function of its previous state. An XOR gate is used to provide feedback to the register, which shifts bits from left to right. The maximal sequence contains all possible states except the "0000" state. Normally an XOR gate is preferred for the linear function of single bits. Thus, an LFSR is often a shift register whose input bit is driven by the exclusive-or (XOR) of some bits of the overall shift register value.
Table 1
Common LFSR Polynomials in [7] and [8]

LFSR Index    Generator Polynomial
1             1+x+x^3
2             1+x+x^4
3             1+x^2+x^5
4             1+x+x^6
5             1+x^4+x^5+x^6+x^7
6             1+x+x^5+x^6+x^8
7             1+x^4+x^9
8             1+x^3+x^10
9             1+x^3+x^4+x^7+x^12
10            1+x^3+x^16
11            1+x^5+x^12+x^16
12            1+x^5+x^23
13            1+x^2+x^3+x^7+x^32
14            1+x+x^2+x^4+x^5+x^7+x^8+x^10+x^12+x^16+x^22+x^23+x^25+x^3
15            1+x^10+x^33
16            1+x^7+x^42
17            1+x^35+x^42
18            1+x^9+x^49
19            1+x^49+x^52
20            1+x^35+x^74
21            1+x^18+x^29+x^42+x^57+x^67+x^80

An LFSR can be specified by its generator polynomial over the Galois field GF(2),
P(x) = 1 + Σ_{k=1}^{K} p_k x^k    (1)
where the coefficients p_k ∈ {0, 1} and K is the order of P(x). Each generator polynomial uniquely characterizes a linear difference equation in GF(2),
y[n] + Σ_{k=1}^{K} p_k · y[n-k] = 0    (2)
where y[n] ∈ {0, 1}, + is the logical XOR (exclusive-or) operator, and · (multiplication) is the logical AND operator.
The beginning value of the LFSR is called the seed. Since the operation of the register is deterministic, the stream of values produced by the register is completely determined by its present (or previous) state. Likewise, since the register has a fixed number of possible states [4], it must eventually enter a repeating cycle. However, an LFSR with a well-chosen feedback function can generate a sequence of bits which appears random and which has a very long cycle. The previous state may be obtained by right-shifting the current state by 1 bit and filling in y[n - K - 1]. Substituting n by n - 1 into (2) and XORing y[n - K - 1] on both sides, one has (noting that p_0 = p_K = 1)
y[n-K-1] = Σ_{k=0}^{K-1} p_k · y[n-k-1]    (3)
The series of numbers created by an LFSR, or by its exclusive-NOR counterpart, can be treated as binary, just like Gray code or the natural binary code. The arrangement of taps for feedback in an LFSR can be expressed in finite field arithmetic as a polynomial mod 2, whose coefficients must be 1's or 0's; this is known as the reciprocal characteristic polynomial [11].
In the Galois configuration, when the system is clocked, the bits that are not tapped are shifted one position to the right unchanged. The tapped bits, before being stored in the next position, are XORed with the output bit. The new output bit is the next input bit. When the output bit is zero, all the bits in the register shift to the right unchanged and the input bit becomes zero. When the output bit is one, the bits in the tap positions all flip (if they are 0 they become 1, and vice versa), the entire register is shifted to the right, and the input bit becomes 1.
To generate the same output stream, the tap order must be the counterpart (see above) of the order for the conventional LFSR; otherwise the stream will be produced in reverse. The internal states of the two LFSRs need not be the same. The Fibonacci register and the Galois register produce the same output stream as in the initial section.
The output of an LFSR is deterministic: the next state can be predicted whenever the current state and the arrangement of the XOR gates in the LFSR are known, which is not possible for truly random events. It is also much easier to calculate the next state with minimal-length LFSRs, as there are only a limited number of them for each length. The output stream is reversible; an LFSR with mirrored taps will cycle through the output sequence in reverse order.
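The Galois configuration described above can be sketched in a few lines of Python. The register is held as an integer with bit 0 as the output bit; the seed and the tap mask used here are chosen purely for illustration and are not taken from this paper.

def galois_step(state, taps):
    out = state & 1            # output bit
    state >>= 1                # untapped bits shift right one position unchanged
    if out:                    # when the output bit is 1 ...
        state ^= taps          # ... the bits in the tap positions flip
    return state, out

state = 0xACE1                 # arbitrary non-zero seed for a 16-bit register
taps = 0xB400                  # illustrative tap mask
bits = []
for _ in range(8):
    state, bit = galois_step(state, taps)
    bits.append(bit)
print(bits)                    # first 8 output bits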
Table 2
LFSR as Scrambler in SDR

Wireless Standard    Generator Polynomial
Wi-Fi                1+x^4+x^7
WiMAX                1+x^14+x^15
LTE (Gold Code)      1+x^28
                     1+x^28+x^29+x^30+x^31

A block diagram of LFSR-10 (generator polynomial 1+x^3+x^16) is depicted in Fig. 1. Applying (3) with K = 16, one has
y[n-17] = y[n-1] ⊕ y[n-4]    (4)
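As a concrete illustration of recurrence (2) for LFSR-10 (generator polynomial 1+x^3+x^16, K = 16), the following minimal Python sketch generates the output stream bit by bit; the all-ones seed and the number of output bits are chosen only for illustration.

K = 16
y = [1] * K                          # seed: any non-zero initial state
for n in range(K, K + 20):
    y.append(y[n - 3] ^ y[n - 16])   # y[n] = y[n-3] ^ y[n-16] over GF(2)
print(y[K:])                         # first 20 output bits after the seed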
B. Iteration Bound
Recursive and adaptive digital filters belong to the class of DSP algorithms which contain feedback loops, and these loops impose an inherent lower bound on the achievable iteration or sample period. The iteration bound is the maximum of the loop bounds and is a fundamental limit for recursive algorithms. The loop bound is the loop computation time divided by the number of delay elements in the loop.
The iteration bound is the inverse of the theoretical maximum throughput rate an algorithm may achieve [9]. If there is no delay element in a loop, the iteration bound is infinite. The clock period is lower bounded by the critical path computation time; the critical path of a DFG is the path with the longest computation time among all paths that contain zero delays.
Data dependence imposes an upper bound on how many times a loop can be unrolled to exploit the inherent inter-operation parallelism. Theoretically, this kind of inter-operation dependence relation is characterized by the notion of the iteration bound [7]. Roughly, the iteration bound equals the inverse of the number of iterations that can be unrolled into the same iteration. To increase throughput, the iteration bound must be minimized.
When the DFG is recursive, the iteration bound is the fundamental limit on the minimum sample period of a hardware implementation of the DSP program. Two algorithms to compute the iteration bound are the Longest Path Matrix Algorithm (LPM) and the Minimum Cycle Mean (MCM) algorithm [10]. In the Longest Path Matrix Algorithm, a series of matrices is constructed and the iteration bound is found by examining the diagonal elements of the matrices. In the Minimum Cycle Mean method, an arbitrary reference node s is chosen in Gd; the initial vector f(0) is formed by setting f(0)(s) = 0 and setting the remaining entries of f(0) to infinity, and the iteration bound is then computed from the minimum average edge length over the loops.
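The definition above can be made concrete with a small Python sketch that computes the iteration bound as the maximum loop bound, with each loop summarized by its total computation time and its number of delay elements; the second set of example loops is invented for illustration, while the first corresponds to the LFSR-10 loop with one XOR operation and three delays.

def iteration_bound(loops):
    # loops: list of (computation_time, number_of_delays) pairs, one per loop
    return max(t / w for t, w in loops)          # loop bound = time / delays

print(iteration_bound([(1, 3)]))                 # LFSR-10 style loop -> 0.333...
print(iteration_bound([(4, 2), (5, 3), (5, 4)])) # max(2.0, 1.67, 1.25) -> 2.0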

C. Loop Unrolling
Loop unrolling (loop unfolding) is a well-known compiler optimization technique [3]. It consolidates the loop bodies of consecutive iterations into a single iteration to expose inherent parallelism. For example, the LFSR-10 depicted in Fig. 1 can be represented as a loop that repeatedly evaluates y[n] = y[n-3] ^ y[n-16] (^: bitwise XOR).

However, loop unrolling cannot achieve an arbitrary level of parallelism. Using LFSR-10 as an example, if one wants to unroll the loop three times instead of two, the following equation will need to be added to the unrolled loop body:
y[n+3] = y[n] ^ y[n-13]    (5)
However, this statement cannot be executed in the same iteration as the statement
y[n] = y[n-3] ^ y[n-16]    (6)
since y[n] needs to be evaluated first before it can be used to evaluate y[n+3]. This data dependence imposes an upper bound on how many times a loop can be unrolled to exploit the inherent inter-operation parallelism.
The iteration bound equals the inverse of the number of iterations that can be unrolled into the same iteration. To increase throughput, the iteration bound must be minimized [12]. In Fig. 2, LFSR-10 has an iteration bound of 1/3; hence, three successive iterations can be unrolled into the same iteration. Any path in the original DFG containing J or more delays leads to J paths with 1 or more delays in each path, and therefore cannot create a critical path in the J-unfolded DFG. Unfolding a DFG with iteration bound T results in a J-unfolded DFG with iteration bound JT. J-unfolding of a loop with wl delays in the original DFG leads to gcd(wl, J) loops in the unfolded DFG, and each of these gcd(wl, J) loops contains wl/gcd(wl, J) delays and J/gcd(wl, J) copies of each node that appears in the loop.
Unfolding preserves the number of delays in a DFG. This can be stated as follows:
⌊w/J⌋ + ⌊(w+1)/J⌋ + ... + ⌊(w+J-1)/J⌋ = w    (7)

Fig. 2 A loop-unrolled version of LFSR-10
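A minimal Python sketch of the three-times loop-unrolled LFSR-10 of Fig. 2 is given below. Because the iteration bound is 1/3, the three statements in the unrolled body have no mutual data dependence and can be evaluated in the same iteration; the seed and output length are illustrative only.

K = 16
y = [1] * K                          # seed
for n in range(K, K + 18, 3):        # three output bits per unrolled iteration
    y.append(y[n - 3] ^ y[n - 16])   # y[n]
    y.append(y[n - 2] ^ y[n - 15])   # y[n+1]
    y.append(y[n - 1] ^ y[n - 14])   # y[n+2]
print(y[K:])                         # first 18 output bits after the seed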


D. Look Ahead Transformation
The look-ahead transformation is a kind of block transformation and has the properties of block processing. In the look-ahead transformation, the linear recursion is first iterated a few times to create additional concurrency. The iteration bound of this recursion is the same as that of the original version, because the amount of computation and the number of logical delays inside the recursive loop have both doubled. The look-ahead approach is also applied to decoder algorithms of a sequential nature, where the look-ahead technique can enhance their parallel processing or block processing implementations.
Higher-order IIR digital filters can be pipelined by using clustered look-ahead or scattered look-ahead techniques (for first-order IIR filters, these two look-ahead techniques reduce to the same form). With clustered look-ahead, pipelined realizations require a linear complexity in the number of loop pipeline stages and are not always guaranteed to be stable. Scattered look-ahead can be used to derive stable pipelined IIR filters. In this case the poles of the system approach the origin, which implies that the system is more stable and limit cycle effects are reduced; the data dependency relation will also be reduced. A generalized look-ahead transform in GF(2) is
Q(x) = P(x)G(x) = P(x) + Σ_{k=1}^{M} g_k x^k P(x)    (8)
where G(x) = 1 + Σ_{k=1}^{M} g_k x^k with g_k ∈ {0, 1}.

III. TERM PRESERVING LOOK AHEAD TRANSFORMATION


The TePLAT of an LFSR with generator polynomial P(x) is an LFSR with generator polynomial Q(x) = [P(x)]^2. Although Q(x) is a polynomial of twice the order of P(x), both of them have the same number of terms. Since this property always holds for powers of two but may not necessarily hold for other exponents, we only consider powers of two in TePLAT.

If the iteration bound of an LFSR is T, then the iteration bound after one TePLAT is T/2. While TePLAT promises full exploitation of bit-level parallelism, one should not lose sight of the fact that the purpose of an LFSR implementation is to generate the maximal-length pseudo-random bit stream.
Each time TePLAT is applied, the order of the transformed generator polynomial doubles. As such, twice as many bits of on-chip storage space will be needed to store the increased number of states. To fully exploit the inherent bit-level parallelism, auxiliary data format conversion operations such as bit-vector packing and unpacking and word-boundary alignment are also required.
Fig. 3 TePLAT, loop unrolled, vectorized LFSR

If the number of states after TePLAT transformation approaches the length of the original maximal length sequence, then it may be
more cost effective to simply cache the entire maximal length sequence and save all the computation. Assume that the TePLAT is
repeatedly applied m times, termed as mth-level TePLAT thereafter, and then the number of bits to store the states will be 2^m*K bits.
After TePLAT parallelize the LFSR to the full length of word, the number of consumed registers doubles each time we apply the
technique.
Some LFSRs, such as the one in the Grain stream cipher, are designed for efficient implementation. They are generally very long and their iteration bounds
are small, so in that case the parallelism can be achieved by simply applying loop unrolling. In terms of cost, since LAT and TePLAT both result in high-order generator polynomials, more registers will be required to hold the
additional bits. Hence, the memory and register footprint of executing the LFSR algorithm should be treated as a cost function. Up to that point the
improvement does not induce any hardware overhead; however, once TePLAT has parallelized the LFSR to the full word length, the
number of consumed registers doubles with each further application.
The conventional LFSRs are similar to those obtained by applying the loop-unrolling (LU) technique. TePLAT may also be applied, with careful trade-offs
between area and throughput, to hardware-based LFSR implementations. Assume that TePLAT is repeatedly applied m times
(mth-level TePLAT); then the number of bits needed to store the states will be 2^m*K bits. Using the above argument, one
must limit m such that 2^m*K <= 2^K; after simplification, one has m <= K - log2(K).
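To make the loop-unrolling alternative concrete, the following Python sketch (hypothetical, not the authors' implementation; the feedback polynomial x^10 + x^7 + 1 is only an example) produces J output bits of a Fibonacci LFSR per call instead of one:

# Fibonacci LFSR with feedback polynomial x^10 + x^7 + 1 (taps at bits 10 and 7).
def lfsr_step(state: int) -> tuple[int, int]:
    """One conventional iteration: returns (output bit, next state)."""
    out = state & 1
    fb = ((state >> 9) ^ (state >> 6)) & 1      # XOR of the tap bits
    return out, (state >> 1) | (fb << 9)

def lfsr_unrolled(state: int, J: int) -> tuple[list[int], int]:
    """J-unrolled step: J output bits per call (written as a loop for brevity;
    an actual unrolled implementation would replicate the body J times)."""
    bits = []
    for _ in range(J):
        out, state = lfsr_step(state)
        bits.append(out)
    return bits, state

bits, state = lfsr_unrolled(0x2A5, J=4)
print(bits, hex(state))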

IV. SIMULATION RESULTS
A cycle-accurate simulator can provide convincing profiling results for demonstrating this algorithm. Therefore, Texas Instruments'
Code Composer Studio (CCS) and ARM's Instruction Set Simulator (ARMulator) are used.

In this work, an in-house source-to-source compiler that generates LFSR codes with TePLAT factors ranging from 2^0 to 2^8 is built.
The generated codes are simulated on the corresponding simulators to determine the best TePLAT factor for each LFSR.

Fig. 4 Comparison of the algorithms on the TI C6416 architecture
The best-performing look-ahead transformed LFSR found in the experiments is termed "best" in the following results.
A popular and representative processor for mobile devices, the TI C6416 digital signal processor, is used. A comparison of the
optimization techniques is provided in Fig. 4.
Throughput numbers are given for all the LFSRs. The conventional LFSRs are similar to those in [12], [13], which applied the loop-unrolling (LU)
technique. The improvement of the best look-ahead transformed LFSR depends on the LFSR generator polynomial and the processor
architecture. Our experimental results show that the best LFSR can usually be found with a TePLAT factor in the range 2^0 to 2^8. The
best look-ahead transformed LFSRs perform from 18 percent (LFSR-8) to 50 percent (LTE) faster.
In [8], bit-manipulation unit (BMU) hardware was proposed to accelerate a communication DSP and was implemented on a Xilinx
Virtex-II. The throughput of the Wi-Fi scrambler in [8] is 0.6 bit/cycle, whereas our method achieves 0.7 bit/cycle on ARM and 2.9 bit/cycle on
TI.
If the throughput is measured in bits per second (bps), the 180 nm DSP in [8] achieves a throughput of 168 Mbps, while our proposed
framework achieves 280 Mbps and 1.74 Gbps on the 130 nm ARM926 and the 130 nm TI C6416, respectively. Cipher-oriented LFSRs such as Grain are generally very long and their
iteration bounds are small; in this case, the parallelism can be achieved by simply applying loop unrolling, and thus our TePLAT
methods offer negligible improvement.

V. CONCLUSION
The design of an efficient linear feedback shift register aims to minimize the iteration bound without introducing any additional operations.
A term-preserving look-ahead transformation (TePLAT) is used for efficient parallel implementation of LFSRs in several applications.
Compared to existing approaches, significant speedup has been observed in numerous simulations while hardware utilization is minimized.
TePLAT may also be applied, with careful trade-offs between area and throughput, to hardware-based LFSR implementations.

REFERENCES:
[1] J. Mitola III, Cognitive Radio Architecture. John Wiley & Sons, 2011.
[2] J. Glossner et al., "A Software-Defined Communications Baseband Design," IEEE Comm. Magazine, vol. 41, no. 1, pp. 120-128, 2009.
[3] K.K. Parhi, VLSI Digital Signal Processing Systems: Design and Implementation. John Wiley & Sons, Inc., 2005.
[4] Y. Tang, L. Qian, and Y. Wang, "Optimized Software Implementation of a Full-Rate IEEE 802.11a Compliant Digital Baseband Transmitter on a Digital Signal Processor," Proc. IEEE Global Comm. Conf., vol. 4, pp. 2194-2198, 2009.
[5] S. Sriram and V. Sundararajan, "Efficient Pseudo-Noise Sequence Generation for Spread-Spectrum Applications," Proc. IEEE Workshop on Signal Processing Systems (SIPS '02), pp. 80-86, 2011.
[6] J. Lin et al., "Cycle Efficient Scrambler Implementation for Software Defined Radio," Proc. Int'l Conf. Acoustics, Speech, and Signal Processing (ICASSP '10), pp. 1586-1589, 2010.

[7] R.S. Katti, X. Ruan, and H. Khattri, "Multiple-Output Low-Power Linear Feedback Shift Register Design," IEEE Trans. Circuits and Systems I, vol. 53, no. 7, pp. 1487-1495, July 2009.
[8] IEEE Std. 802.16e-2005, Part 16: Air Interface for Fixed Broadband Wireless Access Systems, IEEE Std. 802.16, 2009.
[9] M. Hell, T. Johansson, and W. Meier, "Grain - A Stream Cipher for Constrained Environments," Int'l J. Wireless and Mobile Computing, vol. 2, no. 1, pp. 86-93, 2010.
[10] Chao Cheng and Keshab Parhi, "High-Speed Parallel CRC Implementation Based on Unfolding, Pipelining and Retiming," IEEE Trans., vol. 53, no. 10, October 2006.
[11] Naresh Reddy, B. Kiran Kumar, and K. Monisha Sirisha, "On the Design of High Speed Parallel CRC Circuits Using DSP Algorithms," IJCSIT, vol. 3, no. 5, 2012.
[12] John G. Proakis and Masoud Salehi, Digital Communications: Linear Block Codes, Cyclic Codes, BCH Codes, Reed-Solomon Codes, McGraw-Hill, 5th Edition, 2008.


Review on Color Transfer Between Images


Ms. Anjali A. Dhanve1, Mrs. Gyankamal J. Chhajed2
1 Student, M. E. Computer Engineering, VPCOE, Pune University, Baramati, Maharashtra, India, anjalidhanve233@gmail.com
2 Assistant Professor, Computer Engineering Department, VPCOE, Pune University, Baramati, Maharashtra, India, gjchhajed@gmail.com

Abstract: In this paper we review and analyze different techniques for transferring color between images, such as Histogram Matching (HM), Means and Variance, the Color Category
Based Approach, the Gradient Preserving Model, the N-dimensional Probability Density Function, the Dominant Color Idea, and Principal
Component Analysis (PCA). For the purpose of image smoothing, methods such as EPS filters, the JBF filter, two multiscale
schemes, and the local Laplacian pyramid are available today. We also present an analysis of these techniques in tabular form,
considering factors such as color effect, grain effect, and preservation of image detail. It is concluded that corruptive artifacts such as
color distortion, noise, and loss of detail remain in these methods.

Keywords Histogram matching, PCA, EPS filter, JBF filter, Image, Grain effect, Color effect.
INTRODUCTION

The availability of high dynamic range images is increasing due to advances in lighting simulation, and because of this there is an
increasing demand to display these images more clearly. Every image has its individual color that significantly influences the
perception of a human observer. Color manipulation is one of the most common tasks in image editing.
Color transfer between images is applicable in various areas such as photography, the film industry, CCTV cameras, medical imaging, the Hubble
telescope, and so on. Some automatic color transfer approaches have been developed, and for edge-preserving smoothing some
filters have been investigated for grain-effect suppression and detail preservation, but they still do not fully satisfy the desired goals. Today's
techniques transfer color between images but create corruptive artifacts such as color distortion, grain effect, and
loss of detail. The need for adequate solutions is growing due to the increasing amount of digitally produced images in the areas
discussed above.
Ideally, color transfer between reference and target images should satisfy the following goals: color fidelity, grain suppression, and detail
preservation and enhancement.
Color fidelity: The color distribution of the target should be close to that of the reference image.
Grain suppression: No visual artifacts (grain/blocky artifacts) should be generated in the target image.
Details preservation: Details in the original target should be preserved after the transfer.

COLOR TRANSFER TECHNIQUES


1. Histogram Matching
The histogram of an image is a plot of the gray-level values, or the intensity values of a color channel, versus the number of pixels at
that value [1]. The shape of the histogram provides information about the nature of the image, or of a sub-image if we are
considering an object within the image. For example, a very narrow histogram implies a low-contrast image, a histogram skewed
toward the high end implies a bright image, and a histogram with two major peaks, called bimodal, implies an object that is in contrast with
the background. The histogram features considered here are statistics-based features, where the histogram is used as a model of
the probability distribution of the intensity levels. These statistical features provide information about the characteristics of the
intensity-level distribution of the image.
Histogram matching makes it possible to specify the shape of the reference histogram that we expect the target image to have. However,
histogram matching can only process each color component of the image independently, so the relationships among the color components are
not preserved.
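As an illustration of per-channel histogram matching (a minimal NumPy sketch, not taken from any of the reviewed papers), the mapping can be built from the cumulative histograms of the target and reference channels:

import numpy as np

def match_histogram_channel(target: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Map one 8-bit channel of the target so its histogram matches the reference."""
    t_hist = np.bincount(target.ravel(), minlength=256).astype(np.float64)
    r_hist = np.bincount(reference.ravel(), minlength=256).astype(np.float64)
    t_cdf = np.cumsum(t_hist) / t_hist.sum()       # cumulative distribution of target
    r_cdf = np.cumsum(r_hist) / r_hist.sum()       # cumulative distribution of reference
    # For each target level, find the reference level with the closest CDF value.
    lut = np.interp(t_cdf, r_cdf, np.arange(256))
    return lut[target].astype(np.uint8)

# Example with random data standing in for single channels of two images.
rng = np.random.default_rng(0)
tgt = rng.integers(0, 120, size=(64, 64), dtype=np.uint8)    # dark target
ref = rng.integers(100, 256, size=(64, 64), dtype=np.uint8)  # bright reference
out = match_histogram_channel(tgt, ref)
print(tgt.mean(), ref.mean(), out.mean())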
Advantage:
Histogram matching is easy to calculate.

Limitation:
The histogram matching approach can produce an unsatisfactory look, e.g. grain effect and color distortion.
2. Means and Variance
Reinhard et al. [2] first proposed a way to match the means and variances between the target and the reference image. The means
and standard deviations are computed for each axis separately in the decorrelated lαβ color space.
Ruderman et al. developed this color space, called lαβ, which minimizes the correlation between channels. The means-and-variance method
contains mainly two steps:
2.1 Color space conversion
Reinhard first converted the RGB color space into the lαβ color space.
(a) RGB to XYZ
This conversion depends on the phosphors of the monitor that the image was originally intended to be displayed on.
(b) XYZ to LMS
The data in this color space shows a great deal of skew, which can largely be eliminated by transferring the data to logarithmic space:
L <- log L,  M <- log M,  S <- log S
(c) LMS to lαβ
This step decorrelates the three axes using principal components analysis (PCA). If we regard the L channel as red, the M channel as
green, and the S channel as blue, this is a variant of many opponent-color models:
achromatic axis ∝ r + g + b
yellow-blue axis ∝ r + g - 2b
red-green axis ∝ r - g
2.2 Compute means and variance
Compute the means and standard deviations for each axis separately in lαβ space.
The following steps are taken in this approach:
i) Subtract the mean from the data points.
ii) Scale the data points of the target image by factors determined by the ratio of the respective standard deviations. After this
transformation, the resulting data points have standard deviations that conform to those of the reference photograph. Finally, the
result is converted back to RGB by passing through log LMS, LMS, and XYZ color spaces.
Limitation:
Means-and-variance matching produces a slight grain effect and major color distortion.
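As a minimal sketch of the statistics-matching step described above (illustrative only; it operates on whatever decorrelated channels it is given, and the lαβ conversion itself is omitted):

import numpy as np

def transfer_mean_std(target: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Per-channel mean/std matching in the spirit of Reinhard et al. [2].
    `target` and `reference` are float arrays of shape (H, W, C), e.g. channels
    of images already converted to a decorrelated space such as l-alpha-beta."""
    out = np.empty_like(target)
    for c in range(target.shape[2]):
        t, r = target[..., c], reference[..., c]
        t_mu, t_sigma = t.mean(), t.std()
        r_mu, r_sigma = r.mean(), r.std()
        scale = r_sigma / t_sigma if t_sigma > 0 else 1.0
        out[..., c] = (t - t_mu) * scale + r_mu   # shift, rescale, re-center
    return out

# Toy usage with random data standing in for decorrelated image channels.
rng = np.random.default_rng(1)
tgt = rng.normal(0.2, 0.05, size=(32, 32, 3))
ref = rng.normal(0.6, 0.15, size=(32, 32, 3))
res = transfer_mean_std(tgt, ref)
print(res.mean(axis=(0, 1)), res.std(axis=(0, 1)))  # close to the reference statistics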
3. Color Category Based Approach
To prevent the grain effect, Chang et al. [4], [5] proposed a color category based approach that categorizes each pixel into one
of the basic color categories. A convex hull is then generated in lαβ color space for each category of the pixel set, and the color
transformation is applied between each pair of convex hulls of the same category. The following steps are conducted in this method:
3.1. Handling the Illuminant Color in Images
First, the color-by-correlation method [9] is applied to estimate the illuminant color, and each pixel color is transformed using the Von Kries
equation. The color-by-correlation method generally works well, but in some cases the estimation is not performed correctly.
3.2. Color Naming Method
Each pixel is categorized into BCCs (Basic Color Categories). The color naming method consists of two steps:
i) Initial color naming
ii) Fuzzy color naming

3.3 Compute Related Color Values in the Chromatic Categories
A basic convex hull is a convex hull that encloses all of the pixel color values within a basic color category. If the color distribution
in the input image is smaller than that of the reference image, the distribution keeps its original size. This guarantees that no
pseudo contours will appear between pixels that belong to the same BCC (Basic Color Category).
3.4 Compute Related Color Values in the Achromatic Categories
The achromatic categories are separated from the chromatic categories because it is important that shadows or highlights of the image
remain the same even after the color transformation. Separating the color transformations of achromatic pixels from those of chromatic
pixels balances the tones in the input and output images and also handles shadow or highlight regions more accurately.
3.5 Transferring Colors
The final equivalent color value of a pixel is computed by linearly interpolating the equivalent color values in each category.
After this, the illuminant color of the reference image is applied using the Von Kries equation.
Advantage:
The color category based approach can easily and quickly create a color-transformed image or video.
Limitations:
The color-by-correlation method is used as the illuminant estimation method; if the number of surfaces is small, its
estimation ability decreases.
4. Gradient Preserving Model
Xiao and Ma proposed a gradient-preserving model [6] that converts the transfer processing into an optimization and balances the color
distribution against the detail performance.
A gradient mesh is a regularly connected 2D grid. The primal component of a gradient mesh is a Ferguson patch, which is
determined by four nearby control points. Unlike raster images, gradient meshes are defined in a parametric domain and have
curvilinear grid structures. Based on gradient meshes, image objects are represented by one or more planar quad meshes, each forming
a regularly connected grid. Every grid point has a position, a color, and gradients. The image represented by gradient meshes is then
determined by bicubic interpolation of this specified grid information. A grid point in a gradient mesh not only has a color, as a pixel
does in an image, but also has color gradients defined in the parametric domain.
The gradient-preserving model uses an extended PCA-based transfer to handle the color range and proposes a minimization
scheme to generate color characteristics, so that the characteristics of the source image become more similar to those of the reference image.
Afterwards, a gradient-preserving algorithm is performed to suppress local color inconsistency. Finally, a multi-swatch
transfer scheme is developed to provide more user control over color appearance.
Advantages:
The gradient-preserving model maintains the grid structure of gradient meshes as well as achieving fast performance. Using a fusion-based
minimization scheme improves the quality of the recolored gradient mesh, and a multi-swatch color transfer
scheme is developed for flexible user control.
Limitation:
The globally optimal solution usually requires a large computational cost.
5. N-dimensional Probability Density Function
Pitié et al. [7] proposed an N-dimensional probability density function transfer approach that reduces the high-dimensional PDF
matching problem to one-dimensional PDF matching by means of the Radon transform [9]. This operation can reduce the color correlation and
keep the color distribution of the transferred result consistent with that of the reference image. Since changing the pixel intensities also
changes the variance of the image, Poisson reconstruction was introduced to refine the result.
In this method, f(x) and g(y) are the pdfs of X and Y, the original and target N-dimensional continuous random variables,
respectively. For example, in color transfer the samples x_i of X encapsulate the three color components x_i = (r_i, g_i, b_i). The goal is
to find a continuous mapping function t that transforms f into g.
Dimension N = 1. This is a well-known problem which offers a simple solution:

t(x) = C_Y^{-1}(C_X(x))

where C_X and C_Y are the cumulative distribution functions of X and Y. This can be easily solved using discrete lookup tables.
Dimension N >= 2. The idea is to reduce the problem from N dimensions to the one-dimensional case. The projections of the N-dimensional samples of X and Y are computed along an axis. Matching these two marginals using the previous 1D pdf matching
scheme results in a 1D mapping, which is then applied along that axis to transform the original N-dimensional samples. The
new distribution of the transformed samples is proved to be closer to the target distribution than before the
transformation.
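A minimal sketch of this projection-and-match idea (an illustrative simplification that uses random orthonormal rotations rather than the exact Radon-transform scheme of [7]):

import numpy as np

def match_1d(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """1D pdf transfer t(x) = C_Y^{-1}(C_X(x)) via sorted quantiles."""
    ranks = np.argsort(np.argsort(x))            # rank of each sample in x
    return np.sort(y)[ranks * len(y) // len(x)]  # same-quantile value of y

def pdf_transfer(X: np.ndarray, Y: np.ndarray, iters: int = 10, seed: int = 0) -> np.ndarray:
    """Repeatedly rotate, match the marginals along each axis, and rotate back.
    X, Y: (num_samples, N) arrays, e.g. N = 3 for (r, g, b) pixels."""
    rng = np.random.default_rng(seed)
    X = X.copy()
    for _ in range(iters):
        # Random orthonormal basis (QR decomposition of a Gaussian matrix).
        Q, _ = np.linalg.qr(rng.normal(size=(X.shape[1], X.shape[1])))
        Xr, Yr = X @ Q, Y @ Q
        for axis in range(X.shape[1]):           # 1D matching per projected axis
            Xr[:, axis] = match_1d(Xr[:, axis], Yr[:, axis])
        X = Xr @ Q.T                             # rotate back
    return X

# Toy usage: move a tight color cloud toward a broader one.
rng = np.random.default_rng(1)
src = rng.normal([0.2, 0.3, 0.4], 0.05, size=(5000, 3))
ref = rng.normal([0.6, 0.5, 0.3], 0.15, size=(5000, 3))
print(pdf_transfer(src, ref).mean(axis=0))       # approaches the reference mean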
Advantages:
The pdf transfer operates in 1D, so the overall algorithm has a linear computational complexity of O(M), where M is the number of
samples processed. This means that the N-dimensional probability density function approach requires low computation cost. If more bins are present
in the histogram, this method achieves higher accuracy in the color mapping.
6. Dominant Color Idea
Dong et al. [8] proposed a dominant color idea for color transfer. When the number of dominant colors of the target is consistent
with that of the reference, the colors of the reference are transferred to obtain a satisfactory result.
Limitation:
When the number of dominant colors is not balanced, an unsatisfactory result is produced.
7. Principal Component Analysis (PCA)
Abadpour et al. [13] proposed a principal component analysis (PCA) based method that creates a low-correlation, independent color space to
minimize color correlation.
The PCA approach works as follows:
i) Given the source image and the reference image, a destination image is created. First, both images are passed to the
FPCAC. The results of the clustering are two sets of membership maps, which describe the membership of each pixel of the source and
of the reference image with respect to the cluster parameters. The number of clusters is given as an input to the FPCAC.
ii) Using all the pixels in the reference image that belong to a given cluster, an intermediate image is produced. The set
of these artificial images forms a palette for the source image, and in the same way a palette for the reference image is produced.
iii) Finally, each color vector in the source image is mapped through the corresponding palette entries to produce the result image.

EDGE PRESERVING SMOOTHING

The grain effect can be considered as a special type of noise, and it can be removed by linear smoothing. However, even though linear
smoothing can remove the grains, the resulting over-blurring destroys the original image details and decreases the sharpness of edges.
1. Edge Preserving Smoothing (EPS)
Edge-preserving smoothing (EPS) filters [9] are proposed to overcome the over-blurring and loss of edge sharpness caused by linear filtering.
They prevent edge blurring thanks to their intensity- or gradient-aware properties.
2. Joint Bilateral Filter (JBF)
The joint bilateral filter (JBF) [10] is the first guided edge-preserving smoothing method. The JBF exploits the pixel intensities of a
reference image, which is correlated with the target, to improve the filtering effect. It is a non-linear filter in which the weight of each pixel is
computed as a Gaussian in the spatial domain multiplied by an influence function in the intensity domain that decreases the weight
of pixels with large intensity differences.
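A minimal sketch of the joint bilateral weighting described above (illustrative only; a brute-force double loop with no optimization):

import numpy as np

def joint_bilateral_filter(target: np.ndarray, guide: np.ndarray,
                           radius: int = 3, sigma_s: float = 2.0,
                           sigma_r: float = 0.1) -> np.ndarray:
    """Smooth `target` using range weights computed from `guide` (both 2D float arrays)."""
    h, w = target.shape
    pad_t = np.pad(target, radius, mode='edge')
    pad_g = np.pad(guide, radius, mode='edge')
    # Precompute the spatial Gaussian over the (2r+1)x(2r+1) window.
    ax = np.arange(-radius, radius + 1)
    spatial = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma_s ** 2))
    out = np.empty_like(target)
    for i in range(h):
        for j in range(w):
            win_t = pad_t[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            win_g = pad_g[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weight taken from the guide image, not from the target itself.
            rng_w = np.exp(-((win_g - guide[i, j]) ** 2) / (2 * sigma_r ** 2))
            weight = spatial * rng_w
            out[i, j] = np.sum(weight * win_t) / np.sum(weight)
    return out

# Toy usage: smooth a noisy image using a clean guide of the same scene.
rng = np.random.default_rng(0)
guide = np.tile(np.linspace(0, 1, 32), (32, 1))          # clean ramp
noisy = guide + rng.normal(0, 0.05, size=guide.shape)    # noisy target
print(joint_bilateral_filter(noisy, guide).std())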
Advantages:
The JBF preserves edges during the smoothing process. It is simple, intuitive, and non-iterative.


Limitations:
Like the bilateral filter (BLF), the JBF cannot avoid the halo artifact and the gradient reversal problem; it does not preserve
gradients.
3. Two Multiscale Schemes
Fattal et al. [11] proposed an elaborate scheme for detail manipulation, but the bilateral decomposition they adopted has the defects mentioned above.
Farbman et al. [12] proposed two multiscale schemes which are simpler than Fattal's, because the WLS-based decomposition
overcomes the defects of the bilateral decomposition. Farbman et al. [13] also introduced diffusion maps as a distance measure to
replace the Euclidean distance in their weighted least squares filter.
Limitation:
The two multiscale schemes can create halo artifacts.
4. Local Laplacian Pyramid
Paris et al. [14] explored the local Laplacian pyramid to obtain an edge-preserving decomposition for fine-level detail manipulation.
5. Guided Filter
The guided filter is derived from a local linear model. It generates the filtering output by considering the content of a
guidance image, which can be the input image itself or a different image. The filter can act as an edge-preserving
smoothing operator like the popular bilateral filter, but it behaves better near edges. It also has a theoretical connection with the
matting Laplacian matrix, is a more generic concept than a smoothing operator, and can better utilize the structures in the guidance
image. The approach proceeds as follows (a minimal sketch is given after this list):
(a) First define a general linear translation-variant filtering process. It involves a guidance image, an input image, and an output
image.
(b) Then define the guided filter and its kernel:
i) The local linear model ensures that the filter output has an edge only where the guidance image has an edge.
ii) The linear coefficients are determined by minimizing the difference between the filter output and the filter input.
iii) This amounts to a linear regression in each window.
iv) The linear model is applied to all local windows in the whole image, and the overlapping estimates are averaged.
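A minimal grayscale sketch of these steps (illustrative only; box means are computed with a simple padded cumulative-sum helper):

import numpy as np

def box_mean(img: np.ndarray, r: int) -> np.ndarray:
    """Mean over a (2r+1)x(2r+1) window, via cumulative sums with edge padding."""
    k = 2 * r + 1
    padded = np.pad(img, r, mode='edge')
    c = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))                      # prepend a zero row/column
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def guided_filter(I: np.ndarray, p: np.ndarray, r: int = 4, eps: float = 1e-3) -> np.ndarray:
    """Guided filtering of input p with guidance I (both 2D float arrays)."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    corr_Ip, corr_II = box_mean(I * p, r), box_mean(I * I, r)
    cov_Ip = corr_Ip - mean_I * mean_p
    var_I = corr_II - mean_I * mean_I
    a = cov_Ip / (var_I + eps)                   # linear regression slope per window
    b = mean_p - a * mean_I                      # intercept per window
    return box_mean(a, r) * I + box_mean(b, r)   # average the overlapping estimates

# Toy usage: denoise a noisy step edge using itself as guidance.
rng = np.random.default_rng(0)
step = np.hstack([np.zeros((32, 16)), np.ones((32, 16))])
noisy = step + rng.normal(0, 0.1, step.shape)
print(np.abs(guided_filter(noisy, noisy) - step).mean())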
Advantages:
The guided filter has a fast and non-approximate linear-time algorithm whose computational complexity is independent of the filtering
kernel size.
Limitations:
The guided filter may create halo artifacts near image edges. Since it is based on a local operator, it is not directly applicable to
sparse inputs such as strokes.


ANALYSIS
The following table contains an analysis of the different color transfer methods between images:
Table 4.1: Analysis of different color transfer methods
Sr. No. | Name of Method                     | Color Distortion | Grain Effect | Loss of Details | Computational Complexity
1       | Histogram Matching                 | Yes              | Yes          | Very Less       | Very Low
2       | Means and Variance                 | Yes              | Less         | Very Less       | Very Low
3       | Color Category-Based Approach      | Yes              | Very Less    | Very Less       | Low
4       | Gradient Preserving Model          | Very Less        | Less         | Very Less       | High
5       | N-dimensional Probability Density  | Very Less        | Very Less    | Yes             | Very Low
6       | Dominant Color Idea                | Yes              | Very Less    | Very Less       | Low
7       | Principal Component Analysis (PCA) | Very Less        | Very Less    | Very Less       | High

CONCLUSION
In this paper, we have reviewed and analyzed different color transfer techniques. We observed that Histogram Matching is easy to use
but creates unwanted results, and Means and Variance avoids grain but creates color distortion. The Gradient Preserving Model and Principal
Component Analysis (PCA) are overall good methods but have a large computational complexity. It is observed that the color distortion
artifact is a common problem in most of the methods.

REFERENCES:
[1] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed. Upper Saddle River, NJ, USA: Prentice Hall, 2008.
[2] E. Reinhard, M. Ashikhmin, B. Gooch, and P. Shirley, "Color transfer between images," IEEE Comput. Graph. Applicat., vol. 21, no. 5, pp. 34-41, 2001.
[3] C. Xiao, Y. Nie, and F. Tang, "Efficient edit propagation using hierarchical data structure," IEEE Trans. Visualiz. Comput. Graph., vol. 17, no. 8, pp. 1135-1147, 2011.
[4] Y. Chang, S. Saito, and M. Nakajima, "Example-based color transformation of image and video using basic color categories," IEEE Trans. Image Process., vol. 16, no. 2, pp. 329-336, 2007.
[5] Y. Chang, S. Saito, K. Uchikawa, and M. Nakajima, "Example-based color stylization of images," ACM Trans. Appl. Percept., vol. 2, no. 3, pp. 322-345, 2005.
[6] X. Xiao and L. Ma, "Gradient-preserving color transfer," Comput. Graph. Forum, vol. 28, no. 7, pp. 1879-1886, 2009.
[7] F. Pitié, A. C. Kokaram, and R. Dahyot, "N-dimensional probability density function transfer and its application to colour transfer," in Proc. 10th IEEE Int. Conf. Computer Vision, 2005, vol. 2, pp. 1434-1439.
[8] W. Dong, G. Bao, X. Zhang, and J.-C. Paul, "Fast local color transfer via dominant colors mapping," ACM SIGGRAPH Asia 2010 Sketches, pp. 46:1-46:2, 2010.
[9] S. Paris, P. Kornprobst, J. Tumblin, and F. Durand, "Bilateral filtering: Theory and application," in Proc. Computer Graphics and Vision, 2008.
[10] G. Petschnigg, R. Szeliski, M. Agrawala, M. Cohen, H. Hoppe, and K. Toyama, "Digital photography with flash and no-flash image pairs," ACM Trans. Graph. (Proc. ACM SIGGRAPH 2004), vol. 23, no. 3, pp. 664-672, 2004.
[11] R. Fattal, M. Agrawala, and S. Rusinkiewicz, "Multiscale shape and detail enhancement from multi-light image collections," ACM Trans. Graph., vol. 26, 2007.
[12] B. Wang, Y. Yu, T.-T. Wong, C. Chen, and Y.-Q. Xu, "Data-driven image color theme enhancement," ACM Trans. Graph., vol. 29, no. 6, pp. 146:1-146:10, 2010.
[13] Z. Farbman, R. Fattal, D. Lischinski, and R. Szeliski, "Edge-preserving decompositions for multi-scale tone and detail manipulation," ACM Trans. Graph., vol. 27, no. 3, pp. 67-76, 2008.
[14] S. Paris, S. W. Hasinoff, and J. Kautz, "Local Laplacian filters: Edge-aware image processing with a Laplacian pyramid," ACM Trans. Graph., vol. 30, no. 4, pp. 1-12, 2011.

Assess the Knowledge Regarding Lung Cancer Among Men at a Selected Hospital
Dr. S. Aruna, M.Sc(N), Ph.D1, Mrs. Kavitha S.2, Mrs. P. Thenmozhi, M.Sc(N)3
Saveetha College of Nursing, Saveetha University, Saveetha Nagar, Thandalam, Chennai 602105, Tamilnadu, India
Mobile No: +91 9840764106
Email id: arunabhargavi@yahoo.com
Subject to be published under: Nursing

ABSTRACT
Introduction: Lung cancer is a leading cause of cancer-related death, particularly among men. There are many causes of lung cancer, such as
the inhalation of carcinogenic pollutants by a susceptible host. Of all the risk factors, cigarette smoking is the most important risk factor
in the development of lung cancer in men. Objective: To assess the knowledge regarding lung cancer among men. Methods: A non-experimental
descriptive research design was employed. 60 samples were selected using a convenient sampling technique at the OPD of a
selected hospital. Knowledge of lung cancer was assessed by a structured questionnaire, and the data were analyzed by descriptive and
inferential statistics. Results: The findings of the study revealed that many of the participants were not aware of lung cancer. Conclusion: It is
concluded that the study participants may benefit from a self-instructional module regarding lung cancer and its prevention.
Keywords: lung cancer, knowledge, carcinogen, cigarette smoking, carcinoma

Introduction
Lung cancer is a leading cause of cancer-related death, particularly among men. There are many causes of
lung cancer, such as the inhalation of carcinogenic pollutants by a susceptible host. Of all the risk factors, cigarette
smoking is the most important in the development of lung cancer in men.
Lung cancer is the uncontrolled growth of abnormal cells in one or both of the lungs. While
normal cells reproduce and develop into healthy lung tissue, these abnormal cells reproduce faster and never
grow into normal lung tissue. The cancer cells can spread from the tumor into the blood stream or lymphatic
system, through which they can spread to other organs. Cigarette smoking is by far the most important cause of lung
cancer, and the risk from smoking increases with the number of cigarettes smoked and the length of time spent
smoking. Occupational chemicals and air pollutants such as benzene, formaldehyde, diesel exhaust and
asbestos are also important causes of lung cancer.
Lung carcinoma is the leading cause of cancer-related death worldwide. About 85% of cases are
related to cigarette smoking. Symptoms of lung carcinoma include cough, chest discomfort, weight loss, and,
less commonly, hemoptysis.
An increased risk of lung cancer was found for workers ever employed on a sugarcane farm
(odds ratio (OR) 1.92, 95% confidence interval (95% CI) 1.08 to 3.40). Increased risks were found for work
involving preparation of the farm (OR 1.81, 95% CI 0.99 to 3.27) and burning of the farm after harvesting (OR
1.82, 95% CI 0.99 to 3.34). Non-significant increases in risk were found for harvesting the crop (OR 1.41, 95%
CI 0.70 to 2.90) and processing the cane in the mills (OR 1.70, 95% CI 0.20 to 12.60).
Various factors have been associated with the development of lung cancer, including tobacco
smoke, second-hand smoke, environmental and occupational exposures, gender, genetics and dietary factors,

all of which can lead to lung cancer. Lung cancer is the leading cancer killer among men and women in the United States.
Tobacco smoking is the deadliest and most dangerous habit.
Need for the Study
The term lung cancer refers to a malignant tumor arising from the bronchial epithelium. It is the
commonest of all pulmonary neoplasms and is eight times more common in males than in females between the
ages of 50 and 70 years. Some of the symptoms include chronic cough, coughing up blood, or chest pain.
However, persons with only a persistent cough do not always seek medical advice early enough.
Most experts agree that lung cancer is attributable to the inhalation of carcinogenic pollutants by
a susceptible host. More than 85% of lung cancers are caused by the inhalation of carcinogenic chemicals, most
commonly through cigarette smoking. In the US, about 90% of lung cancer deaths in men and 80% in women are due
to smoking. Daily cigarette smoking can increase the risk of lung cancer by 30 times. Passive smoking has also
been identified as a possible cause of lung cancer in non-smokers; in other words, people who are exposed to
tobacco smoke in a closed environment are at risk of developing lung cancer.
Research methodology
The research design chosen for the study was a non-experimental descriptive design. The study was done in the
outpatient department of a selected hospital in Chennai with 60 samples. The samples were selected using a non-probability
convenient sampling method. Knowledge of lung cancer was assessed using a structured questionnaire,
and the data were analyzed using descriptive and inferential statistics.
Results
The results reveal that 43.3% of the participants were in the age group of 45-55 years and 54% of them
had the habit of smoking.
The frequency and percentage distribution of the level of knowledge on causes of lung cancer among men
attending the OPD is given below.
Level of knowledge              | Frequency | Percentage
Inadequate Knowledge            | 46        | 76.6%
Moderately Adequate Knowledge   | 12        | 20.0%
Adequate Knowledge              | 2         | 3.3%


[Figure: Percentage distribution of the level of knowledge on causes of lung cancer among men attending the OPD (adequate, moderately adequate, inadequate)]

Discussion
The purpose of the study was to assess the knowledge of men on the causes of lung cancer.
The results show that most of them (76.6%) had inadequate knowledge on the causes of
lung cancer. This finding is supported by the findings of Redhwan Ahmed Al-Naggar et al. (2013),
who conducted a study to determine knowledge about lung cancer among secondary
school male teachers in Kudat, Sabah, Malaysia. A cross-sectional study was conducted among
three secondary schools located in Kudat district, Sabah, Malaysia during the period from June
until September 2012. After the study was explained to the participants, informed consent was
obtained. A self-administered questionnaire was used to assess the socio-demographic
characteristics and general knowledge of lung cancer. Once all 150 respondents completed the
questionnaire, they passed it to their head master for collection and recording. All the data were
analyzed using the Statistical Package for the Social Sciences (SPSS) version 13. ANOVA and t-tests
were applied for univariate analysis, and multiple linear regression for multivariate analysis. A
total of 150 male secondary school teachers participated in that study. Their mean age was
35.6 ± 6.5 (SD) years. Regarding knowledge about lung cancer, 57.3% of the participants
mentioned that only males are affected by lung cancer, some 70.7% mentioned that lung cancer
can be transmitted from one person to another, and more than half (56.7%) reported that lung cancer
is not the leading cause of death in Malaysian males. In conclusion, overall the knowledge of

school male teachers about lung cancer was low. Interventions to increase lung cancer awareness
are needed. The overall finding of the present study showed that, most of them were having the
habit of smoking and around 76.6% of them were inadequate knowledge on causes of lung
cancer. So the nurses working in the hospital should periodically educate the patient attending
OPD about causes of lung cancer and its prevention.
Conclusion
On the basis of the findings of the present study, most of the participants were unaware of the
causes of lung cancer. Hence, a self-instructional module about the causes of lung cancer and its prevention,
with pictorial representations, can be given, and posters regarding the causes and prevention can be placed in the OPD
of the respiratory medicine department and in common places.

REFERENCES:
1. Brunner & Suddarth's Textbook of Medical Surgical Nursing, 10th Ed. Lippincott Williams & Wilkins, 2004; 554-555.
2. S.K. Jindal, A.N. Aggarwall, K. Choudhury, S.K. Chhabra. Tobacco Smoking in India: Prevalence, Quit-Rates and Respiratory Morbidity. Indian Journal of Chest Diseases and Allied Sciences 2006; 48: 37-42.
3. Lam WK, White NW, Chan-Yeung MM. Lung Cancer Epidemiology in Asia and Africa. Int J Tuberc Lung Dis. 2004 Sep; 8(9): 1045-57.
4. Samet JM, Wiggins CL, Humble CG, Pathak DR. Cigarette Smoking and Lung Cancer in New Mexico. 1988 May; 137(5): 1110-3.
5. Pelaez Mena G, Pinedo Sanchez A, Garcia Rodriguez A, Fernandez Crehuet Navajas J. Tobacco and Cancer of the Lung: A Case-Control Study. 1989 Oct; 185(6): 298-302.
6. Lung Cancer: Causes and Incidence. http://news.bbc.co.uk/i/hi/world/3758707
7. Basavanthappa BT. Nursing Research. 2nd Ed. New Delhi: Jaypee Brothers. pp. 92, 155, 177.
8. Redhwan Ahmed Al-Naggar et al. (2013). Lung Cancer Knowledge among Secondary School Male Teachers in Kudat, Sabah, Malaysia. Asian Pacific Journal of Cancer Prevention, Vol 14. DOI: http://dx.doi.org/10.7314/APJCP.2013.14.1.103


Software Development Risk Management Using OODA Loop


Sanjeev Kumar Punia, Dr. Anuj Kumar, Dr. Kuldeep Malik
Ph.D. Scholar, NIMS University, Jaipur, Rajasthan - INDIA
puniyasanjeev@hotmail.com
+91 999 919 0085

ABSTRACT - Software development projects are subject to risks like any other project. These risks must be managed in
order for the project to succeed. Current frameworks and models for risk identification, assessment and management are
static and unchangeable; they lack feedback capability and cannot adapt to future changes in risk events. The OODA
(Observe, Orient, Decide and Act) loop, developed during the Korean War by fighter pilot Colonel John Boyd, is a
dynamic risk management framework that has built-in feedback methods and readily adapts to future changes. It can be
successfully employed by development teams as an effective risk management framework, helping projects come in on
time and on budget.
KEYWORDS - OODA loop, risk management, dynamic risk management, requirement risk management, software risk
mitigation, unanticipated risks, futuristic risk assessments.
INTRODUCTION - Software development projects are subject to risks like any other project. Software development is
subject to unique risks which can be mitigated through effective risk management techniques. Risks are unavoidable and
must be managed. Successfully managing risks assists developers in completing the project on time and on budget.
Strategies selected to manage risk may result in a better product than originally anticipated. Identifying, analyzing,
tracking, and managing software risk aids crucial decision making, including release readiness.
The fighter pilot Colonel John Boyd identified a series of four steps that he noticed fighter pilots followed during air-to-air
combat in the Korean War. These four steps, observe, orient, decide and act, are known as the OODA loop. Col. Boyd
went on to become a superb fighter pilot and Pentagon strategist. Current risk management frameworks are static and
unchangeable; they lack feedback capability and cannot adapt to future changes in risk events. The OODA loop
is a dynamic risk management framework that is built with feedback methods and readily adapts to future changes.
Software development teams can employ the OODA loop to manage and reduce the risks that affect their projects.
LITERATURE REVIEW - Software development projects are not immune to risks, and risk management strategies are
crucial to identify, track and reduce them. A 1995 Department of Defense (DoD) study of software spending showed that only 2% of the software was usable as
delivered; 75% was either never used or cancelled prior to delivery.
Cook et al. [1] explained that $35.7 billion was spent on software management, and much of the research involves surveying current
software developers and program management professionals. Similar results are found when different strategies are used to
identify, track and reduce risk, and similar components of risk were identified by reviewing past research experience.
Ropponen et al. [2] explained that the risk components include scheduling risks, timing risks, system functionality risks,
subcontracting risks, requirement management risks, personnel management risks, as well as resource usage and
performance risks.

A lack of knowledge about software suppliers adds an increased level of risk. Schinasi et al. [3] stated that this poses a large
problem for the DoD, as it mostly contracts out software development and does very little of its own. Risks arise from
changing requirements, lack of skills, faulty technologies, gold plating and an unrealistic project schedule. According to
Suresh Babu et al. [4], in gold plating, developers implement requirements beyond the stated objective. Mohtashami et al. [5]
explained that development teams spread across a building, the country or even the world as companies grow;
distributed development teams add risk to software projects as they are not in a centralized location. According to
Borland Software Corporation [6], collecting the requirements from stakeholders is very important, but more important
than that is continuing to elicit, analyze and specify requirements to eliminate redundancy and avoid
unnecessary risks. Cook et al. [1] explained that requirements elicitation, analysis, documentation, verification, review,
approval, configuration control and traceability should be incorporated into sound risk management procedures.
Identification and planning early in the development cycle is the best way to reduce risk. Leonard et al. [7]
explained that software development and inspections should focus on avoiding risks before introducing them into the project;
time, money and effort are spent during the development process to mitigate the risks before they occur. Jørgensen [8]
suggested that an increased identification of risks can lead to over-confidence and over-optimism in estimating software
development effort. Stoddard et al. [9] explained that company history, structure, processes and reward systems can
facilitate the risk management process. Conceptualizing requirements is a popular method for identifying, tracking and
managing requirement risks, and various model-based requirements management approaches exist for better identification,
tracking and management of requirements. In the past, models were not formally connected to software development, so
there was no way to ensure that the programmers' design decisions were used in the model.
Uzzafer [10] stated that many factors, such as project characteristics, the risk management team, risk identification approaches
and project quality, contribute to and affect the level of project risk. Assessing the impact of project risk and residual
performance risk provides a better understanding of the effectiveness and adequacy of risk management techniques. Risk
management capabilities play important roles in managing software projects however they are implemented.
However, the conceptualization and development of risk management theories lag behind the requirements of practice.
Bannerman [11] found in research studies that risk management practice lags behind the understanding of risk management, such
that current frameworks and models for risk identification, assessment and management are static and unchangeable; they
lack feedback capability and cannot adapt to future changes in risk events. Sarigiannidis et al. [12] stated that dynamic
risk management frameworks, coupled with static models, provide futuristic assessments of risk events that can
enhance project success. Software development projects benefit greatly from model-based requirements
engineering, as identifying, assessing, analyzing, verifying, tracking and managing requirements reduces risk to
software projects. Little research has been conducted that relates the OODA loop to risk management in the software
development process, and the existing work is concerned mainly with agile software development. Steve Adolph [13] relates the OODA
loop to agile software development and argues that agility depends on the tempo with which a team iterates through that loop, and that
development speed depends on culture rather than on the methodologies or tools used.

That paper is primarily an introduction to the OODA loop and agile software development; it briefly outlines how the
OODA loop fits the notion of agile software development and proposes research opportunities. Ullman [14] explained
the application of the OODA loop to business and product development. He specifically explained how business and
product development teams get stuck in the loop so that action never occurs, and he provided guidelines for unsticking the OODA loop
in order to make decisions and take action.
Colonel John Boyd's Loop - Colonel John Boyd was a United States Air Force fighter pilot and brilliant military strategist.
During the Korean War, Boyd observed a cycle of four actions that pilots took during combat and named this
cycle the OODA loop. He explained that pilots who move through the OODA loop faster than their opponents dominate dogfights:
a faster loop forces the opposing pilot to constantly re-observe and re-orient, preventing him from making
decisions and taking action to gain the upper hand. The OODA loop is composed of four steps, observe, orient, decide and
act, as shown below.
FIGURE 1: The OODA loop - observe, orient, decide and act
Development teams cycle through these steps repeatedly. In the OODA loop, the observation phase deals with the collection of data
about the situation and surroundings. The orientation phase is the analysis of the data to form a mental perspective. The decision phase
chooses a specific course of action based on the gathered and analyzed data. The action phase is the physical act of executing the
decision. The results of the action should then be observed, and the cycle repeats until the requirements are completed.
Although the OODA loop was created for air-to-air combat, it applies to risk management for software
development as well. Just as fighter pilots apply the OODA loop to manage risk in combat, stakeholders, project
managers and developers can apply the OODA loop to keep software projects from crashing and burning. The OODA loop
also assists in managing scheduling and timing risks, system functionality risks, subcontracting risks, requirement
management risks, resource usage and performance risks, as well as personnel management risks.
The OODA Loop and Software Development Risk Management: Observe - The first step in risk management is to
identify, or observe, the risks, since failing to identify risks can drastically harm software projects. Four factors
influence observations in the OODA loop: outside information, unfolding circumstances, unfolding interaction
with the environment, and implicit guidance and control. These factors are external to the loop and, taken together, assist developers and
project managers with risk identification. Outside information is required for effective risk management. Software developers must receive complete information from stakeholders, because eliciting requirements
from stakeholders is time consuming.

Care should be taken early on to identify all classes of stakeholders from all involved organizations, as missing a stakeholder
requirement can complicate development. Unfolding circumstances can change the risk posture in
requirement identification, design, development or test during development. One source of requirements creep is the
failure to capture requirements during the requirements identification phase. The cost and effort of integrating
new requirements increases as development progresses, and fixing bugs becomes even more costly as development progresses.
FIGURE 2: The observe step of the OODA loop, influenced by outside information, implicit guidance and control, unfolding circumstances, and unfolding interaction with the environment
Applying the OODA loop on a small scale is a good practice when new requirements or even coding bugs are identified. Development teams must ensure that each component works as intended as development progresses
and components are completed. The components may have unexpected consequences when they interface with one another; these side effects
may be mitigated through careful planning and design. Interaction with the environment is also critical in developing
software for a system. The OODA loop is fed with implicit guidance and control at each stage; it is especially crucial
during observation to identify and plan for direct orders, key performance parameters, laws and regulations.
Orient - The input to the orient phase is the information generated from the observation step, the first step of the OODA loop. Orientation arranges the observed information in a well-defined, logical manner so that decisions can be taken more readily. During this
stage, risks must be assessed based on their probability of occurrence and their potential impact. Risks can then be ranked by their calculated composite
risk indices; the severity of a risk is proportional to its composite risk index. Col. Boyd
identified five factors that contribute to the orientation of pilots based on the observed information: cultural traditions, new information, analysis and synthesis, previous experiences and genetic heritage. The
relationships between the five factors are shown below.


FIGURE 3: Factors influencing the orient step of the OODA loop (cultural traditions, genetic heritage, new information, previous experience, and analysis and synthesis)
Data, requirements, systems and circumstances change, which leads to new information the team can use to identify risks and
orient the project to manage them. This factor is actually a mini observe step built into the orientation step; it is a reminder
for teams to constantly absorb new information and watch for unanticipated risks. Analysis and synthesis is a natural
inclusion, since information and observations are useless without analysis. Analyzing the identified risks allows teams to
determine appropriate and effective risk management techniques, and software can be analyzed for functionality, bugs and
completeness and then synthesized and tested. The three remaining factors, cultural traditions, genetic heritage and previous
experiences, are very similar to each other in the context of risk management for software development projects.
Cultural traditions refer to the culture and traditions of the organization; the team or organization may have a
preference for one software development or requirements model. Genetic heritage describes how developers and stakeholders
in a software project are inclined to manage projects and risks. Development teams and project managers rely on
previous experiences to identify and manage risks in current projects; risks from past projects, whether completed
successfully or not, may affect current projects, so team members use past project experience to understand how to track and
mitigate current risks.
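As a minimal illustration of the risk ranking mentioned above (a hypothetical sketch; the composite risk index is taken simply as probability multiplied by impact, and the risks listed are invented examples):

from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: float   # estimated probability of occurrence, 0..1
    impact: float        # estimated impact if it occurs, e.g. 1 (low) .. 5 (high)

    @property
    def composite_index(self) -> float:
        """Composite risk index: severity grows with probability and impact."""
        return self.probability * self.impact

# Hypothetical risk register assembled during the observe step.
register = [
    Risk("Requirements creep", probability=0.6, impact=4),
    Risk("Key developer leaves", probability=0.2, impact=5),
    Risk("Third-party library defect", probability=0.3, impact=3),
]

# Orient: rank risks so the decide step can address the most severe first.
for r in sorted(register, key=lambda r: r.composite_index, reverse=True):
    print(f"{r.name:28s} CRI = {r.composite_index:.2f}")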
Decide - The development team chooses the risk management strategy after the risks have been identified, analyzed and oriented against the
project goals. Additional implicit guidance, risk action plans and contingency plans come into play
during this phase. Risk can be decreased greatly by the ability to identify, analyze, monitor and track requirements and
project status throughout the development lifecycle. It is much easier to plan and integrate requirements at the beginning of
the software development cycle; the cost and effort to implement new requirements, make changes and fix bugs
increase as software development progresses. Feedback from decisions flows back to the observation step. The risk
management strategy chosen may affect the project schedule or budget, and it may change even though it accomplishes
the same function. The benefit of effectively managing requirements comes from the feedback that occurs when decisions are made,
which allows quick iteration through the OODA loop.
Act - The execute phase starts after the risk management strategies have been decided. Risk management is not a quick process,
but risks can be managed through quick feedback into the observation stage. In this phase we have to check the operation of the software

components against their intent and, further, to check the development in order to tweak the software or correct bugs, ensuring
that stakeholders are happy with the progress and results. Additional risk is added as requirements begin changing and
requirement creep sets in. The risks should be tracked, and the results of the risk management strategies recorded and shared,
since team members and stakeholders need to know the status of their project. Another iteration of the OODA loop is
performed if the risk is not reduced as anticipated.
The loop - All four steps, combined with their influencing factors, are shown below. The OODA loop is not a once-through
framework for risk management; it is applied repetitively throughout the entire software development cycle. Risks
remain in the project until its completion. Teams should not get stuck observing and orienting to risks
and development status; they must also decide and act on the observed information.

Feedback from the decision and action stages flows back to the observation stage, so the results of team
decisions and actions are continually observed. Repeated iterations of the OODA loop will reduce software project risks and increase the
likelihood of completion on time and within budget.

FIGURE 4: The full OODA loop as described by Col. John Boyd, showing the observe, orient, decide and act steps, their influencing factors, and the feed-forward and feedback paths between them
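To make the repetitive, feedback-driven use of the loop concrete, the following skeleton (a hypothetical sketch, not the authors' tooling) drives an invented risk register through several observe-orient-decide-act passes:

import random

# Hypothetical OODA-style risk management skeleton (illustrative only).
# Each risk is a dict with an estimated probability, impact and mitigation.
def observe(register):
    """Observe: refresh estimates from new information (randomized stand-in)."""
    for r in register:
        r["probability"] = min(1.0, max(0.0, r["probability"] + random.uniform(-0.05, 0.05)))

def orient(register):
    """Orient: rank risks by composite risk index (probability x impact)."""
    return sorted(register, key=lambda r: r["probability"] * r["impact"], reverse=True)

def decide(ranked):
    """Decide: pick a mitigation for the most severe risk."""
    worst = ranked[0]
    return worst, worst["mitigation"]

def act(risk, mitigation):
    """Act: apply the mitigation; here it simply lowers the probability."""
    print(f"Mitigating '{risk['name']}' via: {mitigation}")
    risk["probability"] *= 0.5

register = [
    {"name": "Requirements creep", "probability": 0.6, "impact": 4,
     "mitigation": "freeze scope for this iteration"},
    {"name": "Key developer leaves", "probability": 0.2, "impact": 5,
     "mitigation": "pair programming and documentation"},
]

random.seed(0)
for iteration in range(3):            # the loop is applied repeatedly, not once through
    observe(register)
    ranked = orient(register)
    risk, mitigation = decide(ranked)
    act(risk, mitigation)             # results feed back into the next observe step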
CONCLUSION - The OODA loop is a tool for effective risk management. Like all projects, software projects carry
risk, so software project teams can use the OODA loop as a risk management framework. Each step helps developers
and project managers to identify, track and manage risks. Due to the cyclic nature of the OODA loop, multiple iterations
can be applied to the project as risks evolve over time. Successful implementation of the OODA loop assists project
managers in completing their projects within budget and on time. We plan to use the OODA loop as a risk management
framework for a software project in the future and will test its effectiveness over the course of the project. Each
identified risk will be tracked, and all observations, decisions and actions will be recorded.


REFERENCES:
[1] D. A. Cook and T. R. Leishman "Requirements risk can drown software projects" Journal of the Quality Assurance
Institute, vol. 17, no. 2, pp. 56 - 64, 2012.
[2] J. Ropponen and K. Lyytinen, " Components of software development risk: How to address them? A project manager
survey" IEEE Transactions on Software Engineering, vol. 26, no. 2, pp. 86 - 94, 2010.
[3] K. V. Schinasi "Defense acquisitions: Knowledge of software suppliers needed to manage risks" United States General
Accounting Office, Washington D.C., 2009.
[4] G. N. K. Suresh Babu and S. K. Srivatsa, "Increasing success of software projects through minimizing risks"
International Journal of Research & Reviews in Software Engineering, vol. 1, no. 1, pp. 20 - 22, 2011.
[5] M. Mohtashami, T. Marlowe and V. Kirova "Risk management for collaborative software development" Information
Systems Management, vol. 23, no. 4, pp. 21 - 28, 2012.
[6] Borland Software Corporation, "Mitigating risk with effective requirements engineering," Available: http://www.borland.com/resources/en/pdf/white_papers/mitigating_risk_with_effective_requirements_engineering.pdf, 2012.
[7] J. G. Leonard, T. R. Adler and R. K. Nordgren "Improving risk management: Moving from risk elimination to risk
avoidance" Information and Software Technology, vol. 41, no. 1, pp. 28 - 35, 2013.
[8] M. Jørgensen, "Identification of more risks can lead to increased over-optimism of and over-confidence in software
development effort estimates," Information and Software Technology, vol. 52, no. 5, pp. 504 - 512, 2010.
[9] J. Stoddard and Y. H. Kwak "Project risk management: Lessons learned from software development environment"
Technovation, vol. 24, no. 11, pp. 914 - 919, 2011.
[10] M. Uzzafer "A new dimension in software risk management" World Academy of Science, Engineering and
Technology, vol. 64, pp. 341 - 343, 2010.
[11] P. L. Bannerman "Risk and risk management in software products: A reassessment" The Journal of Systems &
Software, vol. 81, no. 12, pp. 2117 - 2121, 2008.
[12] L. Sarigiannidis and P. D. Chatzoglou "Software development project risk management: A new conceptual
framework" Journal of Software Engineering & Applications, vol. 4, no. 5, pp. 294 - 308, 2011.
[13] Steve Adolph, What lessons can the agile community learn from a maverick fighter pilot?, In Proceedings of the
Agile Conference, Vancouver, BC, pp. 98 - 102, 2006.
[14] D.G. Ullman The sound of a broken OODA loop Crosstalk: Journal of Defense Software Engineering, vol. 20, no.
4, pp. 21 - 24, 2013


Performance Enhancement of Tube Bank Heat Exchanger using Passive Techniques: A Review
G.M. Palatkar1, Sandeep V. Lutade2
1 Currently pursuing masters degree program in Heat Power Engineering, Dr. Babasaheb Ambedkar College of Engg. & Research, Hingna Road, Wanadongri, Nagpur, Maharashtra, India - 441110; Contact No. 09860896994; gpalatkar@gmail.com
2 Asst. Professor, Mechanical Engineering Department, Dr. Babasaheb Ambedkar College of Engg. & Research, Hingna Road, Wanadongri, Nagpur, Maharashtra, India - 441110; Contact No. 9049084364; lutadesandeep@gmail.com

Abstract - This paper reviews the passive flow control of fluid in the downstream direction of a circular tube and in tube banks by means of changing the tube shape to an oval shape. Heat transfer and pressure drop depend on the complex flow pattern of the fluid in the tube banks, whereas pressure drop is linked directly with the fluid pumping capacity. A primary focus is to review experimental, analytical and numerical works that have been carried out to study the effect of longitudinal and transverse tube spacing, Reynolds number, stagnation point and surface roughness on wake size and vortex shedding.

Keywords: Heat transfer, circular cylinder, oval tube, splitter plate, tube banks, cross flow, drag, bluff body, vortex shedding.
INTRODUCTION

A heat exchanger is a device built for efficient heat transfer from one medium to another. The media may be separated by a solid wall, so that they never mix. A tube bank is a cross-flow tubular heat exchanger and consists of multiple rows of tubes. One fluid passes through the tubes and the other passes across the tubes, as shown in Figure 1. Tube bank arrangements include the in-line and the staggered arrangements. Cross-flow tubular heat exchangers are found in equipment as diverse as economizers, waste heat recovery units and evaporators of air-conditioning systems, to name but a few.
Heat exchangers involve several important design considerations, which include thermal performance, pressure drop across the exchanger, fluid flow capacity, physical size and heat transfer requirement. Among these considerations, determination of the pressure drop in a heat exchanger is essential for many applications because the fluid needs to be pumped through the heat exchanger. The fluid pumping power is proportional to the exchanger pressure drop. In tube banks, the heat transfer and pressure drop characteristics depend upon the flow pattern of the fluid. The fluid flow converges as the minimum area occurs between the tubes in a transverse row or in a diagonal row, which makes the flow pattern very complex.

Figure 1.Cross Flow Tube Banks


Passive control is one of the flow control techniques for reducing the aerodynamic drag exerted on a bluff body. It controls the vortex
shedding by modifying the shape of the bluff body or by attaching additional devices in the flow stream.

PASSIVE HEAT TRANSFER AUGMENTATION TECHNIQUES


A. Dewan and P. Mahanta (2004) presented a review of progress with passive augmentation techniques in the recent past, which will be useful to designers implementing passive augmentation techniques in heat exchangers. Twisted tapes, wire coils, ribs, fins, dimples, etc., are the most commonly used passive heat transfer augmentation tools.
A twisted tape insert mixes the bulk flow well and therefore performs better in a laminar flow than any other insert
because in a laminar flow the thermal resistance is not limited to a thin region. However, twisted tape performance also depends on the
fluid properties such as the Prandtl number. Twisted tape in turbulent flow is effective up to a certain Reynolds number range but not
over a wide Reynolds number range. Compared with wire coil, twisted tape is not effective in turbulent flow because it blocks the
flow and therefore the pressure drop is large. Hence, the thermo-hydraulic performance of a twisted tape is not good compared with
wire coil in turbulent flow.
Wire coil in laminar flow enhances the heat transfer rate significantly. However, the performance depends on the Prandtl
number. If the Prandtl number is high, the performance of the wire coil is good because, for a high Prandtl number, the thickness of
the thermal boundary layer is small compared with the hydrodynamic boundary layer and the wire coil breaks this boundary layer
easily. Therefore, both heat transfer and pressure drop are large. Wire coil enhances the heat transfer in turbulent flow efficiently. It
performs better in turbulent flow than in laminar flow. The thermo-hydraulic performance of wire coil is good compared with twisted
tape in turbulent flow.
There are several passive techniques other than twisted tape and wire coil to enhance the heat transfer in a flow, such as
ribs, fins, dimples, etc. These techniques are generally more efficient in turbulent flow than in laminar flow.

EFFECT OF SPRINGS AND ANNULAR INSERTS IN CONCENTRIC TUBE HEAT EXCHANGER


Kumbhar D.G. (2010) gave emphasis to works dealing with twisted tape inserts because, according to recent studies, these are known to be an economic heat transfer enhancement tool. A twisted tape insert mixes the bulk flow well and therefore performs better in laminar flow, because in laminar flow the thermal resistance is not limited to a thin region. The results also show that the twisted tape insert is more effective if no pressure drop penalty is considered.
Twisted tape in turbulent flow is effective up to a certain Reynolds number range. It is also concluded that twisted tape insert is not
effective in turbulent flow, because it blocks the flow and therefore pressure drop increases. Hence the thermo hydraulic performance
of a twisted tape is not good in turbulent flow. These conclusions are very useful for the application of heat transfer enhancement in
heat exchanger networks.
S.S. Joshi (2013) investigated the use of inserts as a passive heat transfer augmentation technique to enhance the effectiveness of a concentric tube heat exchanger. A tube-in-tube heat exchanger was retrofitted with screwed protrusions in the annular portion, and different springs were used as inserts in the inner tube. The experimentation was carried out at different flow rates of either fluid for three different arrangements of spring inserts.
(i) The springs of different diameters and their fixing arrangement cause more turbulence, which causes more heat transfer to occur. Hence the effectiveness of the heat exchanger is increased.
(ii) For spring type-III (diameter nearly equal to the diameter of the inner tube, fixed (adhered) to the inner tube surface) the heat transfer is maximum, but at the same time the friction factor also increases. The heat transfer and friction gradually decrease for spring type-I (2 mm less than the diameter of the inner tube, fixed at both ends) and spring type-II (2 mm less than the diameter of the inner tube, fixed (adhered) to the inner tube surface) respectively. Hence he concluded that as the diameter of the spring decreases, the heat transfer and friction factor decrease. Also, when the spring is fixed to the inner surface of the tube, the maximum value of effectiveness is achieved.
(iii) The heat transfer in the heat exchanger could be enhanced by using inserts and springs of different diameters and their fixing arrangement. Use of the annular insert causes a slight increase in the heat transfer coefficient and effectiveness of the heat exchanger.

EFFECT OF SPLITTER PLATE ON FLOW AND HEAT TRANSFER BEHAVIOR AROUND CIRCULAR
CYLINDER.
J.Y. Hwang et al. (2003) numerically studied flow-induced forces on a circular cylinder using a detached splitter plate for laminar flow. A splitter plate with the same length as the cylinder diameter was placed horizontally in the wake region. By suppressing the vortex shedding, the plate significantly reduced the drag force and lift fluctuation, and there existed an optimal location of the plate for maximum reduction. However, drag and lift fluctuation increased sharply as the plate was placed further downstream of the optimal location.

H. Akilli et al. (2008) experimentally investigated passive control of vortex shedding by splitter plates with L/D ratios of 0.2 to 2.4 attached to the cylinder base in shallow water flow at Re = 6300. In this study, the length of the splitter plate was varied over this range in order to see the effect of the splitter plate length on the flow characteristics. Instantaneous and time-averaged flow data clearly indicated that the length of the splitter plate has a substantial effect on the flow characteristics. They found that the flow characteristics in the wake region of the circular cylinder changed sharply up to a splitter plate length of L/D = 1.0, and above this plate length only small changes occurred in the flow characteristics.

HEAT TRANSFER FROM TUBE BANKS


W.A. Khan et al. (2006) studied heat transfer from tube banks using analytical approach. It was concluded that the average
heat transfer coefficients for tube banks depend on the longitudinal, transverse pitches, Reynolds number and Prandtl number.
Compact banks (in-line or staggered) indicate higher heat transfer rates than widely spaced ones and the staggered arrangement gives
higher heat transfer rates than the in-line arrangement. This was also supported by the further work of W. A. Khan (2007), where an
optimal model of tube banks heat exchanger was developed using entropy generation minimization method. It was also demonstrated
that the staggered arrangement gives a better performance for lower approach velocities and longer tubes, whereas the inline
arrangement performs better for higher approach velocities and larger dimensionless pitch ratios.
S. G. Chakrabarty (2012) conducted experimentation on passive flow control of fluid in the downstream direction of the circular tube and in tube banks by means of a splitter plate. The angle (θ) is measured from the front stagnation point, and the behavior of the Nusselt number for the different passive control methods is shown. The heat transfer coefficient decreases gradually from the front stagnation point towards the separation point and has its minimum value near the separation point. After the separation point the heat transfer coefficient increases, because considerable turbulence exists over the rear side of the tube, where eddies of the wake sweep the surface, in the case of a circular tube without a splitter plate. For comparison of the Nusselt number distribution on the circular tube with the Nusselt number distribution on the splitter plate and the V-shaped profile, Graphs 3.2 and 3.4 show the variation of Nusselt number along the length of the splitter plate (from L = 0 to L = 0.05 m) and the V-shaped profile (L = D). The splitter plate is attached in a longitudinal slot in the circular tube at θ = 180°; therefore the Nusselt number on the circular tube at θ = 180° and at splitter plate length L = 0 is assumed to be the same.
James E. O'Brien (2000) presented the results of an experimental study of forced convection heat transfer in a narrow rectangular duct fitted with either a circular tube or an elliptical tube in crossflow. The duct was designed to simulate a single passage in a fin-tube heat exchanger. Heat transfer measurements were obtained using a transient technique in which a heated airflow is suddenly introduced to the test section.
The duct had a duct height of 1.106 cm and a duct width-to-height ratio, W/H, of 11.25. The test section length yielded L/H = 27.5 with a flow development length of L/H = 30. The test cylinder was sized to provide a diameter-to-duct-height ratio, D/H, of 5. The elliptical tube had an aspect ratio of 3:1 and a/H equal to 4.33. Heat transfer measurements were obtained using a transient technique in which a heated airflow was suddenly introduced to the ambient-temperature test section. High-resolution local test-surface temperature distributions were obtained at several times after initiation of the transient using an imaging infrared camera. Corresponding local fin-surface heat transfer coefficient distributions were calculated from a locally applied one-dimensional semi-infinite inverse heat conduction model. Heat transfer results were obtained over an airflow rate ranging from 1.56 x 10-3 to 15.6 x 10-3 kg/s. These flow rates correspond to a duct-height Reynolds number range of 630 to 6300.
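As a rough cross-check of the quoted range (not part of the original paper), the duct-height Reynolds number can be estimated from the mass flow rate and the duct geometry; the air viscosity used below is an assumed value for heated air.

H = 0.01106            # duct height, m (1.106 cm, from the paper)
W = 11.25 * H          # duct width, m (W/H = 11.25, from the paper)
mu = 2.0e-5            # dynamic viscosity of heated air, Pa*s (assumed)

def duct_height_reynolds(m_dot):
    """Re_H = rho*V*H/mu with V = m_dot/(rho*W*H); density cancels."""
    return m_dot * H / (W * H * mu)

for m_dot in (1.56e-3, 15.6e-3):   # airflow range from the paper, kg/s
    print(f"m_dot = {m_dot:.2e} kg/s -> Re_H ~ {duct_height_reynolds(m_dot):.0f}")
# Prints roughly 630 and 6300, consistent with the range reported in the paper.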

CONCLUSION
This paper describes the influence of various types of passive techniques used to enhance the performance of tube bank heat exchangers, and the performance parameters, such as the flow behavior and heat transfer characteristics, which increase the heat transfer rate. Among the various passive techniques, the oval-shaped tube bank is found more effective compared with the circular tube bank because of its smaller face area. Oval-shaped tube heat exchangers are more compact in size than circular tube heat exchangers, which means that more tubes can be accommodated in a specified volume, indicating a higher heat transfer rate. Employing oval-shaped tubes instead of circular tubes in tube banks reduces the wake size and turbulence generation and modifies the boundary layer over the tubes. By creating a streamlined extension of the circular tube, the oval shape reduces the wake size, which adds to its better combined thermal-hydraulic performance and indicates encouraging characteristics for using oval-shaped tubes in heat exchangers.


REFERENCES:
[1] A. Dewan, P. Mahanta, K. Sumithra Raju and P. Suresh Kumar, "Review of passive heat transfer augmentation techniques", Proceedings of the Institution of Mechanical Engineers, Part A: Journal of Power and Energy, 2004, 218: 509.
[2] Kumbhar D.G. and Dr. Sane N.K., "Heat Transfer Enhancement in a Circular Tube Twisted with Swirl Generator: A Review", Proc. of the 3rd International Conference on Advances in Mechanical Engineering, January 4-6, 2010.
[3] S.S. Joshi et al., (IJAEST) International Journal of Advanced Engineering Sciences and Technologies, Vol. No. 12, Issue No. 2, 041-048.
[4] Jong-Yeon Hwang, Kyung-Soo Yang and Seung-Han Sun, "Reduction of flow-induced forces on a circular cylinder using a detached splitter plate", Physics of Fluids, Volume 15, Number 8, August 2003.
[5] Huseyin Akilli, Cuma Karakus, Atakan Akar, Besir Sahin and Filiz Tumen, "Control of Vortex Shedding of Circular Cylinder in Shallow Water Flow Using an Attached Splitter Plate", Journal of Fluids Engineering, April 2008.
[6] W.A. Khan, J.R. Culham and M.M. Yovanovich, "Convection heat transfer from tube banks in crossflow: Analytical approach", Int. Journal of Heat and Mass Transfer, 49 (2006), 4831-4838.
[7] W.A. Khan, J.R. Culham and M.M. Yovanovich, "Optimal Design of Tube Banks in Crossflow Using Entropy Generation Minimization Method", Journal of Thermophysics and Heat Transfer, Vol. 21, No. 2, April-June 2007.
[8] S.G. Chakrabarty and Dr. U.S. Wankhede, "Flow and heat transfer behaviour across circular cylinder and tube banks with and without splitter plate", International Journal of Modern Engineering Research, ISSN: 2249-6645, Vol. 2, Issue 4, July 16, 2012, pp. 1529-1533.
[9] James E. O'Brien and Manohar S. Sohal, Proceedings of NHTC'00, 34th National Heat Transfer Conference, Pittsburgh, Pennsylvania, August 20-22, 2000.


Design and Comparison of Digital IIR Filters for Reduction of artifacts from
Electrocardiogram Waveform
Babak Yazdanpanah1, K. Sravan Kumar2, Dr. G.S.N. Raju3
1 Senior Research Scholar, Centre for Biomedical Engineering, Department of ECE, Andhra University, Visakhapatnam, India
2 M.Tech, Centre for Biomedical Engineering, Department of ECE, Andhra University, Visakhapatnam, India
3 Professor, Department of Electronics and Communication Engineering, Andhra University, Visakhapatnam, India
Email: babak_y1979@yahoo.com

Abstract - The electrocardiogram (ECG) signal has been broadly used in cardiac pathology to find heart disease. The ECG signal is commonly corrupted by disparate artifacts such as baseline wander, power line interference (50/60 Hz) and electromyography noise, and these should be eliminated before diagnosis. The function proposed in this paper is the removal of low frequency interference, i.e. baseline wandering, and high frequency noise, i.e. electromyography noise, from the ECG signal, and digital filters are applied to eliminate them. A digital infinite-impulse response (IIR) filter is designed in this article. The digital filters implemented are IIR filters with different approximation methods: Butterworth, Chebyshev I, Chebyshev II and Elliptic. The results are obtained at orders 1, 2 and 3. The signals are taken from the MIT-BIH database, which contains normal and abnormal waveforms. The task has been carried out in MATLAB, where the filters are implemented with the FDA Tool. The results received for all IIR filters with the different methods are evaluated using the waveforms, power spectral density, signal-to-noise ratio (SNR) and mean square error (MSE) of the noisy and filtered ECG signals. The filter which shows the best outcomes is the Butterworth.

Keywords: ECG, IIR filter, SNR, MSE, MATLAB, Butterworth, Chebyshev I, Chebyshev II, Elliptic.
INTRODUCTION
The electrocardiogram (ECG) is a time-varying signal reflecting the ionic current flow which causes the cardiac fibers to contract and subsequently relax. The ECG is obtained by recording the potential difference between two electrodes placed on the surface of the skin. A single normal period of the ECG represents the consecutive atrial depolarization/repolarization and ventricular depolarization/repolarization which happen with each heartbeat. These may be approximately associated with the peaks and troughs of the ECG waveform labelled P, Q, R, S, and T as shown in Fig. 1.
Extracting helpful clinical data from the noisy ECG requires reliable signal processing methods [1]. These include R-peak detection [2], [3], QT-interval detection [4], and the derivation of heart rate and respiration rate from the ECG [5], [6]. The RR-interval is the time between consecutive R-peaks, and the inverse of this time interval gives the instantaneous heart rate. A series of RR-intervals is called an RR tachogram, and the variability of these RR-intervals reveals significant data about the physiological condition of the subject [7]. Nowadays, new biomedical signal processing algorithms are generally evaluated by applying them to ECGs in a large database like the Physionet database [8]-[10].
The ECG signal may be corrupted by different types of artifacts [9]. ECG signals are generally corrupted by unwanted interference like motion artifacts, muscle noise, electrode artifacts, baseline drift noise and respiration. So, for correct and significant clinical data of the heart, these artifacts have to be removed or filtered out, for which analog and digital filters are employed; but digital filters at present perform better and offer more benefits compared to analog ones. Digital filters are more accurate due to the absence
of instrumentation.
Typical digital signal processing (DSP) includes the z-transform, correlation, the Fourier transform, convolution, filtering, etc. The benefits of DSP are that it is programmable and has excellent reliability, high accuracy, strong anti-interference, ease of maintenance, and ease of designing for linear phase. IIR methods have an impulse response function that is non-zero over an infinite duration of time. An IIR filter can be realized as either an analog or a digital filter. In a digital filter, the output feedback is readily apparent in the equation describing the output.
The process of removing noise is successful when applying filtering methods such as linear filters and moving average filters. The complete filter design is accomplished with the FDA Tool in MATLAB. This paper displays the performance of digital IIR filters with different methods. The results are compared in terms of frequency spectral density, signal-to-noise ratio (SNR) and mean square error (MSE). The designed filters are tested with samples from the MIT-BIH database obtained through the Physionet website.

Figure 1: Schematic ECG Signal

Digital IIR filter


Digital filters are designed by using the values of both the past outputs and the present input, an action brought about by convolution. If such a filter is subjected to an impulse, its output need not necessarily become zero. The impulse response of such a filter may be infinite in duration. Such a filter is known as an Infinite Impulse Response filter or IIR filter. The infinite impulse response of such a filter implies the capability of the filter to have an infinite impulse response. This means that the system is prone to feedback and instability.
The paper considers various types of IIR filters including the Butterworth, Chebyshev I & II and Elliptic filters in low-pass, high-pass and band-stop configurations. IIR filters are designed fundamentally by the impulse invariance or the bilinear transformation method. An IIR filter is described by a difference equation of the standard form

y(n) = \sum_{k=0}^{M} b_k x(n-k) - \sum_{k=1}^{N} a_k y(n-k)    (1)
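A minimal numerical illustration of the difference equation above is sketched below; it is not from the paper, and the coefficient values are arbitrary assumptions chosen only to show that an IIR impulse response decays without ever becoming exactly zero.

import numpy as np
from scipy.signal import lfilter

b = [0.2, 0.2]        # feed-forward (numerator) coefficients, chosen arbitrarily
a = [1.0, -0.6]       # feedback (denominator) coefficients; a[0] is normalized to 1
x = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # unit impulse input

y = lfilter(b, a, x)  # evaluates y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1]
print(y)              # [0.2, 0.32, 0.192, 0.1152, 0.06912]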
A) Butterworth Filters
The Butterworth filter is a kind of signal processing filter designed to have as flat a frequency response as possible in the passband. It is also referred to as a maximally flat magnitude filter. Butterworth filters are normal in character and of different orders, the lowest orders performing best in the time domain and the higher orders performing better in the frequency domain. Butterworth or maximally flat filters have a uniform amplitude frequency response which is maximally flat at zero frequency, and the amplitude frequency response decreases logarithmically with increasing frequency. The Butterworth filter has minimal phase shift over the filter's passband when compared to other conventional filters. Its squared magnitude frequency response is

|H(\Omega)|^2 = \frac{1}{1 + (\Omega/\Omega_c)^{2N}}    (2)

where N is the filter order and \Omega_c the cut-off frequency.
B) Chebyshev Filters:
Chebyshev filters have the feature that they minimize the error between the idealized and the real filter characteristic over the range of the filter, but with ripples in the passband. Because of the passband ripple intrinsic in Chebyshev filters, the ones that have a smoother response in the passband but a more irregular response in the stopband are preferred for some applications. Chebyshev filters are of two types: Chebyshev I filters are all-pole filters which are equi-ripple in the passband and monotonic in the stopband; Chebyshev II filters contain both poles and zeros, presenting monotonic behaviour in the passband and equi-ripple in the stopband. The squared magnitude frequency response of the Chebyshev I filter is given by

|H(\Omega)|^2 = \frac{1}{1 + \epsilon^2 T_N^2(\Omega/\Omega_c)}    (3)

where \epsilon is a parameter related to the ripple present in the pass band and T_N is the Chebyshev polynomial of order N; in terms of the passband ripple r (in dB), it is commonly expressed as

\epsilon = \sqrt{10^{r/10} - 1}    (4)

C) Elliptic Filters:
Elliptic filters are characterized by equi-ripple in both the passband and the stopband. The amount of ripple in each band is independently adjustable, and no other filter of equal order can have a faster transition in gain between the passband and the stopband for the given values of ripple (whether the ripple is equalized or not). Alternatively, one may give up the ability to independently adjust the passband and stopband ripple, and instead design a filter which is maximally insensitive to component variations. They provide a realization with the lowest order for a specific set of specifications. The squared magnitude response is

|H(\Omega)|^2 = \frac{1}{1 + \epsilon^2 R_N^2(\xi, \Omega/\Omega_c)}    (5)

where R_N is the Nth-order elliptic rational function and \xi the selectivity factor. As the ripple in the stop band approaches zero, the filter becomes a type I Chebyshev filter. As the ripple in the passband approaches zero, the filter becomes a type II Chebyshev filter and finally, as both ripple values approach zero, the filter becomes a Butterworth filter.
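The sketch below illustrates (it is not the authors' MATLAB/FDA Tool implementation) how the four approximation methods discussed above could be designed in SciPy; the order, ripple and attenuation values are assumptions for demonstration only.

from scipy import signal

fs = 360.0                      # sampling frequency used for the MIT-BIH records, Hz
wc = 100.0 / (fs / 2.0)         # normalized cut-off (relative to Nyquist) for a 100 Hz low-pass
order = 1

filters = {
    "Butterworth":  signal.butter(order, wc, btype="low"),
    "Chebyshev I":  signal.cheby1(order, 1, wc, btype="low"),    # 1 dB passband ripple (assumed)
    "Chebyshev II": signal.cheby2(order, 40, wc, btype="low"),   # 40 dB stopband attenuation (assumed)
    "Elliptic":     signal.ellip(order, 1, 40, wc, btype="low"), # ripple/attenuation values assumed
}

for name, (b, a) in filters.items():
    # b and a are the difference-equation coefficients of equation (1)
    print(name, "b =", b.round(4), "a =", a.round(4))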

METHODOLOGY
The ECG signal fundamentally contains frequencies between 0 and 250 Hz. The sampling frequency was selected to facilitate the performance of a 60 Hz digital notch filter in arrhythmia detectors; the sampling frequency of the data signal is 360 Hz and the amplitude 1 mV. We designed the filtering for the corrupted ECG signal in four steps. In the first step, with the help of the FDA Tool in MATLAB, an IIR high pass filter with cut-off frequency 0.5 Hz is designed to eliminate baseline wander noise from the noisy ECG signal; in the second step, power line interference (50/60 Hz) is removed by a band stop filter with cut-off frequencies 59.5 Hz - 60.5 Hz; in the third step, EMG noise is reduced by applying a low pass filter with cut-off frequency 100 Hz; finally, a moving average filter is applied to smooth the ECG waveform. The task was accomplished at various orders. The efficiency analysis comprised the comparison of outcomes generated by the filters designed during the modelling, with filter parameters replaced so as to arrive at the configuration where the outcomes obtained were best. The results were collected by applying the filters to the ECG database from the MIT-BIH site. The ECG samples 100m, 104m, 105m, 106m, 108m and 109m (MLII, VI), obtained from the MIT-BIH arrhythmia database, which consists of 48 half-hour excerpts of two-channel ambulatory ECG recordings, are utilized to verify the results of the digital filters designed as described above in the methodology.
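A hedged sketch of this four-step chain is given below; the paper used MATLAB's FDA Tool, so the SciPy calls and the synthetic test waveform here are stand-ins, not the authors' implementation.

import numpy as np
from scipy import signal

fs = 360.0                                   # sampling frequency, Hz
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.0 * t)            # placeholder waveform standing in for an MIT-BIH record
noisy = ecg + 0.3 * np.sin(2 * np.pi * 0.3 * t) \
            + 0.2 * np.sin(2 * np.pi * 60 * t) \
            + 0.05 * np.random.randn(t.size)

# Step 1: high-pass at 0.5 Hz to remove baseline wander
b, a = signal.butter(1, 0.5 / (fs / 2), btype="high")
x = signal.filtfilt(b, a, noisy)
# Step 2: band-stop 59.5-60.5 Hz to remove powerline interference
b, a = signal.butter(1, [59.5 / (fs / 2), 60.5 / (fs / 2)], btype="bandstop")
x = signal.filtfilt(b, a, x)
# Step 3: low-pass at 100 Hz to reduce EMG noise
b, a = signal.butter(1, 100 / (fs / 2), btype="low")
x = signal.filtfilt(b, a, x)
# Step 4: moving-average smoothing of the de-noised waveform
x = np.convolve(x, np.ones(5) / 5, mode="same")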
Fig.1. ECG before & after filtering of Baseline Wander (time-domain traces of the noisy ECG signal with baseline wander and of the ECG signal with reduced baseline wander; voltage/mV vs. time/s).
Fig.2. ECG before & after filtering of Baseline Wander (FFT) (amplitude spectra of the noisy ECG signal and of the signal with reduced baseline wander; amplitude vs. frequency/Hz).

RESULTS
The results were generated with the designed filters applied to various raw MIT-BIH records for the various methods of digital IIR high pass and low pass filters. The filters with the various methods at orders 1, 2 and 3 show different results. The graphs of the signals and their power spectral densities before and after filtering are shown for the various methods at order 1.
Fig.3. ECG before & after filtering of Power line Interference (spectra of the ECG signal with powerline interference and with reduced powerline interference; amplitude vs. frequency/Hz).
Fig.4. ECG before & after filtering of EMG noise (FFT) (spectra of the ECG signal with EMG noise and with reduced EMG noise; amplitude vs. frequency/Hz).


From the Butterworth-based IIR filter response, it was clear that the filter has sharp attenuation and pulsation in the stop band. In the pass band, the filter was found to be stable. The Chebyshev I, Chebyshev II and Elliptic filters do not have a sharp cut-off like the Butterworth. The above figures show the filtered ECG signal after passing through the IIR filter based on the various approximation methods. Using these methods, we designed the high pass filter with cut-off frequency 0.5 Hz for eliminating baseline wandering, the band stop filter with cut-off frequencies 59.50 Hz - 60.50 Hz for removing power line interference, and the low pass filter with cut-off frequency 100 Hz for reducing EMG noise.
The comparison of the different IIR filters by calculation of SNR (signal-to-noise ratio) and MSE (mean square error) was done at order 1. The results are shown in tabular form:

SNR OF ECG BEFORE AND AFTER FILTERING, ORDER 1
(Signal-to-noise ratio of the noisy and IIR-filtered ECG signal, by approximation method.)

MIT-BIH real ECG data | SNR of noisy ECG signal | Butterworth | Chebyshev Type 2 | Elliptic | Chebyshev Type 1
100m                  | 12.2022                 | 12.6683     | 12.2861          | 2.0299   | 12.2861
104m                  | 8.0892                  | 7.6443      | 7.7751           | 2.1893   | 7.7751
105m                  | 8.3009                  | 8.2761      | 8.2661           | 3.8530   | 8.2661
106m                  | 10.1212                 | 9.9547      | 9.9547           | 0.7720   | 9.9547
108m                  | 4.7045                  | 4.1787      | 4.2032           | 0.9093   | 4.2032
109m                  | 6.3300                  | 6.2211      | 6.2263           | 1.1050   | 6.1828

MSE OF ECG BEFORE AND AFTER FILTERING, ORDER 1
(Mean square error of the noisy and IIR-filtered ECG signal, by approximation method.)

MIT-BIH real ECG data | MSE of noisy ECG signal | Butterworth | Chebyshev Type 2 | Elliptic   | Chebyshev Type 1
100m                  | 0.1391                  | 0.0284      | 0.0293           | 6.5477e-14 | 0.0293
104m                  | 0.1303                  | 0.0612      | 0.0654           | 9.9768e-14 | 0.0654
105m                  | 0.1423                  | 0.0914      | 0.0934           | 5.0927e-14 | 0.0937
106m                  | 0.1288                  | 0.0897      | 0.0925           | 5.3251e-14 | 0.0925
108m                  | 0.0892                  | 0.0208      | 0.0226           | 8.5044e-14 | 0.0226
109m                  | 0.2500                  | 0.1653      | 0.1698           | 1.3127e-14 | 0.1680
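The sketch below shows one way such SNR and MSE figures could be computed; it is not the authors' code, the definitions follow the usual textbook forms, and the signals are random stand-ins for the clean and filtered ECG records.

import numpy as np

def snr_db(reference, test):
    """Signal-to-noise ratio in dB, treating (test - reference) as the residual noise."""
    noise = test - reference
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

def mse(reference, test):
    """Mean square error between the reference and the test signal."""
    return np.mean((test - reference) ** 2)

clean = np.random.randn(1000)                      # stand-in for a clean ECG record
filtered = clean + 0.05 * np.random.randn(1000)    # stand-in for a filtered record
print(snr_db(clean, filtered), mse(clean, filtered))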


Fig.5. ECG signal after denoising and smoothing (time-domain traces of the ECG signal without powerline interference, the ECG signal without EMG noise, and the smoothed ECG signal; voltage/mV vs. time/s).
Fig.6. Smoothed ECG Signal (FFT) (amplitude spectrum of the smoothed ECG signal; amplitude vs. frequency/Hz).

CONCLUSION
This article introduced a method for artifact reduction from the ECG signal, which basically consists of designing filters with various approximation methods (Butterworth, Chebyshev I, Chebyshev II and Elliptic) at different orders 1, 2 and 3 with proper parameters, for the removal of baseline wander noise, power line interference and electromyography noise. The results for the various filters are considered and evaluated by the waveforms, power spectral density (PSD), signal-to-noise ratio (SNR) and mean square error (MSE), where the Butterworth shows the best outcome. Order 1 of the designed filters shows the best results in comparison to orders 2 and 3. Hence it can be concluded that the Butterworth filter shows the best outcomes at order 1.

Fig.7. ECG signal before & after filtering with power spectral density (PSD): periodogram power spectral density estimates (power/frequency in dB/Hz vs. frequency in Hz).


REFERENCES:
[1] A.L.Goldberger and E.Goldberger, ClinicalElectrocardiography.St.Louis, MO: Mosby, 1977.
[2] J.Pan and W.J.Tompkins,A real-time QRS detection algorithm,IEEE Trans.Biomed Eng., vol.BME-32, pp.220-236, Mar.1985.
[3] D.T. Kaplan, "Simultaneous QRS detection and feature extraction," in Computers in Cardiology. Los Alamitos, CA: IEEE Comput. Soc. Press, 1991, pp. 503-506.
[4] P.Davery,A new physiological method for heart rate correction of the QT interval,in Heart,1999,VOL.82,PP.183-186.
[5] G.B. Moody, R.G. Mark, A. Zoccola, and S. Mantero, "Derivation of respiratory signals from multi-lead ECGs," in Comput. Cardiol., 1985, vol. 12, pp. 113-116.
[6] G.B.Moody,R.G.Mark,M.A.Bump,J.S.Weinstein,A.D.Berman,J.E Mietus,and A.L.Goldberger,Clinical validation of the ECGderived respiration (EDR)technique, compute. Cardiol, vol.13, pp.507-510, 1986.
[7] M.Malik and A.J.Camm,Heart Rate Variability.Armonk,NY:Futura,1995.
[8] A.L.Goldberger,L.A.N.Amaral,L.Glass,J.M.Hausdoroff,P.C.P.Ch.Ivanov,R.G.Mark,J.E.Miertus, G.B.Mody,C.K.Peng, and H.E.
Stanley,Physiobank,physiotoolkit,and physionet: components of a new research resource for complex physiologic
signals,Circulation,vol.101,no.23,pp.e215-e220,2000.
[9] B.Chandrakar,O.P.Yadav and V.K.Chandra,A Survey Of Noise Removal Techniques For Ecg Signal. In International Journal of
Advanced Research in Computer and Communication Engineering vol.2.Issue 3, pp.1354-1357, March2013.
[10] Patrick E. McSharry3, Gari D. Clifford, Lionel Tarassenko, and Leonard A. Smith A Dynamical Model for Generating
Synthetic Electrocardiogram Signals. IEEE Transaction on Biomedical Engineering.vol.50.
[11] J.S. Sørensen, L. Johannesen, U.S.L. Grove, K. Lundhus, J-P. Couderc, C. Graff, "A Comparison of IIR and Wavelet Filtering for Noise
Reduction of the ECG.Computing in Cardiology 2010;37:489-492.
[12] A.Kam,A.Cohen. Detection of Fetal ECG With IIR Adaptive Filtration And Genetic Algorithms Acoustics, Speech, and Signal
Processing, 1999. Proceedings., 1999 IEEE International Conference . 2335 - 2338 vol.4, 15-19 Mar 1999.
[13] Soo-Chang Pei, Chien-cheng Tseng. Elimination of AC Interference Electrocardiogram Using IIR Notch Filter with Transient
Suppression IEEE Transactions on Biomedical Engineering, VOL.42, NO.11, November 1995.
[14] Nalini Singh, Jhansi, J.P. Saini, Shahanaz Ayub, Design of Digital IIR Filter for Noise.2013 5 th International Conference on
Computational Intelligence and Communication.
[15] Sande Seema Bhogeshwar ,M.K.Soni,Dipail Bansal, Design of Simulink Model to denoise ECG Signal Using Various IIR &
FIR Filters.2014 International Conference on Reliability, Optimization and Information Technology


Review of Security Policy Enforcement for Mobile Devices


Dimpal V. Mahajan1, Prof. Trupti Dange2
1 Department of Computer Engineering, RMD Sinhgad School of Engineering, University of Pune, India; dimpal.mahajan27@gmail.com
2 Professor, Department of Computer Engineering, RMD Sinhgad School of Engineering, University of Pune, India; trupti.dange.rmdstic@sinhgad.edu

Abstract - Mobile devices like smartphones and tablets are commonly used for personal and business purposes. Enterprises are more concerned about the security of mobile devices than personal mobile subscribers are. The challenges an enterprise may face include
unlimited access to corporate resources, lack of encryption on corporate data, unwillingness to backup data, etc. Many of these issues
have been resolved by auditing and enforcing security policies in enterprise networks. However, it is difficult to audit and enforce
security policies on mobile devices. A substantial discrepancy exists between enterprise security policy administration and security
policy enforcement. So in this paper we will study different security policies.

Keywords - Mobile device, security policy enforcement, constraints, access control, confidentiality, Android.
1. INTRODUCTION
Mobile computing and communications technology has evolved tremendously[4] over the last decade. New mobile computing
devices with better design and increased capabilities are released frequently on the market. Consequently, rich mobile services such as
e-mail, scheduler, contact synchronization and even scaled-down versions of word processors, spreadsheets and presentation software
have become more and more common among mobile users, especially in the enterprises.

Android is the most popular open source [1] and fully customizable software stack for mobile devices. Introduced by
Google, it includes an operating system, system utilities, middleware in the form of a virtual machine, and a set of core applications
including a web browser, dialer, calculator and a few others. Nowadays people are using mobile phones and tablets for accessing the Internet. Mobile technology has changed our daily lives in many different ways, such as connecting with people, collecting information, and sharing information. Security concerns are increasing as the popularity of smartphones and tablets increases. Android is an open platform [2] that provides few direct protections for the content placed on the phone [2]. Access controls restrict access
to application interfaces (e.g., by placing permissions on application components in Android), rather than placing explicit access
controls on data they handle. Therefore, what limited content protections exist are largely a by-product of the way interfaces are
designed and permissions (often capriciously) assigned. Thus, a malicious application with the appropriate permissions can exfiltrate
even the most sensitive of data from the phone. Malware has recently begun to exploit such limitations. Moreover even in the absence
of malicious applications commercial interests such as media providers wish to provide content without exposing themselves to
content piracy.
Work on countering aggressive malware and securing Android-powered devices [3] has focused on three major directions. The first one consists of statically and dynamically analyzing application code to detect malicious activities before the application is loaded onto the user's
device. The second consists of modifying the Android OS to insert monitoring modules at key interfaces to allow the interception of
malicious activity as it occurs on the device. The third approach consists of using virtualization to implement rigorous separation of
domains ranging from lightweight isolation of applications on the device to running multiple instances of Android on the same device
through the use of a hypervisor.
Competitive advantages for both mobile businesses and individual users are provided by the corporate usage of handheld devices [5], such as mobile devices, and this usage has been growing fast in recent years due to the wide dispersion of mobile devices. As the storage capability of mobile devices spans internal memories, removable memories (e.g., Secure Digital, Multimedia) and
SIM cards, the overall amount of sensitive personal and corporate information that can be stored in those devices can be relevant.
Furthermore, some storage volumes (e.g., removable memory cards) are essentially less secure than other volumes (e.g., SIM cards).
Hence many companies, when planning to provide their personnel with mobile devices, have to face the typical new threats related to
such devices. Unfortunately, when compared to personal computers, mobile devices lack a number of security features (e.g.,
effective anti-malware solutions, intrusion detection systems), which can be fundamental in order to protect corporate and personal
data as well.

2. Review on security policy on mobile devices


The author in [1], said that the first mass-produced consumer-market open source mobile platform is Android that allows developers
to easily develop applications and users to eagerly install them. The ability to install third party applications by users poses serious
security concerns. The existing security mechanism in Android allows a mobile phone user to see which resources an application requires, but she has no choice other than to allow access to all the requested permissions if she wishes to use the application. There
is no other way of allowing some permissions and denying others. Also there is no way of restricting the usage of resources based on
runtime constraints such as the location of the device or the number of times a resource has been previously used.
Third party developers creating applications for Android can submit their applications to Android Market from where users
can download and install them. While this provides a high level of availability of unique, specialized or general purpose applications,
it also gives rise to serious security concerns. When a user installs an application, she has to trust that the application will not misuse
her phone's resources. At install-time, Android presents the list of permissions requested by the application, which have to be granted
if the user wishes to continue with the installation. This is an all-or-nothing decision in which the user can either allow all permissions
or give up the ability to install the application. Once the user grants the permissions, there is no way of revoking these permissions from an installed application, or of imposing constraints on how, when and under what conditions these permissions can be used.
To deal with these problems, we have developed Android Permission Extension (Apex) framework[1], a comprehensive
policy enforcement mechanism for the Android platform. Apex gives a user several options for restricting the usage of phone
resources by different applications. The user may grant some permissions and deny others. This allows the user to use part of the
functionality provided by the application while still restricting access to critical and/or costly resources. Apex also allows the user to
impose runtime constraints on the usage of resources. Finally, the user may wish to restrict the usage of the resources depending on an
application's use, e.g., limiting the number of SMS messages sent each day. We define the semantics of Apex as well as the policy model used to describe these constraints. We also describe an extended package installer which allows end-users to specify their constraints without having to learn a policy language. Apex and the extended installer are both implemented with a minimal and backward-compatible change in the existing architecture and code base of Android for better acceptability in the community.
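A conceptual sketch of the kind of user-defined runtime constraint Apex supports is given below; Apex itself is implemented inside the Android framework, so this Python fragment and its names are purely illustrative assumptions.

from collections import defaultdict
from datetime import date

class PermissionPolicy:
    def __init__(self, granted, daily_limits):
        self.granted = set(granted)          # permissions the user chose to grant
        self.daily_limits = daily_limits     # e.g. {"SEND_SMS": 10} messages per day
        self.usage = defaultdict(int)
        self.day = date.today()

    def check(self, permission):
        if self.day != date.today():         # reset usage counters each day
            self.usage.clear()
            self.day = date.today()
        if permission not in self.granted:
            return False                     # denied outright by the user
        limit = self.daily_limits.get(permission)
        if limit is not None and self.usage[permission] >= limit:
            return False                     # runtime constraint exhausted for today
        self.usage[permission] += 1
        return True

policy = PermissionPolicy(granted={"SEND_SMS", "INTERNET"}, daily_limits={"SEND_SMS": 10})
print(policy.check("SEND_SMS"))      # True until 10 messages have been sent today
print(policy.check("READ_CONTACTS")) # False: permission was never granted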

In [2], the reach of cellular networks worldwide and the emergence of smartphones have led to a revolution in mobile content. Users
consume various content when, for example, exchanging photos, playing games, browsing websites, and viewing multimedia. Current
phone platforms provide protections for user privacy, the cellular radio, and the integrity of the OS itself. However, few offer
protections to protect the content once it enters the phone. For example, MP3-based MMS or photo content placed on Android smart
phones can be extracted and shared with impunity.
The authors [2] introduce the Policy Oriented Secure Content Handling for Android (Porscha) system. The Porscha system developed in this work places content proxies and reference monitors within the Android middleware to enforce DRM policies embedded in received content. An analysis of the Android market shows that DRM services should ensure that: a) protected content is accessible only by authorized phones, b) content is only accessible by provider-endorsed applications, and c) access is regulated by contextual constraints, e.g., used for a limited time, a maximum number of viewings, etc.
Porscha policies are imposed in two phases: the protection of content as it is delivered to the phone and the regulation of content use on the phone. For the former, Porscha binds policy and ensures content confidentiality "on the wire" using constructions and infrastructure built on Identity-Based Encryption [6]. For the latter, Porscha enforces policies by proxying content channels (e.g., POP3, IMAP, Active Sync) and placing reference monitor hooks within Android's Binder IPC framework.
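The fragment below is a purely illustrative sketch (not Porscha's actual API) of the three checks listed above: content bound to an authorized phone, accessed by a provider-endorsed application, and subject to contextual constraints such as expiry and a maximum number of viewings.

from datetime import datetime

class ContentPolicy:
    def __init__(self, allowed_device, endorsed_apps, expires, max_views):
        self.allowed_device = allowed_device    # e.g. the target phone's identity (assumed field)
        self.endorsed_apps = set(endorsed_apps) # provider-endorsed applications
        self.expires = expires                  # contextual constraint: time limit
        self.max_views = max_views              # contextual constraint: viewing count
        self.views = 0

    def may_open(self, device_id, app_id, now=None):
        now = now or datetime.now()
        if device_id != self.allowed_device:
            return False                        # content bound to another phone
        if app_id not in self.endorsed_apps:
            return False                        # application not endorsed by the provider
        if now > self.expires or self.views >= self.max_views:
            return False                        # contextual constraint violated
        self.views += 1
        return True

policy = ContentPolicy("310150123456789", {"com.example.player"},
                       datetime(2014, 12, 31), max_views=3)
print(policy.may_open("310150123456789", "com.example.player"))  # True on the authorized phone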
In [3], the increasing popularity of Google's mobile platform Android makes it the prime target of the latest wave of mobile malware. Most of the research on enhancing the platform's security and privacy controls requires extensive modification to the operating
system, which has significant usability issues and hinders efforts for widespread adoption. We develop a novel solution called
Aurasium that bypasses the need to modify the Android OS while providing much of the security and privacy that users desire. We
automatically repackage arbitrary applications to attach user-level sandboxing and policy enforcement code, which closely watches
the application's behavior for security and privacy violations such as attempts to retrieve a user's sensitive information, send SMS
covertly to premium numbers, or access malicious IP addresses. Aurasium can also detect and prevent cases of privilege growth
attacks. Experiments show that we can apply this solution to a large sample of benign and malicious applications with a near 100
percent success rate, without significant performance and space overhead. Aurasium has been tested on three versions of the Android
OS, and is freely available.
We develop a deployable technology called Aurasium, which is also a novel, simple, effective and robust technology.
Theoretically, we want Aurasium to be an application-hardening service: a user obtains arbitrary Android applications from
potentially untrusted places, but instead of installing the application as is, pushes the application through the Aurasium black box and
gets a hardened version. The user then installs this hardened version on the phone, assured by Aurasium that all of the application's interactions are closely monitored for malicious activities, and policies protecting the user's privacy and security are actively enforced. Aurasium does not need to modify the Android OS at all; instead, it enforces flexible security and privacy policies on arbitrary
applications by repackaging to attach sandboxing code to the application itself, which performs
monitoring and policy enforcement. The repackaged application package (APK) can be installed on a user's phone and will enforce at runtime any defined policy without altering the original APK's functionalities. Aurasium exploits Android's unique application
architecture of mixed Java and native code execution to achieve robust sandboxing. In particular, Aurasium introduces libc
interposition code to the target application, wrapping around the Dalvik virtual machine (VM) under which the application's Java code
runs. The target application is also modified such that the interposition hooks get placed each time the application starts.
Aurasium is able to interpose almost all types of interactions between the application and the OS, enabling much more fine-grained policy enforcement than Android's built-in permission system. For instance, whenever an application attempts to access a
remote site on the Internet, the IP of the remote server is checked against an IP blacklist. Whenever an application attempts to send an
SMS message, Aurasium checks whether the number is a premium number. Whenever an application tries to access private
information such as the International Mobile Equipment Identity (IMEI), the International Mobile Subscriber Identity (IMSI), stored
SMS messages, contact information, or services such as camera, voice recorder, or location, a policy check is performed to allow or
disallow the access. Aurasium also monitors I/O operations such as write and read. We evaluated Aurasium against a large number of
real-world Android applications and achieved over 99 percent success rate. Repackaging an arbitrary application using Aurasium is
fast, requiring an average of 10 seconds.
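Aurasium's real checks run as native interposition code inside the repackaged APK; the following Python fragment only illustrates, with assumed example values, the two policy decisions mentioned above (premium-rate SMS numbers and blacklisted IP addresses).

PREMIUM_PREFIXES = ("1900", "905")                 # illustrative premium-rate prefixes (assumed)
IP_BLACKLIST = {"203.0.113.7", "198.51.100.23"}    # illustrative blacklisted servers (assumed)

def allow_send_sms(number):
    """Deny SMS to premium-rate destinations unless the user explicitly approves."""
    return not number.lstrip("+").startswith(PREMIUM_PREFIXES)

def allow_connect(ip_address):
    """Deny connections to known-malicious IP addresses."""
    return ip_address not in IP_BLACKLIST

print(allow_send_sms("+19005551234"))   # False: premium-rate destination
print(allow_connect("198.51.100.23"))   # False: blacklisted IP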

In [4], an extended security architecture and policy model is proposed to address the lack of flexibility of the current security model employed on mobile computing platforms. The proposed model has the potential to open up the mobile device software market to third-party developers, and it also empowers the users to tailor security policies to their requirements in a fine-grained, personalized manner.
Our work focuses on the Java 2 Micro Edition (J2ME) - one of the most widely used virtual machine execution environments for
mobile computing devices today.
Our main contribution is the design and implementation of an extended version of the current J2ME, which we call xJ2ME, for extended J2ME. xJ2ME enables runtime enforcement of a much more expressive class of security policies compared to the current state of the art, allowing for fine-grained behavior control of individual applications. Furthermore, initial evaluations show no significant, or even noticeable, performance overheads. Having introduced the fundamental aspects of the J2ME architecture, in this section we present the extensions to the architecture and the modifications of its operational aspects that enable the support for fine-grained, history-aware, user-definable, per-application policy specification and enforcement. We preserve the fundamental aspects of the J2ME security model, such as its domain-based nature.
The aim of the work is to provide support for fine-grained, history-based, application-specific policy specification and enforcement within the J2ME framework. A simple example of a security policy that is supported by our extended J2ME architecture (xJ2ME) is "the browser may not download more than 300 KB of data per day". The figure below depicts the new Java security architecture. The Run-time Monitor is in charge of making resource access decisions. In order to grant or deny resource access, the Run-time Monitor
relies on the Policy Manager to identify the relevant application-specific policy. Once the policy is identified, the Run-time Monitor
evaluates its conditions in conjunction with resource usage history information of the system and MIDlet, as obtained from the History
Keeper. If the policy conditions are fulfilled, access is granted, otherwise a Security Exception is thrown.

Fig. Extending the J2ME security architecture with the Run-time Monitor.
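The sketch below illustrates the quoted history-based policy ("browser may not download more than 300 KB of data per day"); xJ2ME enforces this inside the Java runtime via the Run-time Monitor, Policy Manager and History Keeper, so the Python names here are illustrative assumptions only.

from datetime import date

class HistoryKeeper:
    """Tracks per-application resource usage, reset daily (illustrative stand-in)."""
    def __init__(self):
        self.bytes_today = {}
        self.day = date.today()

    def add(self, app, nbytes):
        if self.day != date.today():
            self.bytes_today.clear()
            self.day = date.today()
        self.bytes_today[app] = self.bytes_today.get(app, 0) + nbytes

    def used(self, app):
        return self.bytes_today.get(app, 0)

LIMIT = 300 * 1024        # 300 KB per day, as in the example policy
history = HistoryKeeper()

def may_download(app, nbytes):
    """Run-time monitor check: grant only if the daily quota would not be exceeded."""
    if history.used(app) + nbytes > LIMIT:
        return False      # in xJ2ME this would raise a SecurityException
    history.add(app, nbytes)
    return True

print(may_download("browser", 200 * 1024))  # True
print(may_download("browser", 200 * 1024))  # False: exceeds the 300 KB daily quota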

In [5], new perspectives related to the enforcement of security policies and services for mobile devices are the focus. We propose a novel methodological approach to protect the lifecycle of mobile devices, aimed at the secure management of such devices either directly by the user or by the organization. In order to review the feasibility and impact of our approach in a real scenario, we customized a mobile device with several security features respecting the goal of the proposed lifecycle. SecureMyDroid is based on our own customization of a Google Android OS version and was tested on actual mobile devices. SecureMyDroid is a prototype developed on a customized version of the Google Android OS. It is designed to support and realize practically the secure lifecycle management of mobile devices as proposed in this work. The basic step of this activity is to obtain, modify and compile the source code of the mobile operating system, so that the new OS image file can be deployed on the mobile device. Starting from this non-trivial result, we have realized different features for improving the security management of the entire lifecycle. We sketch some of these features next.
First of all, we patched the source code of the installation manager in order to inhibit the capability to install new applications on the device. This security feature ensures both that the user cannot install new personal applications (e.g., games, entertainment) on his/her business device and that the device cannot be compromised by the installation of mobile malware (e.g., viruses, worms, Trojans). SecureMyDroid also offers the possibility to disable the use of memory cards (e.g., secure digital cards). This countermeasure can prevent sensitive business data from being nimbly copied or moved out of the device.
We strengthened the relationship between the mobile device and a specific SIM card. Whenever the device is switched on, the
association with the SIM is checked; if the device recognizes that the inserted SIM does not match the expected one a dialog
prompting for a special PIN is shown. The device remains inactive until the user provides the correct PIN. After a given number of
unsuccessful attempts, the device can be permanently blocked.
We placed an event logger call into the source code of the most critical functions of the mobile device (e.g., making a call, sending an SMS/MMS/e-mail, connecting to a Wi-Fi network, syncing with a computer). When a stated time slot elapses, the collected log messages are transferred to a remote server using HTTPS connections.
SecureMyDroid can report all the actions performed by the user on the device to an organization's remote server through an HTTPS connection. At the same time, the organization can explicitly query the device through simple text messages in order to obtain specific information such as the current GPS position, the last critical executed operations, the last contacts added to the address book, etc.
Remote wiping is another interesting feature provided by SecureMyDroid. This function enables the organization to send a remote
wiping command to the mobile device. We implemented two different wiping modes: a soft mode and a hard mode. A soft wiping is
carried out only on all databases containing personal information (e.g., SMS, MMS, e-mail, address book, calendar). A hard wiping
implies the execution of a remove command on the mobile device shell; this completely deletes the OS and a new OS has to be
installed on the mobile device to make it again operational. Remote wiping can be activated with a simple text message or can be
executed when the mobile device starts with a non-authorized SIM.
Periodically, the security manager of the company can query the mobile device to receive the configuration of the customized OS. When an employee requires an upgrade of his/her customized mobile device, the security management using SecureMyDroid can back up all the data stored in the mobile device and, according to the employee's privileges, can produce a new version of SecureMyDroid. After this, the employee can restore all the useful data on the newly customized mobile device.
When either the device is assigned to a new employee or it must be permanently disposed of, the security management may check and export the log collected on the mobile phone. Moreover, before deleting all data, if needed, he/she can activate the backup of personal information.

3. CONCLUSION
We conclude that Apex allows users to specify detailed runtime constraints to restrict the use of sensitive resources by applications. Porscha is a content protection framework for Android that enables content sources to express security policies to ensure that documents are sent to targeted phones, processed by endorsed applications, and handled in intended ways. Aurasium is a robust and effective technology that protects users of the widely used Android OS from malicious and untrusted applications. xJ2ME is an extension to the security architecture of the Java Virtual Machine for mobile systems that supports fine-grained policy specification and run-time enforcement. SecureMyDroid is a customized version of the Google Android OS to support and realize practically the secure lifecycle management of mobile devices.

REFERENCES:
[1] M. Nauman, S. Khan, and X. Zhang, "Apex: Extending Android Permission Model and Enforcement with User-defined Runtime Constraints", Inf. Syst. J., pp. 328-332, 2010.
[2] M. Ongtang, K. Butler, and P. McDaniel, "Porscha: Policy oriented secure content handling in Android", in Proceedings of the Annual Computer Security Applications Conference (ACSAC), 2010, pp. 221-230.
[3] R. Xu, M. Park, and R. Anderson, "Aurasium: Practical Policy Enforcement for Android Applications", in USENIX Security 12, 2012.
[4] I. Ion, B. Dragovic, and B. Crispo, "Extending the Java Virtual Machine to Enforce Fine-Grained Security Policies in Mobile Devices", IEEE, 2007, pp. 233-242.
[5] A. Distefano, A. Grillo, G. F. Italiano, and A. Lentini, "SecureMyDroid: Enforcing Security in the Mobile Devices Lifecycle", Management, pp. 1-4, 2010.
[6] D. Boneh and M. Franklin, "Identity-Based Encryption from the Weil Pairing", in Proceedings of CRYPTO, 2001.

Review of Direct Evaporative Cooling System With Its Applications


Suvarna V.Mehere, Krunal P.Mudafale, Dr.Sunil.V.Prayagi
MTech Scholar, Dr.Babasaheb Ambedkar College of Engineering & Research
Assistant Professor, Dr.Babasaheb Ambedkar College of Engineering & Research
Professor, Dr.Babasaheb Ambedkar College of Engineering & Research
Nagpur-441110.India
E-mail: mehere_016@rediffmail.com
Abstract-- Evaporative cooling is an energy efficient and environmentally friendly air conditioning technology. Direct evaporative
cooling is a technology which involves adiabatic humidification and cooling of air, with supplementary heat exchange
facilities to lower the final air temperature and limit the rise in relative humidity. The concept has been adopted across engineering fields due to its
characteristics of zero pollution, energy efficiency, simplicity and good indoor air quality. This cooling effect has been used on
various scales, from small space cooling to large industrial applications. An extensive literature review has been conducted. The review
covers direct evaporative cooling design criteria, applications, advantages and disadvantages. Experimental and theoretical research
works on feasibility studies, performance tests and optimization as well as heat and mass transfer analysis are reviewed in detail.

Keywords: Evaporative cooling, Design, Applications, Performance test, Optimization, Humidity, Air Conditioning
INTRODUCTION
Evaporative cooling is the process by which the temperature of a substance is reduced due to the cooling effect from the
evaporation of water. The conversion of sensible heat to latent heat causes a decrease in the ambient temperature as water is
evaporated, providing useful cooling. Effective cooling can be accomplished by simply wetting a surface and allowing the water to
evaporate. Evaporative cooling occurs when air that is not too humid passes over a wet surface; the faster the rate of evaporation,
the greater the cooling [1]. When considering water evaporating into air, the wet-bulb temperature, as compared to the air's dry-bulb
temperature, is a measure of the potential for evaporative cooling. The greater the difference between the two temperatures, the greater the
evaporative cooling effect. Evaporative coolers provide cool air by forcing hot dry air over a wetted pad. The water in the pad
evaporates, removing heat from the air while adding moisture. When water evaporates, it draws energy from its surroundings, which
produces a considerable cooling effect. In the extreme case of air that is totally saturated with water, no evaporation can take place and
no cooling occurs [2].

Fig. 1. Schematic of a drip-type DEC [8]


Generally, an evaporative cooling structure is made of a porous material that is fed with water. Hot dry air is drawn over the
material. The water evaporates into the air raising its humidity and at the same time reducing the temperature of the air [3].The
fundamental governing process of evaporative cooling is heat and mass transfer due to the evaporation of water. This process is based
on the conversion of sensible heat into latent heat. Sensible heat is heat associated with a change in temperature. While changes in
sensible heat affect temperature, it does not change the physical state of water. Conversely, latent heat transfer only changes the
physical state of a substance by evaporation or condensation [4].As water evaporates, it changes from liquid to vapour. This change of
phase requires latent heat to be absorbed from the surrounding air and the remaining liquid water. As a result, the air temperature
decreases and the relative humidity increases. The maximum cooling that can be achieved is a reduction in air temperature to the wet-bulb temperature (WBT), at which point the air would be completely saturated [1,5].
This system is the oldest and the simplest type of evaporative cooling, in which the outdoor air is brought into direct contact
with water, i.e. cooling the air by converting sensible heat to latent heat. Ingenious techniques were used by ancient civilizations,
such as water contained in earthenware jars and wetted pads/canvas located in air passages [6]. The most commonly used
direct evaporative coolers are essentially metal cubes or plastic boxes with large flat vertical air filters, called pads, in their walls.
Consisting of wettable porous material, the pads are kept moist by the water dripped continuously onto their upper edges. The process
air is drawn by motorized fans within the coolers. After being cooled and humidified in the channels between the pads, the air leaves
the cooler as washed air for cooling use. Many coolers use two-speed or three-speed fans, so the users can modulate the leaving air
states as needed [7]. Fig. 1 is the schematic diagram of a drip-type DEC. Water is sprayed at the top edges of the pads and distributed
further by gravity and capillarity. The falling water is recirculated from the water basin by the water pump. In a DEC, the process air
comes into direct contact with the sprayed water and hence is cooled and humidified simultaneously by the evaporation of water [8].
This paper aims to review direct evaporative cooling technologies that could potentially provide sufficient cooling comfort,
reduce environmental impact and lower energy consumption in buildings. Experimental and theoretical research works on feasibility
studies, performance test and optimization as well as heat and mass transfer analysis are reviewed in detail. The review covers direct
evaporative cooling design criteria, applications, advantages and disadvantages.

DESIGN CRITERIA
The main parameter considered when evaluating the performance of direct evaporative coolers is the saturation effectiveness
(εs), which can be defined as [9]:

εs = (T11 - T12) / (T11 - Th11)    (1)

where
εs is the saturation effectiveness,
T11 is the outdoor air temperature,
T12 is the supply air temperature and
Th11 is the outdoor air wet bulb temperature.
The value of the Saturation Effectiveness depends on the following factors:
1- Air velocity through the cooler: For a specific cooler, with a particular area and water flow, an increment in the velocity
would result in:
A higher air volume flow.
A higher evaporative cooling effect, which can be calculated as:

Q = ma Cpa (T11 - T12) = ρ v S Cpa (T11 - T12)    (2)

where
Q is the sensible heat (W),
ma is the air mass flow rate (kg/s),
Cpa is the specific heat of the air (J/kg K),
v is the air velocity (m/s),
S is the section area (m2) and
ρ is the air density (kgda/m3).
In the majority of direct evaporative coolers, the velocity must not exceed 3 m/s to prevent the generation of aerosols. Otherwise it would
be necessary to install a drift eliminator, which slightly increases the pressure drop.
2- Water/air ratio (Mw/Ma): This is the ratio between the mass flow of atomized water and the air flow. A high value indicates
a larger contact area between air and water and thus a higher εs.
3- Configuration of the humid surface: A humidifier that provides a larger area and a longer time of contact between air and water
permits obtaining higher values of εs.
In these systems water recirculation is generally used to save water and improve economic results. The recirculated water temperature
is close to the wet bulb air temperature. Given that air comes into direct contact with the atomized water, this process permits cleaning
the air by removing particles of dust from it. However, if there are great amounts of dust or particles in the air, an additional filter
should be used to prevent the fouling of the humidifier and the nozzles [10].
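A brief numeric illustration of Eqs. (1) and (2) is given below (a sketch with assumed inlet conditions and pad face area, not data from the reviewed studies):

rho = 1.2        # air density, kg/m3 (assumed)
cpa = 1006.0     # specific heat of air, J/(kg K)
v = 2.5          # face velocity, m/s (kept below the 3 m/s aerosol limit)
S = 0.25         # pad face area, m2 (assumed)
T11, T12, Th11 = 38.0, 26.0, 22.0   # outdoor dry-bulb, supply, outdoor wet-bulb, in deg C

eps_s = (T11 - T12) / (T11 - Th11)      # Eq. (1): saturation effectiveness = 0.75
Q = rho * v * S * cpa * (T11 - T12)     # Eq. (2): sensible cooling, about 9.1 kW
print(eps_s, Q)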
RESEARCH AND APPLICATIONS


Theoretical analysis on evaporative cooling is essential for revealing the heat and mass transfer laws in evaporative cooling
process as well as for predicting the process outputs under various working conditions. A number of studies were conducted on
numerical simulation of heat and mass transfer of DEC. [8] Zhang and Chen analyzed the heat and mass transport processes in DEC
and developed a simplified physical model for the DEC, in which the process air was forced to flow over a wet plate with
simultaneous heat and mass transfer [11]. Qiang et al. [12] established a neural network model to predict the air handling performance
of DEC under various working conditions. The direct cooling technology using water evaporation is widely used for environmental
control in agricultural buildings. Zhang [13] studied the heat and mass transfer characteristics of a wet pad cooling device by assuming
complete evaporation of the spraying water. By analogous analysis of the heat and mass transfer processes in DEC and cooling tower,
Du et al. [14] obtained the cooling efficiency formula of DEC as a function of the pad thickness, heat transfer coefficient, face velocity
and specific pad surface. A mathematical model of DEC and its associated boundary conditions were established and the distributions
of the velocity and humidity were calculated by the SIMPLER method [15].
He et al. studied film media used for evaporative pre-cooling of air. The cooling efficiencies of the cellulose media ranged from 43% to
90%, while those of the PVC media ranged from 8% to 65% [16]. Lee and Lee fabricated and tested a regenerative evaporative cooler. To improve
the cooling performance, the water flow rate needs to be minimized as far as the even distribution of the evaporative water is secured.
At the inlet condition of 32 °C and 50% RH, the outlet temperature was measured at 22 °C, which is well below the inlet wet-bulb
temperature of 23.7 °C [17]. Xuan et al. first introduced the working principles and thermodynamic characteristics of different types of
evaporative cooling, including direct, indirect and semi-indirect evaporative cooling [8]. Fouda and Melikyan discussed the heat and mass
transfer process in a direct evaporative cooler. The predicted results show the validity of a simple mathematical model to design the direct
evaporative cooler, and that a direct evaporative cooler with a high performance pad material may be well applied for air conditioning
systems [18]. Kulkarni and Rajput made a theoretical performance analysis of direct evaporative cooling. The results of the analysis
showed that the aspen fiber material had the highest efficiency while the rigid cellulose material had the lowest efficiency [19].
Heidarinejad et al. discussed the results of a performance analysis of a ground-assisted hybrid evaporative cooling
system in Tehran. A Ground Coupled Circuit (GCC) provides the necessary pre-cooling effects, enabling a DEC that cools the air
even below its WBT. Simulation results revealed that the combination of GCC and DEC system could provide comfort condition
whereas DEC alone did not. This environmentally clean and energy efficient system can be considered as an alternative to the
mechanical vapor compression systems [20]. Kachhwaha and Prabhakar presented a simple and efficient methodology to design a household
desert cooler, predict the performance of the evaporative medium and determine the pad thickness and height for achieving maximum
cooling [21]. Dai and Sumathy theoretically investigated a cross-flow direct evaporative cooler, in which wet durable honeycomb
paper was used as the pad material, and the air channels formed by alternate layers of two kinds of paper with different wave angles
were regarded as parallel plate channels with constant spacing. [22]
Basediya et al. reported the basic concept and principle, the methods of evaporative cooling, their application for the preservation
of fruits and vegetables, and their economics. A zero-energy cooling system could be used effectively for short-duration storage of fruits
and vegetables even in hilly regions [2]. McKenzie et al. investigated the coupling of evaporative cooling and decentralized gray water
treatment in the residential sector [23]. Chen et al. presented a simulation case study of a two-stage DEC air conditioning application in the
northwest Chinese city of Lanzhou. The results showed that the indoor temperature and humidity level can be
maintained at design values using such a system. Moreover, the electrical installation power of a two-stage DEC system is only 50.7%
of that of conventional central air conditioning systems [24]. The energy saving potentials of using DEC for air precooling in air-cooled water chiller units in 15 typical cities in China were calculated by Jiang and Zhang [25]. The results showed that by using
DEC, the COP of the chillers can be enhanced by 15-25% in most of those cities. According to the analysis results of You et al., using
DEC in air-cooled chiller units, the energy efficiency ratio (EER) of air-cooled chiller units in Tianjin can be increased by 14%
[26]. FAO (1983) advocated a low cost storage system based on the principle of evaporative cooling for storage of fruits and
vegetables, which is simple and relatively efficient. The basic principle relies on cooling by evaporation [27].
The pad characteristics as well as the parameters of the air and the water in DEC significantly influence the performance of
DEC. Therefore, the measurement and testing of the heat and mass transfer processes of various pads under different inlet air parameters
have attracted a lot of attention [28]. You and Zhang studied the performances of the stainless steel pad and the perforated aluminum pad by
assuming an adiabatic humidifying process [29]. The optimum mass flow rates of the air and the water were 1.5-3.5 kg/(m²·s) and
0.8-1.4 kg/(m²·s), respectively. Moreover, Ge investigated the cooling and dehumidification performances of five types of perforated
aluminum pads with different sizes at various circulation water temperatures [30]. Yang et al. tested the cooling performance of an
aluminum pad with a specific surface area under higher air velocities; the cooling efficiency of the pad tested was about 60% [31].
Zhang and Chen measured the adiabatic humidifying and dehumidifying cooling performances of cellulose pads [32]. The experimental
results indicated the optimum airflow rate. Feng and Liu investigated the heat and mass transfer process of the foam ceramic pad
with a specific surface [33].

ADVANTAGES AND DISADVANTAGES


Based on the available information on direct evaporative cooling systems, the different advantages and disadvantages can be
summarized as follows:

Advantages:

The main advantages of evaporative coolers are their low cost and high effectiveness.
Permitting a wide range of applications and versatility in the buildings, dwellings, commercial and industrial sectors.
Direct evaporative devices act like filters, removing dust particles in air.
It requires no special skill to operate and therefore is most suitable for rural application.
It can be made from locally available materials.
Highly efficient evaporative cooling systems that can reduce energy use by 70%.
Less expensive to install and operate.
It can be easily made and maintained.

Disadvantages:

The water consumed in the operation of these systems is a scarce resource in the dry and hot climates where
these systems work best.
An evaporative cooling system requires a constant water supply to wet the pads; therefore, the pads need to be watered daily.
Space is required outside the home.
Water with a high mineral content leaves mineral deposits on the pads, and the interior of the cooler gets damaged [2,10].
DEC is only suitable for dry and hot climates. In moist conditions, the relative humidity can reach as high as 80%; such
high humidity is not suitable for direct supply into buildings, because it may cause warping, rusting, and mildew of
susceptible materials [34].

CONCLUSION
In this paper a review of direct evaporative cooling technology that could be efficiently applied in building air-conditioning was carried out. Using the evaporation of water as a means of decreasing air temperature is considered the most
environmentally friendly and effective cooling approach. Evaporative cooling differs from common air conditioning and refrigeration
technologies in that it can provide effective cooling without the need for an external energy source. If the power consumption can be
reduced to a moderate level, it will become serviceable for all sorts of requirements. Evaporative cooling is also important to the
development of independent temperature and humidity control air conditioning systems. More R&D on potential applications of
evaporative cooling systems would be advisable to promote environmentally friendly, energy-efficient, and comfortable air
conditioning and, hence, a more sustainable world.

REFERENCES:
[1] Liberty J.T., Ugwuishiwu B.O., Pukuma S.A. and Odo C.E., "Principles and Application of Evaporative Cooling Systems for Fruits and Vegetables Preservation", International Journal of Current Engineering and Technology, 3(3), 1000-1006, 2013.
[2] Basediya Amratlal, Samuel D.V.K. and Beera Vimala, "Evaporative cooling system for storage of fruits and vegetables - a review", J Food Sci Technol, 50(3), 429-442, 2013.
[3] FAO, "Small-scale Post-harvest Handling Practices - A Manual for Horticulture Crops", 1995.
[4] Watt J.R. and Brown W.K., "Evaporative Air Conditioning Handbook", 3rd ed., Lilburn, GA: Fairmont, 1997.
[5] P.M. La Roche, "Passive Cooling Systems", in Carbon Neutral Architectural Design, Boca Raton, FL: CRC Press, ch. 7, sec. 7.4, 242-258, 2012.
[6] Amer O., Boukhanouf R. and Ibrahim H.G., "A Review of Evaporative Cooling Technologies", International Journal of Environmental Science and Development, 6(2), 2015.
[7] Watt J.R., Brown W.K., "Evaporative air conditioning handbook", 3rd ed., USA, The Fairmont Press, 1997.
[8] Xuan Y.M., Xiao F., Niu X.F., Huang X., Wang S.W., "Research and application of evaporative cooling in China: A review (I) - Research", Renewable and Sustainable Energy Reviews, 16, 3535-3546, 2012.
[9] Rey Martínez F.J., Velasco Gómez E., Álvarez-Guerra M., Molina Leyva M., "Refrigeración evaporativa", El Instalador, 2000.
[10] E. Velasco Gómez, F.C. Rey Martínez, A. Tejero González, "The phenomenon of evaporative cooling from a humid surface as an alternative method for air-conditioning", International Journal of Energy and Environment, 1(1), 69-96, 2010.
[11] Zhang X., Chen P.L., "Analysis of non-equilibrium thermodynamics on the transport processes in direct evaporative cooling", Journal of Tongji University, 23(6), 638-643, 1995 [in Chinese].
[12] Qiang T.W., Shen H.G., Xuan Y.M., "Performance prediction of a direct evaporative cooling air conditioner using neural network method", HVAC, 35(11), 103, 2005 [in Chinese].
[13] Zhang J.Y., "Theoretical analysis of heat and mass transfer between water and vapor in wet pad", Transactions of the Chinese Society for Agricultural Machinery, 30(4), 47-50, 1999.
[14] Du J., Huang X., Wu J.M., "Analogous analysis of heat and mass transfer in direct evaporative cooling air conditioner and cooling tower", Refrigeration and Air-conditioning, 3(1), 114, 2003 [in Chinese].
[15] Du J., Wu J.M., Huang X., "Numerical simulation of heat and mass transfer process in direct evaporative cooling system", Refrigeration and Air-conditioning, 5(2), 28-33, 2005 [in Chinese].
[16] He Suoying, Guan Zhiqiang, Gurgenci Hal, Hooman Kamel, Lu Yuanshen, Alkhedhair Abdullah M., "Experimental study of film media used for evaporative pre-cooling of air", Energy Conversion and Management, 87, 874-884, 2014.
[17] Lee Joohyun, Lee Dae-Young, "Experimental study of a counter flow regenerative evaporative cooler with finned channels", International Journal of Heat and Mass Transfer, 65, 173-179, 2013.
[18] Fouda A., Melikyan Z., "A simplified model for analysis of heat and mass transfer in a direct evaporative cooler", Applied Thermal Engineering, 31, 932-936, 2011.
[19] Kulkarni R.K. and Rajput P.S., "Comparative performance of evaporative cooling pads of alternative materials", IJAEST, 10, 239-244, 2011.
[20] Heidarinejad Ghassem, Khalajzadeh Vahid, Delfani Shahram, "Performance analysis of a ground-assisted direct evaporative cooling air conditioner", Building and Environment, 45, 2421-2429, 2010.
[21] Kachhwaha S.S., Prabhakar Suhas, "Heat and mass transfer study in direct evaporative cooler", Applied Thermal Engineering, 31, 932-936, 2011.
[22] Dai Y.J., Sumathy K., "Theoretical study on a cross-flow direct evaporative cooler using honeycomb paper as packing material", Applied Thermal Engineering, 22, 1417-1430, 2002.
[23] Erica R. McKenzie, Theresa E. Pistochini, Frank J. Loge, Mark P. Modera, "An investigation of coupling evaporative cooling and decentralized graywater treatment in the residential sector", Building and Environment, 68, 215-224, 2013.
[24] Chen X., Yao Y., Lu Y.J., Liang W.G., "Analysis and research on reducing air conditioning power consumption and reducing the pollution of CFCs by utilizing the natural conditions of northwest China", Heating, Ventilation, and Air Conditioning, 4, 15-8, 1993 [in Chinese].
[25] Jiang Y., Zhang X.S., "The research of direct evaporation cooling and its application in air-cooled chiller unit", Building Energy and Environment, 25(2), 712, 2006 [in Chinese].
[26] You S.J., Zhang H., Liu Y.H., Sun Z.Q., "Performance of the direct evaporative air humidifier/cooler with aluminum packing and its use in air-cooled chiller units", HVAC, 25(5), 413, 1999 [in Chinese].
[27] FAO, "FAO production yearbook", vol. 34, FAO, Rome, 1983.
[28] Xuan Y.M., Xiao F., Niu X.F., Huang X., Wang S.W., "Research and applications of evaporative cooling in China: A review (II) - Systems and equipment", Renewable and Sustainable Energy Reviews, 16, 3523-3534, 2012.
[29] You S.J., Zhang W.J., "Study on the metal fill-type direct evaporative air cooler", Journal of Refrigeration, 4, 28-31, 1994 [in Chinese].
[30] Ge L.P., "The study on the heat and mass transfer of metallic fill", China, Tianjin University, 2003 [in Chinese].
[31] Yang Y., Huang X., Yan S.Q., "Experimental research and analysis on the performance of aluminum foil packing in packed-type spray chamber", China Construction Heating and Refrigeration, 12, 28-30, 2007 [in Chinese].
[32] Zhang Q.M., Chen P.L., "Heat and mass transfer performance of a domestic product of sprayed corrugated media", Journal of Tongji University, 23(6), 648-653, 1995 [in Chinese].
[33] Feng S.S., Liu Q.F., "Research of heat and mass transfer process on foam ceramic fillers surface", Contamination Control and Air-conditioning Technology, 4, 114, 2007 [in Chinese].
[34] N. Lechner, "Heating, Cooling, Lighting: Sustainable Design Methods for Architects", 3rd ed., New Jersey, U.S.A., Wiley, ch. 10, 276-293, 2009.

HYBRID METHOD FOR AUTOMATICALLY FILLING OF THE CHEMICAL


LIQUID INTO BOTTLES USING PLC & SCADA
JAGAT DHIMAN
M-Tech Scholar
Eternal university

ER.DILEEP KUMAR
Assistant professor, ECE Dept.
Eternal university

ABSTRACT- In today's fast-moving, highly competitive industrial world, a company must be flexible, cost effective and efficient if
it wishes to survive. In the process and manufacturing industries, this has resulted in a great demand for industrial control systems/
automation in order to streamline operations in terms of speed, reliability and product output. Automation plays an increasingly
important role in the world economy and in daily experience. A prototype of a commercial hybrid method of automatically filling
bottles is proposed: the process is controlled using a programmable logic controller (PLC), monitored using supervisory control and
data acquisition (SCADA), and visualized on the SCADA screen. This system provides the
provision of mixing any number of liquids in any proportion. Its remote control and monitoring make the system easily accessible
and warn the operator in the event of any fault. One of the important applications of automation is in the soft drink and other
beverage industries, where a particular liquid has to be filled continuously. The objective of this paper is the design, development and testing
of a real-time PLC-SCADA implementation for a ratio-control-based bottle filling plant. This work will provide low
operational cost, low power consumption, accuracy and flexibility to the system, and at the same time it will provide an accurate volume
of liquid in each bottle while saving operational time.
KEYWORDS - PLC, SCADA, Sensors, Automation, VFD, Conveyor
I. INTRODUCTION
This research is to design and develop a hybrid method of automatically filling chemical liquid into bottles using
PLC & SCADA and to show its visualization on the SCADA screen. We can operate and control the automatic filling of bottles while sitting far
away from the plant (for example, 500 km from the plant), and we can change all the parameters of the process using SCADA
technology, because the SCADA system is used to supervise and monitor the process. The purpose of this research is to fill two
types of chemical liquid into bottles randomly by using a PLC as the controller. This is a batch operation where a set amount of inputs to
be processed is received as a group, and an operation produces the finished product. In many automation processes it is necessary to
achieve a desired demand in some specified time. For example, if the production rate is 35 bottles per minute and the demand increases to 65
bottles per minute, the operating speed needs to be increased, whereas if the demand drops abruptly the production rate needs to be
decreased. Thus the research deals with overcoming the problems of speed control in order to have improved operational parameters.

II. PLC AS SYSTEM CONTROLLER


A programmable logic controller, or programmable controller, is a digital computer used for the automation of industrial processes, such as
control of machinery on factory assembly lines. Unlike general-purpose computers, the PLC is designed for multiple input and output
arrangements, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operations are typically stored in
battery-backed or non-volatile memory.

Figure 1: Basic PLC Operation Process


Unlike a personal computer, though, the PLC is designed to survive in a rugged industrial atmosphere and to be very flexible in how it
interfaces with inputs and outputs in the real world. A programmable logic controller is used in many industries such as oil
refineries, manufacturing lines, conveyor systems and so on. Wherever there is a need for control devices, the PLC provides a flexible
way to soft-wire the components together.
III.Fundamental Principles of Modern SCADA Systems
SCADA refers to the combination of telemetry and data acquisition. SCADA encompasses the collecting of the information,
transferring it back to the central site, carrying out any necessary analysis and control and then displaying that information on a
number of operator screens or displays. The required control actions are then conveyed back to the process. The PLC or
Programmable Logic Controller is still one of the most widely used control systems in industry. As needs grew to monitor and control
more devices in the plant, the PLCs were distributed and the systems became more intelligent and smaller in size. PLCs and DCS
(Distributed Control Systems) are used as the local control components of such SCADA systems.
The advantages of the PLC / DCS SCADA system are:
The computer can record and store a very large amount of data.
The data can be displayed in any way the user requires.
Thousands of sensors over a wide area can be connected to the system.
The operator can incorporate real data simulations into the system.
Many types of data can be collected from the RTUs.
The data can be viewed from anywhere, not just on site.
IV. Block Diagram
This section deals with the key components used in setting up the complete plant and thus explains the use and working of each component.
The block diagram of the experimental set-up is illustrated in Figure 2. The following configurations can be obtained.

Figure 2: Block Diagram


The digital computer is used as an interface between the PLC and SCADA. The PLC is a microprocessor-based system controller used to
sense, activate and control industrial equipment, and thus incorporates a number of input/output modules which allow electrical
systems to be interfaced. SCADA is a centralized system used to supervise a complete plant and basically consists of data accessing
features and remote control of processes. The communication protocol used is Ethernet. The Variable Frequency Drive connected
to the PLC receives AC power and converts it to an adjustable-frequency, adjustable-voltage output for controlling the motor operation.
The analog module converts analog input signals to digital output signals which can be manipulated by the processor.
The output of the VFD is given to the 3-phase induction motor which, in turn, with the help of a pulley mechanism, is used to vary the
speed of the conveyor belt. An inductive sensor is an electronic proximity sensor used to detect metallic objects without touching
them. The solenoid valve is a normally closed, direct-acting valve used to pour the liquid into the bottle whenever it gets a signal from
the proximity sensor.
V.PLC AND RELATED SOFTWARES
The PLC used is the MicroLogix 1200, as it has 10 inputs and 6 outputs and has an interface for Ethernet. The MicroLogix 1400 system
offers a higher I/O count, a faster high-speed counter/PTO, and enhanced network capabilities. The programming software used is
RSLogix 500 and the communication software used is RSLinx 500.
Features of the MicroLogix 1200:
Ethernet port provides Web server capability, email capability and protocol support.
Built-in LCD with backlight lets you view controller and I/O status.
Built-in LCD provides a simple interface for messages and bit/integer monitoring and manipulation.
Expands application capabilities through support for as many as seven 1762 MicroLogix expansion I/O modules with 256 discrete I/O.
As many as six embedded 100 kHz high-speed counters (only on controllers with DC inputs).
Two serial ports with DF1, DH-485, Modbus RTU, DNP3 and ASCII protocol support.
Proximity Sensors
Proximity Sensors are available in two types namely;
1) Inductive sensors
2) Capacitive Sensors
Inductive Sensors are cheaper and allow detection of metal objects whereas capacitive sensors are costly and allow detection of metal,
plastic and glass objects as well.
VI.Variable Frequency Drive
When an induction motor starts, it will draw very high inrush current due to the absence of the back EMF at start. This results in
higher power loss in the transmission line and also in the rotor, which will eventually heat up and may fail due to insulation failure.
The high inrush current may cause the voltage to dip in the supply line, which may affect the performance of other utility equipment
connected on the same supply line.
Adding a variable frequency drive (VFD) to a motor-driven system can offer potential energy savings in a system in which the loads
vary with time. VFDs belong to a group of equipment called adjustable speed drives or variable speed drives. (Variable speed drives
can be electrical or mechanical, whereas VFDs are electrical.) The operating speed of a motor connected to a VFD is varied by
changing the frequency of the motor supply voltage. This allows continuous process speed control. Motor-driven systems are often
designed to handle peak loads that have a safety factor. This often leads to energy inefficiency in systems that operate for extended
periods at reduced load. The ability to adjust motor speed enables closer matching of motor output to load and often results in energy
savings.
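For illustration, the link between supply frequency and shaft speed follows the standard induction-motor relation Ns = 120 f / p (this relation and the pole count and slip assumed below are textbook values, not parameters of the plant described here):

def shaft_speed_rpm(freq_hz, poles=4, slip=0.03):
    """Synchronous speed is 120*f/p; the shaft runs slightly slower by the slip."""
    return 120.0 * freq_hz / poles * (1.0 - slip)

for f in (30, 45, 60):   # lowering the VFD output frequency lowers conveyor speed
    print(f, "Hz ->", round(shaft_speed_rpm(f)), "rpm")   # 873, 1310, 1746 rpm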
The components of the drive system are broken into four major categories: source power, rectifier, DC bus, and inverter. Other
components exist, such as resolver and encoder feedback devices, tachometers, sensors and relays, which help supplement the system.
First, the source power must be converted from alternating current to direct current. This conversion is accomplished by means of a
rectifier; a diode bridge is used for the rectification. A power source of 460 volts AC, 60 Hz is thus converted to about 650 volts
DC. This AC-to-DC conversion is necessary before the power can be changed back to AC at a variable frequency.
The diode bridge converter that converts AC to DC is sometimes just referred to as a converter. The converter that converts the DC
back to AC is also a converter, but to distinguish it from the diode converter, it is usually referred to as an inverter. It has become
common in the industry to refer to any DC-to-AC converter as an inverter.

Figure 3: Block diagram of VFD ac to dc converter


VII. METHODOLOGY
This section of the plant handles the distribution of any kind of chemical liquid from different tanks into the two main buffer storage tanks.
This distribution takes place automatically using the Programmable Logic Controller (PLC).

Figure 4: Screenshot of the SCADA software of the manufacturing department for the chemical liquid.
This is a commonly applied application of PLC, in which five different chemical liquids are mixed in the required proportion to form a batch. The rate
of flow is already fixed; we only control the time of flow. The level of the liquids in the tanks is sensed by level sensor
switches. The ratio of the different liquids is decided as per the mixed liquid required in the bottle. A stirrer
motor is also fitted to mix these liquids of definite amount in the main tank.

Figure 5: Screenshot of the SCADA software for the automatic filling of the chemical liquid into the bottles.
Bottles are kept in position in a carton over the conveyor belt; they are sensed to detect their presence. Capacitive sensors are used for
sensing the bottles. Depending on the output of the sensor, the corresponding valve switches on and the filling operation takes place. If a
particular bottle is not present, then the valve in that position is switched off, thereby avoiding wastage of the liquid. The filling
process is done based on timing: depending on the preset value of the timer, the valve is switched on for that particular period of time and the filling is done.
The motor continues to run even when the bottle moves away from the first sensor's range, i.e. the output of the motor is latched as
explained in the ladder logic section of the PLC. When sensor 2 senses the bottle, it also gives a high output to the PLC. The PLC
instructs the inverter to stop the motor. The high output bit of sensor 2 is also given to the timer for the solenoid valve. The timer used
is a TON (on-delay) timer. It counts for a predefined value of time (18 sec). It gives two outputs, the Enable output and the Done output. The Enable output
remains high while the timer is counting and the Done output goes high after the timer has finished counting. The Enable output of the TON is
given to the solenoid valve, and so the solenoid valve is open for the predefined value of time (18 sec). The Done output bit is then used to
turn the motor ON again. This whole process is repeated: the half-filled bottles move in front of the
second chemical tank, the bottles are filled completely, and the Done output bit is used to turn on the induction motor again. All of this is
described in the ladder program of the PLC.
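The sequence above can be summarized in the following rough software sketch (a Python analogue of the latch-and-TON behaviour, not the actual ladder logic; the sleep-based timing only stands in for the PLC timer):

import time

FILL_TIME_S = 18.0   # preset of the TON timer described above

def fill_station(bottle_present):
    """One fill station: sensor 2 stops the conveyor, the valve stays open
    for the timer preset, and the Done bit restarts the motor."""
    motor_on = True              # latched earlier by sensor 1
    valve_open = False
    if bottle_present:           # sensor 2 goes high
        motor_on = False         # PLC tells the VFD to stop the conveyor
        valve_open = True        # TON Enable bit drives the solenoid valve
        time.sleep(FILL_TIME_S)  # timer accumulating to its preset
        valve_open = False       # Enable drops when the preset is reached
        motor_on = True          # Done bit restarts the conveyor motor
    return motor_on, valve_open

print(fill_station(bottle_present=True))   # (True, False) after an 18 s fill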
PLC ladder program.

Figure 6: The PLC ladder program for filling of the chemical liquid into the bottles.
CONCLUSION
This paper presents automated liquid filling of bottles using PLC and SCADA. Total control of the filling process is achieved. The
present system provides a great deal of applications in the field of automation, especially in mass production industries where
a large number of components have to be processed and handled in a short period of time and there is a need for increased production.
The programming developed for this system is flexible and can be modified quickly and easily. This will increase the total production output; this increase
in production can yield significant financial benefits and savings. This concept can be used in the beverage and food industries, milk
industries, medicine industries, mineral water, chemical product industries and
manufacturing industries. The present work is motivated to develop an online scheme to monitor and control a hybrid method of
automatically filling chemical liquid into bottles using PLC and SCADA.


Vibration Analysis of Beam With Varying Crack Location


P. Yamuna 1, K. Sambasivarao 2
1 M.Tech Student (Machine Design), Swami Vivekananda Engineering College, Bobbili
2 Assistant Professor, Mechanical Engineering Dept., Swami Vivekananda Engineering College, Bobbili

Abstract -- The importance of the beam and its engineering applications is obvious, and it undergoes different kinds of loading.
Such loading may cause cracks in the beam. Crack depth and location are the main parameters for the vibration analysis of such
beams. These cracks and their locations affect the shapes and values of the beam frequencies. So it becomes very important to
monitor the changes in the response parameters of the beam to assess structural integrity, performance and safety.
In the current work, the natural frequency of a simply supported beam with a triangular crack is investigated numerically by the finite
element method using the FE analysis software ANSYS. Different crack locations are considered and the results are compared
with those of the simply supported beam without a crack. The results obtained from the vibration analysis of the beam show that the
lowest fundamental frequency of the beam without a crack is higher than the lowest frequency obtained for the beam with cracks. When the
location of the crack varies from either end of the simply supported beam to the centre of the beam, the lowest natural frequency
decreases.
Keywords Vibration Analysis, ANSYS, Beam, Crack, natural frequency
INTRODUCTION

Many engineering components used in the aeronautical, aerospace and naval construction industries are considered by designers as
vibrating structures, operating under a large number of random cyclic stresses. Cracks found in structural elements like beams and
columns have different causes. They may be fatigue cracks that take place under service conditions as a result of the limited fatigue
strength. They may be also due to mechanical defects, as in the turbine blades of jet engines. In these engines the cracks are caused by
sand and small stones sucked from the surface of runway. Another group involves cracks which are inside the material. They are
created as a result of manufacturing processes. The presence of vibrations on structures and machine components leads to cyclic
stresses resulting in material fatigue and failure.
Experimental based testing has been widely used as a means to analyze individual elements and the effects of beam strength under
loading. While this is a method that produces real life response, it is extremely time consuming and the use of materials can be quite
costly. The use of finite element analysis to study these components has also been used. In recent years, however, the use of finite
element analysis has increased due to progressing knowledge and capabilities of computer software and hardware. It has now become
the choice method to analyze such structural components.
A crack on a structural member introduces a local flexibility which is a function of the crack depth. Major characteristics of
structures which undergo change due to the presence of a crack are:
a) the natural frequency, b) the amplitude response due to vibration, and c) the mode shape.
Hence it is important to use natural frequency measurements to detect crack and its effects on the structure.

OBJECTIVE OF THE WORK


The objective of this study is to analyze the vibration behaviour of a simply supported beam with a single triangular crack under free
vibration using the FEM software ANSYS. Material properties of steel are considered for the simply supported beam.
Besides this, information about the variation in location and depth of cracks in cracked steel beams is obtained using this technique.
Dynamic characteristics of damaged and undamaged materials are very different. For this reason, material faults can be detected,
especially in steel beams, which are very important construction elements because of their widespread usage in construction and
machinery. Cracks in vibrating components can initiate catastrophic failures. Therefore, there is a need to understand the dynamics of
cracked structures. When a structure suffers from damage, its dynamic properties can change. Specifically, crack damage can cause a
stiffness reduction, with an inherent reduction in natural frequencies, an increase in modal damping, and a change in the mode shapes.
Since the reduction in natural frequencies can be easily observed, most researchers use this feature. Natural frequencies and mode
shapes of the beam have also been determined.
LITERATURE REVIEW
The effect of a crack on the deformation of a beam had been considered as an elastic hinge by Chondros and Dimarogonas [2]. Variations of the
natural frequencies were calculated by a perturbation method. A finite element model had been proposed, in which two different
shape functions were adopted for two segments of the beam, in order to consider the discontinuity of deformation due to the crack.
Cawley and Adams [1] showed that the stress distribution in a vibrating structure was non-uniform and was different for each mode of
vibration. Therefore, any local crack would affect each mode differently, depending on the location of the crack. Chondros and
Dimarogonas [3] used the energy method and the continuous cracked beam theory to analyze transverse vibration of cracked beams.
Ertugrul Cam et. al. [4], presented information about the location and depth of cracks in cracked beams. For this purpose, the
vibrations as a result of impact shocks were analyzed. The signals obtained in defect-free and cracked beams were compared in the
frequency domain. The results of the study suggest determining the location and depth of cracks by analyzing the vibration
signals. Experimental results and simulations obtained by the software ANSYS are in good agreement. The first two natural
frequencies were used by Narkis [5] to identify the crack and later Morassi [6] used it on simply supported beam and rods. Although it
can be solved by using 2D or 3D finite element method (FEM), analysis of this approximate model results in algebraic equations
which relate the natural frequencies of beam and crack characteristics.
Freund and Herrmann [9] modelled the problem using a torsional spring in the place of the crack, whose stiffness is related to the crack depth. This
model has been applied to Euler-Bernoulli cracked beams with different end conditions [7, 8, 10-24, 1] and recently to Timoshenko beams [15,
16]. Luay S. Al-Ansari [17] presented a comparison of the natural frequency between solid and hollow simply supported cracked
beams for different crack depths and positions. Three methods were utilized in this research: an experimental method and two numerical methods
(the Rayleigh Method and the Finite Element Method using ANSYS).
The identification of the location and depth of a crack in a beam containing a single transverse crack was done by Pankaj Charan et al.
[18] through theoretical and experimental analysis respectively. It was noticed that a crack in a beam has a great effect on the dynamic
behaviour of the beam. The strain energy density function was also applied to examine the additional flexibility produced because of the
presence of the crack.
Muhannad Al-Waily [19] conducted studies on cracked beams with different supports. The analytical results revealed the effect
of a crack in a continuous beam; the equivalent stiffness, EI, of a rectangular beam was calculated as an exponential function of the
crack depth and location, with the solution for the assumed equivalent beam stiffness (EI) obtained by the
Fourier series method. The beam materials studied were low carbon steel, aluminium alloy, and bronze, with different
beam lengths and depths. A comparison was made between the analytical results from the theoretical solution of the general equation of motion of a beam
with crack effect and the numerical ANSYS results, where the biggest error percentage is about 1.8%.

MODELLING OF SIMPLY SUPPORTED BEAM


Design of Beam Without Crack:
A slender, elastic beam of length L, width W, and height H is considered for frequency analysis. For vibration
analysis, the model has been built in ANSYS 12.1 Mechanical APDL. The length of the beam (L) is taken as 0.5 m, width (W) is
taken as 15 mm, and height (H) is taken as 25 mm. Material of the simply supported beam is considered as steel and its properties
taken are Young's elastic modulus as 207 GPa, Poisson's ratio as 0.3 and density as 7800 kg/m3. The beam considered for
modelling in ANSYS is shown in Fig.1. and the volumetric model of the beam modelled in ANSYS is shown in Fig.2.

Fig.1. Simply Supported beam without crack


Fig.2. Model of the Simply Supported Beam modelled in ANSYS



Design of Beam With Crack:


For vibration analysis of a cracked beam, a triangular crack with a minimum depth of 10 mm and width of 5 mm
is considered. The initial position of the crack is taken at a location 100 mm from one end of the beam. Later, for comparative
analysis the crack location is varied between 100 mm to 450 mm with an increment of 50 mm for one criteria, and in a second
criteria the crack depth is varied from 10 mm to 15 mm with an increment of 0.5 mm. The cracked beam considered with
different parameters is shown in Fig.3 and the volumetric model of cracked beam built in ANSYS is shown in Fig.4.

Crack

Fig.3. Simply Supported beam with crack

Fig.4. Model of the Simply Supported Beam with crack modelled in ANSYS

FE Modeling of Beam with and without crack:


The volumetric model built in ANSYS is discretized using SOLID186 20-node elements. The FE model of the
simply supported beam consists of 201 elements and 543 nodes. The discretized beam model is shown in Fig.5. The beam model with
crack is also discretized in a similar fashion to the beam without a crack. SOLID186 20-node
elements are used for generating the mesh. For accurate results, the generated mesh is refined at the internal areas of the crack. The FE
model of the simply supported beam with crack consists of 354 elements and 926 nodes. The discretized beam model is shown in
Fig.6.

crack

Fig.5. Discretized Model of Beam

Fig.6. Discretized Model of Beam with Crack

VIBRATIONAL ANALYSIS OF BEAM MODEL


Boundary conditions:
The beam considered here is a simply supported beam. Hence, the vertical (Y-directional) displacements at the
bottom edges of the beam model at both ends are to be restricted. The bottom edges at both the ends are constrained in the
solution module as below. The edges of the simply supported beam constrained in the UY direction are shown in Fig.7.

Constraints in Y direction
Fig.7. Constrained Model of Simply Supported Beam
Vibration Analysis of Beam without Crack:
The first step in the vibration analysis of the beam is to find its natural frequencies. In ANSYS, modal analysis
is used to find the eigen natural frequencies. Initially the beam is taken without any defect (crack). A minimum of the first five mode
shapes and natural frequencies are obtained and shown in Table 1. The Block Lanczos method, which is generally used in the case of
symmetric structures, is used to find the fundamental frequencies. The lowest frequency of the simply supported beam is found to
be 232.62 Hz with a maximum displacement of 1.165 mm. The mode shape and the obtained lowest frequency for the simply
supported beam without a crack are shown in Fig.8. The maximum displacement in the case of the lowest natural frequency can be
observed at the centre of the beam.
Table 1: Modes and Frequencies of simply supported beam

Mode No.   Frequency (Hz)
1          232.62
2          316.96
3          870.37
4          919.03
5          1697.7
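As a rough cross-check of the first entry in Table 1 (a sketch using the standard Euler-Bernoulli formula for a simply supported beam; this calculation is not part of the original ANSYS study):

import math

L, W, H = 0.5, 0.015, 0.025     # beam dimensions from the paper, m
E, rho = 207e9, 7800.0          # Young's modulus (Pa) and density (kg/m3)

A = W * H                       # cross-sectional area, m2
I = W * H**3 / 12.0             # second moment of area, m4

# First natural frequency of a simply supported Euler-Bernoulli beam:
# f1 = (pi / (2 L^2)) * sqrt(E I / (rho A))
f1 = math.pi / (2 * L**2) * math.sqrt(E * I / (rho * A))
print(round(f1, 1), "Hz")       # about 233.6 Hz, close to the 232.62 Hz from ANSYS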

Fig. 8: Mode 1 of Simply Supported Beam without crack

Vibration Analysis of Beam with Varying Crack Location:

A triangular crack is introduced in the simply supported beam model for further eigen frequency vibration
analysis. Initially the triangular crack is assumed to be located at 50 mm from the right end side of the beam model, which has
been explained earlier. The dimensions of the triangular crack are shown in the table below.

Table 2: Crack dimensions

Crack Parameter   Value    Orientation
Crack Width       5 mm     along length of beam
Crack Depth       10 mm    along height of beam
Crack Length      15 mm    along width of beam

Under simply supported condition the first five natural frequencies and mode shapes of the beam are obtained in
ANSYS modal analysis using Block Lanczos method. The lowest frequency is found to be 230.74 Hz. The location of the crack
which is initially assumed to be 50 mm from the right end of the simply supported beam is varied step by step at 100 mm, 150
mm, 200 mm, 250 mm, 300 mm, 350 mm, 400 mm, 450 mm. Similarly, fundamental frequencies and mode shapes for the step by
step crack locations are obtained. The first five frequencies for various crack locations from right end of the beam to left end of
the beam are shown in tables 3 to 11.
Table 3: Frequencies for crack location 50mm
Mode No.   Frequency (Hz)
1          230.74
2          316.79
3          867.35
4          893.09
5          1682.3

Table 4: Frequencies for crack location 100mm
Mode No.   Frequency (Hz)
1          226.87
2          315.73
3          858.94
4          865.66
5          1665.2

Table 5: Frequencies for crack location 150mm
Mode No.   Frequency (Hz)
1          220.81
2          313.31
3          852.16
4          862.59
5          1680.6

Table 6: Frequencies for crack location 200mm
Mode No.   Frequency (Hz)
1          219.02
2          311.56
3          862.56
4          899.12
5          1686.3

Table 7: Frequencies for crack location 250mm
Mode No.   Frequency (Hz)
1          215.51
2          309.82
3          868.70
4          917.31
5          1688.0

Table 8: Frequencies for crack location 300mm
Mode No.   Frequency (Hz)
1          218.68
2          311.52
3          862.21
4          898.49
5          1665.3

Table 9: Frequencies for crack location 350mm
Mode No.   Frequency (Hz)
1          220.88
2          313.34
3          852.22
4          862.86
5          1680.4

Table 10: Frequencies for crack location 400mm
Mode No.   Frequency (Hz)
1          227.05
2          315.81
3          859.22
4          867.02
5          1665.1

Table 11: Frequencies for crack location 450mm
Mode No.   Frequency (Hz)
1          230.68
2          316.80
3          867.72
4          892.20
5          1684.1

The lowest-frequency mode shapes obtained for crack locations varying from 50 mm to 450 mm along the length
of the beam are shown in Fig.9 to Fig.17. It can be observed from the above results tables that the lowest fundamental frequency
of the cracked beam decreases when the crack location varies from 50 mm to 250 mm, i.e. up to the mid-span of the beam, and then
it increases from the mid-span of the beam to the crack location at 450 mm. The comparison of the lowest fundamental frequency
of the cracked beam with varying crack location is shown in Table 12 and in the plot in Fig.18.

Fig 9. Mode 1 of Simply Supported Beam with crack located at 50 mm from right end (deformed and undeformed)

Fig .10. Mode 1 of Simply Supported Beam with crack located at 100 mm from right end (deformed and undeformed)

Fig.11. Mode 1 of Simply Supported Beam with crack located at 150 mm from right end (deformed and undeformed)

Fig.12. Mode 1 of Simply Supported Beam with crack located at 200 mm from right end (deformed and undeformed)

Fig.13. Mode 1 of Simply Supported Beam with crack located at 250 mm from right end (deformed and undeformed)

Fig.14. Mode 1 of Simply Supported Beam with crack located at 300 mm from right end (deformed and undeformed)


Fig.15. Mode 1 of Simply Supported Beam with crack located at 350 mm from right end (deformed and undeformed)

Fig.16 . Mode 1 of Simply Supported Beam with crack located at 400 mm from right end (deformed and undeformed)

Fig.17 . Mode 1 of Simply Supported Beam with crack located at 450 mm from right end (deformed and undeformed)
Table 12: Comparison of lowest natural frequencies with varying crack location

Crack location from right end of beam (mm)    Frequency of lowest mode (Hz)
50                                            230.74
100                                           226.87
150                                           220.81
200                                           219.02
250                                           215.51
300                                           218.68
350                                           220.88
400                                           227.05
450                                           230.68

[Figure: lowest-mode frequency (Hz, roughly 210-235 Hz) plotted against crack location (0-500 mm), together with a reference line for the frequency of the lowest mode of the beam without crack.]

Fig. 18. Plot comparing lowest natural frequencies of the beam without crack and with varying crack location from the right to the left end of the beam

CONCLUSIONS
A slender, elastic steel beam of length (L) 0.5 m, width (W) 15 mm and height (H) 25 mm is considered for numerical analysis. The material properties of the simply supported beam are a Young's modulus of 207 GPa, a Poisson's ratio of 0.3 and a density of 7800 kg/m3. The natural frequencies of the beam without any crack defect are found first, and the lowest frequency is 232.62 Hz. A triangular crack is then introduced, located initially at 50 mm, and the crack location is varied in 50 mm increments, i.e. the crack locations are 100 mm, 150 mm, 200 mm, 250 mm, 300 mm, 350 mm, 400 mm and 450 mm. The lowest frequency found for each crack location decreases from 50 mm to 250 mm, which is the mid span of the simply supported beam, and increases from there on.
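As a rough plausibility check of the uncracked result (this check is not part of the original ANSYS study, and the bending plane is an assumption), the fundamental frequency of a simply supported Euler-Bernoulli beam is f1 = (pi / (2 L^2)) * sqrt(E I / (rho A)). A minimal Python sketch using the data above gives approximately 234 Hz, consistent with the 232.62 Hz finite element value:

# Rough Euler-Bernoulli estimate of the fundamental frequency of the
# uncracked simply supported beam (bending in the plane of the 25 mm height
# is assumed; not part of the original study).
import math

L = 0.5        # length, m
W = 0.015      # width, m
H = 0.025      # height, m
E = 207e9      # Young's modulus, Pa
rho = 7800.0   # density, kg/m^3

A = W * H              # cross-section area, m^2
I = W * H**3 / 12.0    # second moment of area, m^4
f1 = (math.pi / (2.0 * L**2)) * math.sqrt(E * I / (rho * A))
print("f1 = %.1f Hz" % f1)   # roughly 234 Hz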
Further, it is found that for crack positions symmetric about the mid span of the beam the lowest fundamental frequencies have almost equal values; for example, for the crack at 50 mm the lowest frequency is 230.74 Hz and for the crack at 450 mm it is 230.68 Hz, which are almost equal. This shows that the dynamic response of the beam to cracks at symmetric locations is similar.
REFERENCES
[1] Adams, R.D., Cawley, P., Pye, C.J. and Stone, B.J., "A vibration technique for non-destructively assessing the integrity of structures", Journal of Mechanical Engineering Science, 1978.
[2] Chondros, T.G. and Dimarogonas, A.D., "Identification of cracks in welded joints of complex structures", Journal of Sound and Vibration, 69, 1980, pp. 531-538.
[3] Chondros, T.G., Dimarogonas, A.D. and Yao, J., "A continuous cracked beam vibration theory", Journal of Sound and Vibration, Vol. 215, 1998, pp. 17-34.
[4] Ertugrul Cam, Sadettin Orhan and Murat Luy, "An analysis of cracked beam structure using impact echo method", NDT&E International, Vol. 38, 2005.
[5] Narkis, Y., "Identification of Crack Location in Vibrating Simply Supported Beams", Journal of Sound and Vibration, 172, 1994, pp. 549-558.
[6] Morassi, A., "Identification of a Crack in a Rod Based on Changes in a Pair of Natural Frequencies", Journal of Sound and Vibration, 242, 2001, pp. 577-596.
[7] Lee, H. P. and Ng, T. Y., "Natural Frequencies and Modes for the Flexural Vibration of a Cracked Beam", Journal of Applied Acoustics, 42, 1994, pp. 151-163.
[8] Dilena, M. and Morassi, A., "The Use of Antiresonances for Crack Detection in Beams", Journal of Sound and Vibration, 276, 2004, pp. 195-214.
[9] Freund, L.B. and Herrmann, G., "Dynamic Fracture of a Beam or Plate in Bending", Journal of Applied Mechanics, 76-APM-15, 1976, pp. 112-116.
[10] Bamnios, G. and Trochides, A., "Dynamic Behavior of a Cracked Cantilever Beam", Appl. Acoust., 45, 1995, pp. 97-112.
[11] Boltezar, M., Stancar, B. and Kuhelj, A., "Identification of Transverse Crack Location in Flexural Vibrations of Free-Free Beams", Journal of Sound and Vibration, 211, 1998, pp. 729-734.
[12] Fernández-Sáez, J., Rubio, L. and Navarro, C., "Approximate Calculation of the Fundamental Frequency for Bending Vibrations of Cracked Beams", Journal of Sound and Vibration, 225, 1999, pp. 345-352.
[13] Fernández-Sáez, J. and Navarro, C., "Fundamental Frequency of Cracked Beams: An Analytical Approach", Journal of Sound and Vibration, 256, 2002, pp. 17-31.

[14] Aydin, K., "Vibratory Characteristics of Euler-Bernoulli Beams With an Arbitrary Number of Cracks Subjected to Axial Load", Trans. ASME, Journal of Vibration and Acoustics, 2008, pp. 485-510.
[15] Loya, J. A., Rubio, L. and Fernández-Sáez, J., "Natural Frequencies for Bending Vibrations of Timoshenko Cracked Beams", Journal of Sound and Vibration, 290, 2006, pp. 640-653.
[16] Aydin, K., "Vibratory Characteristics of Axially-Loaded Timoshenko Beams With Arbitrary Number of Cracks", Trans. ASME, Journal of Vibration and Acoustics, 129, 2007, pp. 341-354.
[17] Luay S. Al-Ansari, "Experimental and Numerical Comparison Between the Natural Frequency of Cracked Simply Supported Hollow and Solid Beams", Indian Journal of Applied Research, Volume 3, Issue 9, 2013, pp. 267-272.
[18] Pankaj Charan Jena, Dayal R. Parhi and Goutam Pohit, "Faults detection of a single cracked beam by theoretical and experimental analysis using vibration signatures", IOSR Journal of Mechanical and Civil Engineering, Volume 4, Issue 3 (Nov-Dec. 2012), pp. 01-18.
[19] Muhannad Al-Waily, "Theoretical and Numerical Vibration Study of Continuous Beam with Crack Size and Location Effect", International Journal of Innovative Research in Science, Engineering and Technology, Vol. 2, Issue 9, September 2013, pp. 4166-4177.


DESIGN OF A CLOSED LOOP CONTROL OF THE BOOST CONVERTER


(AVERAGE MODEL)
AHMED MAJEED GHADHBAN
Diyala University, Electrical Power and Machines Engineering Department,
College of Engineering,
Diyala, Iraq.
Asm725@yahoo.com

Abstract: This paper addresses the design of a closed loop control for the boost converter based on specific given criteria. The closed loop boost converter converts a low-level DC input voltage from a DC power supply, and the control is applied to the average model of the converter. The average model is useful when investigating the dynamic behavior of the converter when it is subject to changes in operating parameters. The design methodology comprises hand calculations, based on theory, and simulation. The simulation is carried out in PSpice. The performance analysis, which covers the closed loop control of the average model and the related waveforms of output voltage, current and power, is discussed.

Keywords DC-DC Converter, boost converter, closed loop control, PSpice simulation, DC chopper.
INTRODUCTION
A boost converter is a step-up switch mode power supply that can also be called a switch mode regulator. It steps up the input voltage to produce a higher output voltage. The popularity of switch mode regulators is due to their high efficiency, compact size and low cost. Generally, any basic switch mode power supply consists of the same basic power components: two switches, usually a MOSFET and a diode D, an inductor and an output capacitor. All components are the same as in the buck and buck-boost converters, but they are placed in different circuit locations. The boost converter configuration is shown in Figure (1).

Figure (1). Basic boost converter topology


The output voltage conversion ratio is derived by evaluating the inductor volt-second balance as described in [2]. In steady state, the average inductor voltage must equal zero, which means that the energy flowing into the inductor equals the energy flowing out of the inductor over one complete switching cycle.
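As a minimal illustration of this balance (assuming an ideal, lossless converter operating in CCM), the inductor sees Vin for the fraction D of the period and Vin - Vo for the remaining 1 - D, so Vin*D + (Vin - Vo)*(1 - D) = 0 and hence Vo = Vin / (1 - D):

# Sketch of the ideal CCM volt-second balance result; the 12 V / 50% duty
# example is illustrative only (cf. the design in [6]).
def boost_output_voltage(v_in, duty):
    """Ideal boost converter output voltage, Vo = Vin / (1 - D)."""
    if not 0.0 <= duty < 1.0:
        raise ValueError("duty cycle must lie in [0, 1)")
    return v_in / (1.0 - duty)

print(boost_output_voltage(12.0, 0.5))   # 24.0 V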
The DC-DC converters have two fundamentally different modes of operation: Continuous Conduction Mode (CCM) and Discontinuous Conduction Mode (DCM) [3]. The boost converter and its control are designed based on both modes of operation. In the continuous conduction mode the inductor current flows continuously, that is, the inductor current is always above zero throughout the switching period [4].
A boost converter is designed in [5] to step up a fluctuating or variable input voltage to a constant output voltage of 24 V, with an input range of 6-23 V. To produce a constant output voltage a feedback loop is used: the output voltage is compared with a reference voltage and a PWM wave is generated; a PIC16F877 microcontroller is used to generate the PWM signal that controls the switching action.

A DC to DC converter is used to step up from 12 V to 24 V in [6]. The 12 V input voltage comes from the battery storage equipment and the 24 V output voltage serves as the input of the inverter in a solar electric system. In the design process the switching frequency f is set at 20 kHz and the duty cycle D is 50%. The tools used for circuit simulation and validation are National Instruments Multisim and OrCAD software.
In [7] the authors introduced an approach to design a boost converter for a photovoltaic (PV) system using a microcontroller. The converter is designed to step up the solar panel voltage to a stable 24 V output without storage elements such as a battery. It is controlled by a microcontroller unit using a voltage-feedback technique.
Finally, this paper concerns the design and simulation of a closed loop control for the boost converter (average model) with specific criteria. The system has a nonlinear dynamic behavior, as it works in switch mode. Moreover, it is exposed to significant variations which may take the system away from nominal conditions, due to the load changing from half load to full load and back to half load. The input is obtained from a DC power supply. In this paper the boost converter is analyzed and the component design and simulation of the DC/DC boost converter are presented. The simulation is done in PSpice.

DESIGN AND CALCULATION


The parameters of the components used for the control are calculated as follows. The compensated error amplifier, shown in Figure (2), compares the converter output voltage with a reference voltage to generate an error signal which adjusts the duty ratio of the switch. The amplifier should have a high gain at low frequency and a low gain at high frequency. The controller crossover frequency of the total open-loop transfer function is usually around an order of magnitude less than the converter switching frequency, i.e. about 1/10 of the switching frequency. The switching frequency is 100 kHz, so the crossover frequency is taken as 10 kHz.
The transfer function and frequency response can be obtained by using PSpice as follows:

For the output filter with load R and the capacitor ESR, the gain at 10 kHz is -20.84 dB and the phase angle is -97°.

The error amplifier output voltage Vc is compared to a sawtooth waveform with amplitude Vp; for this design Vp is 3 V. Hence the PWM converter gain is 1/Vp = 1/3 = -9.54 dB, with a phase angle of 0°.

The combination of the filter gain and the PWM converter gain gives the loop gain without compensation. The compensated error amplifier must have a gain of +30.38 dB to make the loop gain 0 dB at the crossover frequency:

    30.38 = 20 log10(R2/R1)                                   (1)

Phase of the error amplifier: from the K-factor table (Table 1), K = 3 is chosen, hence φco = -217°.
Table 1. K-factor

K       φco (°)
2.5     -233
        -224
3       -217
        -208
        -203
        -199

Combined gain (filter + PWM): -20.84 dB + (-9.54 dB) = -30.38 dB; filter phase angle: -97°.

To calculate C1 and C2:

    K = ωco / ωz                                              (2)

    ωp = K ωco = 1 / (R2 C2)                                  (3)

where
ωco is the crossover (resonance) frequency,
ωp is the dominant-pole frequency,
ωz is the capacitor ESR zero frequency.
Figure (2). Compensated error amplifier
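A minimal numerical sketch of this K-factor sizing is given below. It assumes a standard type-2 error amplifier in which the zero is placed at ωz = 1/(R2 C1), the pole at ωp = 1/(R2 C2) as in equation (3), and the mid-band gain is R2/R1 as in equation (1); the value R1 = 10 kOhm is an arbitrary starting choice and is not a value taken from the paper.

# Hedged sketch of the K-factor compensator sizing (assumed type-2 amplifier;
# R1 is an arbitrary starting value, not a value given in the paper).
import math

f_sw = 100e3                       # switching frequency, Hz
f_co = f_sw / 10.0                 # crossover frequency ~ 10 kHz
w_co = 2.0 * math.pi * f_co
K = 3.0                            # chosen from the K-factor table
gain_db = 30.38                    # required error-amplifier gain at crossover

R1 = 10e3                          # assumed input resistor, ohms
R2 = R1 * 10.0 ** (gain_db / 20.0)   # from 30.38 = 20*log10(R2/R1), eq. (1)
w_z = w_co / K                     # zero frequency, eq. (2)
w_p = K * w_co                     # pole frequency, eq. (2) and (3)
C1 = 1.0 / (R2 * w_z)              # sets the zero (assumed placement)
C2 = 1.0 / (R2 * w_p)              # sets the pole, eq. (3)

print("R2 = %.1f kOhm, C1 = %.2f nF, C2 = %.1f pF"
      % (R2 / 1e3, C1 * 1e9, C2 * 1e12))

With these assumptions the sketch yields roughly R2 = 330 kOhm, C1 = 0.14 nF and C2 = 16 pF; the actual component values depend on the chosen R1 and on the exact compensator topology used in the paper.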

SIMULATION RESULTS AND DISCUSSION


Figure (3) shows the full circuit design of the controller of the boost converter (average model). The average model is useful
when studying the dynamic behavior of the converter when it is subject to changes in operating parameters. Such an analysis is
essential when the output is regulated and controlled through a feedback loop, which is intended to keep the output at a set level by
adjusting the duty ratio of the switch to accommodate variations in the load.

Figure (3). Circuit of the controller of the boost converter


Figure (4) shows the output voltage and the change in the output voltage due to the load changing from half load to full load and then to half load again, using the switches. When S1 is closed there are two resistances in parallel, so the load is at half load. When S1 opens there is one resistance, which is the full load. When S2 is closed there are again two resistances in parallel, so the load is at half load again.

Figure (4). Output voltage waveform


Figure (5) shows the output current waveform; the current value changes according to the load.

Figure (5). Output current waveform

Figure (6) shows the output power waveform; the power value changes according to the load.

Figure(6). Output power waveform.

CONCLUSION
In this paper, a closed loop control for the boost converter (average model) has been designed. The complete system has been simulated in PSpice, and the resulting efficiency at full power matches the given specification. Using a compensated error amplifier makes the

compensation of the boost converter much easier, since the power stage response is dominated by its low-frequency behavior, so a simple compensator is adequate, which greatly simplifies the design process. The controller can stabilize the output voltage when the load changes.

REFERENCES:
[1] T. Nageswara Rao and V. C. Veera Reddy (2012), "A novel efficient soft switched two ports DC-DC boost converter with open loop and closed loop control", Indian Journal of Computer Science and Engineering, Volume 3, Issue 3, pp. 394-400.
[2] Zengshi Chen, Wenzhong Gao, Jiangang Hu and Xiao Ye (2011), "Closed-Loop Analysis and Cascade Control of a Nonminimum Phase Boost Converter", IEEE Transactions on Power Electronics, Volume 26, Issue 4, pp. 1237-1252.
[3] Wei-Chung Wu, R. M. Bass and J. R. Yeargan, "Eliminating the effects of the Right-half Plane Zero in Fixed Frequency Boost Converters", IEEE Annual Power Electronics Specialists Conference, 06/1998, pp. 362-366, Vol. 1.
[4] R. Ridley, "Current Mode Control Modeling", Switching Power Magazine, 2006.
[5] S. Masri and P. W. Chan, "Design and development of a DC-DC Boost converter with constant output voltage", IEEE International Conference on Intelligent and Advanced Systems (ICIAS), June 2010.
[6] Asmarashid Ponniran and Abdul Fatah Mat Said, "DC-DC Boost Converter Design for Solar Electric System", International Conference on Instrumentation, Control and Automation (ICA 2009), October 20-22, Bandung.
[7] Syafrudin Masri and Pui-Weng Chan, "Development of a Microcontroller-Based Boost Converter for Photovoltaic System", European Journal of Scientific Research, ISSN 1450-216X, Vol. 41, No. 1, pp. 38-47. http://www.eurojournals.com/ejsr.htm


Review On Finding Relevant Content and Influential Users based on


Information Diffusion
Ms. Ashwini Sopan Shidore, Ms. Sindhu M. R.
ME (II), Computer,
G. H. Raisoni College of Engg.,
Ahmednagar, Maharashtra, India
ashidore419@gmail.com

Abstract: Understanding the information diffusion processes that take place on the Web, especially in social media, is a fundamental step towards the design of effective information diffusion mechanisms, recommendation systems, and viral marketing. Two key concepts in information diffusion are influence and relevance. Influence is the ability to popularize content in an online community. Influentials introduce relevant content, in the sense that such content satisfies the information needs of a significant portion of this community. In this paper we describe the problem of identifying influential users and relevant content in information diffusion data and also study how individual behavior data may provide knowledge regarding influence relationships in a social network.

Keywords: Influence, Relevance, Information diffusion, Social Networks, Twitter, PageRank, ProfileRank.
INTRODUCTION

Powered by the remarkable success of Twitter, Facebook, Youtube, and the blogosphere, social media is taking over
traditional media as the major platform for content distribution. The combination of user-generated content and online social networks
is the engine behind this revolution in the way people share news, videos, memes, opinions, and ideas in general. As a consequence,
understanding how users consume and propagate content in information diffusion processes is a very important step towards the
design of effective information diffusion mechanisms, viral marketing and recommendation systems on the Web. Two key concepts in
information diffusion are influence and relevance. In social networks, influence can be defined as the capacity to affect the behavior of others [1]. However, in information diffusion scenarios, influence is usually a measure of the ability to popularize information. Relevance is a relationship between a user and a piece of information, in the sense that relevant information satisfies a user's information needs/interests, being a fundamental concept also in information retrieval and recommender systems [4, 11]. This work focuses on the link between user influence and information relevance in information diffusion data, which describe how users create and propagate information across time. As we are interested in the diffusion of content (e.g., news, videos) on the Web, we use the terms content and information interchangeably.
An important challenge in the study of influence and information diffusion in social networks is the lack of data at a large enough scale. Most of the social network datasets available for research contain just static topological information (i.e., persons and relationships) and do not contain key information for analyzing influence and information diffusion. Further, social influence analysis requires temporal information that indicates, for instance, the eventual association of persons to information items. As a consequence of such scarcity, a significant part of the existing models and analyses of social influence are based on synthetic data, which are frequently based on epidemiological models [17]. Blogs, news media websites, viral marketing campaigns, photo and video sharing services, and online social networks in general have provided rich datasets that supported several interesting findings regarding social influence and information diffusion in real scenarios.

LITERATURE SURVEY
Meeyoung Cha, Hamed Haddadi, Fabrício Benevenuto and Krishna P. Gummadi [24], using a large amount of data collected from Twitter, present an in-depth comparison of three measures of influence: indegree, retweets, and mentions. Based on these measures, they investigate the dynamics of user influence across topics and time and make several interesting observations. First, popular users who have high indegree are not necessarily influential in terms of spawning retweets or mentions. Second, most influential users can hold significant influence over a variety of topics. Third, influence is not gained spontaneously or accidentally, but through concerted effort such as limiting tweets to a single topic. These findings provide new insights for viral marketing and suggest that topological measures such as indegree alone reveal very little about the influence of a user.

Jie Tang, Jimeng Sun, Chi Wang and Zi Yang [2] show that in large social networks, nodes (entities, users) are influenced by others for different reasons. For example, colleagues have strong influence on one's work, while friends have strong influence on one's daily life. How can the social influences from various angles (topics) be differentiated? How can the strength of those social influences be quantified? How can the model be estimated on real large networks? To address these questions, they propose Topical Affinity Propagation (TAP) to model topic-level social influence on large networks. TAP can take the results of any topic modeling and the existing network structure to perform topic-level influence propagation. With the help of the influence analysis, they present several important applications on real data sets, such as 1) what are the representative nodes on a given topic? and 2) how to identify the social influence of neighboring nodes on a particular node? To scale to real large networks, TAP is designed with efficient distributed learning algorithms that are implemented and tested under the Map-Reduce framework. They also present the common characteristics of distributed learning algorithms for Map-Reduce, and describe the effectiveness and efficiency of TAP on real large data sets.
Haewoon Kwak, Changhyun Lee, Hosung Park and Sue Moon [21] start with a network analysis and study the distributions of followers and followings, the relation between followers and tweets, and degrees of separation. Next they rank users by the number of followers, the number of retweets and PageRank, and present a quantitative comparison among them. The ranking by retweets pushes those with fewer than a million followers on top of those with more than a million followers. Through their topic analysis they show what different categories trending topics are classified into, how long they last, and how many users participate. Finally, they study information diffusion by retweet, construct retweet trees and examine their spatial and temporal characteristics. This work is the first quantitative study of the entire Twittersphere and information diffusion on it.
Arlei Silva, Hérico Valiati, Sara Guimarães and Wagner Meira Jr. [28] described how individual behavior data may provide knowledge regarding influence relationships in a social network. They also define what they call the influence network discovery problem, which consists of identifying influence relationships based on user behavior across time. Several strategies for influence network discovery are proposed and discussed, together with a case study on the application of such strategies using a follower-followee network and user activity data from Twitter, which is a popular microblogging and social networking service. They consider that a follower-followee interaction defines a potential influence relationship between users and that the act of posting a tweet, a URL or a hashtag shows an individual behavior on Twitter. The results show that, while tweets may be used effectively in the discovery of influence relationships, hashtags and URLs do not lead to good performance in such a task. Moreover, strategies that consider the time when an individual behavior is observed outperform those that do not, and by combining such information with the popularity of the behaviors even better results may be achieved.
Arlei Silva, Sara Guimarães, Wagner Meira and Mohammed Zaki [20] study the problem of identifying influential users and relevant content in information diffusion data. They propose ProfileRank, a new information diffusion model based on random walks over a user-content graph. ProfileRank is a PageRank-inspired model built on the principle that relevant content is created and propagated by influential users and that influential users create relevant content. One good property of ProfileRank is that it can be adapted to provide personalized recommendations. Experimental results show that ProfileRank makes accurate recommendations, outperforming baseline techniques. The authors also illustrate relevant content and influential users discovered using ProfileRank. Their analysis shows that ProfileRank scores are more correlated with content diffusion than with the network structure, and they demonstrate that the new model is more efficient than PageRank in performing these calculations.
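As a rough illustration of the mutual-reinforcement idea behind such random-walk models (this is a simplified sketch, not the exact ProfileRank formulation from [20]; the damping factor and normalization are assumptions), user influence and content relevance can be computed iteratively over a user-content graph, PageRank-style:

# Simplified sketch only: alternating influence/relevance updates over a
# toy user-content graph.
import numpy as np

# A[u, c] = 1 if user u created or propagated content item c (toy data).
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 0, 1]], dtype=float)

d = 0.85                                   # damping factor, as in PageRank
n_users, n_items = A.shape
users = np.full(n_users, 1.0 / n_users)    # influence scores
items = np.full(n_items, 1.0 / n_items)    # relevance scores

for _ in range(50):
    # relevant content is created and propagated by influential users
    items = (1 - d) / n_items + d * (users @ A) / A.sum(axis=0)
    # influential users create and propagate relevant content
    users = (1 - d) / n_users + d * (A @ items) / A.sum(axis=1)
    users /= users.sum()
    items /= items.sum()

print("user influence:", users.round(3))
print("content relevance:", items.round(3))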
ACKNOWLEDGMENT
We would like to thank the authors of the different research papers referred to while writing this paper. Their work was very informative and helpful for the further research to be done in the future.

CONCLUSION
In this paper, we have studied how individual behavior data may be applied in the identification of influence relationships in social networks. This paper surveys different research papers that proposed various methods which are a basis for future research in the field of information diffusion data. Efficiently identifying influential users in large datasets is a challenging task in this field. ProfileRank is a random walk based information diffusion model that computes user influence and content relevance using information diffusion data. ProfileRank scores are more correlated with content diffusion than with the network structure, and the model is more efficient than PageRank.

REFERENCES:
[1] N. Friedkin, A Structural Theory of Social Influence, volume 13, Cambridge University Press, 2006.
[2] J. Tang, J. Sun, C. Wang, and Z. Yang, "Social influence analysis in large-scale networks", In KDD, 2009.
[3] R. Baeza-Yates, B. Ribeiro-Neto, et al., Modern Information Retrieval, volume 82, Addison-Wesley, New York, 1999.
[4] F. Alkemade and C. Castaldi, "Strategies for the diffusion of innovations on social networks", Comput. Economics, vol. 25, no. 1-2, pp. 3-23, 2005.
[5] M. Gomez Rodriguez, J. Leskovec, and A. Krause, "Inferring networks of diffusion and influence", In SIGKDD, 2010.
[6] H. Tong, S. Papadimitriou, P. S. Yu, and C. Faloutsos, "Proximity tracking on time-evolving bipartite graphs", In SDM, 2008.
[7] A. Goyal, F. Bonchi, and L. V. Lakshmanan, "Learning influence probabilities in social networks", In WSDM, 2010.
[8] D. Gruhl, R. Guha, D. Liben-Nowell, and A. Tomkins, "Information diffusion through blogspace", In WWW, 2004.
[9] J. Hannon, M. Bennett, and B. Smyth, "Recommending twitter users to follow using content and collaborative filtering approaches", In RecSys, 2010.
[10] F. Ricci, L. Rokach, and B. Shapira, "Introduction to recommender systems handbook", Recommender Systems Handbook, 2011.
[11] H. Tong, C. Faloutsos, and J.-Y. Pan, "Fast random walk with restart and its applications", In ICDM, 2006.
[12] J. Weng, E.-P. Lim, J. Jiang, and Q. He, "Twitterrank: finding topic-sensitive influential twitterers", In WSDM, 2010.
[13] A. Anagnostopoulos, R. Kumar, and M. Mahdian, "Influence and correlation in social networks", In KDD, 2008.
[14] D. Kempe, J. Kleinberg, and E. Tardos, "Maximizing the spread of influence through a social network", In KDD, 2003.
[15] J. Yang and J. Leskovec, "Patterns of temporal variation in online media", In WSDM, 2011.
[16] M. R. Subramani and B. Rajagopalan, "Knowledge-sharing and influence in online social networks via viral marketing", Commun. ACM, 2003.
[17] B. Taskar, M. Fai Wong, P. Abbeel, and D. Koller, "Link prediction in relational data", In NIPS, 2004.
[18] D. Watts and P. Dodds, "Influentials, networks, and public opinion formation", Journal of Consumer Research, 2007.
[19] D. M. Romero, W. Galuba, S. Asur, and B. A. Huberman, "Influence and passivity in social media", In PKDD, 2011.
[20] Arlei Silva, Sara Guimarães, Wagner Meira, Jr., and Mohammed Zaki, "ProfileRank: finding relevant content and influential users based on information diffusion", 2013.
[21] H. Kwak, C. Lee, H. Park, and S. Moon, "What is twitter, a social network or a news media?", In WWW '10: Proceedings of the 19th International Conference on World Wide Web, pp. 591-600, 2010.
[22] M. J. Pazzani and D. Billsus, "Content-based recommendation systems", In The Adaptive Web, pp. 325-341, Springer Verlag, 2007.
[23] N. E. Friedkin, A Structural Theory of Social Influence, Cambridge University Press, 1998.
[24] M. Cha, H. Haddadi, F. Benevenuto, and K. P. Gummadi, "Measuring User Influence in Twitter: The Million Follower Fallacy", In ICWSM, 2010.
[25] E. Bakshy, B. Karrer, and L. A. Adamic, "Social influence and the diffusion of user-created content", in Proceedings of the 10th ACM Conference on Electronic Commerce, ACM, 2009, pp. 325-334.
[26] M. Gomez Rodriguez, J. Leskovec, and A. Krause, "Inferring networks of diffusion and influence", in Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 2010, pp. 1019-1028.
[27] F. Bonchi, "Influence propagation in social networks: A data mining perspective", IEEE Intelligent Informatics Bulletin, vol. 12, no. 1, pp. 8-16, 2011.
[28] A. Silva, H. Valiati, S. Guimarães, and W. Meira Jr., "From individual behavior to influence networks: A case study on twitter", In Webmedia, 2011.
[29] E. Bakshy, I. Rosenn, C. Marlow, and L. Adamic, "The role of social networks in information diffusion", in Proc. of the 21st International Conference on World Wide Web, ACM, 2012, pp. 519-528.
[30] S. Aral, L. Muchnik, and A. Sundararajan, "Distinguishing influence-based contagion from homophily-driven diffusion in dynamic networks", PNAS, 2009.
[31] F. Alkemade and C. Castaldi, "Strategies for the diffusion of innovations on social networks", Comput. Economics, vol. 25, no. 1-2, pp. 3-23, 2005.
[32] M. De Choudhury, S. Counts, and M. Czerwinski, "Identifying relevant social media content: leveraging information diversity and user cognition", In HT, 2011.
[33] J. Wortman, "Viral marketing and the diffusion of trends on social networks", University of Pennsylvania, Tech. Rep. MS-CIS-08-19, May 2008.


Survey of Knowledge Discovery for Object Representation with Spatio-Semantic Feature Integration
Madhuri B. Dhas
PG Student, Department of Computer Engineering
VPCOE, Baramati, Savitribai Phule University, Baramati, India
madhuri.dhas13@gmail.com

Prof. S. A. Shinde
Assistant Professor, Department of Computer Engineering
VPCOE, Baramati, Savitribai Phule University, Baramati, India
shinde.meetsan@gmail.com

ABSTRACT: Social media networks are becoming popular these days, where users interact with each other to form social networks. Photo sharing websites such as Flickr, Picasa and YouTube support users in creating, annotating, sharing, detecting and commenting on media data. The terms data mining and knowledge discovery are related to these web multimedia objects. Multimedia object classification is a necessary step of multimedia information retrieval. There is an urgent need to efficiently index and organize these web objects, to facilitate convenient browsing and search of the objects, and to effectively reveal interesting patterns from the objects. For all these tasks, classifying the web objects into manipulable semantic categories is an essential preprocessing procedure. One important issue for the classification of objects is the representation of images. A large number of visual features have been proposed for the representation of images. To perform supervised classification tasks, knowledge is extracted from unlabeled objects through unsupervised learning. Instead of using the traditional bag-of-words (BoW) model, a higher level image representation model called bag-of-visual-phrases (BoP) is used to incorporate spatial and semantic information.
KEYWORDS
Knowledge Discovery, Correlation Knowledge, Spatio-Semantic Feature Integration.
I.INTRODUCTION
Recently the task of image retrieval has received more attention from the web community and web multimedia retrieval since there are
so many useful images on web pages. Image retrieval is a branch of information retrieval whose task is to retrieve some pieces of
information to fulfill a users information needs according to semantic relevance measurements.In multimedia retrieval process
Knowledge discovery concerns the entire knowledge extraction phase, including storage of data and accessing that data, what are the
efficient and scalable algorithms used to analyze massive datasets, interpretation and visualization of the results, interaction between
human and machine. It also deals with the support for learning and analyzing the application domain.
Multimedia mining and retrieval deals with the extraction of prior knowledge, multimedia data relationships, or other patterns that are
not explicitly stored in multimediafiles. In retrieval process, indexing and classification of multimedia data with efficient information
fusion of the different modalities is essential for improvement of system's overall performance. An automated method for generating
images annotations, taking into accounttheir visual features. The semantic rules map the combinations of visual characteristics (colour,
texture, shape, position, etc.) with semantic concepts; capture the meaning and understanding of a domain.
This topic contributes:
An unsupervised learning method for discovering cross-domain correlation knowledge, in which a novel cross-domain method makes it possible to discover the correlation knowledge between different domains via unsupervised learning on unlabeled data from multiple feature spaces, and then apply it to perform supervised classification tasks.


A novel two-level image representation model, in which, unlike the basic bag-of-visual-words (BoW) model, higher level visual components (words and phrases) incorporate the spatial and semantic information into the image representation model (i.e., bag-of-visual-phrases (BoP)). By combining visual words with phrases, the distinguishing power of image features is enhanced.

Two different strategies (Enlarging Strategy and Enriching Strategy) to utilize the correlation knowledge to enrich the feature space for classification. By transferring such knowledge, both strategies can handle the situation where one information source is missing, especially the most common situation where the textual descriptions of a small portion of web images are missing.

By effectively transferring the cross-domain correlation knowledge to new learning tasks, this method can not only be applied in some specific domains (e.g., disaster emergency management), but also be used in the general domain (e.g., social-media image organization, etc.).

II. RELATED WORK


The Fayyad et al. [3] KDP (Knowledge Discovery Process) model consists of nine steps, which are given below:
1. Developing and understanding the application domain. This includes learning the relevant prior knowledge and the goals of the discovery.
2. Creating a target data set. The data miner selects a subset of variables (attributes) and data points (examples) that will be used to perform the discovery tasks. This usually includes querying the existing data to select the desired subset.
3. Data cleaning and preprocessing. This includes removing outliers, dealing with noise and missing values in the data, and accounting for time sequence information and known changes.
4. Data reduction and projection. This consists of finding useful attributes by applying dimension reduction and different transformation methods.
5. Choosing the data mining task. The data miner matches the goals defined in step 1 to a task such as regression, clustering, or classification.
6. Choosing the data mining algorithm. The data miner selects methods to search for patterns in the data and decides which models and parameters of the methods may be appropriate.
7. Data mining. This step generates patterns in a particular representational form, such as regression models, classification rules, decision trees, or trends.
8. Interpreting mined patterns. The analyst performs visualization of the extracted patterns and models, and visualization of the data based on the extracted models.
9. Consolidating discovered knowledge. This consists of incorporating the discovered knowledge into the performance system, and documenting and reporting it to the interested parties. It also includes checking and resolving potential conflicts with previously believed knowledge.

Pravin M. Kamde and Dr. Siddu P. Algur [4] introduced different models for multimedia classification and clustering based on supervised and unsupervised learning.
Classification models
Through machine learning (ML), meaningful information extraction can be realized. Decision trees can be translated into a set of rules by creating a separate rule for each path from the root to a leaf in the tree. The rules can also be directly induced from training data using a variety of rule-based algorithms. Artificial Neural Networks (ANNs) are another method of inductive learning, based on computational models of biological neurons and networks. Support Vector Machines (SVMs) [5] are a newer technique that considers the notion of a margin.


Clustering Models
In unsupervised classification, the issue is to group a given collection of unlabeled multimedia files into meaningful clusters according to the multimedia content, without a priori knowledge. Clustering algorithms can be divided into different methods such as partitioning methods, hierarchical methods, density-based, grid-based, and model-based methods. A survey of clustering techniques can be found in [4]. Density-based clustering algorithms try to find clusters based on the density of data points in a region. The main idea of density-based clustering is that, for each instance of a cluster, the close neighbourhood of a given radius has to contain at least a minimum number of instances. Grid-based clustering algorithms first quantize the clustering space into a finite number of cells (hyper-rectangles) and then perform the required operations on the quantized space.

Association Rules
Rule based classification is based on the co-occurrence of patterns or entities in the domain-specific data set. Support, confidence and interest are three main measures of an association. The support factor indicates the relative occurrence of both X and Y within the overall data set of transactions; it is the ratio of the number of instances satisfying both X and Y over the total number of instances. The confidence factor is defined as the probability of Y given X; it is the ratio of the number of instances satisfying both X and Y over the number of instances satisfying X. The support factor simply indicates the frequency of the occurring patterns in the rule, and the confidence factor describes the strength of implication of the rule. A measure of human interest in the rule is called the interest factor. Little research has been conducted on mining multimedia data [4][6] with different types of associations, such as associations between image content and non-image content features. Association mining in multimedia data can be transformed into problems of association mining in traditional transactional databases. That is why mining the frequently occurring patterns among different images becomes mining the frequent patterns in a set of transactions.
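As a small worked illustration of these two measures (the tags and transactions below are invented for the example, not taken from the paper), support and confidence for a rule X => Y can be computed as follows:

# Toy illustration of support and confidence over a transactional view of
# multimedia annotations; the tags and transactions are made-up examples.
transactions = [
    {"sky", "sea", "beach"},
    {"sky", "mountain"},
    {"sky", "sea"},
    {"mountain", "forest"},
]

def support(itemset, transactions):
    # fraction of transactions that contain every item in `itemset`
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(x, y, transactions):
    # estimated P(Y | X) = support(X and Y) / support(X)
    return support(x | y, transactions) / support(x, transactions)

print(support({"sky", "sea"}, transactions))       # 0.5
print(confidence({"sky"}, {"sea"}, transactions))  # 0.666...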

Hsin-Chang Yang and Chung-Hong Lee [8] proposed a self-organizing map that is useful for image retrieval. Traditional content-based image retrieval (CBIR) [11] systems often fail to meet a user's need due to the semantic gap between the features extracted by the systems and the user's query. It is a difficult task to extract the semantics of an image. Most existing techniques apply some predefined semantic categories and assign the images to appropriate categories through learning processes. These techniques always need human intervention and rely on content-based features. The authors propose a novel approach to bridge the semantic gap, which is the major deficiency of CBIR systems [11], by applying a text mining process that adopts the self-organizing map (SOM) learning algorithm as a kernel process on the environmental texts of an image to extract the semantic information from this image. Some implicit semantic information of the images can be discovered after the text mining process. A semantic relevance measure is given to achieve the semantic-based image retrieval task.

Yimin Wu and Aidong Zhang [6] introduced an adaptive classification method for multimedia retrieval using relevance feedback. Relevance feedback methods can effectively improve the performance of CBIR [11]. A relevance feedback approach should be able to efficiently capture the user's query concept from a very limited number of training samples. To address this issue, they propose a novel adaptive classification method using random forests, a machine learning algorithm with proven good performance on many traditional classification problems. With random forests, this feedback method reduces relevance feedback to a two-class classification problem and classifies database objects as relevant or irrelevant. From the relevant object set, it returns the top k nearest neighbors of the query to the user.


where P(1|o) is the number of classifiers that classify o as relevant over the total number of classifiers. The larger P(1|o) is, the more confident it is to output o as relevant. This method therefore also returns some classified-irrelevant objects with the largest P(1|o) values, in case fewer than k objects were classified as relevant.
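A hedged sketch of this ensemble-vote scoring is shown below; the feature vectors, labels and the 8-dimensional feature size are toy assumptions rather than data from the cited work, and scikit-learn's RandomForestClassifier stands in for the random forest described there.

# Sketch of P(1|o) as the fraction of trees voting "relevant" (toy data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_feedback = rng.normal(size=(20, 8))        # features of feedback examples
y_feedback = rng.integers(0, 2, size=20)     # 1 = relevant, 0 = irrelevant

forest = RandomForestClassifier(n_estimators=50, random_state=0)
forest.fit(X_feedback, y_feedback)

X_db = rng.normal(size=(100, 8))             # candidate database objects
# P(1|o): average vote of the individual trees for the "relevant" class
votes = np.mean([tree.predict(X_db) for tree in forest.estimators_], axis=0)

k = 10
top_k = np.argsort(-votes)[:k]               # objects with the largest P(1|o)
print(top_k, votes[top_k].round(2))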
Features:

This system is able to address the multimodal distribution of relevant points, because it trains a nonparametric and
nonlinear classifier, i.e., random forests, for relevance feedback.

It does not overfit training data because it uses an ensemble of tree classifiers to classify multimedia objects.

Christian Ries [7] proposed a classification framework for digital images which is capable of identifying images that belong to a particular class, in order to design filters which find images in a given database that feature certain content (e.g., brand logos). This framework learns different class models in an unsupervised manner. The user is only required to provide images which contain some common object or concept as positive training examples, without further annotation or knowledge. The framework then finds common properties of the positive training images based on color and visual words. It consists of two main stages: a color-based pre-filter (or region of interest detector) and a classifier trained on histograms of visual words ("bags-of-words"). As the learning process of the color model is unsupervised, different problems have to be considered, such as identifying the colors of the object without manual annotation and dealing with color deviations due to different lighting conditions; it is not easy to classify images or localize objects based on color models alone. In the second stage, bag-of-words models are used to classify images: spatial histograms of visual words are computed for positive and negative training images and a binary classifier is then trained on these histograms. To find positive images in large-scale databases at a very low false positive rate, the AdaBoost classifier is used. There exist different local feature descriptors which can be used for the bag-of-words model, and the clustering process which yields the visual vocabulary and the AdaBoost classifier depend on many parameters.

Dong ping Tian [13] studied different image feature representation techniques, including the challenging questions of how to partition an image and how to organize the image features. In general, there are three main methods to transform an image into a set of regions: the regular grid approach, unsupervised image segmentation, and interest point detectors. Images can be segmented by a regular grid or by JSEG, and salient regions can be detected by the Difference of Gaussian (DoG) detector. The bag-of-visual-words representation has been widely used in image annotation and retrieval. This visual-word image representation is analogous to the bag-of-words representation of text documents in terms of form and semantics. The procedure for generating bag-of-visual-words representations is as follows: first, region features are extracted by partitioning an image into blocks or segmenting it into regions; second, these features are clustered and discretized into visual words, each representing a specific local pattern shared by the patches in that cluster; third, the patches are mapped to visual words, so that each image can be represented as a bag of visual words. Compared to previous work, J. Yang, Y. Jiang et al. [15] have thoroughly studied the bag-of-visual-words representation with respect to the choice of dimension, selection, and weighting of visual words; for more detailed information, please refer to the corresponding literature. Figure 1 shows the basic procedure of generating the visual-word image representation based on vector-quantized region features, and a small sketch of this pipeline is given after the figure.


Figure 1: Procedure of generating visual-word image representation based on vector-quantized region features
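A minimal sketch of this three-step pipeline is given below, assuming local descriptors (e.g., 128-dimensional SIFT-like vectors) have already been extracted from image patches; scikit-learn's KMeans stands in here for the vector quantization step and the descriptors are random toy data.

# Sketch of the bag-of-visual-words pipeline with made-up descriptors.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# One array of 128-D local descriptors per image (toy data).
image_descriptors = [rng.normal(size=(60, 128)) for _ in range(5)]

# Step 2: cluster all descriptors into a visual vocabulary.
vocab_size = 32
kmeans = KMeans(n_clusters=vocab_size, n_init=4, random_state=0)
kmeans.fit(np.vstack(image_descriptors))

# Step 3: map each image's patches to visual words and build a histogram.
def bag_of_visual_words(descriptors):
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=vocab_size).astype(float)
    return hist / hist.sum()          # normalised visual-word frequencies

bows = np.array([bag_of_visual_words(d) for d in image_descriptors])
print(bows.shape)                     # (5, 32): one BoW vector per image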

III. SOURCES OF DATA


Nowadays a number of data sources are available for classification purposes. The different data sources include social media, news articles, review sites, blogs, datasets, etc.

1.

Social Media

Social media has become a huge platform for sharing images among people. It is a large network where millions of people can simultaneously share their photos and views about a particular photo. There are different types of social media sites, such as www.facebook.com, www.twitter.com, www.hi5.com and www.linkedin.com, which contain images with their descriptions.

Figure 2: Facebook image with its textual description


2.

News Articles

Websites such as www.abpmajha.com, www.aajtak.com, www.lokmat.com and www.bhaskar.com publish news articles that contain images of different news items together with their detailed information.

Figure 3: Times of India news

3.

Review Sites

Before purchasing any product it is very important to get details of that product. There are various e-commerce sites, such as www.flipkart.com, www.cnet.com and www.snapdeal.com, where customers can learn about a product and its details while purchasing it.

Figure 4: Example of the product view and its details


4.

Blogs

A web log, or blog, is a personal webpage on which individuals write about their likes, dislikes and opinions and post hyperlinks to various sites, often daily. Twitter is one of the popular microblogging services in which users create status messages with a limited word count,

which are called tweets. Twitter gets flooded with such messages, for example, while elections are going on. Tweets can also be used as a data source for multimedia object classification.

CONCLUSION
This survey explored related research efforts that generally focus on information retrieval tasks. Our intention is to recognize the trends in the surveyed area and categorize them in a new way that integrates and adds understanding to the work in the field with respect to the Flickr social media network. Spatio-semantic feature integration can be used for multimedia object classification.

IV. ACKNOWLEDGMENT
I express many thanks to Prof. S. A. Shinde for his great effort in supervising and guiding me to accomplish this work, and to the college and department staff, who were a great source of support and encouragement. To my friends and family, thank you for your warm and kind encouragement and love, and to every person who gave me something to light my pathway, thank you for believing in me.

REFERENCES:

[1] Wenting Lu, Jingxuan Li, Tao Li, Weidong Guo, Honggang Zhang, and Jun Guo, "Web Multimedia Object Classification Using Cross-Domain Correlation Knowledge", IEEE Transactions on Multimedia, vol. 15, no. 8, December 2013.
[2] Anca Loredana Ion, "Methods for Knowledge Discovery in Images", Information Technology and Control, ISSN 1392-124X, Vol. 38, No. 1, 2009.
[3] Fayyad et al., "Knowledge discovery and data mining: towards a unifying framework", Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining, pp. 82-88, 2000.
[4] Pravin M. Kamde and Dr. Siddu P. Algur, "A survey on Web Multimedia Mining", The International Journal of Multimedia & Its Applications (IJMA), Vol. 3, No. 3, August 2011.
[5] Cristianini, N. and Shawe-Taylor, J., An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, Cambridge, 2000.
[6] Yimin Wu and Aidong Zhang, "An Adaptive Classification Method for Multimedia Retrieval", The Adaptive Web, pp. 291-324, Springer, 2007.
[7] Christian Ries, "Unsupervised One-class Image Classification", ACM International Conference on Multimedia Retrieval (ICMR 2012), 2012.
[8] Hsin-Chang Yang and Chung-Hong Lee, "Image semantics discovery from web pages for semantic-based image retrieval using self-organizing maps", Expert Systems with Applications, vol. 22, pp. 266-279, 2008.
[9] Khatib, W., Day, F., Ghafoor, A., and Berra, P. B., "Semantic modeling and knowledge representation in multimedia databases", IEEE Transactions on Knowledge and Data Engineering, vol. 11, pp. 64-80, 2000.
[10] Chang, S. F., Chen, W., and Sundaram, H., "Semantic visual templates: linking visual features to semantics", In Proc. IEEE International Conference on Image Processing (ICIP 98), Chicago, IL, pp. 531-535, 2011.
[11] T. K. Shih, J. Y. Huang, C. S. Wang, et al., "An intelligent content-based image retrieval system based on color, shape and spatial relations", In Proc. National Science Council, R.O.C., Part A: Physical Science and Engineering, vol. 25, no. 4, pp. 232-243, 2001.

[12] T. W. S. Chow and M. K. M. Rahman, "A new image classification technique using tree-structured regional features", Neurocomputing, vol. 70, no. 4-6, pp. 1040-1050, 2007.
[13] Dong ping Tian, "A Review on Image Feature Extraction and Representation Techniques", International Journal of Multimedia and Ubiquitous Engineering, Vol. 8, No. 4, July 2013.
[14] Y. N. Deng and B. S. Manjunath, "Unsupervised segmentation of color-texture regions in images and video", IEEE PAMI, vol. 23, no. 8, pp. 800-810, 2008.
[15] J. Yang, Y. Jiang, A. G. Hauptmann and C. Ngo, "Evaluating bag-of-visual-words representations in scene classification", In Proc. Workshop on MIR, pp. 197-206, 2007.


Stock Market Prediction From Financial News: A survey


Shubhangi S. Umbarkar
PG Student
Department of Computer Engineering
VPCOE, Baramati, Savitribai Phule University
Baramati, India
shubhangishriramumbarkar@gmail.com

Abstract: The number of investors in financial markets is growing day by day. Investors need to continuously monitor financial news for market events in order to decide whether to buy or sell equities, since markets are sensitive to upcoming news. For information about the financial market, investors prefer news, and financial markets are driven by news information, which comes from different media agencies through various channels. Information is time-sensitive, especially in the context of financial markets, and selecting and processing all the relevant information in a decision-making process, such as whether to buy, hold, or sell shares, is a difficult task. There are various techniques used for stock market prediction, such as data mining, ontology learning, machine learning, artificial neural networks (ANN), decision trees, etc. Due to the uncertain nature of the financial market, investors find it hard to take significant investment decisions. Stock market prediction is an ongoing research field in data mining. This paper discusses different techniques and challenges related to stock market prediction.

Keywords: Stock market prediction, Data mining, Artificial neural network (ANN), Decision making, Machine learning, Ontology learning

I.INTRODUCTION
Stock market prediction is a burning topic in the field of finance. Due to its commercial importance, it has attracted much attention from academia and the economics sector. It is impossible to predict stock market prices exactly, because stock prices change every second. Stock market prediction has always been a subject of curiosity for most investors and business analysts. In today's information-driven world, more individuals try to keep up to date with current developments by reading informative news items on the web. The content of news items reflects past, current, and upcoming world conditions, and thus news contains valuable information for various purposes. Being aware of ongoing market situations is of paramount importance for investors and traders, who need to make informed decisions that can have a significant impact on aspects such as profits and market perspective. However, due to the ever-expanding volume of information, it is virtually impossible to keep track of all potentially relevant news in a structured manner. Automatically extracting news items by means of computers would alleviate the effort required for manual processing of news information.
Financial markets are driven by information, and there are many sources of information. The most important source of information is news, which comes from different communication media through various channels. The increasing number of information sources results in high volumes of news, and manually processing such huge amounts of information is a very tedious task. Information about financial markets is time-sensitive, and selecting and processing the relevant news information in a decision-making process is a challenging job. Data mining tools such as the ViewerPro tool can be used to automatically extract market events [1].
II. RELATED WORK
Jethro Borsje, Frederik Hogenboom and Flavius Frasincar [2] introduced lexico-semantic and lexico-syntactic pattern methods for the extraction of financial events from RSS news feeds. Lexico-semantic patterns use a financial ontology, lifting the commonly used lexico-syntactic patterns to a higher abstraction level and enabling lexico-semantic patterns to identify more relevant events in text than lexico-syntactic patterns. The Semantic Web is used to classify the news items, and semantic actions allow the domain knowledge to be updated. The Semantic Web Rule Language (SWRL) is responsible for the implementation of the action rules. Triple paradigms are used for defining lexico-semantic information extraction patterns that resemble simple sentences in natural language. An event rule engine allows rule creation, financial event extraction from RSS news feed headlines, and ontology updates. The rule engine performs the following actions:
- Mining text items for patterns,
- Creating an event if a pattern is found,
- Determining the validity of an event by the user,
- Executing appropriate update actions if an event is valid.
The engine consists of multiple components. The first component is a rule editor, with which the user constructs rules. The second component is an event detector, which mines text items for occurrences of the lexico-semantic patterns of the event rules. The third component is a validation environment, with which the user determines the validity of a detected event and can correct it if the event detector made an error. The last component is an action execution engine, which performs the update actions associated with a rule if and only if the event is valid. The effectiveness of this work is evaluated with precision, recall, and F1 measures.
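As an illustration of these evaluation measures, the following minimal Python sketch (our illustration, not part of the original work; the event names are hypothetical) computes precision, recall, and F1 over sets of extracted and gold-standard events.

def precision_recall_f1(extracted, gold):
    # Standard definitions over the sets of extracted and manually annotated events.
    extracted, gold = set(extracted), set(gold)
    true_positives = len(extracted & gold)
    precision = true_positives / len(extracted) if extracted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Example with hypothetical event labels.
print(precision_recall_f1(["buy:AAPL", "sell:IBM"], ["buy:AAPL"]))  # (0.5, 1.0, 0.666...)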
Advantages
- It achieved good accuracy.
- Most news item headlines are not associated with a buy event, and the rule engine successfully ignored a large portion of such news items in each run.
- It reduced the error rate of the event detector.
Limitations
- It is impossible to remove individuals or property instances from an ontology using SWRL.
- Good documentation on the use of SWRL is incomplete.

Wouter IJntema and Jordy Sangers [3] use a rule-based text method with lexico-semantic patterns for learning ontology instances from text, which helps domain experts maintain the ontology population process. The ontology is used to retrieve relevant news items in a semantically efficient way. The authors use the Hermes Information Extraction Language (HIEL), which applies semantic concepts from the ontology and is evaluated on extracting events and relations from news. A Hermes Information Extraction Engine (HIEE) has also been implemented. The Hermes News Portal (HNP) is a stand-alone, Java-based application that gives the opportunity to use various Semantic Web technologies. The overall framework of the HIEE is divided into two parts: a preprocessing stage and a rule engine. In preprocessing, steps such as tokenization, sentence splitting, and part-of-speech (POS) tagging are performed. The Hermes information extraction rule engine then compiles the rules in a rule compiler and matches them against the preprocessed news text using a rule matcher. The authors show that lexico-semantic patterns are superior to lexico-syntactic patterns with respect to efficiency and effectiveness. Pattern-based information extraction techniques are the main focus of the authors.

Figure 1: Rule Tree

Advantages
- Pattern-based information extraction techniques often require less training data and help users gain more insight into why a certain relation was found.
- The Hermes Information Extraction Language (HIEL) enhances the expressivity of the rules.
- Creating lexico-semantic rules requires significantly less time than creating equally performing lexico-syntactic rules.
Limitations
- Lexico-semantic rules exploit the inference capabilities of the ontology.
- The ontology is not updated automatically.

M.-A. Mittermayer and G.F. Knolmayer [4] surveyed several prototypes for predicting the short-term market reaction to news based on text mining techniques.
Prototype developed by Wüthrich et al. (Cho99 [5], ChWZ99 [6], WCLP98 [7]). This prototype attempts to predict the 1-day trend of five major equity indices: the Dow Jones, the Nikkei, the FTSE, the Hang Seng, and the Straits Times. The documents were labeled according to a 3-category model: news articles followed by a 1-day period in which the equity index rose (fell) by at least 0.5% fall into the first (second) category, and the remaining articles fall into the third. The threshold of +/- 0.5% was chosen so that roughly one third of the trading sessions fell into each of the three categories. During its operational phase the prototype categorized all newly published articles; the number of news articles per category was counted and, depending on where most articles were assigned, the prototype triggered a buy, sell, or do-nothing recommendation for the corresponding index. Term Frequency times Inverse Document Frequency (TFxIDF) is used as the feature selection technique.
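A minimal Python sketch of the 3-category labelling described above (our illustration, not the original prototype's code): a trading session is labelled "up" or "down" when the index moves by at least 0.5% over the following day, and "steady" otherwise.

def label_day(index_today, index_next_day, threshold=0.005):
    # Relative 1-day change of the equity index.
    change = (index_next_day - index_today) / index_today
    if change >= threshold:
        return "up"
    if change <= -threshold:
        return "down"
    return "steady"

print(label_day(1000.0, 1007.0))  # "up" (+0.7%)
print(label_day(1000.0, 994.0))   # "down" (-0.6%)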
Advantages
- When the prototype was tested, 40% of the predictions for the Straits Times and 46.7% for the FTSE were correct.
Limitations
- The simulated performance results of this prototype cannot be achieved in reality.

Prototype developed by Lavrenko et al. (LSLO00a [8], LSLO00b [9]). The prototype Enalyst was developed around 2000 at the University of Massachusetts. Its goal is to forecast very short-term, i.e. intraday, price trends of a subset of stocks by analyzing the YAHOO Finance homepage, where news articles are published. The prototype uses a 5-category model:
- Surge
- Slight+
- No Recommendation
- Slight-
- Plunge
The authors first segmented the stock price time series into small trend windows using linear regression. News articles published in the h hours before the start of a trend window whose price trend slope exceeds 0.75 were put in the Surge category; the Slight+ category was assigned if the window's slope lies between 0.5 and 0.75, and the other categories were assigned accordingly. A Naive Bayes classifier was trained on these labels. If an incoming news article was assigned to the Surge or Slight+ category, the prototype triggered a buy recommendation; a short recommendation was triggered if an article was assigned to the Slight- or Plunge category.
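The slope-based labelling can be sketched as follows in Python (our illustration, under the assumption that the slope of a fitted regression line is compared directly with the 0.5 and 0.75 thresholds; the slope units used in the original prototype are not specified here).

import numpy as np

def label_window(prices):
    # Fit a linear trend to the prices in the window and label the window by its slope.
    t = np.arange(len(prices))
    slope = np.polyfit(t, prices, 1)[0]
    if slope > 0.75:
        return "Surge"
    if slope > 0.5:
        return "Slight+"
    if slope < -0.75:
        return "Plunge"
    if slope < -0.5:
        return "Slight-"
    return "No Recommendation"

print(label_window([10.0, 10.9, 11.8, 12.6, 13.5]))  # steep upward trend -> "Surge"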
Advantages
- The prototype was tested on 10-minute stock price data between mid-March and April 2000, with an investment of 10,000 U.S. Dollars (USD) in each roundtrip. After a testing period of 40 days and about 12,000 transactions, about 280,000 USD was achieved.
Limitations
- The authors included only the 127 U.S. stocks that showed the largest positive or negative price moves; such a selection leads to a bias towards highly volatile stocks.
- This prototype's simulation is very unrealistic.
Prototype developed by Thomas et al. This prototype was developed at the Robotics Institute of Carnegie Mellon University between 2000 and 2004 (SeGS02 [10], SeGS04 [11]). It mainly focuses on forecasting volatility. The authors developed a strategy in which, once news that may increase volatility is published, the market is temporarily exited for the particular stock; the re-entry decision then depends on technical indicators.

Advantages
- The strategy improved returns.
Limitations
- Vital information regarding the simulation is missing.

Prototype developed by Elkan/Gidofalvi. The aim of this prototype is to forecast stock price trends around the publication of a news article. The prototype divides the stock price time series into windows of influence; for example, if the window ranges from 0 to 20, most of the price adjustment occurs within 20 minutes. In the learning phase the documents were labeled according to a 3-category model: the first and second categories consist of news leading to a price increase or decrease of at least 0.2% during the window of influence, and the remaining news falls into the third category. Feature selection was done automatically using Mutual Information (MI) as the selection criterion, and the learning phase finished by training a Naive Bayes classifier. A virtual roundtrip is performed if the prototype sorts an incoming article into one of the first two categories, and an asymmetric exit strategy was applied for triggering the market exit.
Advantages
- This prototype achieved a performance of 10 bps per roundtrip.

Prototype developed by Peramunetilleke/Wong. This prototype, developed at the University of New South Wales around 2001 in collaboration with currency traders from UBS (PeWo02 [17]), contains more than 400 features, each consisting of two to five words combined with the logical operator AND; this dictionary has not been made accessible to the public. In the learning phase the authors create a rule-based classifier based on three categories, i.e. dollar up > 0.23%, dollar down > 0.23%, and dollar steady, using a threshold of 0.23%. The profit per roundtrip is not provided by the authors.
Advantages
- The system gives 50% correct predictions.
- Decision speed is significantly improved.
- It achieves good decision quality.
Limitations
- For comparison, a random trader achieves only 33.3% correct decisions.

Prototype developed by Fung/Lam/Yu. This prototype was developed around 2002 at the Department of Systems Engineering and Engineering Management of the Chinese University of Hong Kong. The documents are labeled following the approach used in the prototype developed by Lavrenko: in a first step, the price time series is segmented into time windows around the publication of each news article. A clustering algorithm is then used to divide the sample of time windows into the three most discriminating clusters, namely "Rise" and "Drop", based on the steepest positive and negative average slopes, and a third cluster called "Steady". Instead of programming a prototype, the authors used commercial text mining software; for example, the preprocessing of the news articles was performed by IBM's Intelligent Miner for Text, and a Support Vector Machine (SVM) was used as the classifier.
Advantages of the prototype systems
- Although none of the above prototypes considers any costs in the performance simulation, the systems cover the costs of immediate execution by achieving a gross profit of 10-15 bps.
- Investors can be actively involved because of the cost effectiveness of the prototypes.
Limitations
- The performance studies neglect some important features of the financial markets, such as transaction costs and limited volume at given prices.
- In practice, the use of prototypes may increase the complexity of the system.

F. Allen, R. Karjalainen, and Wijnand Nuij [1][12] used genetic programming to develop optimal trading rules. Genetic programming is applicable when a solution can be represented as a decision tree or a computer program. It uses the principles of parallel search, natural selection, and historical data to search for candidate solutions to a problem of interest: a computer randomly generates a population of candidate solutions expressible as decision trees, and the rules are required only to be well defined and to produce output appropriate to the problem of interest, in this case a buy/sell decision in the stock market.
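The following simplified Python sketch illustrates the idea (our illustration, not the authors' implementation): it evolves a population of candidate trading rules on historical prices, although for brevity each rule is encoded by two moving-average windows rather than a full decision tree.

import random
random.seed(42)

def sma(prices, n, t):
    # Simple moving average of the last n prices ending at index t.
    window = prices[max(0, t - n + 1): t + 1]
    return sum(window) / len(window)

def random_rule():
    # A rule ('gt', fast, slow) signals "buy" when SMA(fast) > SMA(slow).
    return ('gt', random.randint(2, 10), random.randint(11, 50))

def fitness(rule, prices):
    # Total return of holding the asset only on days when the rule signalled "buy".
    _, fast, slow = rule
    ret = 1.0
    for t in range(1, len(prices)):
        if sma(prices, fast, t - 1) > sma(prices, slow, t - 1):
            ret *= prices[t] / prices[t - 1]
    return ret

def evolve(prices, pop_size=20, generations=10):
    population = [random_rule() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda r: fitness(r, prices), reverse=True)
        parents = population[: pop_size // 2]                      # selection
        children = [('gt', random.choice(parents)[1], random.choice(parents)[2])
                    for _ in range(pop_size - len(parents))]       # crossover
        population = parents + children
    return population[0]

prices = [100 + i + 5 * random.random() for i in range(200)]       # hypothetical price series
print("best rule:", evolve(prices))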

Advantages
1) Genetic programming handles multi-dimensional, non-differentiable, non-continuous, and even non-parametric problems, so it is useful for decision-tree problems.
2) It solves problems with multiple solutions.
3) It can solve every optimization problem that can be described with the chromosome encoding.
Limitations
1) Genetic programming reduces, but does not eliminate, the problem of data snooping by searching for optimal ex ante rules rather than rules known to be used by traders.
2) If the generations of the genetic algorithm are not trained long enough, the usability is low.
3) It requires more time to produce the output.

K. Senthamarai Kannan and P. Sailapathi Sekar [13] use data mining techniques for stock market prediction. Five methods were combined to predict.
A. Typical Price (TP)
The Typical Price indicator is calculated by adding the high, low, and closing prices together and then dividing by three; the result is the average, or typical, price.
Algorithm:
1. Input the High, Low, and Close values of the daily share.
2. Take an output array and add the values of H, L, and C.
3. Divide the total by 3.

TP = (H + L + C) / 3

where H = High, L = Low, C = Close.
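The Typical Price calculation above translates directly into code; the following minimal Python sketch uses hypothetical daily values for illustration.

def typical_price(high, low, close):
    # Average of the day's high, low, and closing prices.
    return (high + low + close) / 3.0

print(typical_price(high=105.0, low=99.0, close=102.0))  # 102.0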

B. Chaikin Money Flow indicator (CMI)
Chaikin's money flow is based on Chaikin's accumulation/distribution. If the stock closes above its midpoint [(high+low)/2] for the day, there was accumulation that day; if it closes below its midpoint, there was distribution that day. The CMI is calculated by summing the accumulation/distribution values over 13 periods and dividing by the 13-period sum of the volume. The following formula is used to calculate the CMI:

CMI = sum(AD, n) / sum(VOL, n)

AD = VOL * (CL - OP) / (HI - LO)

where AD stands for Accumulation/Distribution, n = period, CL = today's close price, OP = today's open price, HI = high value, and LO = low value.
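A minimal Python sketch of this calculation, following the formula as reconstructed above (AD = VOL * (CL - OP) / (HI - LO) per day, summed over the last n = 13 periods); the sample values are hypothetical.

def chaikin_money_flow(days, n=13):
    # days: list of dicts with keys 'open', 'high', 'low', 'close', 'volume'.
    recent = days[-n:]
    ad_sum = sum(d['volume'] * (d['close'] - d['open']) / (d['high'] - d['low'])
                 for d in recent if d['high'] != d['low'])
    vol_sum = sum(d['volume'] for d in recent)
    return ad_sum / vol_sum if vol_sum else 0.0

sample = [{'open': 10.0, 'high': 11.0, 'low': 9.5, 'close': 10.8, 'volume': 1000}] * 13
print(chaikin_money_flow(sample))  # 0.533... for this constant sample day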

C. Stochastic Momentum Index (SMI)
The Stochastic Momentum Index (SMI) is based on the Stochastic Oscillator. The range of the SMI is from +100 to -100. The midpoint is calculated as [(high+low)/2]. When the close is greater than the midpoint the SMI is above zero; when the close is less than the midpoint the SMI is below zero. A buy signal is generated when the SMI rises above -50, or when it crosses above the signal line. A sell signal is generated when the SMI falls below +50, or when it crosses below the signal line.
The following formula is used to calculate the SMI:

SMI = 100 * E(E(C - (HHV + LLV)/2)) / [E(E(HHV - LLV)) / 2]

where HHV = highest high value and LLV = lowest low value over the look-back period, C = close, and E = exponential moving average (applied twice, with the smoothing periods used in [13]). The exponential moving average is calculated with the smoothing factor 2/(n + 1), where n is the smoothing period.

D. Relative Strength Index (RSI)
This indicator compares the number of days a stock finishes up with the number of days it finishes down. It is usually calculated over a period of between 9 and 15 days. The RSI has a range between 0 and 100.
RSI = 100 - (100 / (1 + RS)); RS = AG / AL
AG = [(PAG) x 13 + CG] / 14; AL = [(PAL) x 13 + CL] / 14
PAG = total of gains during the past 14 periods / 14
PAL = total of losses during the past 14 periods / 14
where AG = Average Gain, AL = Average Loss, PAG = Previous Average Gain, CG = Current Gain,
PAL = Previous Average Loss, CL = Current Loss.


The following algorithm was used to calculate the RSI (TC = today's close, YC = yesterday's close):
UpClose = 0
DownClose = 0
Repeat for nine consecutive days ending today:
    If (TC > YC)
        UpClose = UpClose + TC
    Else if (TC < YC)
        DownClose = DownClose + TC
    End if
RSI = 100 - 100 / (1 + RS)
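A minimal Python sketch of the averaged-gain/loss form of the RSI defined above (our illustration with hypothetical closing prices; the simplified nine-day UpClose/DownClose count is omitted).

def rsi(closes, period=14):
    # Average gain and average loss over the last `period` price changes.
    gains, losses = [], []
    for yesterday, today in zip(closes[:-1], closes[1:]):
        change = today - yesterday
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    avg_gain = sum(gains[-period:]) / period
    avg_loss = sum(losses[-period:]) / period
    if avg_loss == 0:
        return 100.0
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)

closes = [44.0, 44.3, 44.1, 44.5, 44.9, 44.6, 45.0, 45.4, 45.2, 45.6,
          45.9, 46.1, 45.8, 46.2, 46.5]
print(round(rsi(closes), 2))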

E. Bollinger Bands
This technical indicator creates two bands, an upper band and a lower band, around a moving average; the bands are based on the standard deviation of the price. If volatility is high the bands widen, and when there is little volatility the bands narrow.
The upper and lower bands are calculated as

stdDev = sqrt( (1/n) * sum_{i=1..n} (Ci - MA)^2 )

Upper band = MA + D * stdDev
Lower band = MA - D * stdDev

where MA is the n-period moving average of the closing prices Ci and D is the number of standard deviations.
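A minimal Python sketch of the band calculation as reconstructed above, with a hypothetical price series and the common choice of a 20-period window and D = 2 standard deviations.

import math

def bollinger_bands(closes, n=20, d=2.0):
    window = closes[-n:]
    ma = sum(window) / len(window)                     # moving average
    std = math.sqrt(sum((c - ma) ** 2 for c in window) / len(window))
    return ma - d * std, ma, ma + d * std              # (lower, middle, upper)

closes = [100 + 0.3 * i for i in range(25)]
print(bollinger_bands(closes))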

Advantages
- Using the above methods, the prediction was correct at least 50% of the time.
Limitations
- The above methods performed well on half of the stocks and not so well on the other half.

Debashish Das and Mohammad Shorif Uddin [14] introduced data mining and neural network techniques for stock market prediction. Data analysis tools are used to predict future trends and behavior, helping organizations take proactive, knowledge-driven business decisions. Intelligent data analysis tools search a database for hidden patterns, finding predictive information that may otherwise be missed because it lies beyond the experts' expectations.
The steps of the data mining technique are as follows:
1. Analysis of survey data
2. Explanatory simulation
3. Analytical modeling
4. Identification of patterns and rules
5. Acquisition summary
The neural network was used because of its ability to deal with fuzzy, uncertain, and insufficient data that may fluctuate rapidly within a very short period of time. In a neural network, computing units are connected together such that each neuron can transmit signals to and receive signals from the others. The framework of the neural network is as follows.

Figure 2: Neural network model architecture (inputs feed a layer of artificial neurons N1-N6, followed by summing neurons S1-S3 and the output).


Advantages
- The neural network gives high computation speed.
- The neural network technique is fault tolerant.
- The data mining technique discovers useful patterns in a dataset, which are used for market prediction.
Limitations
- Handling time series data in neural networks is very complicated.
- Stock market prediction with the data mining technique requires high volumes of data for training.
III. CONCLUSION
In this paper we have reviewed different techniques for stock market prediction. Stock market prediction is the activity of estimating the future state of share prices, and it gives guidance for proper investment. Most stock market prediction techniques, such as data mining, neural networks, and the different technical trading indicators, depend on financial news approaches, and the success of these techniques also depends on proper extraction of information about the stock market. Another approach is stock market news extraction, which uses different classifiers such as the ViewerPro tool, the Naive Bayes classifier, and Term Frequency-Inverse Document Frequency weighting to provide a market event model. This paper has also discussed the prototypes used for stock market prediction.
IV. ACKNOWLEDGMENT
I express many thanks to the college and department staff, who were a great source of support and encouragement; to my friends and family, for their warm and kind encouragement and love; and to every person who gave me something to light my pathway, I thank you for believing in me.

REFERENCES:
[1] Wijnand Nuij, Viorel Milea, Frederik Hogenboom, Flavius Frasincar, and Uzay Kaymak, "An Automated Framework for Incorporating News into Stock Trading Strategies," IEEE Transactions on Knowledge and Data Engineering, vol. 26, no. 4, April 2014.
[2] J. Borsje, F. Hogenboom, and F. Frasincar, "Semi-Automatic Financial Events Discovery Based on Lexico-Semantic Patterns," Int'l J. Web Engineering and Technology, vol. 6, no. 2, pp. 115-140, 2010.
[3] W. IJntema, J. Sangers, F. Hogenboom, and F. Frasincar, "A Lexico-Semantic Pattern Language for Learning Ontology Instances from Text," J. Web Semantics: Science, Services and Agents on the World Wide Web, vol. 15, no. 1, pp. 37-50, 2012.
[4] M.-A. Mittermayer and G.F. Knolmayer, "Text Mining Systems for Market Response to News: A Survey," technical report, Institute of Information Systems, University of Bern.
[5] Cho, V., "Knowledge Discovery from Distributed and Textual Data," Dissertation, Hong Kong University of Science and Technology, Hong Kong, 1999.
[6] Cho, V., Wüthrich, B., and Zhang, J., "Text Processing for Classification," Journal of Computational Intelligence in Finance, vol. 7, no. 2, pp. 6-22, 1999.
[7] Wüthrich, B., Cho, V., Leung, S., Peramunetilleke, D., Sankaran, K., Zhang, J., and Lam, W., "Daily Prediction of Major Stock Indices from Textual WWW Data," Proceedings 4th ACM SIGKDD Int. Conference on Knowledge Discovery and Data Mining, New York, 1998, pp. 364-368.
[8] Lavrenko, V., Schmill, M., Lawrie, D., Ogilvie, P., Jensen, D., and Allan, J., "Mining of Concurrent Text and Time Series," Proceedings 6th ACM SIGKDD Int. Conference on Knowledge Discovery and Data Mining, Boston, 2000, pp. 37-44.
[9] Lavrenko, V., Schmill, M., Lawrie, D., Ogilvie, P., Jensen, D., and Allan, J., "Language Models for Financial News Recommendation," Proceedings 9th Int. Conference on Information and Knowledge Management, Washington, 2000, pp. 389-396.
[10] Seo, Y., Giampapa, J.A., and Sycara, K., "Text Classification for Intelligent Portfolio Management," Technical Report CMU-RI-TR-02-14, Robotics Institute, Carnegie Mellon University, Pittsburgh.
[11] Seo, Y., Giampapa, J.A., and Sycara, K., "Financial News Analysis for Intelligent Portfolio Management," Technical Report CMU-RI-TR-04-04, Robotics Institute, Carnegie Mellon University, Pittsburgh.
[12] F. Allen and R. Karjalainen, "Using Genetic Algorithms to Find Technical Trading Rules," J. Financial Economics, vol. 51, no. 2, pp. 245-271, 1999.

[13] K. Senthamarai Kannan, P. Sailapathi Sekar, M. Mohamed Sathik, and P. Arumugam, "Financial Stock Market Forecast using Data Mining Techniques," Proceedings of the International MultiConference of Engineers and Computer Scientists 2010, Vol. I, IMECS 2010, March 17-19, 2010, Hong Kong.
[14] Debashish Das and Mohammad Shorif Uddin, "Data Mining and Neural Network Techniques in Stock Market Prediction: A Methodological Review," International Journal of Artificial Intelligence & Applications (IJAIA), vol. 4, no. 1, January 2013.

Advancement in Surface Finishing by Abrasive Flow Machining: Review
Prof. Nagnath Kakde1, Tejas Deshmukh2, Prof. Akshaykumar Puttewar3, Prof. S.V. Lutade4
1. Asst. Professor, Mechanical Engineering Department, DBACER, Nagpur, nagnath.kakde@gmail.com, 9881204717
2. M.Tech Scholar, Mechanical Engineering Department, DBACER, Nagpur
3. Asst. Professor, Mechanical Engineering Department, DBACER, Nagpur
4. Asst. Professor, Mechanical Engineering Department, DBACER, Nagpur

ABSTRACT - Abrasive flow machining (AFM) is a manufacturing technique that uses the flow of a pressurized abrasive medium to remove workpiece material. In comparison with other polishing techniques, AFM is very efficient and suitable for the finishing of complex inner surfaces. In recent years, hybrid machining processes have been developed to improve the efficiency of a process by combining the advantages of different machining processes and avoiding their limitations. It has been experimentally confirmed that abrasive flow machining can significantly improve the surface quality of nonlinear runners, and the experimental results can provide a technical reference for optimization studies of abrasive flow machining theory. This project is a genuine effort to evaluate all the available modifications and permutations of abrasive flow machining in order to obtain the best available surface finish with minimum changes to pre-existing setups at little or no cost.

Keywords: Abrasive flow machining, Material removal rate, Surface roughness, Surface finish

INTRODUCTION
In military and civil applications, special passages such as non-linear tubes exist in many major parts, and their overall performance is usually decided by the surface quality. Abrasive flow machining (AFM) technology can effectively improve the surface quality of such parts. AFM is a non-traditional machining process that was developed in the USA in the 1960s.
Abrasive flow machining (AFM), also known as extrude honing, is an industrial process used in metal working. It is used to finish the interior surfaces of cast metals and to produce controlled radii in the finished product. The process produces a smooth, polished finish using a pressurized medium. The medium used in abrasive flow machining is made from a specialized polymer; abrasives are added to the polymer, giving it the ability to smooth and polish metal while retaining its liquid properties. The liquid properties of the polymer allow it to flow around and through the metal object, conforming to the size and shape of the passages and the details of the cast metal.
Abrasive flow machining equipment is made in single- and dual-flow systems. In a single-flow system, the abrasive medium is forced through the part at an entry point and then exits on the other side, leaving a polished interior to mark its passage. For more aggressive polishing, the dual-flow abrasive flow machining system may be employed. In a dual-flow system, the abrasive medium flow is controlled by two hydraulic cylinders whose alternating motions push and pull the medium through the part. This delivers a smoother, more highly polished end result in much less time than a single-flow system. The process of abrasive flow machining is used in
the finishing of parts that require smooth interior finishes and controlled radii. Examples of such parts include automotive engine blocks and other precision-finished parts. The process is also used in the metal fabrication and casting industry to deburr dies and remove recast layers from molds used during production. Abrasive flow machining makes it possible to polish and smooth areas that would otherwise be unreachable, because of the ability of the medium to flow through the part.

The AFM process also has a limitation with regard to achieving the required surface finish. With the aim of overcoming the difficulty of longer cycle times, this project is trying to find a hybrid process which permits AFM to be carried out more efficiently; harder materials require a larger number of cycles due to their low material removal rate [1].
Hybrid versions of abrasive flow machining may include the use of magnetic assistance during the finishing process, the use of centrifugal force on the abrasive particles, or simply changing the abrasive putty, which may be grease-, polymer-, or naturally based. Thus, in this project we are trying to identify the best available technique, or even to combine two or more techniques, to raise the bar for surface finish in the growing field of industrial upgrading.

Major areas of experimental research in abrasive flow finishing


"Ramandeep Singh" et al.[1] Abrasive flow machining (AFM) is a relatively new non-traditional micro-machining process developed
as a method to debur, radius, polish and remove recast layer of components in a wide range of applications. Material is removed from
the work-piece by flowing a semi-solid viscoelastic plastic abrasive laden medium through or past the work surface to be finished.
Components made up of complex passages having surface/areas inaccessible to traditional methods can be finished to high quality and
precision by this process. The present work is an attempt to experimentally investigate the effect of different vent/passage
considerations for outflow of abrasive laden viscoelasic medium on the performance measures in abrasive flow machining. Cylindrical
work-piece surfaces of varying cross-sections & lengths having different vent/passage considerations for outflow of abrasive laden
viscoelastic medium, have been micro-machined by the AFM technique and the process output responses have been measured. Material removal (MR) and surface roughness (Ra) are taken as the performance measures indicating the output responses. Experiments are performed on brass as the work material, with significant process parameters such as concentration of abrasive particles, abrasive mesh size, number of cycles, and media flow speed kept constant. The results suggest that workpiece surfaces having a single vent/passage for media outflow show higher material removal and more improvement in surface roughness than workpiece surfaces having multiple vents/passages, and the performance measures decrease as the number of vents for media outflow increases.
"R.S. Walia" et al. [2] Limited efforts have been done towards enhancing the productivity of Abrasive Flow Machining
(AFM) process with regard to better quality of work piece surface. In recent years, hybrid-machining processes have been developed
to improve the efficiency of a process by clubbing the advantages of different machining processes and avoiding limitations. In the
present study, the abrasive flow machining was hybridized with the magnetic force for productivity enhancement in terms of material
removal (MR). The magnetic force is generate around the full length of the cylindrical work piece by applying DC current to the
solenoid, which provides the magnetic force to the abrasive particles normal to the axis of work piece. The various parameters
affecting the process are described here and the effect of the key parameters on the performance of process has been studied.
"Junye Li" et al. [3] Due to the high thickness of abrasives and fine particle size, the abrasives will show excellent viscosity.
The mixture of interaction of processing conditions, such as extrusion of high pressure, causes the removal effect of abrasive to
nonlinear material even more apparent on the surface of tube channels, so as to obtain better surface roughness. Due to the abrasives
continuous removal effect on nonlinear tube channel surfaces, after the abrasive flow machining, the surface profile becomes more
smooth and fine than the ups and downs before processing.
" Jose Cherian" [4] The average percent reduction in surface roughness can be increased by keeping the extrusion pressure,
grain mesh number and Abrasive concentration at high levels, while the average force ratio can be increases by keeping extrusion
pressure and abrasive concentration at high level and grain mesh number at low level. Also when the force ratio is maximum the
percentage reduction in surface roughness is also maximum. The correlation coefficient between average percent reduction in surface
roughness and average force ratio is higher as compared to correlations of average percentage reduction in surface roughness with
average axial and radial forces.
"P.D. Kamble" [5] A magnetic field has been applied around a component being processed by abrasive flow machining and
an enhanced rate of material removal has been achieved. Magnetic field significantly affects both MRR and surface roughness. The
slope of the curve indicates that MRR increases with magnetic field more than does surface roughness. Therefore, more improvement
in MRR is expected at still higher values of magnetic field. For a given number of cycles, there is a discernible improvement in MRR
and surface roughness. Fewer cycles are required for removing the same amount of material from the component, if processed in the
magnetic field. Magnetic field and medium flow rate interact with each other .The combination of low flow rates and high magnetic
flux density yields more MRR and smaller surface roughness. Medium flow rates do not have a significant effect on MRR and surface
roughness in the presence of a magnetic field. MRR and surface roughness both level off after a certain number of cycles.

Objectives
The recent increase in the use of hard, high-strength, and temperature-resistant materials in engineering has necessitated the development of newer machining techniques. Conventional machining or finishing methods are not readily applicable to materials
such as carbides and ceramics. Conventional machining processes, when applied to these newer materials, are uneconomical, produce a poor degree of surface finish and accuracy, introduce residual stresses, and are highly insufficient. Newer machining processes may be classified on the basis of the nature of the energy employed, and a low material removal rate happens to be one serious limitation of almost all of them.
The main objective of this project is to increase the metal removal rate of abrasive flow machining and to decrease the number of cycles required for the desired MRR. By simple modifications such as changing the medium, adding a magnetic system, and introducing centrifugal force, we can increase the quality of the output at a low modification cost. The project also focuses on bringing AFM and surface finishing to a new level to attain previously unreached standards, which will not only increase product quality but also increase the use of standardized spares, ultimately increasing the life expectancy of the product without any rise in expenditure on the spares.

Methodology
Though the project is not as simple as it seems, since it requires a number of operations, it can be carried out in steps for its successful implementation. The whole project is basically divided into three parts:
- Data collection from different companies on the basic operation of the machine.
- Analysis of the collected data.
- Optimization over the permutations to select the best modification and the best abrasive material for surface finish.
The first portion of the project requires some time to collect all the necessary parameters. Certain local companies use this type of surface finishing machine for their products, and the data related to the AFM parameters will be collected from such industries. A few parameters will need to be assumed, as not all of the data can be obtained from the companies; by selecting appropriate parameters we can fill in the gaps so that step one can be carried out.
The second step includes setting up an appropriate process for surface measurement and comparison of the collected data. A few specimens will be passed through a number of cycles under different conditions, and these specimens will then be observed under the electron microscope for comparison. The third step is to combine and analyze these data in analysis software to select the most optimal method.
The project will focus on combining different methods into a better, hybrid surface finishing technique. This will also help us suggest the best available medium for AFM. The option of selecting a cheap, easily available, and environmentally friendly medium is also one of the prime focuses of this project.

REFERENCES:

1. Ramandeep Singh and R.S. Walia, "Hybrid Magnetic Force Assistant Abrasive Flow Machining Process Study for Optimal Material Removal," International Journal of Applied Engineering Research, ISSN 0973-4562, Vol. 7, No. 11 (2012).
2. Ramandeep Singh and R.S. Walia, "Study the Effects of Centrifugal Force on Abrasive Flow Machining Process," IJRMET, Vol. 2, Issue 1, April 2012.
3. Junye Li, Lifeng Yang, Weina Liu, Xuechen Zhang, and Fengyu Sun, "Experimental Research into Technology of Abrasive Flow Machining Nonlinear Tube Runner," Hindawi Publishing Corporation, Advances in Mechanical Engineering, Volume 2014, Article ID 752353.
4. Jose Cherian and Dr Jeoju M Issac, "Effect of process variable in abrasive flow machining," International Journal of Emerging Technology and Advanced Engineering, ISSN 2250-2459, ISO 9001:2008 Certified Journal, Volume 3, Issue 2, February 2013.
5. P.D. Kamble, S.P. Untawale and S.B. Sahare, "Use of Magneto Abrasive Flow Machining to Increase Material Removal Rate and Surface Finish," VSRD International Journal of Mechanical, Automobile and Production Engineering, VSRD-MAP, Vol. 1 (7), 2012, 249-262.


The Formulation and Study of the Problem of Mining Probabilistically Frequent Sequential Patterns in Uncertain Databases

Priyanka V. Patil
Computer Engineering
Alard College of Engineering, Pune, India
gajare.priyanka2@gmail.com

Ismail
Computer Engineering
Alard College of Engineering, Pune, India
ismail_009@yahoo.com

Abstract - Uncertainty in various domains implies the necessity for data mining techniques and algorithms that can handle uncertain datasets. Many studies on uncertain datasets have mainly focused on modeling, query ranking, classification models, discovering frequent patterns, clustering, etc. However, despite the existing need, very few studies have considered uncertainty in sequential data. In this paper, we propose to measure pattern frequentness based on possible world semantics. We establish two uncertain sequence data models abstracted from many real-life applications involving uncertain sequence data, and formulate the problem of mining probabilistically frequent sequential patterns (p-FSPs) from data that conform to our models. Using the prefix-projection strategy of the well-known PrefixSpan algorithm, we develop two new algorithms, collectively called U-PrefixSpan, for p-FSP mining. U-PrefixSpan avoids the problem of possible-world explosion and, when combined with our three pruning techniques and one validating technique, it achieves good performance.

Keywords - Data mining, uncertain datasets, frequent sequential patterns, PrefixSpan algorithm

I. INTRODUCTION

The problem of sequential pattern mining involves the discovery of frequent sequences of events in data with a temporal component, and it has become a classical and well-studied problem in data mining. In classical frequent sequential pattern mining, the database to be mined consists of tuples. A tuple may record a retail transaction (event) by a customer (source), or an observation of an object/person (event) by a sensor/camera (source). All of the components of a tuple are assumed to be certain, or completely determined.

However, it is recognized that data obtained from a wide range of data sources is inherently uncertain. This paper is concerned with frequent sequential pattern mining in probabilistic databases, a popular framework for modelling uncertainty.

In this paper, we consider the problem of mining frequent sequential patterns in the context of uncertain datasets. In contrast to previous work that adopts expected support to measure pattern frequentness, we define pattern frequentness based on possible world semantics. This approach gives us effective mining of high-quality patterns with respect to a formal probabilistic data model. We propose two uncertain sequence data models (a sequence-level and an element-level model) abstracted from many real-life applications involving uncertain sequence data.

II. OBJECTIVE

A. This is the first work that attempts to solve the problem of p-FSP mining, the techniques of which are successfully applied in an RFID application for trajectory pattern mining.

B. We consider two general uncertain sequence data models that are abstracted from many real-life applications involving uncertain sequence data: the sequence-level uncertain model and the element-level uncertain model.

C. Based on the prefix-projection method of PrefixSpan, we design two new U-PrefixSpan algorithms that mine p-FSPs from uncertain data conforming to our models.

D. Pruning techniques and a fast validating method are used to further improve the efficiency of U-PrefixSpan.

III. LITERATURE SURVEY

A. U-Apriori Algorithm
The first expected-support-based frequent itemset mining algorithm was proposed by Chui et al. [1]. This algorithm is an extension of the well-known Apriori algorithm for frequent itemset mining to the uncertain environment, and it uses the generate-and-test framework to find all expected-support-based frequent itemsets. However, it has the limitation that it does not scale well on large datasets. Due to the uncertain nature of the data, each item is associated with a probability value, so the itemsets have to be processed together with these values. The efficiency degrades, and the problem becomes more serious for uncertain datasets in particular when most of the existential probabilities are of low value.

B. U-Apriori with data trimming
To improve the efficiency of the earlier U-Apriori algorithm, a data trimming technique was proposed [2]. The main idea is to trim away items with low existential probabilities from the original dataset and to mine the trimmed dataset instead, so that the computational cost of those insignificant candidate increments can be reduced. In addition, the I/O cost can be greatly reduced since the size of the trimmed dataset is much smaller than the original one. The framework of Apriori needs to be changed for the application of the trimming process.
The mining process starts by passing an uncertain dataset D into the trimming module. It first obtains the frequent data items by scanning D once. A trimmed dataset DT is then constructed by removing all the items with existential probabilities smaller than a trimming threshold, and DT is mined using the U-Apriori algorithm. If an itemset is frequent in the trimmed dataset DT then it must also be frequent in the original dataset D.

C. Tree-based Approaches
Tree-based approaches differ from the Apriori-based ones in that they do not involve the candidate generation and candidate pruning phases for finding the frequent itemsets; instead, they make use of a tree structure to store the data [3]. From the tree structure, the frequent itemsets can be mined using algorithms like FP-Growth. These algorithms have also been modified for uncertain data.

D. Sequential pattern mining
Frequent itemset mining, graph pattern mining, and sequential pattern mining are very important pattern mining problems that have been studied in the context of uncertain datasets. For the problem of frequent pattern mining, earlier work commonly uses expected support to measure pattern frequentness [4]. However, some experimental results have found that the use of expected support may cause important patterns to be missed [5]. As a result, recent research focuses more on using probabilistic support.
PrefixSpan is considered superior to other sequence mining algorithms such as GSP and FreeSpan, due to its prefix-projection technique. It has been used successfully in many applications, such as trajectory mining. We now review the prefix-projection technique of PrefixSpan, which is related to our proposed algorithm.

IV. METHODOLOGY

The algorithms used in our project's proposed work are as follows:

U-PrefixSpan:
Here we build on the pattern-growth method for mining sequential patterns called PrefixSpan. Its major idea is that, instead of projecting sequence databases by considering all the possible occurrences of frequent subsequences, the projection is based only on frequent prefixes, because any frequent subsequence can always be found by growing a frequent prefix.
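To make the prefix-projection idea concrete, the following minimal Python sketch (our illustration, for ordinary sequences of single items rather than uncertain data) grows patterns by projecting the database onto the suffixes that follow each frequent prefix.

from collections import Counter

def prefixspan(db, min_support, prefix=None):
    # db is a list of item sequences; returns (pattern, support) pairs.
    prefix = prefix or []
    patterns = []
    counts = Counter()
    for seq in db:
        counts.update(set(seq))                     # items that can extend the prefix
    for item, support in counts.items():
        if support < min_support:
            continue
        new_prefix = prefix + [item]
        patterns.append((new_prefix, support))
        # Projected database: suffixes after the first occurrence of the item.
        projected = [seq[seq.index(item) + 1:] for seq in db if item in seq]
        patterns.extend(prefixspan(projected, min_support, new_prefix))
    return patterns

db = [['a', 'b', 'c'], ['a', 'c', 'b'], ['a', 'b', 'b', 'c']]
for pattern, support in prefixspan(db, min_support=2):
    print(pattern, support)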
To handle uncertain data, there are two variants:
1. Sequence-level U-PrefixSpan
2. Element-level U-PrefixSpan
Each of them has its own issues to handle in generating sequence patterns and mining them from the datasets.

Of these two strategies, we are going to implement Sequence-level U-PrefixSpan, which is the core part of the proposed work.

A. Sequence-level U-PrefixSpan:
In this section we address the problem of p-FSP mining on datasets that conform to the sequence-level uncertain model. We propose a pattern-growth algorithm, called SeqU-PrefixSpan, to solve this problem. Compared with PrefixSpan, the SeqU-PrefixSpan algorithm needs to address the following additional issues arising from the sequence-level uncertain model:
1. Frequentness validating
2. Pattern frequentness checking
3. Candidate elements for pattern growth
These are the core issues associated with probabilistic sequential pattern mining with sequence-level U-PrefixSpan, which we address in our proposed work along with the implementation of the algorithm. The algorithm details are given in the sub-section below.

i) SeqU-PrefixSpan Algorithm:
The SeqU-PrefixSpan algorithm that we propose and implement recursively performs pattern growth from the previous pattern α to the current pattern β = αe by appending an element e ∈ T|α, where T|α is the set of candidate elements generated from the local datasets. We also construct the current projected probabilistic database D|β from the previous projected probabilistic database, following the prefix-projection step of sequential pattern mining mentioned in Section IV of this paper.
For the execution and testing of the algorithm we use an application scenario in which the datasets are generated locally by the user of the proposed architecture, as follows. The performance of SeqU-PrefixSpan is checked on generated datasets that conform to the sequence-level uncertain model. Given a configuration (n, m, l, d), our generator creates n probabilistic sequences. For each probabilistic sequence, the number of sequence instances is randomly chosen from the range [1, m]. The length of each sequence instance is randomly chosen from the range [1, l], and each element

in the sequence instance is randomly picked from an element table with d elements. These are the important parameters in the proposed architecture for finding the sequence patterns based on the probability of the patterns.
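A minimal Python sketch of such a generator (our illustration; the assumption that each sequence's instance probabilities are normalized to sum to one is ours, since the text does not specify how they are assigned).

import random

def generate(n, m, l, d, seed=0):
    # Configuration (n, m, l, d): n probabilistic sequences, up to m instances each,
    # instance lengths up to l, elements drawn from a table of d elements.
    random.seed(seed)
    elements = ["e%d" % i for i in range(d)]
    database = []
    for _ in range(n):
        k = random.randint(1, m)
        weights = [random.random() for _ in range(k)]
        total = sum(weights)
        instances = []
        for w in weights:
            length = random.randint(1, l)
            seq = [random.choice(elements) for _ in range(length)]
            instances.append((seq, w / total))      # (sequence instance, probability)
        database.append(instances)
    return database

for probabilistic_sequence in generate(n=3, m=2, l=4, d=5):
    print(probabilistic_sequence)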

ii) Fast Validating Method:
In this part, we present a fast validation method for speeding up the U-PrefixSpan algorithm and increasing its efficiency. The method involves two approximation techniques that check the probabilistic frequentness of patterns and reduce the time complexity from O(n log² n) to O(n); it works as a complement to our proposed algorithm to enhance its efficiency. For this purpose we apply two models in the proposed system architecture, namely a Poisson model and a Normal model, with which we can verify our p-FSPs quickly and efficiently.
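As a sketch of how such a model-based check can run in O(n), assume the support of a pattern is the sum of independent per-sequence containment probabilities; the following Python illustration (our assumption about the validation step, not the authors' exact procedure) approximates P(support >= minsup) with a Poisson model.

import math

def poisson_cdf(k, lam):
    # P(X <= k) for X ~ Poisson(lam).
    return sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k + 1))

def probably_frequent(containment_probs, minsup, tau):
    # containment_probs[i] = probability that probabilistic sequence i contains the pattern.
    lam = sum(containment_probs)                    # Poisson rate = expected support
    return 1.0 - poisson_cdf(minsup - 1, lam) >= tau

probs = [0.9, 0.6, 0.8, 0.3, 0.7]                   # hypothetical containment probabilities
print(probably_frequent(probs, minsup=3, tau=0.5))  # True: expected support is 3.3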

B. Element-level U-PrefixSpan:
In this section we address the problem of p-FSP mining on datasets that conform to the element-level uncertain model. Compared with sequence-level U-PrefixSpan, some additional issues arising from sequence projection have to be considered. The ElemU-PrefixSpan method is used to expand each element-level probabilistic sequence in the database into its sequence-level representation.

V. CONCLUSION AND FUTURE WORK

In this paper, we formulate and study the problem of mining probabilistically frequent sequential patterns in uncertain databases. Our study is based on two uncertain sequence data models that are fundamental for many real-life applications involving uncertain sequence data. We propose two new U-PrefixSpan algorithms to mine probabilistically frequent sequential patterns from data that conform to our sequence-level and element-level uncertain sequence models. We also develop novel pruning rules and an early validating method to speed up pattern frequentness checking, which further improves the mining efficiency for uncertain sequence patterns generated both locally and from various application databases. In future work, the models for local dataset generation mentioned in sub-section ii) can be further enhanced to extend the functionality of our algorithm.

REFERENCES:
[1] C.K. Chui and B. Kao, "Mining Frequent Itemsets from Uncertain Data," Proc. 11th Pacific-Asia Conf. Advances in Knowledge Discovery and Data Mining, 2007.
[2] L. Wang, R. Cheng, and S.D. Lee, "Accelerating Probabilistic Frequent Itemset Mining: A Model-Based Approach," Proc. 19th ACM Int'l Conf. Information and Knowledge Management (CIKM), 2010.
[3] Q. Zhang and F. Li, "Finding Frequent Items in Probabilistic Data," Proc. ACM SIGMOD Int'l Conf. Management of Data, 2008.
[4] C. Aggarwal and J. Wang, "Frequent Pattern Mining with Uncertain Data," in SIGKDD, 2009.
[5] Q. Zhang and K. Yi, "Finding Frequent Items in Probabilistic Data," in SIGMOD, 2008.

Survey of Social Tagging with Assistance of Geo-tags and Personalization
Prasanna S. Wadekar
M.E. Computer-II,
Vidya Pratishthan's College of Engineering, Baramati
prasannawadekar90@gmail.com
Abstract - Social websites such as Flickr, Zooomr, and Picasa are photo-sharing websites that permit users to share their multimedia data over social networks. Flickr encourages sharing photos with tags, joining groups of interest, contacting other users with similar interests as friends, as well as expressing preferences on photos by tagging, commenting, sharing, and annotating. A user's social tagging includes metadata in the form of keywords that reflect the user's preference over those photos, and this can be utilized to mine the user's preferences. Social tags are used in various ways by many recommender systems to predict a searcher's preference on returned photos for personalized search. Geotagging of a photo is the process in which a photo is marked with the geographical identification of the place where it was taken; geotagging can help users find a wide variety of location-specific information. In personalized tag recommendation, tags that are relevant to the user's query are retrieved based upon the user's interest.
Keywords - Personalization, Geo-tags, Co-occurrence pairs, Subspace learning

INTRODUCTION
Social tagging is the process of assigning important words that are relevant to multimedia data. Humans can assign tags to photos, but it requires time. Tag recommendation encourages users to add more tags while bridging the semantic gap between human perception and the features of the media entity, which offers a feasible solution for Content-Based Image Retrieval (CBIR). Many tag recommendation strategies have worked on the connection between tags and photos. Users like to create photo albums with respect to the places they have visited; this task can be achieved by adding geo-tags to photos. Geo-tagging is the process of adding information to various media objects in the form of metadata such as longitude, latitude, city name, etc. The same tags can be recommended to visually similar photos of a user, but if the user's geo preference is considered then photos that are relevant to the location can be recommended. Social tagging describes multimedia data by its meaning, derived from its metadata, and by the syntax of that multimedia data, such as feature patterns. In traditional systems, tagging can be done in two ways.

The quality of tagging is reduced with human-based tag assignment. M. Wang, B. Ni et al. [1] proposed three techniques for tagging that improve manual and automatic tagging:
1) Tagging with data selection and organization: a manual process for tag selection from data.
2) Tag recommendation.
3) Tag processing: the process of refining tags or adding new tags.

Only about half of the tags offered by Flickr users are truly related to the photos. Second, there is a lack of an optimal ranking strategy. Take Flickr as an example: there are two ranking options for tag-based social image exploration, time-based ranking and interestingness-based ranking. The time-based ranking method ranks images based on the uploading time of each image, and the interestingness-based ranking method ranks images based on each image's interestingness in Flickr. These two methods do not take the visual content and tags of images into consideration, so they give irrelevant results. Therefore, tag-oriented image retrieval is used to provide highly efficient results.


LITERATURE SURVEY
A] Motivation for annotation in mobile and online media
M. Ames et al. [2], in "Why we tag: motivations for annotation in mobile and online media," describe Flickr, a popular web-based photo sharing system that is useful when organization and retrieval of photos is an important task. It allows a user to discover other users by sharing photos with other community members or groups. The authors examine what motivates users to assign tags. They describe features provided by Flickr such as the privacy setting, which allows a user to make a photo available to others as either public or private, and they explain how tags, titles, comments, and descriptions can be assigned through the interface provided by Flickr, as well as the tag-specific retrieval mechanism. They discuss motivations to tag with respect to function and sociality, and tabulate these motivations for tagging.
ZoneTag is a camera-phone application used to upload photos taken by phone. It can capture, annotate, store, and share photos from the phone. Tags are suggested based on contexts that are pre-fetched from the ZoneTag server. It provides photos with a privacy setting, i.e. public or private access, and it considers location, time, and context while suggesting tags.
Disadvantages:
1) Users organize their memory geographically by labelling photos with tags related to the places where those photos were taken.
2) Non-obvious tags may be suggested, which can be confusing to users.
3) Users are inclined to attach a tag even if it is irrelevant.
4) User preference is not considered.
B] Tag Recommendation by Concept Matching
A. Sun et al. [3], in "Social Image Tag Recommendation by Concept Matching," describe tag recommendation as a three-step process:
1) Tag relationship graph construction.
2) Concept detection.
3) Actual tag recommendation.
1) Tag relationship graph construction:
- First, a set of candidate tags is selected.
- The tag relationship graph (TRG) is constructed from tag co-occurrence pairs; the co-occurrence of a pair of tags denotes the number of images annotated by both tags.
- A central node is required to create the TRG. The tags that co-occur most with the central node are included as first-iteration tags of the central node; tags that are related to at least two first-iteration tags are then included in the TRG as second-iteration tags of the central node.

2) Concept detection: Removing the central node is the key step in detecting concepts.
- If the central node is removed, the relations between the central node and the first-iteration tags should remain as they are. Concepts are detected by solving a graph-cut problem.
3) Actual tag recommendation:
- Compute the recommendation score of each candidate tag with respect to the user-given tags of an image.
- Sort the candidate tags in descending order of their scores.
- Suggest the most frequently occurring tags to the Flickr user.
- Calculating the score with Tc and including cosine similarity leads to matching of tags.

Advantages:
1) It enables customized matching score computation.
2) It boosts the scalability and efficiency of the tag recommendation process.

Disadvantages:
1) Geo-specific information is not considered.


C] Tag recommendation based on collective knowledge
B. Sigurbjörnsson et al. [4], in "Flickr tag recommendation based on collective knowledge," start from a photo specified with user-defined tags. An ordered list of m candidate tags based on tag co-occurrence is derived; the list of candidate tags serves as input for tag aggregation and ranking, which produces a ranked list of n recommended tags.
It is a two-step process: 1) tag co-occurrence; 2) tag aggregation and promotion.
1)Tag co-occurrence:

Collective knowledge is generated from user provided tags.


Two methods are used to calculate co-occurrence between two tags. Symmetric measure And asymmetric measure

2) Tag aggregation and promotion: In tag aggregation lists are merged into unified ranking.
It has Two methods, 1) Voting. 2) Summing.
Vote: - A list of suggested tags is acquired by sorting contender tags on votes.
Sum: - It takes combination of all tag list and adds over co-occurrence values of tags.
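A minimal sketch of the two co-occurrence measures and the two aggregation strategies; the Jaccard-style symmetric measure and the conditional asymmetric measure are the standard formulations, but the exact normalisation and promotion weights used in [4] are not reproduced here, and the toy photo collection is an illustrative assumption:

```python
def symmetric(tag_i, tag_j, photos):
    """Jaccard-style co-occurrence: |i and j| / |i or j| over photo tag sets."""
    i = {p for p, tags in photos.items() if tag_i in tags}
    j = {p for p, tags in photos.items() if tag_j in tags}
    return len(i & j) / len(i | j) if i | j else 0.0

def asymmetric(tag_i, tag_j, photos):
    """P(tag_j | tag_i): how often tag_j appears among photos carrying tag_i."""
    i = {p for p, tags in photos.items() if tag_i in tags}
    ij = {p for p in i if tag_j in photos[p]}
    return len(ij) / len(i) if i else 0.0

def recommend(user_tags, photos, vocab, measure, top_m=5, strategy="sum"):
    """Aggregate per-tag candidate lists by voting or by summing co-occurrence."""
    scores, votes = {}, {}
    for u in user_tags:
        ranked = sorted(((c, measure(u, c, photos)) for c in vocab
                         if c not in user_tags), key=lambda x: -x[1])[:top_m]
        for c, s in ranked:
            scores[c] = scores.get(c, 0.0) + s      # summing
            votes[c] = votes.get(c, 0) + 1          # voting
    key = scores if strategy == "sum" else votes
    return sorted(key, key=key.get, reverse=True)

# Toy photo collection: photo id -> set of tags.
photos = {1: {"beach", "sea", "sand"}, 2: {"beach", "sunset"},
          3: {"sea", "boat"}, 4: {"beach", "sea", "sunset"}}
vocab = {"beach", "sea", "sand", "sunset", "boat"}
print(recommend({"beach"}, photos, vocab, asymmetric, strategy="sum"))
```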
Advantages: 1) It can handle the evolution of vocabulary.
2) It can recommend locations, objects and things.
Disadvantages: 1) User-provided tags are usually limited; one reason is that it is difficult, and often requires high mental focus, to think of many words describing image or video content in a short moment.
2) It ignores user preferences, i.e. it is not personalized.
3) There is no evaluation setup for user studies.
4) Being less interactive, its performance calculation is not accurate.
5) It requires crucial tuning parameters.
6) The system is expensive.
D] Georeferenced tag recommendation
A. Silva and B. Martins [5], in "Tag recommendation for georeferenced photos," annotate georeferenced photos with descriptive tags. The method exploits the redundancy over the huge number of annotations available at online sources for other georeferenced photos. Previous methods used heuristic approaches for integrating geospatial contextual information; in this method, supervised learning to rank is used to combine different estimators of tag relevance. Various estimators are used, such as adjacent images, different users, the number of visits to the website made by different users for a particular photo, and the geospatial distance between images and users. Ranking techniques such as RankBoost, AdaRank, Coordinate, CombSUM and CombMNZ are employed.
Disadvantages: 1) It does not consider the concept of the image to improve visual search.
2) It ignores user preferences.
E] Personalized tag recommendation.
N. Garg and I. Weber [6], in "Personalized, interactive tag recommendation for Flickr," proposed a personalized tag recommendation idea that exploits the tagging history in a user's profile. The system suggests tags dynamically based on the user's previous tagging history; the user may select from the suggested list or simply ignore it and tag the photo manually. The approach is personalized in that it learns user behaviour, e.g. one user may tag "apple" as a fruit while another tags "apple" as an electronic device such as a laptop, phone or tablet. It uses several algorithms, such as Naive Bayes local, Tf-Idf global, and "the better of two worlds".
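A minimal sketch of this local, history-based idea; the scoring below is a plain frequency and co-occurrence heuristic written for illustration only, not the exact Naive Bayes local or Tf-Idf global scheme of [6], and the example tags are made up:

```python
from collections import Counter, defaultdict

class PersonalTagSuggester:
    """Suggest tags from a single user's tagging history (illustrative only)."""

    def __init__(self):
        self.tag_counts = Counter()              # how often the user used a tag
        self.cooccur = defaultdict(Counter)      # tag -> tags used together

    def observe(self, tags):
        """Update the model with the tag set of one previously annotated photo."""
        for t in tags:
            self.tag_counts[t] += 1
            for u in tags:
                if u != t:
                    self.cooccur[t][u] += 1

    def suggest(self, partial_tags, k=3):
        """Rank candidate tags given the tags already typed for the new photo."""
        scores = Counter()
        for t, n in self.tag_counts.items():
            if t in partial_tags:
                continue
            scores[t] = n  # prior: the user's overall preference for the tag
            for p in partial_tags:
                scores[t] += 2 * self.cooccur[p][t]  # boost tags used together
        return [t for t, _ in scores.most_common(k)]

s = PersonalTagSuggester()
s.observe({"apple", "fruit", "market"})
s.observe({"apple", "laptop", "desk"})
s.observe({"apple", "fruit", "juice"})
print(s.suggest({"fruit"}))   # the user's history pulls food-related tags first
```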
Advantages: 1) It is a clean methodology that leads to conservative performance estimates.
2) It shows how classical classification algorithms can be adapted to this problem.
3) It introduces a new cost measure, which captures the effort of the whole tagging process.
4) It clearly identifies when purely local schemes can or cannot be improved by a global scheme.
5) It is less computationally complex than the collective knowledge approach.
6) It recommends tags dynamically.
Disadvantages: 1) It concentrates only on the user's tags and not on contextual metadata or contents.
2) It does not consider geo-specific information.
3) It uses groups of images, so it does not use a classifier.
F] Tensor factorization and tag clustering for item recommendation
D. Rafailidis et al. [7], in "The TFC model: Tensor factorization and tag clustering for item recommendation in social tagging systems," proposed a method that can handle very sparse data. It utilizes low-order polynomials with the help of <user, image, tag, weight> quadruplets.
Steps of this approach:
1) Tag propagation by exploiting content. It uses a relevance feedback mechanism to propagate tags between similar items.
2) Tag clustering is performed to find topics on the social network and the interests of users. It has three variants: tripartite, adapted K-means, and innovative Tf-Idf clustering.
3) TF-based HOSVD: the cubic complexity of HOSVD is reduced by shrinking the number of tags to the number of tag clusters. Through this step, latent associations between users, tags, topics and images are revealed. It requires tensor modelling of the dataset and HOSVD to produce the reconstructed tensor. The steps are (a sketch follows this list):
- Initial construction of the 3-order tensor.
- Matrix unfolding of tensor A to create three new matrices.
- Application of SVD to each unfolded matrix.
- Construction of the core tensor S.
- Reconstruction of the tensor.
- Generation of item recommendations.
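A minimal numpy sketch of the HOSVD steps listed above; the tensor shape, the truncation ranks and the toy data are illustrative assumptions, and the TFC-specific tag clustering that precedes this step is not shown:

```python
import numpy as np

def hosvd(A, ranks):
    """Truncated higher-order SVD of a 3-order tensor A (users x items x tag clusters)."""
    U = []
    for mode, r in enumerate(ranks):
        # Matrix unfolding of A along `mode`, then SVD of the unfolded matrix.
        unfolded = np.moveaxis(A, mode, 0).reshape(A.shape[mode], -1)
        u, _, _ = np.linalg.svd(unfolded, full_matrices=False)
        U.append(u[:, :r])
    # Core tensor S: mode products of A with the transposed factor matrices.
    S = np.einsum('ijk,ia,jb,kc->abc', A, U[0], U[1], U[2])
    # Reconstruction reveals latent user-item-topic associations.
    A_hat = np.einsum('abc,ia,jb,kc->ijk', S, U[0], U[1], U[2])
    return A_hat

# Toy <user, item, tag-cluster> tensor: 1 where the user tagged the item with
# a tag from that cluster, 0 otherwise.
A = np.zeros((3, 3, 2))
A[0, 0, 0] = A[0, 1, 0] = A[1, 1, 0] = A[2, 2, 1] = 1.0
A_hat = hosvd(A, ranks=(2, 2, 2))
user, tag_cluster = 1, 0
print(np.argsort(-A_hat[user, :, tag_cluster]))  # items ranked for recommendation
```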

Advantages: 1) It handles the tag relevance learning problem, the cold-start problem and the sparsity problem.
2) It is a collaborative approach that improves accuracy.
3) It uses a relevance feedback mechanism.
Disadvantages: 1) It ignores geo-specific information.
2) It has high space and time complexity.
G] Subspace learning with Matrix factorization
J. Liu et al. [8], in "Personalized Geo-Specific Tag Recommendation for Photos on Social Websites," generate a latent subspace that embeds both the visual space and the textual space. Tags are recommended from this embedded space, and content-based image retrieval techniques are used to find and retrieve similar images, which are then recommended to the user. Tag recommendation exploits both visual and textual similarities. The method recommends tags and photos to users based on their interests, profile history, image features and geo-specific interest.
COMPARISON OF ALL METHODS:

Authors | Title (work and its type) | Contents | Tags | User Preference | Geo Preference
Morgan Ames et al. [2] | Motivation for annotation in mobile and online media | Yes | Yes | No | Yes
Aixin Sun et al. [3] | Tag recommendation by concept matching | Yes | Yes | No | No
Sigurbjornsson et al. [4] | Tag recommendation based on collective knowledge | No | Yes | No | No
Ana Silva et al. [5] | Georeferenced tag recommendation | Yes | No | No | Yes
N. Garg and I. Weber [6] | Personalized tag recommendation | No | Yes | Yes | No
D. Rafailidis et al. [7] | Tensor factorization and tag clustering for item recommendation | Yes | Yes | Yes | No

CONCLUSION
This survey has covered social tagging and its types, as well as various tagging techniques and their pros and cons over each other. It concludes that a subspace learning approach can solve the tag recommendation problem using contents, tags, user preferences and geo-preferences.

REFERENCES:

[1] M. Wang, B. Ni, X. Hua and T. Chua, "Assistive Tagging: A Survey of Multimedia Tagging With Human-Computer Joint Exploration," ACM Computing Surveys, Vol. 44, No. 4, Article 25, 2012.
[2] M. Ames and M. Naaman, "Why we tag: motivations for annotation in mobile and online media," in Proc. ACM CHI, 2007.
[3] A. Sun, S. Bhowmick and J. Chong, "Social Image Tag Recommendation by Concept Matching," in Proc. ACM Multimedia, 2011.
[4] B. Sigurbjörnsson and R. van Zwol, "Flickr tag recommendation based on collective knowledge," in Proc. ACM WWW, 2008.
[5] A. Silva and B. Martins, "Tag recommendation for georeferenced photos," in Proc. ACM SIGSPATIAL Int. Workshop on Location-Based Social Networks, 2011.
[6] N. Garg and I. Weber, "Personalized, interactive tag recommendation for Flickr," in Proc. ACM Recommender Systems, 2008.
[7] D. Rafailidis and P. Daras, "The TFC model: Tensor factorization and tag clustering for item recommendation in social tagging systems," IEEE Trans. Syst., Man, Cybern.: Syst., vol. 43, no. 3, pp. 673-688, 2013.
[8] J. Liu, Z. Li, J. Tang, Y. Jiang, and H. Lu, "Personalized Geo-Specific Tag Recommendation for Photos on Social Websites," IEEE Trans. Multimedia, vol. 16, no. 3, Apr. 2014.

Document-Document similarity matrix and Naive-Bayes classification to web information retrieval

Dr. Poonam Yadav
D.A.V College of Engineering & Technology, India
poonam.y2002@gmail.com

Abstract - Due to the continuous growth of web databases, automatic identification of the category of newly published web documents is very important nowadays. Accordingly, a variety of algorithms has been developed in the literature for automatic categorization of web documents to ease their retrieval. In this paper, a Document-Document similarity matrix and Naive-Bayes classification are combined to perform web information retrieval. At first, web documents are pre-processed to extract features, which are then utilized to find the document-document similarity matrix, where every element is the similarity between two web documents computed using a semantic entropy measure. Subsequently, the D-D matrix is used to create a training table which contains the frequency of every attribute and its probability. In the testing phase, the relevant category is found for the input web document using the trained classification model, and the relevant categorized documents, already stored in the web database, are retrieved semantically. The experimentation is performed using 100 web documents of two different categories, and the evaluation is done using sensitivity, specificity and accuracy.

Keywords - Information retrieval, Naive-Bayes classification, semantic retrieval, web document categorization, D-D matrix, accuracy, specificity.

1. INTRODUCTION
With the ever-growing web database, relevant information retrieval faces major difficulties most of the time because of the extensive availability of information and the lack of effective approaches. To retrieve information effectively and efficiently, automatic categorization of web documents is important [1] for an information retrieval system. Since the information stored in the web database grows continuously, when new information is published, retrieving it under its most relevant category is also important for the user [2-6]. In order to accomplish this task, automatic identification of the category of a new web document is definitely needed nowadays. With this aim, classification-based algorithms [11-13] have been proposed in the literature for automatic categorization of web documents; for example, K-NN [7-10], the naive Bayes classifier and AdaBoost are benchmark algorithms utilized by various researchers for classification.
In this paper, web information retrieval for an input query document is done using a document-to-document similarity matrix and a naive Bayes classifier. At first, the input web documents are converted to a feature space using a set of pre-processing techniques. Then, the D-D matrix is constructed using a semantic entropy measure which combines multiple considerations mathematically. The similarity space is given to the naive Bayes classifier [15] to construct the training table. Finally, the relevant category of the input web document is found using the testing phase of the naive Bayes classifier, and the relevant category readily yields the relevant web documents as output to the user. The paper is organized as follows: Section 2 presents the K-NN classifier, Section 3 presents the proposed algorithm for information retrieval, Section 4 presents the experimental results and, finally, the conclusion is given in Section 5.

2. K-NN CLASSIFICATION FOR WEB INFORMATION RETRIEVAL


K-NN [7-10] is one of the standard algorithms for classification, which is the process of identifying a relevant group or class for any input data. K-NN classification is done in three important steps. In the first step, the distance between a query data item and all of the training data available in the training database is computed. In the second step, the k most similar data items are identified through the minimum distances. Lastly, the class label of the query data is determined from the k similar items through majority voting.
Drawbacks: When adopting the K-NN classification algorithm for web document categorization, two important challenges should be handled. The first challenge is how to identify the similarity between the documents and the input document. The second challenge is how to avoid similarity matching with all of the training documents, because matching the similarity with the entire web database
is very tough. In order to solve these two challenges, a semantic entropy measure (SE) is used instead of the Euclidean distance and a naive Bayes classifier is used instead of K-NN classification.

3. DOCUMENT-DOCUMENT SIMILARITY MATRIX AND NAIVE-BAYES CLASSIFICATION TO WEB INFORMATION RETRIEVAL
This section presents the proposed web information retrieval algorithm using the document-document similarity matrix and the naive Bayes classifier. The block diagram of the proposed method is given in Figure 1. The method is explained in two different phases.

[Figure 1. Block diagram of the proposed algorithm. Training: web pages pass through pre-processing (tag removal, image removal, stop-word removal, frequency computation, top-N word extraction) to form a document-word feature matrix; similarity finding yields the D-D (document-to-document) matrix, from which the training table is constructed. Testing: a query web document is pre-processed into a feature matrix, its document-to-document similarities are computed, and the posterior probability is evaluated.]

In the first phase, web documents are pre-processed to extract the features which are then utilized to find the document-document similarity matrix. The D-D matrix is used to create a training table which contains the frequency of every attribute and its probability. In the testing phase, the relevant category is found using the classification model to obtain the most relevant categorized documents from the web database.

3.1 Preprocessing
The input web database W, which has m web documents, is taken as input along with the relevant category of each document. For every web document Di, pre-processing is applied to extract the relevant keywords. In order to find the relevant words in a web document, all the HTML tags are identified and removed from the document. Once tags and images are removed, stop words such as "can", "could", "is", "was", "may" are matched against a pre-defined set so that only meaningful words are retained. After obtaining the meaningful words, the root form of every keyword is obtained so that all derived forms of a word are reduced to their original form. For every word identified, its frequency within the web document is computed, and the top-N words are chosen as the vector representing the web page document.
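A minimal sketch of this pre-processing pipeline in plain Python; the stop-word list, the naive suffix-stripping stemmer and the value of N are illustrative simplifications, as the paper does not prescribe specific tools:

```python
import re
from collections import Counter

STOP_WORDS = {"can", "could", "is", "was", "may", "the", "a", "an", "of", "and"}

def stem(word):
    # Very rough root-form reduction, for illustration only.
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(html, top_n=10):
    """Return the top-N (word, frequency) pairs of a web document."""
    text = re.sub(r"<[^>]+>", " ", html)           # remove HTML tags and images
    words = re.findall(r"[a-z]+", text.lower())    # tokenize
    words = [stem(w) for w in words if w not in STOP_WORDS]
    return Counter(words).most_common(top_n)

doc = "<html><body><img src='x.png'/>Cricket players played a test match. " \
      "The match was played in the stadium.</body></html>"
print(preprocess(doc, top_n=5))
```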

3.2 Document-Document similarity matrix computation


The document vectors obtained from the previous step are then given to the D-D matrix computation process. This matrix is generated by finding the similarity among all the web documents. The document-to-document similarity matrix is denoted as the D-D matrix and is of size m x m. Every element of the matrix is the similarity between two web documents, each represented by its top-N extracted keywords. The similarity is computed with a measure called the semantic entropy measure (SE) [14]. Let d1 and d2 be two documents, where d1 has k1 keywords and d2 has k2 keywords. The unique keywords (m) are taken, and the frequencies of the keywords of d1 are represented in a vector d1j; similarly, the frequencies of the unique keywords belonging to d2 are represented in a vector d2j. Here $f_{D_1}$ is the frequency of a keyword in D1, $f_{D_2}$ is the frequency of the keyword in D2, and $\bar{f}_{D_1}$ and $\bar{f}_{D_2}$ are the frequencies of the keyword in the corresponding synonym sets, where the synonym sets are computed by giving the keywords of a document to the WordNet ontology. Based on these quantities, the proposed SE-measure is formulated as
$SE_{measure} = -\big[\,\Pr(D_1,D_2)\log\Pr(D_1,D_2) + \Pr(\bar{D}_1,\bar{D}_2)\log\Pr(\bar{D}_1,\bar{D}_2) + \Pr(D_1,\bar{D}_2)\log\Pr(D_1,\bar{D}_2) + \Pr(\bar{D}_1,D_2)\log\Pr(\bar{D}_1,D_2)\,\big]$   (1)

The values of $\Pr(D_1,D_2)$, $\Pr(\bar{D}_1,\bar{D}_2)$, $\Pr(D_1,\bar{D}_2)$ and $\Pr(\bar{D}_1,D_2)$ are defined as follows:

$\Pr(D_1,D_2) = \dfrac{1}{m}\sum_{i=1}^{m}\dfrac{f_{D_1}\,f_{D_2}}{\big[\max(f_{D_1},f_{D_2})\big]^{2}}$   (2)

$\Pr(\bar{D}_1,\bar{D}_2) = \dfrac{1}{m}\sum_{i=1}^{m}\dfrac{\bar{f}_{D_1}\,\bar{f}_{D_2}}{\big[\max(\bar{f}_{D_1},\bar{f}_{D_2})\big]^{2}}$   (3)

$\Pr(D_1,\bar{D}_2) = \dfrac{1}{m}\sum_{i=1}^{m}\dfrac{f_{D_1}}{f_{D_1}+f_{D_2}}$   (4)

$\Pr(\bar{D}_1,D_2) = \dfrac{1}{m}\sum_{i=1}^{m}\dfrac{f_{D_2}}{f_{D_1}+f_{D_2}}$   (5)
3.3 Naive-Bayes classification to web information retrieval


The D-D matrix obtained from the training web documents is given to the training process to construct the training table. The training table is utilized to find the category of a test web document. The relevant category identified by the classifier model is used to retrieve the relevant categorized documents already stored in the web database.
Training: Assume that the D-D matrix of the training data contains
m attributes, d. Here, the D-D matrix is segmented into different categories based on the ground truth. After that, for every category of web documents, the mean and variance of every attribute are computed. Let $\mu_c$ and $\sigma_c^2$ be the mean and variance of the D-D matrix values belonging to an attribute of class c. Subsequently, the probability of an attribute value given a class, $P(d_{D\text{-}D} \mid c)$, can be calculated using the Gaussian distribution formula with mean $\mu_c$ and variance $\sigma_c^2$:

$P(d_{D\text{-}D} \mid c) = \dfrac{1}{\sqrt{2\pi\sigma_c^{2}}}\exp\!\left(-\dfrac{(d-\mu_c)^{2}}{2\sigma_c^{2}}\right)$   (6)

Based on the above formula, a training table of size C x m is constructed. Every element in this matrix is the probability value of the category label with respect to the attribute.
Testing: In the testing phase, the input web document $w_T$ is taken and its m attribute values are found by computing its similarity with the training documents. Once the m attribute values are found, the category of the test web document is calculated based on the objective

$\mathrm{classify}(w_T) = \arg\max_{c}\; p(C = c)\prod_{i=1}^{m} p(D_i = d_i \mid C = c)$   (7)

$\mathrm{posterior}(c_i) = \dfrac{\mathrm{prob}(c_i)\,\mathrm{prob}(d \mid c_i)}{\mathrm{evidence}}$   (8)

$\mathrm{evidence} = \sum_{i}\mathrm{prob}(c_i)\,\mathrm{prob}(d \mid c_i)$   (9)

Semantic Retrieval: The posterior probability of every class is computed for the input web document, and the class with the greater posterior probability is given as the final category. Based on the category found, the categorized documents stored in the database are given as the final output to the user.
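A minimal sketch of the training-table construction and the testing step described above, assuming the D-D similarity attributes are already computed; the toy data, the uniform class prior and the variance floor are illustrative assumptions:

```python
import math
from collections import defaultdict

def train_table(dd_rows, labels):
    """Per-class mean/variance of every D-D attribute (the 'training table')."""
    groups = defaultdict(list)
    for row, c in zip(dd_rows, labels):
        groups[c].append(row)
    table = {}
    for c, rows in groups.items():
        m = len(rows[0])
        means = [sum(r[j] for r in rows) / len(rows) for j in range(m)]
        varis = [max(sum((r[j] - means[j]) ** 2 for r in rows) / len(rows), 1e-6)
                 for j in range(m)]
        table[c] = (means, varis)
    return table

def gaussian(d, mu, var):
    return math.exp(-(d - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def classify(query_row, table):
    """Pick the class maximising prior * product of Gaussian likelihoods, Eq. (7)."""
    scores = {}
    for c, (means, varis) in table.items():
        prior = 1.0 / len(table)                      # uniform prior assumption
        like = prior
        for d, mu, var in zip(query_row, means, varis):
            like *= gaussian(d, mu, var)
        scores[c] = like
    evidence = sum(scores.values()) or 1.0
    posterior = {c: s / evidence for c, s in scores.items()}  # Eqs. (8)-(9)
    return max(posterior, key=posterior.get), posterior

# Toy D-D similarity rows (one row of similarities per training document).
dd_rows = [[0.9, 0.2], [0.8, 0.3], [0.1, 0.7], [0.2, 0.9]]
labels = ["sports", "sports", "politics", "politics"]
table = train_table(dd_rows, labels)
print(classify([0.85, 0.25], table))
```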

4. RESULTS AND DISCUSSION


This section presents the experimental results and discussion of the proposed D-D matrix-based naive Bayes classifier.
4.1 Evaluation with sensitivity
The proposed D-D matrix-based naive Bayes classifier algorithm is implemented with 100 web documents in two groups, one related to sports articles and the other to politics-related articles. Every group contains 50 documents, and these are given as input to the algorithm. For training, 80% of the documents from each group are taken to build the training table, and the remaining 20% of the documents from every group are used as the testing dataset. The obtained classification results are evaluated with sensitivity:

Sensitivity = TP / (TP + FN)   (10)

where TP stands for True Positive, TN stands for True Negative, FN stands for False Negative and FP stands for False Positive.

The performance plot of the proposed D-D matrix-based naive Bayes classifier algorithm and the D-D matrix-based k-NN algorithm is given in Figure 2. From the figure, we can easily see that the proposed algorithm provides good sensitivity: it reaches about 85% sensitivity, whereas the existing algorithm reaches 82%.


[Figure 2. Sensitivity plot comparing the proposed method (D-D matrix with naive Bayes) and the existing method (D-D matrix with k-NN); y-axis: sensitivity, x-axis: number of rounds.]

4.2 Evaluation with specificity
The proposed D-D matrix-based naive Bayes classifier algorithm is implemented with 100 web documents, and the obtained classification results are evaluated with specificity:

Specificity = TN / (TN + FP)   (11)

where TP stands for True Positive, TN stands for True Negative, FN stands for False Negative and FP stands for False Positive.
The performance plot of the proposed D-D matrix-based naive Bayes classifier algorithm and the D-D matrix-based k-NN algorithm is given in Figure 3. From the figure, we can easily see that the proposed algorithm provides good specificity: it reaches about 76% specificity, whereas the existing algorithm reaches 70%.

[Figure 3. Specificity plot comparing the proposed method (D-D matrix with naive Bayes) and the existing method (D-D matrix with k-NN); y-axis: specificity, x-axis: number of rounds.]



4.3 Evaluation with accuracy
The proposed D-D matrix-based naive Bayes classifier algorithm is implemented with 100 web documents in two groups, one related to sports articles and the other to politics-related articles. The obtained classification results are evaluated with accuracy:

Accuracy = (TN + TP) / (TN + TP + FN + FP)   (12)

where TP stands for True Positive, TN stands for True Negative, FN stands for False Negative and FP stands for False Positive.
The performance plot of the proposed D-D matrix-based naive Bayes classifier algorithm and the D-D matrix-based k-NN algorithm is given in Figure 4. From the figure, we can easily see that the proposed algorithm provides good accuracy: it reaches about 76% accuracy, whereas the existing algorithm reaches 73%.
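A minimal sketch of the three evaluation measures (10)-(12), computed from the predicted and true labels of the held-out 20% test split; the label names and the toy predictions are illustrative:

```python
def evaluate(y_true, y_pred, positive="sports"):
    """Sensitivity, specificity and accuracy from Eqs. (10)-(12)."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tn + tp) / (tn + tp + fn + fp)
    return sensitivity, specificity, accuracy

y_true = ["sports"] * 10 + ["politics"] * 10
y_pred = ["sports"] * 9 + ["politics"] * 11
print(evaluate(y_true, y_pred))  # (0.9, 1.0, 0.95)
```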

[Figure 4. Accuracy plot comparing the proposed method (D-D matrix with naive Bayes) and the existing method (D-D matrix with k-NN); y-axis: accuracy, x-axis: number of rounds.]

5. CONCLUSION
In this paper, a Document-Document similarity matrix and Naive-Bayes classification were combined to perform web information retrieval. The semantic entropy measure was used to construct the D-D matrix after pre-processing and feature construction. Then, the Naive-Bayes classifier was utilized to find the relevant category and, subsequently, the required information for the user. The proposed D-D matrix-based naive Bayes classifier algorithm was implemented with 100 web documents in two groups, sports articles and politics-related articles. The performance of the proposed algorithm was analyzed with sensitivity, specificity and accuracy. From the experimental evaluation, the finding is that the proposed algorithm reached about 85% sensitivity, compared with 82% for the existing algorithm. The proposed algorithm also reached about 76% specificity and 76% accuracy, outperforming the existing algorithm.

REFERENCES:
[1] Ming Chen ; Hofestadt, R., "Web-based information retrieval system for the prediction of metabolic pathways", IEEE
Transactions on NanoBioscience, Vol. 3, NO. 3, PP. 192 - 199, 2004.
[2] Killoran, J.B., "How to Use Search Engine Optimization Techniques to Increase Website Visibility", IEEE Transactions on
Professional Communication, vol. 56, no. 1, pp. 50-66, 2013.
1063
www.ijergs.org

International Journal of Engineering Research and General Science Volume 2, Issue 6, October-November, 2014
ISSN 2091-2730

[3] Böhm, T., Klas, C.-P. and Hemmje, M., "ezDL: Collaborative Information Seeking and Retrieval in a Heterogeneous Environment", Computer, IEEE, vol. 47, no. 3, pp. 32-37, 2014.
[4] Sumiya, K., Kitayama, D. ; Chandrasiri, N.P., "Inferred Information Retrieval with User Operations on Digital Maps", IEEE
Internet Computing, vol. 18, no. 4, pp. 70-73, 2014.
[5] Xiaogang Han, Wei Wei ; Chunyan Miao ; Jian-Ping Mei ; Hengjie Song, "Context-Aware Personal Information Retrieval From
Multiple Social Networks", Computational Intelligence Magazine, IEEE, vol. 9, no. 2, 2014.
[6] Junnila, V., Laihonen, T., "Codes for Information Retrieval With Small Uncertainty", IEEE Transactions on Information Theory,
vol. 60, no. 2, pp. 976-985, 2014.
[7] Jinn-Min Yang, Pao-Ta Yu and Bor-Chen Kuo, "A Nonparametric Feature Extraction and Its Application to Nearest Neighbor Classification for Hyperspectral Image Data", IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 3, pp. 1279-1293, 2010.
[8] Zhao, S. Rui, C.; Zhang, Y., "MICkNN: multi-instance covering kNN algorithm", Tsinghua Science and Technology , IEEE, vol.
18, no. 4, 2013.
[9] Li Ma, Crawford, M.M. ; Jinwen Tian, "Local Manifold Learning-Based k -Nearest-Neighbor for Hyperspectral Image
Classification", IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 11, pp. 4099-4109, 2010.
[10] Aslam, Muhammad Waqar, Zhu, Zhechen ; Nandi, Asoke Kumar, "Automatic Modulation Classification Using Combination of
Genetic Programming and KNN", IEEE Transactions on Wireless Communications, vol. 11, no. 8, pp. 2742-2750, 2012.
[11] Khabbaz, M. Kianmehr, K. ; Alhajj, R., "Employing Structural and Textual Feature Extraction for Semistructured Document
Classification", IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, Vol. 42, no. 6, pp. 1566
- 1578, 2012.
[12] Xuan-Hieu Phan ; Sendai, Japan ; Cam-Tu Nguyen ; Dieu-Thu Le ; Le-Minh Nguyen, "A Hidden Topic-Based Framework
toward Building Applications with Short Web Documents", IEEE Transactions on Knowledge and Data Engineering, Vol. 23,
no. 7, pp. 961 - 976, 2011.
[13] Yajing Zhao ; Jing Dong ; Tu Peng, "Ontology Classification for Semantic-Web-Based Software Engineering", IEEE
Transactions onServices Computing, Vol. 2, no. 4, pp. 303-317, 2009.
[14] Poonam yadav, SE-K-NN classification algorithm for semantic information retrieval.
[15] Sang-Bum Kim ; Kyoung-Soo Han ; Hae-Chang Rim ; Sung Hyon Myaeng, "Some Effective Techniques for Naive Bayes Text
Classification", IEEE Transactions on Knowledge and Data Engineering, vol. 18, no. 11, pp. 1457 - 1466, 2006

Utilization of Six Sigma (DMAIC) Approach for Reducing Casting Defects

1Virender Verma, 2Amit Sharma, 3Deepak Juneja
Geeta Engineering College, Panipat (Naultha), virenkumar999@gmail.com, contact no. 9034409997

Abstract - The DMAIC approach is a business strategy used to improve business profitability and the efficiency of all operations to meet customer needs and expectations. In the present research work, an attempt has been made to apply the DMAIC (Define, Measure, Analyze, Improve, Control) approach. The emphasis was laid on reduction of the defects (blow holes, misrun, slag inclusion, rough surface) occurring in sand castings by controlling the parameters with the DMAIC technique. The results achieved show that the rejection due to sand casting defects has been reduced from 6.98% to 3.10%, which saved a cost of about Rs. 2.35 lakh.

Keywords - Six Sigma; DMAIC; Chaff Machine; Casting Industry; Measure Phase; Pie Chart; Ishikawa; Improve Phase; Cost Benefit.

1. INTRODUCTION
In today's highly competitive scenario, markets are becoming global and economic conditions are changing fast. Customers are more quality conscious and demand high quality products at competitive prices, with product variety and reduced lead time. DMAIC is a data-driven quality strategy used to improve processes. It is an integral part of a Six Sigma initiative, but in general it can be implemented as a standalone quality improvement procedure or as part of other process improvement initiatives such as lean.
The DMAIC technique is an overall strategy to accelerate improvements in processes, products and services. It is a project-driven management approach to improve the organization's products, services and processes by continually reducing defects in the organization. It is a powerful improvement business strategy that enables companies to use simple and statistical methods for achieving and sustaining operational excellence. When improving a current process, if the problem is complex or the risks are high, DMAIC should be the go-to method: its discipline discourages a team from skipping crucial steps and increases the chances of a successful project, making DMAIC a process most projects should follow.
If the risks are low and there is an obvious solution, some of the DMAIC steps could be skipped, but only if:
1. Trustworthy data show this is the best solution for your problem.
2. Possible unintended outcomes have been identified and mitigation plans have been developed.
3. There is buy-in from the process owner.
The DMAIC approach differs from other quality programs in its top-down drive and in its rigorous methodology that demands detailed analysis and fact-based decisions. It is a rigorous, data-driven method for dealing with defects, waste and quality problems in manufacturing, services and other business activities. This approach is an upcoming quality improvement process and is proving to be a powerful tool for solving complex problems; it would not work well without full commitment from upper management. It is a scientific method to improve any aspect of a business, organization or process. DMAIC is a methodology to:
1. Identify improvement opportunities.
2. Define and solve problems.
3. Establish measures to sustain the improvement.
DMAIC is both a philosophy and a methodology that improves quality by analyzing data to find the root cause of quality problems and to implement controls. DMAIC is implemented to improve manufacturing and business processes such as product design and supply chain management. It is a business improvement strategy used to improve profitability, to drive out waste in business processes and to improve the efficiency of all operations that meet or exceed customers' needs and expectations. DMAIC is a customer-focused program where cross-functional teams work on projects aimed at improving customer satisfaction.

1.2 KEY PLAYERS OF DMAIC METHODOLOGY


Key players are the persons in the industry who play an important role in the deployment. Their duties and assignments are discussed below.
Champion: He is the business leader responsible for overall deployment. The champion ensures that process owner support is there during all phases. Champions learn DMAIC philosophies and deployment strategies, which include selecting high-impact projects and choosing and managing the right people to become master black belts. The champion helps transfer project ownership from the black belt to the manager who owns the process upon completion of corrective actions.
Black Belt: The quality leader acts as the team leader in a DMAIC project. He is responsible for training and deployment, is the day-to-day problem solver, and assists black belts in applying the method correctly in unusual situations. In an organization, normally a manager acts as a black belt.
Green Belt: These employees in the organization execute DMAIC as a part of their overall job while working with the black belt. They gain experience in the practical application of the DMAIC methodology and tools. They work as team members in black belt projects. Normally shift supervisors act as green belts.

1.3 THE FIVE STEPS TO DMAIC APPROACH


The DMAIC methodology has a core process: Define-Measure-Analyze-Improve-Control. The five steps of the DMAIC approach are shown in Fig. 1.1.

Fig. 1.1 - Five Steps to DMAIC approach

1. Define: The definition of the problem is the first and most important step of any DMAIC project, because a good understanding of the problem makes the job much easier. A poor definition may mislead people into trying to achieve goals which are not required or into making the problem more complex. Thus, we can say that the definition of the problem forms the backbone of any DMAIC project.
2. Measure: In this phase the extent of the problem is quantified. For example, a problem may affect all departments of a business in the form of customer service, because of an inability to answer questions from customers on different products or other issues; many customers may stop returning if customer service continues to suffer, and this will definitely affect the financial position of the business. From the information collected, the extent of the problem for the phone operators was an excessive workload and a presumed higher stress level.


Fig. 1.2 Measurement Variation of System


3. Analyse: The analyse phase examines the data collected in order to generate a prioritized list of sources of variation. It is the key component of any defect-reduction program. This is the stage at which new goals are set and route maps are created for closing the gap between current and target performance levels. Conventional quality techniques like brainstorming, root cause analysis, cause-and-effect diagrams etc. may be used for carrying out the analysis.
4. Improve: Improve the process to remove the causes of defects and address the specific problems identified during analysis:
1. Use of brainstorming and action workouts.
2. Extracting the vital few factors through screening.
3. Understanding the correlation of the vital few factors.
4. Process optimization and confirmation experiments.
5. Control: Control the process to make sure that defects do not recur, i.e. remove the root cause of the problem. The control phase is preventive in nature. All the specific problems identified in the analysis phase were tackled in the control phase. It defines control plans specifying process monitoring and corrective action. This phase provides systematic re-allocation of resources to ensure the process continues on a new path of optimization. It also ensures that new process conditions are documented and monitored.

2. PROBLEM FORMULATION
In all processes, the smallest variation in the quality of raw material, production conditions, operator behaviour and other factors can result in a cumulative variation (defects) in the quality of the finished product. The DMAIC approach aims to eliminate these variations and to establish practices resulting in a consistently high quality product. Therefore, a crucial part of DMAIC work is to define and measure variation with the intent of discovering its causes and to develop efficient operational means to control and reduce the variation. The expected outcomes of DMAIC efforts are faster and more robust product development, more efficient and capable manufacturing processes, and more confident overall business performance.
The present study was done at SHREE BALAJI CASTING, SAMALKHA, PANIPAT on the application of the DMAIC methodology and the selection of tools and techniques for problem solving, because of the foundry's high rejection rate. The main components made at SHREE BALAJI CASTING are the upper gear, lower gear, key, roller sporting arm and worm gear of a chaff machine.

2.1 OBJECTIVES OF THE STUDY


The objectives of the DMAIC approach implementation at SHREE BALAJI CASTING, SAMALKHA, PANIPAT are as follows:

1. To identify the root factors causing casting defects.
2. To improve the quality by reducing the casting defects.
The present work deals with the elimination of casting defects in a foundry industry. The DMAIC approach is justified when the root cause of a defect is not traceable. In the present work, an attempt has been made to reduce the defects in castings in a foundry shop with the application of the DMAIC approach. In the case study, the sand casting process has been divided into three stages:
(A) First stage: sand preparation, mould making, core making.
(B) Second stage: melting and pouring of metal and maintaining accurate chemical composition.
(C) Third stage: fettling, cleaning and machining operations of the casting.
The step-wise application of the DMAIC technique is discussed below.

3. DEFINE PHASE
The present case study deals with the reduction of rejection due to casting defects in a foundry industry. The company makes cast iron castings of chaff-machine components such as the upper gear, lower gear, worm gear, key and roller sporting arm on a large scale, and faces rejection in the form of blow holes, misrun, rough surface and slag inclusions. Five important parts were chosen for complete analysis.
3.1 UPPER GEAR
The upper gear of the chaff machine connects through the rolling shaft and is rotated by the lower gear. It shows rejection in the form of blow holes, misrun, rough surface and slag inclusions.
3.2 LOWER GEAR
The lower gear of the chaff machine connects through the rolling shaft and rotates the worm gear. It shows rejection in the form of blow holes, misrun, rough surface and slag inclusions.
3.3 WORM GEAR
The worm gear of the chaff machine connects through the flywheel shaft. It shows rejection in the form of blow holes, misrun, rough surface and slag inclusions.
3.4 ROLLER GEAR SPORTING ARM
The roller gear sporting arm of the chaff machine provides support to the rolling gear. It shows rejection in the form of blow holes, misrun, rough surface and slag inclusions.
3.5 KEY
The key of the chaff machine is used for nut and bolt tightness. It shows rejection in the form of blow holes, misrun, rough surface and slag inclusions.

Defects such as blow holes, misrun, slag inclusion and rough surface have been identified by various methods, and data for each part were collected from the company showing the production and rejection status of each individual part (Tables 3.2 to 3.6).

Table 3.1 Detection methods
S. No. | Type of defect | Detection | Appearance
1 | Blow holes | Visual method | Rounded holes
2 | Slag inclusion | Visual method | Pitted surface
3 | Misrun | Visual method | Unfilled cavity
4 | Rough surface | Touching method | Rough surface

Table 3.2 Data collection (before improvement) - Upper gear
Month | Production pieces | Rejection pieces | Blow holes defects | Misrun defects | Slag inclusion defects | Rough surface defects
Sep. 2013 | 4320 | 320 | 100 | 40 | 150 | 30
Oct. 2013 | 4536 | 336 | 98 | 59 | 145 | 34
Nov. 2013 | 4452 | 332 | 105 | 43 | 152 | 32
Dec. 2013 | 3780 | 280 | 89 | 54 | 92 | 45
Total | 17118 | 1286 | 392 | 196 | 539 | 141

Total production = 17118 pieces, total rejection = 1286 pieces
% of rejection = 1286 / 17118 = 0.0751 x 100 = 7.51%


Table 3.3 Data collection (before improvement) - Lower gear
Month | Production pieces | Rejection pieces | Blow holes defects | Misrun defects | Slag inclusion defects | Rough surface defects
Sep. 2013 | 4322 | 321 | 99 | 43 | 148 | 31
Oct. 2013 | 4539 | 332 | 96 | 57 | 143 | 36
Nov. 2013 | 4450 | 334 | 108 | 42 | 155 | 29
Dec. 2013 | 3778 | 272 | 87 | 48 | 94 | 43
Total | 17089 | 1259 | 390 | 190 | 540 | 139

Total production = 17089 pieces, total rejection = 1259 pieces
% of rejection = 1259 / 17089 = 0.0736 x 100 = 7.36%
Table 3.4 Data collection (before improvement) - Worm gear
Month | Production pieces | Rejection pieces | Blow holes defects | Misrun defects | Slag inclusion defects | Rough surface defects
Sep. 2013 | 5020 | 451 | 219 | 50 | 141 | 41
Oct. 2013 | 4998 | 449 | 214 | 17 | 169 | 49
Nov. 2013 | 5110 | 459 | 225 | 45 | 139 | 50
Dec. 2013 | 3940 | 354 | 186 | 36 | 121 | 11
Total | 19068 | 1713 | 844 | 148 | 570 | 151

Total production = 19068 pieces, total rejection = 1713 pieces
% of rejection = 1713 / 19068 = 0.0898 x 100 = 8.98%
Table 3.5 Data collection (before improvement) - Roller sporting arm
Month | Production pieces | Rejection pieces | Blow holes defects | Misrun defects | Slag inclusion defects | Rough surface defects
Sep. 2013 | 3541 | 141 | 79 | 19 | 29 | 14
Oct. 2013 | 3499 | 139 | 76 | 22 | 27 | 14
Nov. 2013 | 3522 | 140 | 68 | 27 | 39 | 11
Dec. 2013 | 2981 | 119 | 59 | 18 | 24 | 18
Total | 13543 | 539 | 282 | 86 | 119 | 57

Total production = 13543 pieces, total rejection = 539 pieces
% of rejection = 539 / 13543 = 0.039 x 100 = 3.9%
Table 3.6 Data collection (before improvement) - Key
Month | Production pieces | Rejection pieces | Blow holes defects | Misrun defects | Slag inclusion defects | Rough surface defects
Sep. 2013 | 3219 | 192 | 68 | 38 | 62 | 24
Oct. 2013 | 3192 | 191 | 72 | 35 | 59 | 25
Nov. 2013 | 3304 | 198 | 74 | 37 | 63 | 24
Dec. 2013 | 2813 | 168 | 58 | 39 | 49 | 22
Total | 12528 | 749 | 272 | 149 | 233 | 95

Total production = 12528 pieces, total rejection = 749 pieces
% of rejection = 749 / 12528 = 0.0597 x 100 = 5.97%

The overall percentage of rejection across all parts is as follows:
Total production of the 5 parts = 79346 pieces
Total rejection = 5546 pieces
Overall % of rejection = 5546 / 79346 = 0.0698 x 100 = 6.98%

Table 3.7 Total rejection data
Defects | No. of defective pieces | Percentage of rejection
Blow holes | 2180 | 2180 / 79346 = 0.0274 x 100 = 2.74%
Misrun | 769 | 769 / 79346 = 0.00969 x 100 = 0.96%
Slag inclusion | 2001 | 2001 / 79346 = 0.0252 x 100 = 2.52%
Rough surface | 583 | 583 / 79346 = 0.00734 x 100 = 0.73%

Fig. 3.1 Pie Chart


After collecting the complete data, it was clear that rejection was high due to blow hole and slag inclusion casting defects. Therefore, more stress has been given to these defects to reduce the rejection in the industry.
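A small sketch of the arithmetic behind Table 3.7 and the pie chart, computing the defect-wise rejection percentages from the collected counts; the figures are the totals reported above, and the script is only an illustration of the calculation:

```python
total_production = 79346          # total pieces of the 5 parts
total_rejection = 5546            # total rejected pieces reported above
defects = {"Blow holes": 2180, "Misrun": 769,
           "Slag inclusion": 2001, "Rough surface": 583}

# Defect-wise rejection as a percentage of total production (Table 3.7).
for name, count in sorted(defects.items(), key=lambda kv: -kv[1]):
    print(f"{name:15s} {count:5d} pieces  {count / total_production * 100:5.2f}%")

# Overall rejection rate reported in the study.
print(f"Overall rejection: {total_rejection / total_production * 100:.2f}%")
```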

4. CAUSE-AND-EFFECT ANALYSIS TOOL
A cause-and-effect, or fishbone, diagram depicts potential causes of a problem. The problem (effect) is displayed on the right side and the list of causes on the left side in a tree-like structure. The branches of the tree are often associated with major categories of causes, and each branch lists the more specific causes in that category. Although there is no "correct" way to construct a fishbone diagram, some specific types lend themselves well to many different situations. One of these is the "5M" diagram, so called because five of the categories on the branches begin with the letter M ("Personnel" is also referred to as "Man").

Fig. 4.1 Ishikawa diagram for blow holes defects

Fig. 4.2 Ishikawa diagram for misrun defects

5. IMPROVE PHASE

5.1 Improvement in blow hole defects: The root factors for blow hole defects were high moisture and low permeability. The industry was using 5% new silica sand and 95% reused sand. After performing the test with a 100 kg sand sample, it was found that the percentage of moisture was high and the permeability was low. Therefore, to reduce the blow hole defects it was necessary to increase the percentage of new silica sand so as to reduce the moisture and increase the permeability. The following results were obtained by increasing the new silica sand.

Table 5.1 Moisture and permeability recorded
S. No. | Addition of new silica sand | Moisture | Permeability
1 | 5% | 6.01% | 125 cc/min
2 | 5.5% | 5.45% | 131 cc/min
3 | 6% | 4.92% | 138 cc/min
4 | 6.5% | 4.32% | 145 cc/min

The moisture content of the sand has been reduced by increasing the addition of new sand from 5% to 6.5%; this results in a reduction of moisture content, and the permeability has increased. After testing the sand, the obtained results were in line with the standard values required for reducing casting defects.
5.2 Improvement in rough surface defects: The root factors for rough surface defects were poor coating of the pattern and loose ramming, so to remove these defects it was necessary to correct the coating of the patterns and the loose ramming. The following improvements were made to reduce the rough surface defects:
1. Soft ramming has been improved by increasing the addition of coal dust from 0.9% to 1.1%.
2. Varnish coating on the pattern has been used.
3. The inner surface of the mould has been coated with zirconium paste.
5.3 Improvement in misrun defects: The root factors for misrun defects were core shift and low pouring temperature. Therefore, to remove this casting defect, the temperature has been raised and the core shift has been controlled. The following actions were taken to improve this defect:
1. Misrun defects have been minimized by increasing the tapping temperature from 1195 to 1235 degrees centigrade along with an addition of flux (limestone) from 0.2% to 0.3%.
2. To avoid core shift, chaplets have been used.
5.4 Improvement in slag defects: The root factors for slag defects were rough ladle lining and poor skimming of the metal. Therefore, to reduce the slag inclusion defects, new material has been added which was not used by the company before applying the technique:
1. Slag defects have been minimized by the addition of Slax-30 (Foseco foundry data handbook, p. 229) material up to 2%.
2. A clean ladle has been used.
After implementation of these improvements, the data of the company was collected again.
Table 5.2 Data collection (after improvement) - Upper gear
Month | Production pieces | Rejection pieces | Blow holes defects | Misrun defects | Slag inclusion defects | Rough surface defects
Feb. 2014 | 4439 | 147 | 42 | 27 | 57 | 21
March 2014 | 4612 | 158 | 48 | 37 | 49 | 24
April 2014 | 4507 | 167 | 51 | 39 | 47 | 30
May 2014 | 4527 | 135 | 42 | 28 | 40 | 25
Total | 18085 | 607 | 183 | 131 | 193 | 100

Total production = 18085 pieces, total rejection = 607 pieces
% of rejection = 607 / 18085 = 0.0335 x 100 = 3.35%

Table 5.3 Data collection (after improvement) - Lower gear
Month | Production pieces | Rejection pieces | Blow holes defects | Misrun defects | Slag inclusion defects | Rough surface defects
Feb. 2014 | 4421 | 138 | 40 | 38 | 38 | 22
March 2014 | 4827 | 144 | 48 | 41 | 35 | 25
April 2014 | 4547 | 141 | 45 | 38 | 37 | 21
May 2014 | 4617 | 151 | 41 | 52 | 32 | 26
Total | 18412 | 574 | 169 | 169 | 142 | 94

Total production = 18412 pieces, total rejection = 574 pieces
% of rejection = 574 / 18412 = 0.0311 x 100 = 3.11%

Table 5.4 Data collection (after improvement) - Worm gear
Month | Production pieces | Rejection pieces | Blow holes defects | Misrun defects | Slag inclusion defects | Rough surface defects
Feb. 2014 | 5535 | 227 | 86 | 57 | 62 | 22
March 2014 | 5451 | 231 | 92 | 49 | 65 | 25
April 2014 | 5315 | 213 | 80 | 38 | 68 | 27
May 2014 | 5467 | 230 | 87 | 48 | 70 | 25
Total | 21768 | 901 | 345 | 192 | 265 | 99

Total production = 21768 pieces, total rejection = 901 pieces
% of rejection = 901 / 21768 = 0.0413 x 100 = 4.13%

Table 5.5 Data collection (after improvement) - Roller sporting arm
Month | Production pieces | Rejection pieces | Blow holes defects | Misrun defects | Slag inclusion defects | Rough surface defects
Feb. 2014 | 3637 | 65 | 28 | 12 | 17 | 8
March 2014 | 3840 | 61 | 27 | 13 | 15 | 6
April 2014 | 4056 | 70 | 25 | 13 | 22 | 10
May 2014 | 3767 | 57 | 20 | 15 | 15 | 7
Total | 15300 | 253 | 100 | 53 | 69 | 31

Total production = 15300 pieces, total rejection = 253 pieces
% of rejection = 253 / 15300 = 0.0165 x 100 = 1.65%
Table 5.6 Data collection (after improvement) - Key
Month | Production pieces | Rejection pieces | Blow holes defects | Misrun defects | Slag inclusion defects | Rough surface defects
Feb. 2014 | 3513 | 98 | 46 | 10 | 32 | 10
March 2014 | 3367 | 95 | 42 | 13 | 28 | 12
April 2014 | 3573 | 97 | 42 | 15 | 25 | 15
May 2014 | 3586 | 99 | 43 | 17 | 26 | 13
Total | 14039 | 384 | 173 | 55 | 111 | 50

Total production = 14039 pieces, total rejection = 384 pieces
% of rejection = 384 / 14039 = 0.0273 x 100 = 2.73%

The overall percentage of rejection after improvement is as follows:
Total production of the 5 parts = 87604 pieces
Total rejection = 2719 pieces
Overall % of rejection = 2719 / 87604 = 0.0310 x 100 = 3.10%

After the complete analysis it was found that the rejection due to casting defects has been reduced.

Table 5.7 Improvements in rejection
Defects | Before improvement | After improvement
Blow holes | 2.74% | 1.11%
Misrun | 0.96% | 0.68%
Slag inclusion | 2.52% | 0.89%
Rough surface | 0.73% | 0.42%

The DMAIC approach has been successfully applied, and rejections due to casting defects have been reduced from 6.98% to 3.10%.

6. RESULTS AND DISCUSSION
From the application of the DMAIC approach in the foundry shop, the following results were obtained. The rejection due to blow hole defects was reduced from 2.74% to 1.11% by reducing the moisture and increasing the permeability of the sand. The rejection due to slag defects was reduced from 2.52% to 0.89% by using Slax-30 material. The rejection due to misrun defects was reduced from 0.96% to 0.68% by using chaplets. The rejection due to rough surface defects was reduced from 0.73% to 0.42% by the addition of coal dust. The overall result of the present work clearly shows that by applying the DMAIC approach the rejection has been reduced from 6.98% to 3.10%, with a cost saving of about Rs. 2.35 lakh.





Phase Shifting Technique in Laser Speckle Image Processing

R. Balamurugan
Dept. of Physics, Kumaraguru College of Technology, Coimbatore - 641049, India.
E-mail: balamurugan.r.sci@kct.ac.in

Abstract - A simple speckle photography technique has been applied to measure small changes/deformations of the surface of laser-scattering materials. A low-cost commercial charge coupled device (CCD) photo camera provides the images of the laser speckle pattern with a beam splitter arrangement. A speckle pattern is first recorded with the system at rest, and a second image is captured after a deformation is made in the surface of the material. By simple subtraction of the digital pictures, a fringe pattern called a specklegram is obtained; it gives information about the modification of the position of the surface of the material.

Keywords - Laser speckle; subtraction of speckle images; phase shift; fringe pattern; CCD camera; image processing; deformation.
INTRODUCTION

Quick measurement of the surface shape and deformation of mechanical parts of various materials is an ardent need of industry. Experimental methods in solid mechanics depend highly on surface displacement measurements. Conventional instruments cannot be used for soft materials and complex shapes, and scanning with mechanical probes takes much time, so it is not suited for quick and in-process measurement. Interferometric techniques are free from these issues but have mainly been applied to optically smooth surfaces of materials. Speckle metrology, a simple and widely used non-destructive evaluation tool in metrology [1], is used in surface deformation analysis under mechanical or thermal loading conditions and in the determination of in-plane translation [2]. Speckle interferometry is used to measure the deformation of micro-electromechanical systems [3, 4]. Other applications range from medical studies on bone dynamics to quality inspection of various products [5], and the measurement of the refractive index of a liquid in a cell has also been reported [6]. The most important non-destructive methods are based on liquid penetrant, ultrasound, magnetic particle, eddy current, acoustic emission, radiology, active thermography and optical methods [7]. When an optically rough surface is illuminated with a coherent beam, a high contrast granular structure, known as a speckle pattern, is formed in space; this is known as an objective speckle pattern, as shown in Fig. 1. It can also be observed at the image plane of a lens, and it is then referred to as a subjective speckle pattern, as shown in Fig. 2. The scattering regions are statistically independent and their phases are uniformly distributed between -π and π. The speckles in the pattern undergo both positional and intensity changes when the object is deformed. The randomly coded pattern that carries the information about the object deformation has enabled a wide range of methods, which can be classified into three broad categories: speckle photography, speckle interferometry and speckle shear interferometry. Speckle photography includes all those techniques where positional changes of the speckles are monitored, whereas speckle interferometry includes methods that are based on the measurement of phase changes and hence intensity changes. If, instead of the phase change, we measure its gradient, the technique falls into the category of speckle shear interferometry. All these techniques can be performed with digital/electronic detection using a CCD and an image processing system. Illumination of a rough surface with coherent light produces a random intensity distribution in front of the surface, called a speckle pattern [8]. Because the speckle pattern follows the movement of the scattering surface, the speckle can be used for displacement/deformation measurement [9]. Beam division and combination can be realised either by amplitude division (Michelson, Fizeau, Mach-Zehnder and Jamin) or by wavefront division (Young and Fresnel biprism) methods.
SPECKLE INTERFEROMETRY
The optical setup for speckle interferometry is based on the Michelson interferometer [10]. The pattern that results from imaging a rough surface with a lens is itself a speckle field. The minimum speckle size $\sigma_s$ in the image is related to the optical system f-number F and the magnification M, and is given by:

$\sigma_s = 1.2\,(1+M)\,\lambda F$   (1)

where $\lambda$ is the wavelength of the laser and $\sigma_s$ is the radius of the Airy disc formed for the given optical imaging configuration. The resultant intensity at each point of the object before deformation is given by:

$I_{before} = I_{obj} + I_{ref} + 2\sqrt{I_{obj} I_{ref}}\cos(\phi_0)$   (2)

where $I_{obj}$ and $I_{ref}$ are the local intensities of the object and reference beams respectively, and $\phi_0$ is the unknown, random, initial phase distribution of the speckle pattern at that point.
Fig.1. Basic speckle image formation (objective type)

Fig.2. Basic speckle image formation (subjective type)

Fig.3. Basics of speckle image formation: (a) basic speckle pattern formation; (b) changes in the microstructure of the surface (the original surface is indicated by the dashed line)


When the object moves, the phase distribution undergoes a change and the intensity at that point becomes:

I_after = I_obj + I_ref + 2 √(I_obj I_ref) cos(φ0 + Δφ_obj) ... (3)


where Δφ_obj is the phase change induced by the object deformation, which is directly related to the out-of-plane displacement of the
object according to the sensitivity vector theory:
u_z = Δφ_obj λ / [2π (1 + cos θ_i)] ... (4)
where λ is the wavelength used and θ_i is the incidence angle.
From equation (3) it is possible to determine the displacement once Δφ_obj is known. A simple approach to measuring Δφ_obj is to record
I_before and I_after and to calculate the difference:
ΔI = I_after − I_before = 2 √(I_obj I_ref) [cos(φ0 + Δφ_obj) − cos φ0] ... (5)
which can be rewritten as:
ΔI = −4 √(I_obj I_ref) sin(φ0 + Δφ_obj/2) sin(Δφ_obj/2) ... (6)
The intensity change ΔI depends on the random initial phase distribution φ0 as well as on the deformation-induced phase change Δφ_obj;
therefore a statistical treatment is required to find a unique relationship between the intensity change ΔI and the phase change Δφ_obj during
deformation.
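A minimal numerical sketch of equations (2)-(6) together with the sensitivity relation (4) is given below; the wavelength, incidence angle, beam intensities and imposed phase change are all assumed values for illustration only.

import numpy as np

rng = np.random.default_rng(0)
wavelength = 0.63e-6                    # assumed laser wavelength, m
theta_i = 0.0                           # assumed normal incidence, rad
I_obj, I_ref = 1.0, 1.0                 # assumed equal local beam intensities
dphi = np.pi / 2                        # assumed deformation-induced phase change

phi0 = rng.uniform(-np.pi, np.pi, size=(256, 256))   # random initial speckle phase

I_before = I_obj + I_ref + 2 * np.sqrt(I_obj * I_ref) * np.cos(phi0)          # equation (2)
I_after  = I_obj + I_ref + 2 * np.sqrt(I_obj * I_ref) * np.cos(phi0 + dphi)   # equation (3)
dI = np.abs(I_after - I_before)                                               # rectified form of equation (5)

# Out-of-plane displacement corresponding to dphi, from equation (4)
u_z = dphi * wavelength / (2 * np.pi * (1 + np.cos(theta_i)))
print(dI.mean(), u_z)   # u_z is about 7.9e-8 m for the assumed values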

EXPERIMENTAL SETUP
The setup is analogous to a Michelson interferometer in which both mirrors are replaced by scattering surfaces. The
resultant speckle pattern in the image plane is formed by the interference of the two speckle fields issuing independently from the two
scattering surfaces. One of the surfaces is subjected to deformation and the other serves as the source of reference speckles [11].
The object of interest is a rectangular mild carbon steel plate of size 220 mm × 165 mm. The light source is a 5 mW
laser diode at a wavelength of 630 nm. The beam splitter divides the laser beam into two parts, each of which illuminates a different
surface. The beam splitter also serves to combine the light diffused by the two surfaces. An image-forming optical system, for example
a converging lens, is used to collect the diffused light and project the image onto a screen. Under these conditions there are two
overlapping images of the diffusing surfaces on the screen, each being a speckle field. Because the scattering angle is typically wide,
such surfaces do not need to be exactly aligned. A polished glass plate with a partial metal coating on one face was used as the beam
splitter.


Fig.4. Basic model of the interferometer



RESULTS AND ANALYSIS

Laser speckle interferograms of the mild carbon steel plate are obtained before and after deformation.

Fig.5. Speckle pattern before deformation

Fig.6. Speckle pattern after deformation

Fig.7. Subtracted fringe pattern

Fig.8. Peak value of deformation


The subtraction cancels out the speckle grains that are unchanged, leaving a dark area in their place. Overall, the noisy bright fringe
pattern that appears shows the areas displaced by odd multiples of λ/4; contiguous fringes differ by an integer fringe order, i.e. by a step
displacement of λ/2. Fringe decoding then provides the deformation information. To image the surfaces and detect the
speckles, a CCD camera was used with its settings adjusted appropriately. The major requirement is that most of the speckle grains are resolved,
that is, their size exceeds that of the camera pixels, so the speckle size must be adjusted accordingly [12]. To find the deformation by the fringe analysis
method, two red lines are drawn tangent to the centres of two adjacent fringes.
The yellow line is drawn to pass through the centre of one of the fringes where it intersects the edge of the test piece. The
distance between the two red lines is the width of one fringe, which we call X (here measured to be 5.01 in arbitrary units), and the
distance from the centre of the lower fringe to the yellow line is called Y (here measured as 1.44). The flatness of the test
piece is then Y/X = 1.44/5.01 = 0.282. Hence this value (0.282 × 2 = 0.564) corresponds to approximately 0.6 μm of displacement/deformation, as obtained in
MATLAB. Strain gauges and extensometers are available to measure such small deformations, but they do not provide accurate measurement in all
environments.
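A minimal sketch of the fringe-fraction step described above, using the X and Y values quoted in the text; the final scaling to the ~0.6 μm figure follows the authors' MATLAB processing, so only the λ/2-per-fringe step discussed earlier is assumed here.

wavelength_um = 0.63          # laser wavelength, micrometres
X = 5.01                      # width of one fringe, arbitrary units (from the text)
Y = 1.44                      # distance from the lower fringe centre to the yellow line (from the text)

fraction = Y / X                      # fringe fraction (the text quotes Y/X = 0.282)
step_um = wavelength_um / 2.0         # assumed displacement step between contiguous fringes
print(round(fraction, 3), round(fraction * step_um, 3))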
ACKNOWLEDGEMENT
The authors would like to thank the management of Kumaraguru College of Technology for their continuous support of this work.

CONCLUSION
In this paper, a simple double-exposure electronic speckle photography technique to detect small deformation of a mild
carbon steel plate has been demonstrated. The speckle displacement is detected from the position of the correlation peak, and the results
agree well with those obtained from conventional methods. The components required for this work are typically inexpensive and the image
processing of the laser speckle pattern is simple. Digital correlation analysis of speckle patterns arising from coherent optical systems can thus
be used effectively for non-contact and non-destructive measurements by means of low-cost photonic devices and digital technology.
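The correlation-peak idea mentioned above can be illustrated with a minimal FFT-based cross-correlation sketch on two synthetic frames (the second being a rigid pixel shift of the first); the image size and shift are assumptions for illustration, not data from the experiment.

import numpy as np

rng = np.random.default_rng(1)
frame1 = rng.random((128, 128))                        # stand-in for the first speckle frame
frame2 = np.roll(frame1, shift=(3, 5), axis=(0, 1))    # second frame: rigid shift by (3, 5) pixels

# Cross-correlation via the correlation theorem; the peak location gives the speckle displacement
corr = np.fft.ifft2(np.fft.fft2(frame1) * np.conj(np.fft.fft2(frame2))).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
shift = [p if p < n // 2 else p - n for p, n in zip(peak, corr.shape)]
print(shift)   # [-3, -5]: the applied (3, 5) shift recovered, with this sign convention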

REFERENCES:
[1] J.W. Goodman, "Some fundamental properties of speckle", J. Opt. Soc. Am., Vol. 66, No. 11, 1145-1149, (1976).
[2] R. Jones and C. Wykes, Holography and Speckle Interferometry, 2nd ed., Cambridge University Press, New York, USA, 122-142, (1989).
[3] R.S. Sirohi, Speckle Metrology, Marcel Dekker, New York, (1993).
[4] R.K. Erf, Speckle Metrology, Academic Press, New York, (1978).
[5] P. Aswendt, C.D. Schmidt, D. Zielke, and S. Schubert, "ESPI solution for non-contacting MEMS-on-wafer testing", Opt. Lasers Eng., 40, 501-515, (2003).
[6] P.K. Buah-Bassuah, F. Francini and G. Molesini, "Measurement of refractive index by double-exposure speckle pattern recording", Am. J. Phys., 57, 366-370, (1989).
[7] P. Shull, Non-destructive Evaluation, Marcel Dekker, New York, (2002).
[8] J.W. Goodman, Statistical Optics, John Wiley and Sons, New York, (1985).
[9] A.R. Ganesan, D.K. Sharma, and M.P. Kothiyal, "Electronic speckle shearing phase shifting pattern interferometer", Appl. Opt., 27, 4731, (1988).
[10] M. Françon, Laser Speckle and Applications in Optics, Academic Press, New York, (1979).
[11] J.C. Dainty, Laser Speckle and Related Phenomena, 2nd ed., Springer-Verlag, Berlin, (1984).
[12] R.S. Sirohi, C.J. Tay, H.M. Shang, and W.P. Boo, "Non-destructive assessment of thinning of plates using digital shearography", Opt. Eng., 38(9), 1582-1585, (1999).


EVALUATION OF MECHANICAL PROPERTIES OF ALUMINIUM ALLOY 7075 REINFORCED WITH SILICON CARBIDE AND RED MUD COMPOSITE
Pradeep R1*, B.S. Praveen Kumar2 and Prashanth B3
1 Assistant Professor, Department of Mechanical Engineering, M S Engineering College, Bangalore
2 Associate Professor, Department of Mechanical Engineering, Don Bosco Institute of Technology
3 Assistant Professor, Department of Mechanical Engineering, M S Engineering College, Bangalore
*Email: pradeep19903@gmail.com

ABSTRACT

Red mud is one of the major waste materials generated during the production of alumina from bauxite by the Bayer process. The
insoluble product left after bauxite digestion with sodium hydroxide at elevated temperature and pressure is known as red mud or
bauxite residue. It comprises oxides of iron, titanium, aluminium and silica along with some other minor constituents. Driven by
economic as well as environmental issues, enormous efforts have been directed worldwide towards red mud management, i.e. its
utilization, storage and disposal. Different avenues of red mud utilization are more or less known, but none of them has so far proved
to be economically viable or commercially feasible. Red mud is classified as hazardous according to NBR 10004/2004, and worldwide
generation has reached over 117 million tons per year. In the present work, experiments have been conducted under laboratory
conditions to assess the mechanical properties of aluminium/red mud/silicon carbide composites under different working conditions.
The samples were fabricated through the stir casting technique. To enhance the mechanical properties, the samples were also subjected
to heat treatment.
Key words: Al7075, SiC, red mud, bauxite residue, composites, stir casting, mechanical properties.

Introduction

History is often marked by the materials and technology that reflect human capability and understanding. Many
time scales begin with the Stone Age, which led to the Bronze, Iron, Steel, Aluminium and Alloy ages as improvements in refining and
smelting took place and science made it possible to move towards ever more advanced materials. Progress in the development of
advanced composites, from the days of E-glass/phenolic radome structures of the early 1940s to the graphite/polyimide composites
used in the Space Shuttle orbiter, is spectacular.
The recognition of the potential weight savings that can be achieved by using advanced composites, which in
turn means reduced cost and greater efficiency, was responsible for this growth in the technology of reinforcements,
matrices and fabrication of composites. If the first two decades saw improvements in fabrication methods,
the systematic study of properties and fracture mechanics was the focal point in the 1960s. Since then there has been an ever-increasing
demand for newer, stronger, stiffer and yet lighter-weight materials in fields such as the aerospace, transportation,
automobile and construction sectors.
Composite materials are emerging chiefly in response to unprecedented demands from technology, driven by rapidly
advancing activities in the aircraft, aerospace and automotive industries. These materials have low specific gravity, which
makes their strength and modulus particularly superior to those of many traditional engineering materials such as
metals. As a result of intensive studies into the fundamental nature of materials and a better understanding of their structure-property
relationships, it has become possible to develop new composite materials with improved physical and mechanical
properties.
These new materials include high-performance composites such as polymer matrix composites, ceramic matrix
composites and metal matrix composites. Continuous advancements have led to the use of composite materials in
more and more diversified applications. The importance of composites as engineering materials is reflected by the fact
that, of over 1600 engineering materials available in the market today, more than 200 are composites.

Experimental
Materials
The present work is concerned with the mechanical properties of Al/red mud/silicon carbide metal matrix composites (MMCs) based
on aluminium alloy grade 7075, prepared with varying weight percentage compositions of red mud and silicon carbide particles by the
stir casting technique. The mechanical properties were tested under laboratory conditions and the change in physical and mechanical
properties was taken into consideration. To achieve this, an experimental setup was prepared to facilitate the preparation of the
required specimens. The aim of the experiment was to study the effect of varying the percentage composition on the mechanical
properties as well as to measure the micro hardness.
The experiment was carried out by preparing samples of different percentage compositions by the stir casting technique. A
brief analysis of the microstructure was conducted with an optical microscope to verify the dispersion of the reinforcement in the
matrix.


Fig.1 Red Mud as Reinforcement

Fig.2 Silicon Carbide (SiC) as Reinforcement

Test performed
The tests performed on the different types of specimens are as follows:
Tensile test
Micro hardness test (BHN)
Compression test
Microstructure

Test Setup
Tensile Test
The ultimate tensile strength was measured using a 10-ton capacity servo-hydraulic universal testing machine, with the test
specimen loaded in a direction parallel to its longitudinal axis. In a stress-strain graph the initial portion of the curve is a straight
line and represents the proportionality of stress to strain according to Hooke's law; as the load is increased beyond the proportional
limit, the stress is no longer proportional to strain. UTS is the maximum stress that a test specimen can bear before fracture and
is based on the original cross-sectional area.

Fig.3- Specimens prepared for tensile testing



All tests were conducted in accordance with ASTM standards. Tensile tests were conducted at room temperature using the UTM in
accordance with ASTM E8-82. The tensile specimens, of 8.9 mm diameter and 76 mm gauge length, were machined from the cast
composites with the gauge length of each specimen parallel to the longitudinal axis of the casting. Five specimens were tested and the
average values of the ultimate tensile strength (UTS) and elongation were recorded.
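For context, a minimal sketch of how a UTS value follows from the raw test data for the 8.9 mm diameter specimens described above; the peak load used here is an assumed figure for illustration only.

import math

diameter_mm = 8.9                               # gauge diameter from the text
area_mm2 = math.pi * (diameter_mm / 2.0) ** 2   # original cross-sectional area
peak_load_kN = 7.4                              # assumed peak load, illustration only
uts_mpa = peak_load_kN * 1e3 / area_mm2         # N / mm^2 = MPa
print(round(area_mm2, 1), round(uts_mpa, 1))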
Hardness Test

The Brinell hardness test is an indentation hardness test, where indentation hardness is defined as the resistance to
permanent or plastic deformation under static or dynamic loads. The static indentation test was used in the present study to
examine the hardness of the specimens: a ball indenter is forced into the specimen being tested, and the relationship of the total
test force to the area or depth of the indentation provides the measure of hardness.
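A minimal sketch of the standard Brinell relation between test force and indentation size is given below; the ball diameter, load and indentation diameter are assumed values for illustration, not measurements from this study.

import math

def brinell_hardness(load_kgf, ball_dia_mm, indent_dia_mm):
    # BHN = 2F / (pi * D * (D - sqrt(D^2 - d^2)))
    D, d = ball_dia_mm, indent_dia_mm
    return 2.0 * load_kgf / (math.pi * D * (D - math.sqrt(D**2 - d**2)))

# Assumed values only: 500 kgf load, 10 mm ball, 2.4 mm indentation diameter
print(round(brinell_hardness(500.0, 10.0, 2.4), 1))   # ~109 BHN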

Fig.4- Specimens for hardness test

Fig.5- Brinell hardness testing machine

Compression Test
Compression tests were carried out using a standard 10-ton capacity universal testing machine. Specimens of 20.21 mm diameter and
40 mm length were machined from the cast composites and loaded gradually, and the corresponding strains were measured until failure
of the specimen. The tests were conducted according to ASTM E9 at room temperature; the specimens are shown in Fig. 6.
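Similarly, a minimal sketch of how engineering compressive stress and strain follow from the specimen geometry quoted above (20.21 mm diameter, 40 mm length); the load and shortening values are assumptions for illustration only.

import math

dia_mm, length_mm = 20.21, 40.0           # specimen geometry from the text
area_mm2 = math.pi * (dia_mm / 2.0) ** 2

load_kN = 18.0                            # assumed applied load, illustration only
shortening_mm = 0.50                      # assumed reduction in length, illustration only

stress_mpa = load_kN * 1e3 / area_mm2     # engineering compressive stress, MPa
strain = shortening_mm / length_mm        # engineering compressive strain
print(round(stress_mpa, 1), round(strain, 4))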


Fig.6- Specimens for compression test



Fig.7.1. Microstructure of Al 7075 + 8% SiC (optical micrographs at 100X and 500X; HF)
Fig.7.2. Microstructure of Al 7075 + 8% Red Mud (optical micrographs at 100X and 500X; HF)
Fig.7.3. Microstructure of Al 7075 + SiC 6% + Red Mud 2% (optical micrographs at 100X and 500X; HF)
Fig.7.4. Microstructure of Al 7075 + SiC 4% + Red Mud 4% (optical micrographs at 100X and 500X; HF)
Fig.7.5. Microstructure of Al 7075 + SiC 2% + Red Mud 6% (optical micrographs at 100X and 500X; HF)


Results and Discussions


Table 1. Mechanical properties of Al/red mud and SiC composites

Composition (varying wt %)        UTS (MPa)   Hardness (BHN)   Yield strength (MPa)   Compression strength (MPa)   Elongation (%)
SiC 8% + Al7075                    65.65       107              56.00                  44.71                        1.32
SiC 6% + Red mud 2% + Al7075      118.54       121             103.48                  57.28                        2.08
SiC 4% + Red mud 4% + Al7075       77.26        57.3            68.84                  50.53                        1.92
SiC 2% + Red mud 6% + Al7075       77.37        95              66.60                  52.92                        2.24
Red mud 8% + Al7075                59.92        69              51.01                  47.05                        2.26


Fig.8. Graph showing the variation in tensile strength (MPa) from 0-8% of SiC and red mud composite.

Fig.8.1. Graph showing the variation in compression strength (MPa) from 0-8% of SiC and red mud composite.


Fig.8.2. Graph showing the variation in Brinell hardness number from 0-8% of SiC and red mud composite.
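The bar charts of Fig.8-Fig.8.2 can be regenerated from the values in Table 1; a minimal matplotlib sketch using those tabulated figures is given below.

import matplotlib.pyplot as plt

labels = ["SiC8%", "SiC6%+RM2%", "SiC4%+RM4%", "SiC2%+RM6%", "RM8%"]
uts = [65.65, 118.54, 77.26, 77.37, 59.92]            # UTS, MPa (Table 1)
compression = [44.71, 57.28, 50.53, 52.92, 47.05]     # compression strength, MPa (Table 1)
hardness = [107, 121, 57.3, 95, 69]                   # Brinell hardness number (Table 1)

fig, axes = plt.subplots(1, 3, figsize=(12, 3.5))
for ax, data, title in zip(axes,
                           [uts, compression, hardness],
                           ["Tensile strength (MPa)", "Compression strength (MPa)", "Brinell hardness"]):
    ax.bar(labels, data)
    ax.set_title(title)
    ax.tick_params(axis="x", rotation=45)
fig.tight_layout()
plt.show()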
It has been observed that for SiC 6% + Red Mud 2% + Al7075 there is a considerable increase in almost all the mechanical properties.

Conclusion
The general conclusion from the present work is that combining the matrix material with reinforcements such as SiC and red mud
particles improves mechanical properties such as tensile strength, compressive strength, hardness and yield strength. Microstructure
studies also indicate the presence of an aluminium dendritic structure with fine SiC and red mud particles reinforced in between.

