International Journal of
Computer Science Issues
IJCSI PUBLICATION
www.IJCSI.org
EDITORIAL
In this fifth edition of 2010, we bring forward issues from various dynamic computer science
fields, ranging from system performance, computer vision, artificial intelligence, software
engineering, multimedia, pattern recognition, information retrieval, databases, security and
networking, among others.
Considering the growing interest of academics worldwide to publish in IJCSI, we invite
universities and institutions to partner with us to further encourage open-access publications.
As always we thank all our reviewers for providing constructive comments on papers sent to
them for review. This helps enormously in improving the quality of papers published in this
issue.
Google Scholar reports a large number of citations to papers published in IJCSI. We will
continue to encourage readers, authors, reviewers and the wider computer science community
to cite papers published by the journal.
Apart from availability of the full-texts from the journal website, all published papers are
deposited in open-access repositories to make access easier and ensure continuous availability of
its proceedings free of charge for all researchers.
We are pleased to present IJCSI Volume 7, Issue 5, September 2010 (IJCSI Vol. 7, Issue 5). Out
of the 198 paper submissions received, 59 papers were retained for publication. The acceptance
rate for this issue is 29.8%.
Dr Tristan Vanrullen
Chief Editor
LPL, Laboratoire Parole et Langage - CNRS - Aix en Provence, France
LABRI, Laboratoire Bordelais de Recherche en Informatique - INRIA - Bordeaux, France
LEEE, Laboratoire d'Esthétique et Expérimentations de l'Espace - Université d'Auvergne, France
Dr Constantino Malagón
Associate Professor
Nebrija University
Spain
Dr Mokhtar Beldjehem
Professor
Sainte-Anne University
Halifax, NS, Canada
Dr Pascal Chatonnay
Assistant Professor
Maître de Conférences
Laboratoire d'Informatique de l'Université de Franche-Comté
Université de Franche-Comté
France
Dr Yee-Ming Chen
Professor
Department of Industrial Engineering and Management
Yuan Ze University
Taiwan
Dr Vishal Goyal
Assistant Professor
Department of Computer Science
Punjabi University
Patiala, India
Dr Dalbir Singh
Faculty of Information Science And Technology
National University of Malaysia
Malaysia
Dr Natarajan Meghanathan
Assistant Professor
REU Program Director
Department of Computer Science
Jackson State University
Jackson, USA
Dr Navneet Agrawal
Assistant Professor
Department of ECE,
College of Technology & Engineering, MPUAT,
Udaipur 313001 Rajasthan, India
Dr Shishir Kumar
Department of Computer Science and Engineering,
Jaypee University of Engineering & Technology
Raghogarh, MP, India
Dr P. K. Suri
Professor
Department of Computer Science & Applications,
Kurukshetra University,
Kurukshetra, India
Dr Paramjeet Singh
Associate Professor
GZS College of Engineering & Technology,
India
Dr Shaveta Rani
Associate Professor
GZS College of Engineering & Technology,
India
Dr G. Ganesan
Professor
Department of Mathematics,
Adikavi Nannaya University,
Rajahmundry, A.P, India
Dr A. V. Senthil Kumar
Department of MCA,
Hindusthan College of Arts and Science,
Coimbatore, Tamilnadu, India
Dr T. V. Prasad
Professor Department of Computer Science and Engineering,
Lingaya's University Faridabad,
Haryana, India
Prof N. Jaisankar
Assistant Professor
School of Computing Sciences,
VIT University
Vellore, Tamilnadu, India
Prof. T Venkat Narayana Rao, Department of CSE, Hyderabad Institute of Technology and
Management, India
Mr. Vikas Gupta, CDLM Government Engineering College, Panniwala Mota, India
Dr Juan José Martínez Castillo, University of Yacambu, Venezuela
Mr Kunwar S. Vaisla, Department of Computer Science & Engineering, BCT Kumaon Engineering
College, India
Prof. Manpreet Singh, M. M. Engg. College, M. M. University, Haryana, India
Mr. Syed Imran, University College Cork, Ireland
Dr. Namfon Assawamekin, University of the Thai Chamber of Commerce, Thailand
Dr. Shahaboddin Shamshirband, Islamic Azad University, Iran
Dr. Mohamed Ali Mahjoub, University of Monastir, Tunisia
Mr. Adis Medic, Infosys ltd, Bosnia and Herzegovina
Mr Swarup Roy, Department of Information Technology, North Eastern Hill University, Umshing,
Shillong 793022, Meghalaya, India
TABLE OF CONTENTS
1. Dynamic Shared Context Processing in an E-Collaborative Learning Environment
Jing Peng, Alain-Jérôme Fougères, Samuel Deniaud and Michel Ferney
Pg 1-9
2. Domain Specific Modeling Language for Early Warning System: Using IDEF0 for
Domain Analysis
Syed Imran, Franclin Foping, John Feehan and Ioannis M. Dokas
Pg 10-17
Pg 18-29
4. Dynamic Clustering for QoS based Secure Multicast Key Distribution in Mobile Ad
Hoc Networks
Suganyadevi Devaraju and Ganapathi Padmavathi
Pg 30-37
Pg 38-44
Pg 45-50
Pg 51-63
Pg 64-72
Pg 73-81
Pg 82-88
Pg 89-93
Pg 94-101
13. Extracting Support Based k most Strongly Correlated Item Pairs in Large
Transaction Databases
Swarup Roy and Dhruba K. Bhattacharyya
Pg 102-111
Pg 112-116
Pg 117-121
Pg 122-127
17. Comparison between Conventional and Fuzzy Logic PID Controllers for
Controlling DC Motors
Essam Natsheh and Khalid A. Buragga
Pg 128-134
Pg 135-141
Pg 142-147
20. A Three Party Authentication for Key Distributed Protocol Using Classical and
Quantum Cryptography
Suganya Ranganathan, Nagarajan Ramasamy, Senthil Karthick Kumar
Arumugam, Balaji Dhanasekaran, Prabhu Ramalingam, Venkateswaran Radhakrishnan
and Ramesh Kumar Karpuppiah
Pg 148-152
21. Fault Tolerance Mobile Agent System Using Witness Agent in 2-Dimensional Mesh
Network
Ahmad Rostami, Hassan Rashidi and Majidreza Shams Zahraie
Pg 153-158
22. Securing Revocable Iris and Retinal Templates using Combined User and Soft
Biometric based Password Hardened Multimodal Fuzzy Vault
V. S. Meenakshi and Ganapathi Padmavathi
Pg 159-166
23. Interactive Guided Online/Off-line search using Google API and JSON
Kalyan Netti
Pg 167-174
24. Static Noise Margin Analysis of SRAM Cell for High Speed Application
Debasis Mukherjee, Hemanta Kr. Mondal and B. V. Ramana Reddy
Pg 175-180
Pg 181-186
Pg 187-190
Pg 191-197
Pg 198-205
29. Development of Ontology for Smart Hospital and Implementation using UML and
RDF
Sanjay Kumar Anand and Akshat Verma
Pg 206-212
Pg 213-220
31. Semantic layer based ontologies to reformulate the neurological queries in mobile
environment
Youssouf El Allioui and Omar El Beqqali
Pg 221-230
Pg 231-238
Pg 239-247
Pg 248-252
35. Design of Compressed Memory Model Based on AVC Standard for Robotics
Devaraj Verma C and VijayaKumar M.V
Pg 253-261
Pg 262-267
Pg 268-271
38. Modeling and design of evolutionary neural network for heart disease detection
K. S. Kavitha, K. V. Ramakrishnan and Manoj Kumar Singh
Pg 272-283
Pg 284-288
Pg 289-295
Pg 296-301
Pg 302-309
Pg 310-317
Pg 318-326
Pg 327-330
46. A Simple Modified Transmission Line Model for Inset Fed Antenna Design
M. Fawzi Bendahmane, Mehadji Abri, Fethi Tarik Bendimerad and Noureddine Boukli-Hacene
Pg 331-335
47. Optimum Multilevel Image Thresholding Based on Tsallis Entropy Method with
Bacterial Foraging Algorithm
P. D. Sathya and R. Kayalvizhi
Pg 336-343
Pg 344-349
49. Automated Test Data Generation Based On Individual Constraints and Boundary
Value
Hitesh Tahbildar and Bichitra Kalita
Pg 350-359
Pg 360-366
Pg 367-373
Pg 374-381
Pg 382-393
Pg 394-398
55. Risk Quantification Using EMV Analysis - A Strategic Case of Ready Mix
Concrete Plants
Roopdarshan Walke, Vinay Topkar and Sajal Kabiraj
Pg 399-408
56. Rule Based Machine Translation of Noun Phrases from Punjabi to English
Kamaljeet Kaur Batra and Gurpreet Singh Lehal
Pg 409-413
Pg 414-417
Pg 418-423
59. Efficient 2.45 GHz Rectenna Design with high Harmonic Rejection for Wireless
Power Transmission
Zied Harouni, Lotfi Osman and Ali Gharsallah
Pg 424-427
IJCSI International Journal of Computer Science Issues, Vol. 7, Issue 5, September 2010
ISSN (Online): 1694-0814
www.IJCSI.org
Abstract
In this paper, we propose a dynamic shared context processing
method based on DSC (Dynamic Shared Context) model, applied
in an e-collaborative learning environment. Firstly, we present
the model. This is a way to measure the relevance between
events and roles in collaborative environments. With this method,
we can share the most appropriate event information for each
role instead of sharing all information to all roles in a
collaborative work environment. Then, we apply and verify this
method in our project with Google App supported e-learning
collaborative environment. During this experiment, we compared
DSC method measured relevance of events and roles to manual
measured relevance. And we describe the favorable points from
this comparison and our finding. Finally, we discuss our future
research of a hybrid DSC method to make dynamical information
shared more effective in a collaborative work environment.
Keywords: Dynamical Shared Context, Relevant Information
Sharing, Computer Supported Collaborative Learning, CSCW
1. Introduction
Everyone now recognizes that effective collaboration
requires that each member of the collaboration receives
relevant information on the activity of his partners. The
support of this relevant information sharing consists of
models for formalizing relevant information, processors
for measuring relevance, and indicators for presenting
relevant
information.
Various
concepts
and
implementations of relevant information models have been
presented in the areas of HCI and Ubiquitous Computing.
System Engineering (SE) is the application field of our
research activity. According to ISO/IEC 15288:2002
standard [1], designing a system-of-interest requires 25 processes, which are grouped into
technical processes, project processes, etc. The system life cycle stages are
described by 11 technical processes. The first ones are the
Stakeholder Requirements Definition Process, the
Requirements Analysis Process and the Architectural
Design Process. They correspond to the left side of the
well-known Vee cycle [2] which links the technical
processes of the development stage with the project cycle
(Figure: elements of the activity-theory-based context model: Subject, Object, Tool, Task, Role, Rule, Requirement, Community, Division of labour, Phase.)
3. Method of Research
Our research has been carried out in two steps over two years of experiments:
Step 1: Context factor collection. During the first experiment, we collected the context
factors according to our context model from all the event information.
Step 2: Relevance measurement. During this experiment, we calculate the relevance
dynamically through the context factors, which represent the event attributes.
DSC is a generic, extensible context awareness method, which includes simple but powerful
and lightweight mechanisms for generating relevant shared information for the appropriate
users. The concept of DSC is based on event attributes, roles' interest degrees, and
relevance measurement. Relevance has been studied for the past fifty years, and the general
practice of relevance analysis is to research or analyze the relationships between events or
entities. The relevance measure or score is a quantitative result of the relevance analysis
[14, 15, 16]. In this paper, our method is based on a context model and relevance analysis
(Figure 2). This method can be described in three parts, from left to right:
Part 1: Event capture and role-interest capture. In this part, we suppose that events can
be represented by text, and we gathered them during the experiment. We then used a Natural
Language Processing tool to capture the key words as our context factors. The same applies
to the capture of role interests: supposing that role interests can be represented by text,
we captured them during the course of the experiment.
Part 2: Relevance measurement. Details of this part are illustrated in the following
paragraphs.
Part 3: Relevantly shared information. The measured information is shared according to
relevance with the different roles, in order to reduce the redundancy of the shared
information.
(Fig. 2: overview of the DSC method. Event capture and role-interest capture feed event attributes and role interest degrees into the context model; relevance measurement then produces the relevantly shared information.)
cff_i = n_i / Σ_k n_k    (3)

ief_i = log( |E| / |{e ∈ E : c_i ∈ e}| )    (2)

ref_i = |{e_r ∈ E_r : c_i ∈ e_r}| / |E_r|    (6)
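The frequency-based quantities cff_i and ief_i can be read as a tf-idf analogue: cff_i measures how prominent a context factor is within one event, while ief_i discounts factors that appear in many events. A minimal sketch, assuming this tf-idf-style form (the exact normalization used in the paper is not fully recoverable here, so the formulas below are an assumption):

```python
import math

def cff(factor, event_factors):
    # Context-factor frequency: occurrences of the factor in one event,
    # normalized by the total number of factor occurrences (assumed form).
    return event_factors.count(factor) / len(event_factors)

def ief(factor, all_events):
    # Inverse event frequency: log of |E| over the number of events
    # containing the factor, in the spirit of Eq. (2).
    containing = sum(1 for e in all_events if factor in e)
    return math.log(len(all_events) / containing) if containing else 0.0

# Hypothetical events, each already reduced to its context factors.
events = [["assembly", "plan", "group"],
          ["email", "group"],
          ["assembly", "motor"]]
w = cff("assembly", events[0]) * ief("assembly", events)
```

The combined weight w is what would fill one coordinate of an event vector before the cosine comparison of Eq. (8).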
relevance(v_e, v_r) = cos(v_e, v_r) = (v_e · v_r) / (‖v_e‖ ‖v_r‖)    (8)
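Equation (8) is the standard cosine similarity between an event vector and a role vector. A small sketch, with hypothetical weight vectors (the values below are illustrative and not taken from the paper's tables):

```python
import math

def relevance(v_e, v_r):
    """Cosine similarity between an event vector and a role vector (Eq. 8)."""
    dot = sum(a * b for a, b in zip(v_e, v_r))
    norm_e = math.sqrt(sum(a * a for a in v_e))
    norm_r = math.sqrt(sum(b * b for b in v_r))
    if norm_e == 0 or norm_r == 0:
        # A zero vector shares no context factors, so relevance is zero.
        return 0.0
    return dot / (norm_e * norm_r)

# Hypothetical weights over five context factors.
v_event = [0.0, 0.83, 0.0, 0.31, 0.0]
v_role  = [0.0, 0.41, 0.0, 0.00, 0.1]
print(round(relevance(v_event, v_role), 4))
```

Events are then forwarded only to roles whose relevance exceeds a chosen threshold, which is what reduces the redundancy of shared information.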
Product:        SA 1 | SA 2 | SA 3 | SA 4 + product
Layer:          L2   | L2   | L2   | L2 + L1
4. Experiment
In this section, we test the above method for calculating the relevance degree between
events and roles on an example from our project. In practice, we apply this method in our
Google Apps collaborative learning environment, where it helps to send emails and
announcements to the appropriate receivers instead of to all members of the project (Figure 3).
The project given to the students aims at designing an
assembly system following the lean manufacturing
principles, using the e-collaborative learning environment
to manage cooperation in groups and projects. The studied product is a hydraulic motor
composed of 4 sub-assemblies (SA). The inherent structure of this product implies the use
of the first three sub-assemblies to
Fig. 3: Illustration of the e-collaborative learning environment used by the students: here, the project leader of Group 11 is currently editing the work of item 6
(information will be forwarded by email to members of Group 11)
(Tables: relevance values between events and the roles QR, SR, DS1 and DS2, computed over the 64 context factors CF and terms TM.)
vT1DS1 = [0, 0, 0.3111, 0, …, 0]                      (18)
vT1DS2 = [0, 0.8346, 0, 0, …, 0]                      (19)
vT2M   = [0, 0.1915, 0.2226, 0.6077, …, 0.1113]       (20)
vT2QR  = [0, 0.0927, 0.0927, 0, …, 0]                 (21)
vT2SR  = [0, 0, 0, 0, …, 0]                           (22)
vT2DS1 = [0, 0, 0, 0, …, 0]                           (23)
vT2DS2 = [0, 0.4173, 0, 0, …, 0]                      (24)
vT3M   = [0, 0, 0.0467, 0, …, 0]                      (25)
vT3QR  = [0, 0, 0.0583, 0, …, 0.1043]                 (26)
vT3SR  = [0, 0, 0, 0, …, 0]                           (27)
vT3DS  = [0, 0, 0, 0, …, 0]                           (28)
(Table: comparison, for event-role pairs such as T1DS2 (relevance 0.0405, Low), T2DS2 (0.0689, Low) and T3DS (0.0912, Low), of the relevance obtained with the DSC model, a 0-1 model and manual analysis, and of whether each value was used for sharing.)
References
[1] ISO/IEC 15288:2002, "Systems engineering - System life cycle processes", ISO, 2002.
[2] K. Forsberg, and H. Mooz, "The relationship of system
engineering to the project cycle", in Proc. of the NCOSE
conference, 1991, pp. 57-65.
[3] ISO/IEC 26702:2007, "Standard for Systems Engineering
Application and management of the systems engineering
process", ISO 2007.
[4] J. Peng, A.-J. Fougères, S. Deniaud, and M. Ferney, "An E-Collaborative Learning Environment Based on Dynamic
Workflow System", in Proc. of the 9th Int. Conf. on
Information Technology Based Higher Education and
Training (ITHET), Cappadocia, Turkey, 2010, pp.236-240.
[5] M. Kaenampornpan, and E. O'Neill, "Modelling context: an
activity theory approach", in Markopoulos, P., Eggen, B.,
Aarts, E., Crowley, J.L., eds.: Ambient Intelligence: Second
European Symposium on Ambient Intelligence, EUSAI 2004,
pp. 367-374, 2004.
[6] J. Cassens, and A. Kofod-Petersen, "Using activity theory to
model context awareness: a qualitative case study". In Proc.
of the 19th International Florida Artificial Intelligence
Research Society Conference, 2006, pp. 619-624.
[7] A. J. Cuthbert, "Designs for collaborative learning
environments: can specialization encourage knowledge
integration?", in Proc. of the 1999 conference on Computer
support for collaborative learning (CSCL '99), International
Society of the Learning Sciences, 1999.
[8] A. Dimitracopoulou, "Designing collaborative learning
systems: current trends & future research agenda", in Proc. of
the 2005 conference on Computer support for collaborative
learning (CSCL '05), International Society of the Learning
Sciences, 2005, pp. 115-124.
[9] L. Lipponen, "Exploring foundations for computer supported
collaborative learning", in Proc. of the Conf. on Computer
Support for Collaborative Learning (CSCL '02), International
Society of the Learning Sciences, 2002, pp.72-81.
[10] A. Weinberger, F. Fischer, and K. Stegmann, "Computer
supported collaborative learning in higher education: scripts
for argumentative knowledge construction in distributed
groups", in Proc. of the 2005 conference on Computer support
for collaborative learning (CSCL '05), International Society
of the Learning Sciences, 2005, pp. 717-726.
[11] T. Gross, and W. Prinz, "Modelling shared contexts in
cooperative environments: concept, implementation, and
evaluation", Computer Supported Cooperative Work 13,
5. Conclusion
E-collaborative learning systems have been developed to
support collaborative learning activity, which makes the
study more and more effective. We now think that learning activities will perform better
and be more effective if we can eliminate the redundant shared information in the
e-collaborative system. Therefore, we proposed an implementation of the DSC method
supported by an e-collaborative learning environment. This environment can supply
pertinent and useful resources, propose appropriate workflows to realize projects, and
share and disseminate relevant information.
We have described an original methodology of DSC model
to address the problem of shared redundancy in
collaborative learning environments. In this method, a
context model is applied to prepare the DSC modeling.
A collective activity analysis approach (Activity Theory)
allows us to build this context model. This phase is critical,
since the dynamic shared context model depends on this
context model to obtain satisfying context factors.
After describing our methodology we presented an
illustration in an e-collaborative learning context. In this
illustration, we selected 82 typical events from the project,
and measured the relevance between these events and roles
by using the tool Relevance Processing V_0.1. From the
results of relevance, we obtained useful advice for sharing
information in a collaborative environment.
Abstract
Domain analysis plays a significant role in the design phase of
domain specific modeling languages (DSML). The Integration
Definition for Function Modeling (IDEF0) is proposed as a
method for domain analysis. We implemented IDEF0 and
managed to specify reusable domain specific language artifacts
in a case study on water treatment plants safety. The
observations suggest that IDEF0 not only examines the technical
aspects of the domain but also considers its socio-technical
aspects, a characteristic listed as a DSML requirement in the case
study.
Keywords: Domain Analysis, Domain Specific Modeling,
IDEF0, Reusable Domain Models, Early Warning Systems.
1. Introduction
The objective of our research is to develop a graphical
language able to represent concepts related to the domain
of water treatment plants operations and safety. The
graphical language will be an integrated component of an
Early Warning System (EWS). The models specified by
the graphical language will represent different facets of
the domain and executable code will be generated
automatically. In short, the objective of our research is to
develop a Domain Specific Modeling Language (DSML)
for EWS.
The first phase in the development of any DSML is to
gather and process the required information and to define
reusable language artifacts. This phase is known as
domain analysis [1]. The idea of reusable artifacts is of
importance. It forces the definition of filters based on
which commonalities and variabilities of the domain
become discernible.
During domain analysis, knowledge about the domain is
acquired from various sources such as experts, technical
documents, procedures, regulations and other materials.
The purpose of this phase is to produce domain models.
Domain models consist of concepts, relations, terminology
2. Methods

2.1 IDEF0
IDEF0 is designed to model the activities, processes and
actions of an organization or system [8]. It is an
engineering technique for performing and managing needs
analysis, benefits analysis, requirements definition,
functional analysis, systems design, maintenance, and
baselines for continuous improvement [9]. It enhances
domain expert involvement through simplified
graphical representation.
With IDEF0 one can specify five elements: the activities,
inputs, outputs, constraints or controls, and mechanisms or
resources of a system (see Fig. 1). The activities receive
certain inputs that are processed by means of some
mechanism, subject to certain controls such as guidelines
and policies, before being transformed into outputs. The
activities can be instantaneous or can happen over a period
of time.
IDEF0 diagrams are organized in a hierarchical structure,
which allows the problem domain to be refined into greater
detail until the model is as descriptive as necessary. The
first level of the hierarchical structure represents a single
high-level activity, which can then be decomposed into
lower levels that represent, in more detail, the processes of
the system, the resources, and the information that is
passed between them.
Some of the characteristics of IDEF0 are [9]: it provides
comprehensiveness and expressiveness in graphically
representing a wide variety of business, manufacturing and
other types of enterprise operations, to any level of detail.
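The five IDEF0 elements and the hierarchical decomposition described above can be captured in a small data structure. The sketch below is an illustrative assumption, not part of the IDEF0 standard or the authors' tooling; the activity names simply echo the case study:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Activity:
    """One IDEF0 activity box: inputs are transformed into outputs
    under controls, using mechanisms; children hold the decomposition."""
    name: str
    inputs: List[str] = field(default_factory=list)
    controls: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    mechanisms: List[str] = field(default_factory=list)
    children: List["Activity"] = field(default_factory=list)

    def decompose(self, child: "Activity") -> "Activity":
        # Refine this activity by one level of the IDEF0 hierarchy.
        self.children.append(child)
        return child

# Hypothetical fragment of the water-treatment model.
a0 = Activity("Treat water", inputs=["raw water"], outputs=["drinking water"])
a0_1 = a0.decompose(Activity("Intake and Storage process",
                             inputs=["lake water"],
                             controls=["Pump control"],
                             outputs=["stored raw water"],
                             mechanisms=["Intake pumps"]))
```

Walking such a tree top-down mirrors the refinement from the single high-level activity to the detailed process boxes.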
IDEF0 model highlighting technical and sociotechnical factors in regulating raw water intake
pumps.
Activity: A0-1
Name: Intake and Storage process
Mechanism: Intake pumps
Control: Pump control
Description: The raw water intake pumps take
water from a lake and supply it to the raw water
reservoir in the WTP. This activity involves control
measures, which are influenced by different technical
and, subsequently, socio-technical factors.
Technical:
1. Feedback from activity A5 (Treated water
outlet), information of the treated water level from
treated drinking water reservoirs.
2. Raw water quality parameters.
3. Raw water intake level.
4. Raw water storage tank level.
5. Maintenance and other technical reasons.
Socio-technical:
The treated water supply demand should be considered
in relation to the WTP capacity. If the demand for
drinking water exceeds the treatment capacity of the
WTP, then technical factor 1 will be affected, as the
level of the drinking water reservoir will mostly be low,
which would ultimately affect the pump control action
in regulating the raw water intake pumps.
Raw water quality parameters are set by national
and EU standards and legislation.
Raw water intake level is determined not only by the
water resource but also by the level of water in the lake.
In our case study, the lake water level limit up to which
water can be taken into the WTP is set by the Water
Services Authority. There is a hydroelectric plant
downstream from our water intake, which requires a
substantial amount of water to be fully functional.
Raw water storage tank level depends on many factors,
such as the availability of water and the demand for
treated water in the supply. In some cases raw water
storage tanks are not available because of financial
limitations.
Table: commonalities and variabilities of the Intake activity (Sequence No. A0-1, Subsequence No. A1)

Mechanisms:
  Commonalities: 1. Flow meter; 2. Tanks; 3. Sampling test laboratory
  Variabilities: 1. Penstock; 2. Metal Screens; 3. Intake pumps
Controls:
  Commonalities: 1. Pumps Control; 2. Manual Inspection; 3. Maintenance; 4. Sampling for cryptosporidium test and laboratory test
  Variabilities: 1. Penstock selection level; 2. Raw water quality parameters
Intake, and it also acts as a superclass to the sub-activities
specified at its lower decomposition level, namely Raw
Water Intake Proc. and Raw Water Storage Proc.
The classes are also specified for mechanisms and
controls, each having subsequent subclasses representing
their variabilities. Moreover, commonalities are defined in
a separate class. It should also be noted that a commonality
can be defined as an attribute of a variability, but the
reverse is not allowed. For example, the class Penstock,
which is categorized as a variability of a mechanism in an
Intake activity, can have an attribute class Sensor, but the
other way around is not possible.
Once the required classes are defined they are referenced
to each other accordingly as defined by the IDEF0 model.
A class with subsequence number A1 is referred to by the
class with sequence number A0-1, as it is a sub-activity of
A0-1. The sensor class is referred to by the mechanism
class, which is further referenced by the category class.
Here the sensor is defined as a commonality because it is
repeated in many sub-activities that belong to a lower
hierarchy level of the Intake process shown in Fig 4.
The activities in the lower hierarchy levels made use of
different types of sensors. Given this repetition, we defined
the sensor concept as a commonality. Fig 4 shows the
2nd level of the IDEF0 analysis to demonstrate the result
of the IDEF0 method. In reality we developed a series of
hierarchical IDEF0 models that describe each activity box
shown in Fig 4 in more detail. All classes are referred to by
the Map class in our case. In EMF's Ecore metamodel,
the class Map is considered a starting point or mapping
point to the rest of the metamodel. The class Map itself
does not take part directly in modeling the domain
concepts, but it is used as the point where the whole
architecture of the metamodel is initially mapped and
referenced. In the case of controls, we used a declarative
language at the metamodel level, the Object Constraint
Language (OCL), and also specified a separate control
class to define their constraints.
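The rule that a variability may carry a commonality as an attribute, but never the reverse, can be sketched as a runtime check. The class names below (Commonality, Variability) are illustrative assumptions standing in for the authors' Ecore metamodel, not a reproduction of it:

```python
class Commonality:
    """A domain concept shared across many sub-activities (e.g. Sensor)."""
    def __init__(self, name):
        self.name = name

class Variability:
    """A concept that varies between activities (e.g. Penstock)."""
    def __init__(self, name):
        self.name = name
        self.attributes = []  # may only hold Commonality instances

    def add_attribute(self, attr):
        # Enforce the one-way rule: variabilities may reference
        # commonalities, but commonalities never reference variabilities.
        if not isinstance(attr, Commonality):
            raise TypeError("variabilities may only reference commonalities")
        self.attributes.append(attr)

sensor = Commonality("Sensor")
penstock = Variability("Penstock")
penstock.add_attribute(sensor)  # allowed: Penstock has a Sensor attribute
```

In the actual metamodel this constraint would live in OCL rather than in host-language code, but the direction of the reference is the same.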
5. Conclusions
This paper presents the use of IDEF0 as a domain analysis
approach for the development of a DSML. The case study
demonstrated that IDEF0 contributed to identifying the
reusable building blocks of a DSML and proved very
useful in collecting, organizing and representing the
relevant domain information.
On the other hand, it was observed that IDEF0 models can
become so concise that they cause comprehension
difficulties, and only domain experts can understand them
fully. Difficulties can also arise in communicating various
domain concerns between different domain experts.
Furthermore, it has been noticed that the domain
information can change during the course of time, such as
References
[1] J. M. Neighbors, Software construction using
components, doctoral dissertation, Univ of California,
Irvine, 1981.
[2] R. Tairas, M. Mernik, and J. Gray, Using ontologies in the
Domain Analysis of Domain Specific languages Models
in Software Engineering. Vol. 5421, 2009, pp. 332-342.
[3] M. Mernik, and J. Heering, and A.M. Sloane, When and
how to develop domain-specific languages, ACM
Computing Surveys (CSUR), Vol. 37, 2005, pp. 316-344.
[4] K. Kang, S. Cohen, J. Hess, W. Nowak, S. Peterson,
Feature-Oriented Domain Analysis (FODA) Feasibility
Study, Technical Report CMU/SEI-90-TR-21, Software
Engineering Institute, Carnegie Mellon University,
Pittsburgh, PA, 1990.
[5] D. M. Weiss, C. T. R. Lai, Software Product-Line
Engineering: A Family Based Software Development
Process, Addison-Wesley Longman Publishing Co., Inc.
Boston, MA, USA, 1999.
[6] W. Frakes, R. Prieto-Diaz, C. Fox, DARE: Domain
Analysis and Reuse Environment, Annals of Software
Engineering, Vol. 5, 1998, pp. 125-151.
[7] J.-P. Tolvanen, Keeping it in the family, Application
Development Advisor, 2002.
[8] P.S. DeWitte, R.J. Mayer, and M.K. Painter, IDEF family
of methods for concurrent engineering and business reengineering applications, Knowledge Based Systems.
College Station, TX, USA, 1992.
[9] FIPS, "Integration Definition for Function Modeling
(IDEF0)", Federal Information Processing Standards
Publication 183, Springfield, VA, USA, 1992.
[10] A. Presley, and D. Liles, The Use of IDEF0 for the
Design and Specification of Methodologies, Proceedings
of the 4th Industrial Engineering Research Conference,
Nashville, USA, 1995.
[11] C. L. Ang, R. K. Gay, L. P. Khoo, and M. Luo, A
knowledge-based Approach to the Generation of IDEF0
Models, International Journal of Production Research,
Vol. 35, 1997, pp. 1385-1412.
[12] Eclipse Modeling Framework (EMF), http://www.eclipse.org/emf
[13] Essential MOF (EMOF) as part of the OMG MOF 2.0
specification, http://www.omg.org/docs/formal/06-0101.pdf.
Abstract
Collaborative systems are currently being adopted by
many organizations, reflecting the fact that collaboration
is not just a catch phrase or fad but truly an essential shift
in the way technology delivers value to businesses of
various natures. Schools are the place for handling information
and knowledge and, in most developed countries, the
Internet has been a source of information as well as a
tool for teaching and learning. Thus, it is crucial to
have a transformation in our education field to allow
new generations to become competent in the use of
technology. The purpose of this paper is to examine a
technique, known as Think-Pair-Share, that is able to
enhance the collaborative learning process. This study
also aims at proposing a collaborative system that will
apply the Think-Pair-Share technique to ease the
collaborative process between teachers and students.
The CETLs project is
meant to address the support for collaborative learning
and the establishment of shared understanding among
students and teachers. This paper also introduces a
collaborative framework for CETLs which adapts the
use of Think-Pair-Share in a collaborative
environment.
1. Introduction
2.
Related Research
2.1
Collaborative Activities
(Figure: collaborative activities. The educator acts as a collaborative facilitator for individual students; collaborative learning occurs among students within groups, both in and outside class, and groups work as a team.)
IJCSI International Journal of Computer Science Issues, Vol. 7, Issue 5, September 2010
ISSN (Online): 1694-0814
www.IJCSI.org
Table 1: Collaborative Learning Features [table body not recovered in extraction]
2.2 Supporting Tools

Synchronous tools:
- Audio conferencing
- Video conferencing
- Instant messaging
- Web conferencing
- Chat
- Whiteboards

Asynchronous tools:
- Discussion boards
- Links
- E-mail
- Calendar
- Group announcements
- Surveys and polls

Document management:
- Resource library
- Upload / download

Collaborative process
2.3
Collaborative Techniques
2.3.1 Fishbowl
In this technique, a team of three or four students works on a problem or exercise while, at the same time, other teams of three or four observe them. In particular, the first teams work on seeking other points of view, listening to and paraphrasing ideas, and other communication skills while solving the given problem. The second teams focus their attention on the team dynamic and make sure they are prepared to discuss how well or poorly the first teams worked together to solve the problem. After the given duration of time, the class discusses what did and did not happen during the activity [9].
2.3.2 Jigsaw
The Jigsaw technique begins with pair preparation, where each pair has a subject to study. The students must read and summarize their material and plan how to teach it to the rest of their own initial group. Then, new pairs of students are formed. Typically one student listens to the material presented by the other and suggests improvements; then the roles are exchanged. The cooperative goal is that all students know all parts of the educational material, so all of the students must teach and learn. While students work, the teacher moves from group to group to observe the work and assist the students in their processes. At the end of the session, students' learning must be evaluated using an individual test on all lesson topics [10].
2.3.4 Think-Pair-Share
This technique involves sharing with a partner, which enables students to assess new ideas and, if necessary, clarify or rearrange them before presenting them to the larger group [12]. A detailed explanation of the Think-Pair-Share technique is given in Section 4.
Table 2: Comparing the Five Existing Systems with Collaborative Learning Features
[Flattened in extraction; recoverable structure: the systems compared are WebICL, CoVis, VLE, CoMMIT, and the GREWP tool. The CL features are: user (student/teacher) member login; group formation (group joining, group activity); synchronous communication tools (audio conferencing, video conferencing, chat, instant messaging); asynchronous communication tools (discussion boards/forums, whiteboards/editor, calendar, surveys/polls); document management (resource library, upload/download); and collaborative technique (N/A for four systems; paired annotations for the GREWP tool). The per-system cell markings were not recovered.]
3. What is CETLs

Since CETLs applies the Think-Pair-Share technique, it allows the students to think individually, interact with their pair, and share the information with all the students and their teacher. This technique helps students improve and enhance their knowledge by sharing information, ideas and skills [19]. It educates students to be active participants during the learning process rather than passive learners. Implementing CETLs at school will ease the work of organizing students' and teachers' activities, such as uploading and downloading assignments and distributing quizzes, marks and comments, as well as helping teachers with their time management, since all the work and activities can be done online.
3.1
3.1.2 To Enhance Learning with Technology

3.1.3 Introducing a Student-centered Approach
4. Think-Pair-Share: A Collaborative
Technique For CETLs
After reviewing some common collaborative techniques in Section 2.3, the Think-Pair-Share technique was chosen for CETLs for several reasons. Think-Pair-Share is a cooperative learning technique described as a multi-mode discussion cycle in which students listen to a question or presentation, have time to think individually, talk with each other in pairs, and finally share responses with the larger group [22]. It is a learning technique that provides processing time and builds in wait time, which enhances the depth and breadth of thinking [23]. Using the Think-Pair-Share technique, students think of rules that they share with partners and then with classmates in a group [22]. The general idea of Think-Pair-Share is to have the students independently think about or solve a problem quietly, then pair up and share their thoughts or solution with someone nearby. Every student should be prepared for the collaborative activities: working with a pair, brainstorming ideas, and sharing their thoughts or solution with the rest of the collaborators. Indirectly, this technique lets the team learn from each other. When applying this technique, there is a wait-time period for the students [24]. The use of a timer gives all students the opportunity to discuss their ideas. At this knowledge-construction stage, the students find out what they do and do not know, which is very valuable for them. Normally this active process is not widely practiced during traditional lectures. Teachers have time to think as well, and they are more likely to encourage elaboration of original answers and to ask more complex questions [25]. The Think-Pair-Share technique also enhances students' oral communication skills, as they have ample time to discuss their ideas with one another; the responses received are therefore often more intellectually concise, since students have had a chance to
[Table: Think-Pair-Share description — What? Why? How? — body not recovered in extraction.]
5.
5.1
5.2
5.3
CETLs Interfaces
[Figure: individual working space (the Think stage).]
Fig. 6 The Pair Stage in CETLs (a pop-up window displays the conversation between the pair).
To fulfil the concept of Think-Pair-Share, each of the three stages of think, pair, and share is provided with a timer: the supervisor/teacher is able to set the time for each task, while the learners need to conclude their answers within the specified time. To further enhance this educational system, CETLs offers not only the think, pair, and share features but also other learning features to support the teaching and learning activities, including class management, assignment management, notes/assignment upload and download, and e-mail.
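The timed three-stage cycle described above can be sketched as follows. This is an illustrative outline only: the stage callback, the durations, and the `wait` hook are our own names, not part of CETLs' actual (web-based) implementation.

```python
import time

# Hypothetical sketch of the timed Think-Pair-Share cycle: the teacher
# sets a duration for each stage, and learners must conclude their
# answers within that window.
def run_think_pair_share(durations, on_stage, wait=time.sleep):
    """Run the three stages in order. `durations` maps each stage to the
    number of seconds the supervisor/teacher allotted for it."""
    for stage in ("think", "pair", "share"):
        on_stage(f"{stage}: {durations[stage]} s to conclude an answer")
        wait(durations[stage])  # learners must finish within this window

log = []
run_think_pair_share({"think": 2, "pair": 3, "share": 2}, log.append,
                     wait=lambda s: None)  # no real waiting in this demo
print(log[0])
```

In a real system the countdown would run asynchronously in the browser; the sequential loop here only illustrates the fixed think → pair → share ordering and the per-stage time limit.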
A computer-supported collaborative learning environment is a good opportunity for learning communities to leverage new technology. Thus,
6. Conclusions

REFERENCES
[1]-[15] [Entries lost in extraction.]
[16] G. E. Lautenbacher, J. D. Campbell, B. B. Sorrows, and D. E. Mahling, "Supporting Collaborative, Problem-Based Learning Through Information System Technology", in Frontiers in Education Conference, 27th Annual Conference: Teaching and Learning in an Era of Change, Proceedings, 1997, Vol. 3, pp. 1252-1256.
[17]-[20] [Entries lost in extraction.]
Abstract
Many emerging applications in mobile ad hoc networks involve group-oriented communication. Multicast is an efficient way of supporting group-oriented applications, especially in mobile environments with limited bandwidth and limited power. To use such applications in an adversarial environment such as the military, it is necessary to provide secure multicast communication. Key management is the fundamental challenge in designing secure multicast communications, and multicast key distribution has to overcome the challenging "1 affects n" phenomenon. Multicast group clustering is the best solution to this problem. This paper proposes an efficient dynamic clustering approach for QoS-based secure multicast key distribution in mobile ad hoc networks. Simulation results demonstrate that the dynamic clustering approach achieves better system performance in terms of QoS metrics such as end-to-end delay, energy consumption, key delivery ratio and packet loss rate under varying network conditions.
Keywords: Mobile Ad hoc Networks, Multicast, Secure Multicast Communication, QoS Metrics.
1. Introduction
A MANET (Mobile Ad Hoc Network) is an autonomous collection of mobile users that offers an infrastructure-free architecture for communication over a shared wireless medium. It is formed spontaneously without any preplanning. Multicasting is a fundamental communication paradigm for group-oriented communications such as video conferencing, discussion forums, frequent stock updates, video on demand (VoD), pay-per-view programs, and advertising. The combination of an ad hoc environment with multicast services [1, 2, 3] induces new challenges for the security infrastructure. To secure multicast communication, security services such as authentication, data integrity, access control and group confidentiality are required, among which group confidentiality is the most important
Security requirements:
Forward secrecy: Users that have left the group should not have access to any future key. This ensures that a member cannot decrypt data sent after it leaves the group.
Backward secrecy: A new user joining the session should not have access to any old key. This ensures that a member cannot decrypt data sent before it joins the group.
Non-group confidentiality: Users that are never part of the group should not have access to any key that can decrypt multicast data sent to the group.
Collusion freedom: No set of fraudulent users should be able to deduce the currently used key.
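A toy sketch of why rekeying on every membership change yields forward and backward secrecy follows. This is a naive centralized key manager for illustration only, not the clustering scheme proposed in this paper; it also shows the "1 affects n" cost, since every member must receive each fresh TEK.

```python
import os

# Minimal illustrative group key manager: regenerating the traffic
# encryption key (TEK) on every join/leave means a leaving member
# cannot read future traffic (forward secrecy) and a joining member
# cannot read past traffic (backward secrecy).
class GroupKeyManager:
    def __init__(self):
        self.members = set()
        self.tek = os.urandom(16)   # current traffic encryption key
        self.history = [self.tek]   # past TEKs, kept only for illustration

    def _rekey(self):
        self.tek = os.urandom(16)   # every member must now receive this
        self.history.append(self.tek)

    def join(self, member):
        self.members.add(member)
        self._rekey()               # backward secrecy for the newcomer

    def leave(self, member):
        self.members.discard(member)
        self._rekey()               # forward secrecy against the leaver

gkm = GroupKeyManager()
gkm.join("a"); k1 = gkm.tek
gkm.join("b"); k2 = gkm.tek
gkm.leave("a"); k3 = gkm.tek
assert k1 != k2 and k2 != k3        # each membership change forces a fresh TEK
```

Because each rekey must reach all n remaining members, frequent joins and leaves make this naive design expensive; clustering the multicast group, as proposed here, is one way to localize that cost.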
The process of updating the keys and distributing them to the group members is called the rekeying operation. A critical problem with any rekeying technique is scalability: the rekeying process should be done after each membership change, and if membership changes are frequent, key management will require a large number of key exchanges per unit time in order to maintain both forward and backward secrecy. The number of TEK update messages in the case of frequent join and leave operations affects several QoS characteristics, as follows:
Reliability:
Packet drop rate: The number of TEK update messages in the case of frequent join and leave operations induces high packet loss rates and reduces the key delivery ratio, which makes the scheme unreliable.
Quality of service requirements:
1-affects-n: If a single membership changes in the group, it affects all the other group members. This happens typically when a single membership change requires all group members to commit to a new TEK.
Energy consumption: This calls for minimizing the number of transmissions needed to forward messages to all the group members.
End-to-end delay: Many applications built over multicast services are sensitive to the average delay in key delivery. Therefore, any key distribution scheme should take this into consideration and minimize the impact of key distribution on the delay of key delivery.
Key delivery ratio: This measures the number of successful key transmissions to all group members without any packet loss during multicast key distribution.
Thus a QoS-based secure multicast key distribution scheme in a mobile ad hoc environment should focus on security, reliability and QoS characteristics.
To overcome these problems, several approaches propose multicast group clustering [9, 10, 11]. Clustering is
2. Related Work
Key management approaches can be classified into three
classes: centralized, distributed or decentralized. Figure 2
illustrates this classification.
3. Proposed Methodology
The methodology of efficient CBMT is proposed in order to assure reliable QoS-based secure multicast key distribution for mobile ad hoc networks. The specific contributions are structured in four phases.
Phase I: Integration of OMCT with DSDV [27]
- Eases the election of the LC
- Improves the key delivery ratio
Phase II: Enhancement of OMCT with DSDV [28]
- Reduces end-to-end delay
- Consumes less energy
Phase III: CBMT with MDSDV [29]
- Improves reliability
- Reduces the packet drop rate
Phase IV: Efficient CBMT
- Improves the key delivery ratio
- Consumes less energy
- Reduces end-to-end delay
- Reduces the packet drop rate
[Table: simulation parameters (labels partly lost in extraction) — density of members; network surface (1000 m × 1000 m, 1500 m × 1500 m, 2000 m × 2000 m); 10 km/h (2.77 m/s); 20 seconds; 200 seconds; IEEE 802.11; random waypoint model; Mobility Aware MDSDV.]
5. Conclusion
References
Abstract
Breast cancer continues to be a significant public health
problem in the world. Early detection is the key for
improving breast cancer prognosis. The CAD systems can
provide such help and they are important and necessary
for breast cancer control. Microcalcifications and masses
are the two most important indicators of malignancy, and
their automated detection is very valuable for early breast
cancer diagnosis. The main objective of this paper is to
detect, segment and classify the tumor from mammogram
images that helps to provide support for the clinical
decision to perform biopsy of the breast. In this paper, a
classification system for the analysis of mammographic
tumor using machine learning techniques is presented.
CBR uses a philosophy similar to the one humans sometimes use: it tries to solve new cases of a problem by using previously solved cases. The paper focuses on segmentation and classification by machine learning and a problem-solving approach, with a theoretical review and detailed explanations. The paper also describes theoretical methods for weighting feature relevance in a case-based reasoning system.
Key words: Digital Mammogram, Segmentation, Feature
Extraction and Classification.
1. Introduction
Breast cancer is the most common cancer of western
women and is the leading cause of cancer-related death
among women aged 15-54 [14]. Survival from breast
cancer is directly related to the stage at diagnosis. Earlier
section 6.
[Figure: the machine learning approach — image preprocessing, image segmentation.]
3.1 Digitization
First, the X-ray mammograms are digitized with an image resolution of 100 × 100 μm² and 12 bits per pixel by a laser film digitizer. To detect microcalcifications on the mammogram, the X-ray film is digitized at a high resolution. Because small masses are usually larger than 3 mm in diameter, the digitized mammograms are decimated to a resolution of 400 × 400 μm² by
3.2 Preprocessing
Preprocessing is an important issue in low-level image processing. The underlying principle of preprocessing is to enlarge the intensity difference between objects and background and to produce reliable representations of breast tissue structures. An effective method for mammogram enhancement must aim to enhance the texture and features of tumors. The reasons are: (1) the low contrast of mammographic images; (2) masses are hard to read in a mammogram because they are highly connected to the surrounding tissues. The enhancement methods are grouped into the global histogram modification approach and the local processing approach. The current work is carried out in the global histogram modification approach.
Table: Preprocessing approaches
- Global histogram modification approach: re-assigns the intensity values of pixels to make the new distribution of intensities as uniform as possible. Advantage: effective in enhancing an entire image with low contrast.
- Local approach: feature-based, or using nonlinear mapping locally. Advantage: effective in local texture enhancement.
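The global histogram modification approach described above is, in essence, histogram equalization. A minimal sketch in plain NumPy follows; the function name and the synthetic low-contrast patch are illustrative, not part of the paper's implementation.

```python
import numpy as np

# Global histogram modification sketch: re-assign pixel intensities so
# their distribution becomes as uniform as possible, by mapping each
# gray level through the normalized cumulative histogram (CDF).
def equalize_histogram(img, levels=256):
    """img: 2-D array of integer gray levels in [0, levels)."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]                              # normalized CDF in [0, 1]
    lut = np.round((levels - 1) * cdf).astype(img.dtype)
    return lut[img]                                  # map every pixel through the CDF

# A synthetic low-contrast patch: values crowded into [100, 110)
patch = np.random.default_rng(0).integers(100, 110, size=(64, 64))
out = equalize_histogram(patch)
print(patch.min(), patch.max(), "->", out.min(), out.max())
```

After equalization the crowded gray levels are spread toward the full [0, 255] range, which is exactly the "effective in enhancing the entire image with low contrast" behavior claimed in the table.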
3.3 Segmentation
After preprocessing, the next stage is to separate the suspicious regions that may contain masses from the background parenchyma; that is, to partition the mammogram into several non-overlapping regions, extract regions of interest (ROIs), and locate the suspicious mass candidates from the ROIs. A suspicious area is an area that is brighter than its surroundings, has almost uniform density, has a regular shape with varying size, and has fuzzy boundaries. The segmentation methods do not need to be exacting in finding mass locations, but the result of segmentation is supposed to
[Table: tumor feature descriptors — texture features (skewness, kurtosis, contrast, standard deviation, intensity of the tumor area) and shape features (area, length, breadth, circularity, compactness, convex perimeter, roughness); the formulas were lost in extraction. Figure: the CBR cycle — a new case is matched against the case base to give a retrieved case; the solved case is revised using domain knowledge and retained as an acquired case.]

Similarity(Case_x, Case_y) = ( Σ_{i=1..F} w_i · |x_i − y_i|^r )^(1/r)    (1)
where Case_x and Case_y are the two cases whose similarity is computed; F is the number of features that describe a case; x_i and y_i represent the value of the i-th feature of Case_x and Case_y respectively; and w_i is the weight of the i-th feature. In this study we test the Minkowski metric for three different values of r: Hamming distance (r = 1), Euclidean distance (r = 2), and cubic distance (r = 3). This similarity function needs the feature relevance (w_i) to be computed for each problem to be solved. Given an accurate weight setting, a case-based reasoning system can increase its prediction accuracy rate. We also use the Clark and cosine distances, both of which are based on the distance concept and also use feature weights. Sometimes human experts cannot adjust the feature relevance; an automatic method can overcome this limitation.
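Eq. (1) translates directly into code. The sketch below is an illustrative implementation of the weighted Minkowski metric (the function and variable names are ours); the feature weights w_i would come from a method such as RSWeight or the sample correlation described next.

```python
# Weighted Minkowski metric of Eq. (1): r=1 gives the Hamming/Manhattan
# distance, r=2 the Euclidean distance, and r=3 the cubic distance.
def minkowski_distance(case_x, case_y, weights, r=2):
    s = sum(w * abs(x - y) ** r
            for w, x, y in zip(weights, case_x, case_y))
    return s ** (1.0 / r)

# Two small illustrative cases with three weighted features.
x, y, w = [1.0, 2.0, 3.0], [2.0, 2.0, 5.0], [0.5, 0.3, 0.2]
print(minkowski_distance(x, y, w, r=1))   # 0.5*1 + 0.3*0 + 0.2*2 = 0.9
```

With an accurate weight vector, features that matter for the diagnosis dominate the distance, which is why the weighting methods of Section 5.1 feed directly into this retrieval step.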
5.1 Feature Selection Based on Rough Set Theory
This paper presents a review of a weighting method based on the Rough Sets theory introduced by Pawlak [10]. It is a single weighting method (RSWeight) that computes the feature weights from the initial set of training cases in the CBR system. We also introduce a weighting method that computes the sample correlation between the features and the classes that the cases may belong to. The idea of the rough set consists of the approximation of a set by a pair of sets, called the lower and the upper approximation of this set. In fact, these approximations are inner and closure operations in a certain topology generated by the available data about elements of the set. The main research trends in Rough Sets theory which try to extend the capabilities of reasoning systems are: (1) the treatment of incomplete knowledge; (2) the management of inconsistent pieces of information; (3) the manipulation of various levels of representation, moving from refined universes of discourse to coarser ones and conversely.
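The lower/upper approximation idea can be made concrete with a small sketch. The toy universe, partition, and target set below are our own illustrative choices, not data from the paper.

```python
# Rough-set approximations: given the partition of the universe induced
# by an equivalence relation R, approximate a target set X by the
# blocks fully contained in X (lower approximation) and the blocks that
# merely intersect X (upper approximation).
def approximations(partition, target):
    target = set(target)
    lower = {e for b in partition if set(b) <= target for e in b}
    upper = {e for b in partition if set(b) & target for e in b}
    return lower, upper

blocks = [{1, 2}, {3, 4}, {5}]        # partition of U = {1..5} under R
lo, up = approximations(blocks, {1, 2, 3})
print(sorted(lo), sorted(up))          # [1, 2] [1, 2, 3, 4]
```

The target {1, 2, 3} is not expressible as a union of blocks, so it is "rough": the lower approximation {1, 2} is its inner bound and the upper approximation {1, 2, 3, 4} its closure, exactly the pair of sets described above.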
We compute from our universe (the finite set of objects that describe our problem, the case memory) the concepts (objects or cases) that form partitions of that universe. The union of all the concepts makes up the entire universe. Using all the concepts we can describe all the equivalence relations (R) over the universe. Let an equivalence relation be a set of features that describe a specific concept. The universe and the relations form the knowledge base, defined as KB = (U, R). Every relation over the universe is an elementary concept in the knowledge base [10]. All the concepts are formed by a set of equivalence relations that describe them, so we search for the minimum set of equivalence relations that defines the same concept as the initial set. The set of minimum equivalence
Sample_Correlation(x_i, z) = (1 / (N − 1)) Σ_{j=1..N} ((x_ij − x̄_i) / S_{x_i}) · ((z_j − z̄) / S_z)    (2)
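Eq. (2) is the ordinary sample correlation between one feature and the class variable across the N training cases; a feature that tracks the class closely gets a weight near ±1. A direct sketch, with illustrative data, is:

```python
import math

# Sample correlation of Eq. (2): weight a feature by how strongly it
# co-varies with the class variable z over the N training cases.
def sample_correlation(xs, zs):
    n = len(xs)
    mx, mz = sum(xs) / n, sum(zs) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / (n - 1))
    sz = math.sqrt(sum((z - mz) ** 2 for z in zs) / (n - 1))
    return sum((x - mx) * (z - mz)
               for x, z in zip(xs, zs)) / ((n - 1) * sx * sz)

# A feature that tracks the class perfectly gets weight ~1.
print(sample_correlation([1, 2, 3, 4], [10, 20, 30, 40]))
```

The absolute value of this quantity can then serve as the weight w_i in the similarity function of Eq. (1).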
6. Experimental Results
Currently the project is in the initial (prototype) stage, and the first phase of implementation has been done in MATLAB. Forty-six X-ray mammograms were taken for testing the method. The mammograms were taken from the patient files in the free mammogram database (MIAS). In addition, 10 mammograms were used for training the classifier. The 46 mammograms include 15
Conclusion
The paper provides the methodology, with partial results for segmentation, and explains theoretically how mammogram tumor classification is performed through the case-based reasoning method. The first stage, mammogram mass segmentation, is shown in this paper; the second stage is under implementation, so the conceptual framework of the classification method is described. The infrastructure presented in this paper, when successfully implemented, would have an immense impact in the area of computer-aided diagnosis systems. In future the methodology can be applied in a variety of medical image applications.
Acknowledgments
I would like to thank the School of Computer Science and the Institute of Postgraduate Studies, Universiti Sains Malaysia, for supporting the progress of my research activities.
References
[1] T. Kohonen, Self-Organization and Associative Memory, Springer-Verlag, Heidelberg, 1998.
[2] E. L. Hall, Computer Image Processing and Recognition, Academic Press, New York, 1978.
[3] R. E. Woods, Digital Image Processing, Addison-Wesley, Reading, 1992.
[4] T. Kapur, Model-Based Three-Dimensional Medical Image Segmentation, MIT, 1992.
[5] H. S. Sheshadri and A. Kandaswamy, "Detection of breast cancer by mammogram image segmentation", JCRT Journal, pp. 232-234, 2005.
[6] S. M. Lai, X. Li, and W. F. Bischof, "On techniques for detecting circumscribed masses in mammograms", IEEE Trans. Med. Imaging, Vol. 8, 1989, pp. 377-386.
[7] H. D. Li, M. Kallergi, L. P. Clarke, V. K. Jain, and R. A. Clark, "Markov random field for tumor detection in digital mammography", IEEE Trans. Med. Imaging, Vol. 14, 1995, pp. 565-576.
[8] H. Kobatake, M. Murakami, H. Takeo, and S. Nawano, "Computerized detection of malignant tumors on digital mammograms", IEEE Trans. Med. Imaging, Vol. 18, 1999, pp. 369-378.
[9] N. Otsu, "A threshold selection method from gray-level histograms", IEEE Trans. Systems, Man, and Cybernetics, Vol. SMC-9, 1979, pp. 62-66.
[10] Z. Pawlak, Rough Sets: Theoretical Aspects of Reasoning about Data, Kluwer Academic Publishers, 1991.
[11] Y. Jiang, R. M. Nishikawa and J. Papaioannou, "Requirement of Microcalcification Detection for
44
Abstract
To assist the judgment of human operators, a computer-aided drug delivery system with adaptive predictive control is suggested. The assisting system can predict future responses of a patient to drug infusions, calculate optimal inputs, and present those values on a monitor. Even under a sudden disturbance such as bleeding, human operators of the computer-aided system were able to control arterial blood pressure. To improve the computer-aided system, future studies will be required to consider methods for emergency warning and the handling of multiple drug infusions.
Keywords: Adaptive Predictive Control, Computer-aided System, Neural Networks, Arterial Blood Pressure, System Evaluation
1. Introduction
2. Control System
[Fig. 1: Block diagram of the computer-aided system — the target value r and error e(t+i) feed a predictive loop (2); a learning loop (1) with delay trains the neural network from the measured BP(t); the human operator reads the calculated optimal input and the predictive response BP_NN(t+i) on the operation panel, determines the infusion rates u(t), and the infusion pump acts on the patient; a sensor returns the blood pressure BP(t), and the neural network produces the model response BP_NN(t+i).]

E = (1/2) [BP − BP_NN]²    (2)

w* = w + Δw,  Δw = −K_n (∂E/∂w)    (3)

2.2 NN Output

[Eq. (1), the network output expression, was not recovered in extraction.]

Q(t) = Σ_{i=1} [ r(t+i) − BP_NN(t+i) ]    (4)  [the summand's exponent was lost in extraction]
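The learning loop of Eqs. (2)-(3) can be illustrated on a one-weight linear model standing in for the neural network. The gain, input, and target values below are made up for the sketch; the paper's network is multi-layer and trained on measured blood pressure.

```python
# Toy sketch of the learning loop: the model weight is adjusted by
# gradient descent on E = (1/2)*(BP - BP_NN)^2, i.e. w* = w - Kn*dE/dw.
# A one-weight linear "network" BP_NN = w*u stands in for the paper's NN.
def learn_step(w, u, bp, kn=0.01):
    bp_nn = w * u                   # model response
    grad = -(bp - bp_nn) * u        # dE/dw for E = 0.5*(bp - bp_nn)**2
    return w - kn * grad            # Eq. (3): w* = w - Kn * dE/dw

w = 0.0
for _ in range(200):                # repeated trials drive BP_NN toward BP
    w = learn_step(w, u=2.0, bp=10.0)
print(round(w * 2.0, 2))            # model response approaches the measured 10.0
```

Each pass reduces the squared prediction error of Eq. (2), so after training the model response tracks the measured blood pressure and the predictive loop can use it to look ahead.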
3. System Evaluation

3.1 Participants

BP(t) = K [1 − exp(−(t − L)/T)],  t ≥ L    (5)

BP_model(t) = Σ_τ g(τ) · ΔT · u(t − τ)    (6)

where g(τ) is the unit impulse response and ΔT the sampling interval.

Fig. 2 The average step response changed from baseline during a 5-min NE infusion (a) and the unit impulse response (b). [Plotted curves: (a) MAP(t) = K(1 − exp(−(t − L)/T)); (b) g(t)·T = (K/T)·exp(−(t − L)/T); BP (mmHg) against time, 0-300 s.]
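Eqs. (5) and (6) can be sketched numerically: a first-order-plus-dead-time step response and the discrete convolution model built from its impulse response. The parameter values below are illustrative, not the identified patient model from the paper.

```python
import math

# Sketch of Eqs. (5)-(6) with made-up parameters:
K, T, L = 20.0, 60.0, 10.0          # gain (mmHg), time constant, dead time (s)
DT = 1.0                            # sampling interval (s)

def step_response(t):
    # Eq. (5): BP(t) = K * (1 - exp(-(t - L)/T)) for t >= L
    return 0.0 if t < L else K * (1.0 - math.exp(-(t - L) / T))

def impulse_response(t):
    return 0.0 if t < L else (K / T) * math.exp(-(t - L) / T)

def model_response(u, t):
    # Eq. (6): BP_model(t) = sum over tau of g(tau) * DT * u(t - tau)
    return sum(impulse_response(tau * DT) * DT * u[t - tau]
               for tau in range(t + 1))

u = [1.0] * 300                     # constant unit infusion
print(round(model_response(u, 299)), round(step_response(299.0)))  # both near K = 20
```

For a constant unit input the convolution of Eq. (6) reproduces the step response of Eq. (5), up to discretization error, which is how the model predicts BP for arbitrary infusion profiles.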
(B) BP control
Using the model response and the suggested control system, the BP control tasks were performed. The objectives of the first and second tasks were to study the effect of beginners' initial learning on BP control. Target values were set in two steps: +20 mmHg (60-400 s) and +10 mmHg (410-720 s). Although the actual control time was 720 s, each single trial in this study was performed in an abridged form of 288 s (4 s × 72 times), meaning the total thinking time for the selection of drug infusion rates.
The purpose of the third task was to study the accuracy of the response to an unexpected emergency (e.g., a sudden change induced by bleeding). Target values were the same as those in the first and second tasks. A large disturbance of -30 mmHg was added to the BP response in the last half of the task (360-720 s).
3.3 Procedures
g(t) = (K/T) exp(−(t − L)/T)
Fig. 3 BP control based on APC-NN. The third trial had an acute and
unknown disturbance of -30 mmHg.
[Figure: BP (mmHg) and infusion rate (μg/kg/min) traces for the 1st-3rd trials, 0-720 s; caption lost in extraction.]
4. Results
The results for automatic control based on APC-NN are shown in Fig. 3. An overshoot was observed in the initial adjustment of BP to the target value of +20 mmHg; however, the BP outputs converged on the target values.
Table: MAP (mmHg) for each subject and trial

Subject   Non-assist group           Assist group
No.       1st     2nd     3rd        1st     2nd     3rd
1         3.53    2.99    4.93       3.93    3.65    2.98
2         3.88    2.34    4.02       4.47    2.39    3.38
3         3.06    1.39    2.18       4.62    3.03    1.97
4         4.79    4.09    3.71       2.15    2.69    2.60
5         3.94    3.18    4.33       2.98    2.75    2.43
6         2.32    2.45    3.10       2.06    3.30    2.83
7         2.16    2.42    3.68       5.38    4.34    4.24
Average   3.38    2.70    3.71       3.66    3.17    2.92
(SD)      (0.94)  (0.84)  (0.88)     (1.29)  (0.67)  (0.73)
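As a consistency check on the table above, the reported column statistics can be re-derived from the per-subject values; here we use the non-assist group's first trial.

```python
import math

# Re-derive the reported average and sample standard deviation for the
# non-assist group, 1st trial, from the seven per-subject MAP values.
values = [3.53, 3.88, 3.06, 4.79, 3.94, 2.32, 2.16]
mean = sum(values) / len(values)
sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (len(values) - 1))
print(round(mean, 2), round(sd, 2))   # 3.38 0.94, matching the table
```

The match (3.38, SD 0.94) confirms that the reported averages use the sample (n − 1) standard deviation; the other five columns check out the same way.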
5. Discussion
Fig. 5 Typical example of BP control in the computer-aided group.
6. Conclusion
The effectiveness of a computer-aided drug delivery system based on APC-NN was assessed from the viewpoint of the cognitive and learning abilities of beginners. A positive effect of the computer-aided system was observed in the case of an acute disturbance. In future studies, the assisting system will need an effective fusion of the computer's ability to quickly search for optimal inputs with the careful and delicate control of humans.
Acknowledgment
This study was funded by a Grant-in-Aid for Young
Scientists (B) from the Ministry of Education, Culture,
Sports, Science and Technology of Japan (KAKENHI,
22700466).
References
[1] T. Suzuki, and T. Konishi, JJSCA, Vol. 29, No. 4, pp. 406-406, 2009.
[2] A. M. Fields, K. M. Fields, and J. W. Cannon, Closed-loop
systems for drug delivery, Curr Opin Anaesthesiol, Vol. 21,
No. 4, 2008, pp. 446-451.
Aalto University
Espoo, Finland
Abstract
E-business refers to the utilization of information and
communication technologies (ICT) in support of all the activities
of business. The standards developed for e-business help to
facilitate the deployment of e-business. In particular, several
organizations in e-business sector have produced standards and
representation forms using XML. It serves as an interchange
format for exchanging data between communicating applications.
However, XML says nothing about the semantics of the used
tags. XML is merely a standard notation for markup languages,
which provides a means for structuring documents. Therefore, XML-based e-business software is developed by hard-coding. Hard-coding has proven to be a valuable and powerful way of exchanging structured and persistent business documents. However, if we use hard-coding in the case of non-persistent documents and non-static environments, we will encounter problems in deploying new document types, as this requires a long-lasting standardization process. Replacing existing hard-coded e-business systems with open systems that support semantic interoperability, and which are easily extensible, is the topic of this article. We first consider XML-based technologies and
standards developed for B2B interoperation. Then, we consider
electronic auctions, which represent a form of e-business. In
particular, we represent how semantic interoperability can be
achieved in electronic auctions.
Keywords: B2B, Open Systems, Electronic Auctions, Semantic
Interoperability, Web Services, Ontologies.
1. Introduction
Electronic business, or e-business for short, refers to a wide
range of online business activities for products and
services. It is usually associated with buying and selling
over the Internet, or conducting any transaction involving
the transfer of ownership or rights to use goods or services
through a computer-mediated network.
Business-to-business (B2B) is a form of e-business. It
describes commerce transactions between businesses, such
as between a manufacturer and a wholesaler, or between a
wholesaler and a retailer. Other forms of e-business are
business-to-consumer (B2C) and business-to-government
(B2G). The volume of B2B transactions is much higher
than the volume of B2C transactions [1].
systems that are initially developed for local uses, but are
eventually expanded to participate in open environments.
Developing open systems is challenging, as the system
should scale with the number of participants
and preserve the autonomy of local heterogeneous systems
while maintaining coordination over these systems.
Web services [12] provide a methodology that supports
open, distributed systems. They are frequently application
programming interfaces (APIs) or web APIs that can be
accessed over a network, such as the Internet, and
executed on a remote system hosting the requested
services. Technically, Web services are self-describing
modular applications that can be published, located and
invoked across the Web. Once a service is deployed, other
applications can invoke it.
There are two ways of using Web services: the RPC-centric
(Remote Procedure Call centric) view and the
document-centric view [13]. The RPC-centric view treats
services as offering a set of methods to be invoked
remotely, while the document-centric view treats Web
services as exchanging documents with one another.
Although in both approaches the transmitted messages are
XML documents, there is a conceptual difference between
these two views.
In the RPC-centric view the application determines what
functionality the service will support, and the documents
are only business documents on which the computation
takes place. The document-centric view, instead, considers
documents as the main representation and purpose of the
distributed computing: each component of the
communicating system reads, produces, stores, and
transmits documents. The documents to be processed
determine the functionality of the service. Therefore, the
document-centric view corresponds better to our goal of
applying services in open environments.
4. E-Business Frameworks
The standards developed for B2B interoperation, also
called e-business frameworks, guide the
development of B2B implementations by specifying the
details of business processes, exchanged business
documents, and secure messaging.
Even though the interoperation in B2B is nowadays
usually based on Web services, it is useful to make a
classification of the interoperation/integration approaches
[14].
4.1 EDI
Electronic Data Interchange (EDI) [15, 16] refers to the
transmission of structured documents between
organizations by electronic means. These documents
generally contain the same information that would
normally be found in a paper document used for the same
organizational function. However, EDI is not confined to
business data alone but encompasses all fields, including
medicine, transport, engineering and construction.
The first B2B implementations were bilateral private
message-oriented solutions that were not based on any
standard. The need for common B2B standards
strengthened as the number of private point-to-point
solutions that companies had to maintain increased.
The development of EDI (Electronic Data Interchange)
standards for B2B began in the 1970s. The first EDI
standard versions (X12) were published in 1983; X12 is the
most commonly used EDI syntax in North America. The
next EDI standard (EDIFACT) originated in 1985 and is
the dominant EDI standard outside North America.
In the 1970s, when the development of the EDI standards
began, transmitting information was expensive. Therefore
the EDI syntax is very compact, which in turn makes
EDI documents hard to read and maintain.
However, EDI has advantages over manual business
interactions, as it reduces paper consumption, eliminates
data entry errors, and speeds up the transfer of business
documents. The newer XML/EDIFACT is an EDI format
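To illustrate the compactness of the EDI syntax against XML's verbosity, the sketch below parses a hypothetical EDIFACT-style segment and renders the same data as XML. The segment, its tag, and the XML element names are invented for illustration, not taken from a real message standard.

```python
# Illustrative comparison of a compact EDIFACT-style segment and the
# equivalent, much more verbose, XML rendering. All names are hypothetical.

def parse_segment(segment: str) -> dict:
    """Split a simple EDIFACT-style segment: TAG+comp1:comp2:...'"""
    body = segment.rstrip("'")            # drop the segment terminator
    tag, _, data = body.partition("+")    # data element separator
    components = data.split(":")          # component separator
    return {"tag": tag, "components": components}

def to_xml(parsed: dict) -> str:
    """Render the same data as XML."""
    inner = "".join(f"<c>{c}</c>" for c in parsed["components"])
    return f"<{parsed['tag']}>{inner}</{parsed['tag']}>"

seg = "QTY+21:500:PCE'"
parsed = parse_segment(seg)
print(parsed["tag"])   # QTY
print(to_xml(parsed))  # <QTY><c>21</c><c>500</c><c>PCE</c></QTY>
```

The one-line segment carries the same information as the nested XML, which is the trade-off the paragraph above describes: compactness versus readability.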
4.2 ebXML
The goal of ebXML is to provide an open XML-based
infrastructure enabling the global use of electronic
business information in an interoperable, secure and
consistent manner by all parties [17]. The objective of
ebXML is to be a global standard for governmental and
commercial organizations of all sizes to find business
partners and interact with them.
The development of ebXML started in 1999 and was
sponsored by UN/CEFACT (United Nations Centre for
Trade Facilitation and Electronic Business) and OASIS
(Organization for the Advancement of Structured
Information Standards).
The ebXML standard comprises a set of
specifications designed to meet the common business
requirements and conditions for e-business [13]. The CC
(Core Components) specification provides the way business
information is encoded in the exchanged business documents. The
BPSS (Business Process Specification Schema) is an
XML-based specification language that can be used in
defining the collaboration of the communicating business
partners. However, BPSS is quite limited in that it can
only express simple request-response protocols. In
addition, BPSS lacks formal semantics, so it
cannot be ensured that both communicating parties have
the same interpretation of the exchanged documents.
The vocabulary that is used for an ebXML specification
consists of a Process-Specification Document, a
Collaboration Protocol Profile (CPP), and a Collaborative
Partner Agreement (CPA). The Process-Specification
Document describes the activities of the parties in an
ebXML interaction. It is expressed in BPSS. The CPP
describes the business processes that the organization
4.3 RosettaNet
6. Electronic Auctions
In e-business, buyers and sellers should be able to
interoperate inside an architecture that is easy to use and
maintain [26, 27]. Electronic auctions (e-auctions)
represent one approach to achieving this goal by bringing
businesses together on the web [28, 29, 30].
An e-auction is a system for accepting bids from bidders and
computing a set of trades based on the offers according to
7. Auction Ontologies
A feature of ontologies is that, depending on the generality
level of the conceptualization, different types of ontologies are
needed. Each type of ontology has a specific role in
information sharing and exchange. For example, the
purpose of an auction ontology is to describe the concepts
of the domain in which auctions take place.
An auction ontology describes the concepts, and their
relationships, related to a variety of auction types, e.g., the
English auction, combinatorial auction, second-price
auction, sealed auction, and multi-attribute auction. To
illustrate this, a simple auction ontology is graphically
presented in Figure 1. This graphical representation is
simplified in the sense that it does not specify cardinalities,
such as whether an offer may concern one or more items.
Neither does it specify the properties of classes, such as the
identification of a bidder, the type of an auction, or the
price of an offer.
Fig. 1 A simple auction ontology: a Bidder participates in an Auction and has_placed an Offer, which concerns an Item.
Fig. 2 An instance of the auction ontology: two bidders of type Bidder (B2B Corporation and ARGO Corporation) participate in an auction e254 of type Auction and have placed offers of type Offer ($85 and $87) concerning an item p12 of type Item.
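As a minimal sketch of how the classes and relations of Figure 1 could be modeled in code, populated with the instance data of Figure 2, consider the following; the Python API itself is invented for illustration.

```python
from dataclasses import dataclass, field

# In-memory model of the auction ontology of Fig. 1: a Bidder
# participates in an Auction and has_placed Offers, each of which
# concerns an Item. The field names follow the figure; the API is
# illustrative only.

@dataclass
class Item:
    identifier: str

@dataclass
class Offer:
    value: int
    concerns: Item

@dataclass
class Bidder:
    name: str
    has_placed: list = field(default_factory=list)

@dataclass
class Auction:
    identifier: str
    participants: list = field(default_factory=list)

# Instance data of Fig. 2 (one of the two bidders):
auction = Auction("e254")
p12 = Item("p12")
b2b = Bidder("B2B Corporation")
auction.participants.append(b2b)
b2b.has_placed.append(Offer(85, p12))

print(auction.participants[0].has_placed[0].value)  # 85
```

Note that, like the figure, this sketch leaves cardinalities and further class properties unspecified.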
8. Software Architecture
By software architecture we refer to the structure that
comprises software components, the externally visible
properties of those components, and the relationships
between them [45]. The main goal of the auction software
architecture is to allow the reconsideration and redesign of
auction processes. This goal is important because the
marketplace may be forced to rapidly change the existing
auction processes, or to develop new ones, to better utilize
new information technology. In order to facilitate
changes to the auction processes we can use workflow
technology [46] in implementing the auction engine,
which coordinates the auction processes.
The auction system has three types of users: buyers, sellers
and the auction system administrator. Figure 3 illustrates
the communication structure between the users and the
system as well as the communication between the
components of the system.
The ontology managers are implemented by knowledge
management systems, which are computer-based systems
for managing knowledge (ontologies) in organizations.
Fig. 3 The communication structure of the auction system: the buyer's and seller's bidders interact through auction agents; the auction agents, the ontology managers, and the auction engine (a BPEL engine) of the marketplace communicate through WS interfaces; the system administrator manages the marketplace.
9. Auction Processes
We model auctions as business processes. A business
process is a series of tasks performed one after another over time
[13]. Depending on the specific business process, its tasks
Fig. 4 The auction process as message exchanges between the buyer and the marketplace: the marketplace receives an initiation request, sends an offer request, and receives offers; the buyer receives the offer request, sends an offer, and receives the best offer.
http://www.it.lut.fi/ontologies/auction_ontology.
Using this URL as the prefix of an XML-element we can
give globally unique names for auction models and their
elements. For convenience, however, it is useful to specify
an abbreviation for the URL, e.g., ao. This can be
specified as follows:
xmlns:ao="http://www.it.lut.fi/ontologies/auction_ontology"
Now, for example, the element <ao:bidder> is a globally
unique name of a class of the auction ontology. Hence,
for example, the previous natural-language sentence can be
bound to an ontology and presented in RDF as follows:
<rdf:RDF
  xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
  xmlns:xsd="http://www.w3.org/2001/XMLSchema#"
  xmlns:ao="http://www.lut.fi/ontologies/auction_ontology#">
  <rdf:Description rdf:about="OF44">
    <rdf:type rdf:resource="&ao;offer"/>
    <ao:value rdf:datatype="&xsd;integer">85</ao:value>
    <ao:item rdf:resource="p12"/>
    <ao:bidder rdf:resource="B2B-Corporation"/>
  </rdf:Description>
</rdf:RDF>
Now, semantic interoperation can be carried out by
including this RDF description in the body part of a SOAP
message, and by sending the offer to the Web service of
the marketplace.
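As a sketch of how the receiving Web service might read the offer out of such a message, the snippet below parses a variant of the listing with Python's standard ElementTree. The entity references are written out as full URIs, since ElementTree does not expand custom entities, and the extraction code itself is illustrative.

```python
import xml.etree.ElementTree as ET

# Parsing the offer description above. Entity references (&ao;, &xsd;)
# are expanded to full URIs in this variant of the message.
message = """<rdf:RDF
  xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
  xmlns:ao="http://www.lut.fi/ontologies/auction_ontology#">
  <rdf:Description rdf:about="OF44">
    <ao:value>85</ao:value>
    <ao:item rdf:resource="p12"/>
    <ao:bidder rdf:resource="B2B-Corporation"/>
  </rdf:Description>
</rdf:RDF>"""

# ElementTree addresses namespaced tags as {namespace-uri}localname.
RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
AO = "{http://www.lut.fi/ontologies/auction_ontology#}"

root = ET.fromstring(message)
offer = root.find(RDF + "Description")
print(offer.get(RDF + "about"))                       # OF44
print(int(offer.find(AO + "value").text))             # 85
print(offer.find(AO + "item").get(RDF + "resource"))  # p12
```

Because both parties resolve the element names against the shared ontology namespace, the receiver can interpret the offer without hard-coded document knowledge.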
12. Conclusions
The sophistication of information technology and
communications is changing our society and economy.
Computers and other electronic devices increasingly
communicate and interact directly with other devices over
the Internet. Businesses have been particularly quick to
recognize the potential and realize the benefits of adopting
new computer-enabled networks. Businesses use networks
even more extensively to conduct and re-engineer
production processes and streamline procurement.
References
[1] Turban, E., King, D., Viehland, D., and Lee, J. (2006).
Electronic Commerce. Prentice Hall.
[2] Harold, E. and Scott Means, W. (2002). XML in a Nutshell.
O'Reilly & Associates.
Abstract
This study presents an automatic channel assignment for
unadministrated, chaotic WLANs, to take advantage of the given
capacity of the IEEE 802.11 frequency spectrum and to enhance
the quality of the entire WLAN sphere.
This paper examines four published channel assignment algorithms
for IEEE 802.11 networks. We describe the problem of channel
assignment in unadministrated WLANs and each
algorithm's functional principles. We implemented each one and
simulated them on a large number of random topologies. The
results show the timing behavior, the iterations used and the error
statistics. Based on these data we identified problems in each
algorithm and found graphs where they failed to find a collision-free
solution. We also implemented some improvements, and
finally a modified algorithm is presented that shows the best results.
Keywords: Wireless LAN, Channel Selection, Heuristic,
Optimization.
[Equations (1) and (2): the contention window CW is bounded by CWmin ≤ CW ≤ CWmax.]
4. Algorithms
4.1 Channel selection in chaotic wireless networks
Matthias Ihmig and Peter Steenkiste have published a
channel selection method [7], discussed in this section.
Their method aims to optimize the channel selection in
chaotic wireless networks. Chaotic wireless networks are a
group of single WLAN access points, including their clients
(wireless stations), within different administration
domains and without coordination between them. In
this scenario the method can depend only on local
measurements, without communication. We will call
this method CHAOTIC in the following.
The CHAOTIC channel selection procedure is
divided into three modules: a monitoring module, an evaluation
module and a channel switching module.
The monitoring module permanently collects
information about the channel load on a single dedicated
channel. The AP collects data for at least t_hold seconds.
Then it switches to the next channel if necessary. For
t_hold, Ihmig et al. proposed a 10-second interval. To
determine the channel load, the so-called MAC delay is
used as the metric in [7]. During the measurement the channel
load can fluctuate significantly, so they use an
exponentially weighted moving average to smooth the
measured load value:
x̄_k = α · x̄_(k−1) + (1 − α) · x_k    (3)
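The smoothing of Eq. (3) can be sketched as follows; the weight α (here 0.8) and the load samples are illustrative.

```python
# Exponentially weighted moving average as in Eq. (3):
#   smoothed_k = alpha * smoothed_{k-1} + (1 - alpha) * x_k
def ewma(samples, alpha):
    smoothed = samples[0]          # initialize with the first sample
    out = [smoothed]
    for x in samples[1:]:
        smoothed = alpha * smoothed + (1 - alpha) * x
        out.append(smoothed)
    return out

# A strongly fluctuating load trace is damped toward its average level:
loads = [10, 50, 10, 50, 10]
print(ewma(loads, alpha=0.8))
```

A larger α makes the estimate react more slowly to bursts, which is exactly what the measured channel load needs here.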
In this way AP1 detects AP2's and AP3's high channel load (and
maybe that of AP0 as well, when AP1 and AP0 start up
simultaneously) and switches to channel 1. The updated
threshold thresh_current is up to two or three times thresh_min.
Fig. 3 Program flow chart of channel selection [7] in the error
condition: the algorithm locks in a loop.
2) Choose the bottleneck (with channel utilization V). If
there are several bottlenecks, choose one randomly.
3) Identify the bottleneck's assigned channel k. For each
available channel n from 1 to N with n ≠ k and each
neighbor AP j, temporarily modify the channel
assignment by reassigning only AP j with channel n.
Save the minimum channel utilization W of all
iterations.
[Figure: two example steps on a small topology with channels 1 and 2; the bottleneck (chosen randomly, utilization V = 2) is reassigned, reducing the minimum utilization Wmin from 2 to 1.]
Si : interference list of node i
ci : channel assigned to node i
cj : channel assigned to neighbor j ∈ Si

begin
  F(ci) := Σ_{j ∈ Si} f(ci, cj)
  for k := 1 to K do
  begin
    F(k) := Σ_{j ∈ Si} f(k, cj)
    if F(ci) > F(k) then
    begin
      F(ci) := F(k)
      ci := k
    end
  end
end
F(ci). In the next step, the loop calculates the cost sum
F(k) in each iteration and compares it to F(ci). If F(k) is
smaller than F(ci), a better channel is found, because the
interference costs decrease when using this channel.
Rating of the DH algorithm
The algorithm in [2] is very small, very fast and easy to
implement. Because of its very greedy sequence it
converges to a stable solution in only a few steps, but it also
arrives at many incorrect solutions. One example graph
where the algorithm fails to find a collision-free solution is
presented in Figure 5.
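The pseudocode above can be sketched in Python as follows; the interference cost f and the example topology are invented for illustration, assuming a simple binary co-channel cost (1 when two neighbors share a channel, 0 otherwise).

```python
# Greedy single-node channel selection following the pseudocode above.
# f(a, b) is the interference cost between channels a and b; a binary
# co-channel cost is assumed here for illustration.
def f(a, b):
    return 1 if a == b else 0

def select_channel(i, channels, neighbors, K):
    """Pick the cheapest of K channels for node i, given its
    interference list neighbors[i] and the current assignment."""
    best_cost = sum(f(channels[i], channels[j]) for j in neighbors[i])
    for k in range(1, K + 1):
        cost = sum(f(k, channels[j]) for j in neighbors[i])
        if cost < best_cost:
            best_cost = cost
            channels[i] = k
    return channels[i]

# Node 0 starts on channel 1 next to two co-channel neighbors:
channels = {0: 1, 1: 1, 2: 1, 3: 2}
neighbors = {0: [1, 2, 3]}
print(select_channel(0, channels, neighbors, K=3))  # 3 (no neighbor uses it)
```

Because each node only ever lowers its own cost, the procedure converges quickly, but, as noted above, it can settle on globally incorrect solutions.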
[Figure 5: an example graph (nodes 0-3, channels 1 and 2) where the DH algorithm fails to find a collision-free solution.]
5. Comparison of algorithms
a) p_i = 1 and p_j = 0 for j ≠ i
b) p_i = (1 − b) · p_i and p_j = (1 − b) · p_j + b/(c − 1) for j ≠ i
5) Go back to step 2.
The parameter b is called the learning parameter and
influences the speed of channel changes. In our simulation
we always used b = 0.1, according to statements in [5].
Leith et al. used a measurement of MAC acknowledgements
to measure the channel quality; we used
a simple binary decision: a channel is good if
no other neighbor is using the same one.
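A single CFL probability update under the binary success test described above can be sketched as follows; the channel count, the value of b, and the probabilities are illustrative.

```python
# One CFL probability update for a single AP, following steps a/b above.
# On success the chosen channel becomes certain; on failure its
# probability is discounted and the freed mass is spread over the
# other c - 1 channels.
def cfl_update(p, chosen, success, b):
    c = len(p)
    if success:
        return [1.0 if i == chosen else 0.0 for i in range(c)]
    return [(1 - b) * p[i] if i == chosen
            else (1 - b) * p[i] + b / (c - 1)
            for i in range(c)]

p = cfl_update([0.5, 0.5], chosen=0, success=False, b=0.1)
print(p)  # the chosen channel's probability shrinks, the other grows
```

Note that the update preserves the probability mass, and that a success pins the distribution, which is exactly the behavior behind the failure case of Figure 6 discussed below.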
Rating of the CFL algorithm
The CFL is also very easy to implement and always
converges to a stable solution. On the other hand, it requires
many more iterations than a greedy algorithm, e.g. the
one presented in section 4.3. This is because the selection
decision is based on probabilities which are updated in
every step depending on b (the learning parameter).
We also found that there are some
general graphs or subgraphs which cannot be handled
correctly by the CFL under special circumstances.
Figure 6 shows one general graph with two available
channels where the CFL fails to calculate a correct
solution. If the edge nodes are using the same channel i, the
probability p_i is set to p_i = 1 according to step 3a and
will never change again. The nodes in the middle therefore
6. Conclusion
In this paper we compared four different algorithms
for channel assignment in IEEE 802.11 networks. These
algorithms come from different publications and use
different strategies to solve the problem. We implemented
them in the C programming language and simulated all
algorithms on a large number of random graphs. We
found weaknesses in every single one and
therefore cannot recommend using any of them in real-world
networks with a larger number of nodes.
The best results were achieved by the modified CHAOTIC
algorithm. For practical usage it would be the best
choice regarding its performance and robustness.
Our future goal is to develop a WLAN backbone
architecture with a self-organizing channel assignment for
Abstract
Query refinement is one of the main approaches to overcoming
natural language lexical ambiguity and improving the quality of
search results in information retrieval. In this paper we propose
a knowledge-rich, personalized approach for iterative query
reformulation before sending the query to search engines, and an
entropy-based approach for refinement quality estimation. The
underlying hypothesis is that the query reformulation entropy is a
valuable characteristic of the refinement quality. We use a
multi-agent architecture to implement our approach. Experimental
results confirm that there is a trend towards significant improvement
in the quality of search results for large values of entropy.
Keywords: semantic search, multi-agent system, query refinement,
query refinement entropy
1. Introduction
In general, the quality of the results returned by a search engine
heavily depends on the quality of the query sent.
Because of natural language ambiguity (such as polysemy
and synonymy) it is very difficult to formulate an unambiguous
query, even for experts in the search domain. That is
why techniques to assist the user in formulating a perspicuous
search query are needed to improve the performance
of search engines.
Query refinement is a well-known method for improving
precision and recall in information retrieval. Query improvement
may be made before the query is sent to the search engines,
or after the first results are returned. In the first case,
active user involvement and knowledge-rich resources
are needed. Therefore, this approach is the most effective
only for expert users. For non-expert users, automatic query
expansion based on statistical analysis of the returned results
is more successful. In all cases, the quality of the initial
query is crucial for obtaining relevant results, since
all query reformulation (automatic or interactive) is
made on the basis of these results. Therefore, knowledge-
Query refinement (or expansion) is a process of supplementing
a query with additional terms (or general reformulation in some
cases), as the initial user query is usually an incomplete or
inadequate representation of the user's information needs.
Query expansion techniques can be classified in two categories
(Fig. 1): those based on the retrieved results and those based
on knowledge. The former group of techniques depends on the
search process; it uses user relevance feedback from an earlier
iteration of search, and statistical algorithms (such as Local
Context Analysis (LCA), global analysis [20], and relevance
feedback techniques) to identify the query expansion terms.
Query reformulation is made after the initial query is sent, on
the basis of all the returned results or only the first N results,
automatically (as shown in Fig. 2, path (3)) or with some user
participation (relevance feedback, path (4) in Fig. 2). Only the
first few terms most frequently used in the returned documents
are used in the query refinement process.
Global analysis techniques for query expansion usually select
the most frequently used terms in all returned documents.
They are based on the association hypothesis,
Fig. 1 A classification of query expansion techniques: refinements based on retrieved results (local approaches such as relevance feedback, local feedback and LCA; global approaches such as clustering, dimensionality/term reduction, and feedback- and ranking-based methods) versus knowledge-based refinements (thesaurus-based, ontology-based, RDF-based, combined, phrase finder), each either interactive or automatic.
Fig. 2 The query reformulation paths: (1)-(2) the initial query is reformulated using the profile ontology, the topic and the domain ontology before sending; (3) automatic reformulation based on the returned ranked documents; (4) reformulation through relevance feedback.
(usually relation-based) from some group of returned documents.
They have been shown to be more effective than global
techniques when the initial query is clear and unambiguous,
but they are not robust and can seriously hurt retrieval
when few of the retrieved documents are relevant [22].
Experiments on a number of collections, both English and
non-English, show that local context analysis offers more
effective and consistent retrieval results than global term
extraction and categorization techniques [21]. Feedback
Table 1: Differences in the results for similar queries (yahoo and hakia), for eight pairs of semantically close queries: abscissa / abscissa +axis; tiger / tiger +wood; snake / snakes; negative+integer / negative+integer +number; dog / dog +animal; trigonometry / trigonometry +mathematics; insect / insects; pelican / pelican +bird. [Table: for each query pair and engine, the number of returned URLs, the number of URLs common to both queries, and the overlap percentage.]
Table 2: Differences in the results for similar queries (clusty and ask), for the same eight query pairs. [Table: for each query pair and engine, the number of returned URLs, the number of URLs common to both queries, and the overlap percentage.]
Our experience in Web searching shows that small semantic
changes in the query lead to major changes in the returned
results. We call this query property query volatility. We made
many experiments, sending groups of two semantically very
close queries to four search engines (yahoo, hakia, clusty and
ask) and counting the number of URLs returned by both queries
in every group. Some of the results are shown in Tables 1 and 2.
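The overlap count behind Tables 1 and 2 can be sketched as follows. The URL lists are invented, and since the tables' exact normalization is not stated, the sketch divides by the first query's distinct result count as an assumption.

```python
# Sketch of the query-volatility measure: the share of URLs returned
# by both of two semantically close queries. The normalization by the
# first query's result count is an assumption.
def overlap_percent(urls_a, urls_b):
    common = set(urls_a) & set(urls_b)
    return 100.0 * len(common) / max(len(set(urls_a)), 1)

a = ["u1", "u2", "u3", "u4"]
b = ["u2", "u4", "u5", "u6"]
print(overlap_percent(a, b))  # 50.0
```

A low percentage for two near-synonymous queries indicates high volatility of the engine's results.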
performed in three stages: (1) query analysis; (2) generating
query suggestions; (3) formulating and sending the refined query.
(1) Query analysis. The main aim of this stage is finding
the meaning of the query. The system performs syntactic
and lexical analysis, as the searcher types the query, using
WordNet. If this analysis fails (WordNet doesn't contain
proper nouns or compound domain terms), other knowledge
resources such as gazetteers, a domain ontology or a specialized
annotated corpus may be used. Many of them may be
downloaded from the Internet (manually or automatically, using
the Search web resources module, Fig. 3).
Figure 3. The conceptual schema of the proposed initial query refinement: the initial query undergoes linguistic query analysis (drawing on a corpus, gazetteers, a thesaurus, the domain ontology and the profile ontology, supported by a Search web resources module and a Profile generation module); the resulting query interpretations and topic yield the refined query, which is sent to the search engines to obtain ranked documents.
[Figure: the multi-agent architecture, in which a coordination agent connects the user interface, the query linguistic analysis agent (using WordNet, domain ontologies and thesauruses), the profile information agent (using the profile ontology), and the search engine.]
5. Experimental Results
To verify our hypothesis experimentally, we selected
190 search queries having entropy between 1 and 40400.
A key objective in query selection was to provide many
different values of entropy, to investigate the relationship
between the magnitude of the entropy and the quality of the
returned results. We manually evaluated the relevance of the
first 70 results of each query and of its semantically refined
version by opening and reviewing each of them. The
evaluation is based on commonly adopted five-level relevance
criteria: 4 points for every high-quality result (including
high-quality information relevant to the topic, represented in a
perfect form, including needed hyperlinks to related top-
[Figures: five-level precision of the initial and refined queries, and the precision changing rate (%), plotted against entropy for the ranges roughly 1-3, 3-8, 8-20, 20-100 and 100-100000.]
6. Conclusion
References
Abstract
The presence of transient network links, mobility and the limited
battery power of mobile nodes in MANETs poses a significant
challenge for such networks to scale and perform efficiently
when subjected to varying network conditions. Most of the
proposed topology control algorithms have high control overhead
to discover and maintain a route from source to destination. They
also have a very high topology maintenance cost. To minimize
routing overhead and topology maintenance cost, CBRP (Cluster
Based Routing Protocol) was developed. It performs better than
other approaches in most cases.
In this paper, an energy- and mobility-aware clustering approach
is presented. The clustering approach is incorporated in a DSR-like
protocol for routing in a MANET, to evaluate the performance
improvement gained from clustering using the proposed approach.
The rate of cluster head changes, the throughput of the network,
delay and routing overhead are evaluated using NS2. Simulation
results reveal that the proposed approach performs better than
CBRP.
Keywords: Mobile Ad-Hoc Networks, Topology Control,
Clustering
1. Introduction
A mobile ad-hoc network (MANET) is a collection of
mobile nodes that form a wireless network without the
existence of a fixed infrastructure or centralized
administration. This type of network can survive without
any infrastructure and can work in an autonomous manner.
Hosts forming an ad-hoc network take equal responsibility
in maintaining the network. Every host provides routing
service for other hosts to deliver messages to remote
destinations. As such networks do not require any fixed
infrastructure, they are well suited for deployment in
volatile environments such as battlefields and disaster relief
situations. Some of the constraints
2. Related Works
Various topology control algorithms are available for
mobile ad-hoc networks that try to utilize battery power of
mobile nodes in an efficient way. This section briefly
reviews some the prior works on topology control.
Local Information No Topology (LINT) proposed by
Ramanathan et al. [5] uses locally available neighbor
information collected by routing protocols to keep the
degree of neighbors bound. All network nodes are
configured with three parameters, the desired node degree
dd, a high threshold of the node degree dh and a low
threshold of node degree dl. A node periodically checks
the number of its active neighbors. If the degree is greater
than dd, the node reduces its operational power. If the
degree is less than dd the node increases its operational
power. If neither is true then no action is taken.
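The periodic degree check described above can be sketched in a few lines. The `Node` class, the power field, and the adjustment step below are hypothetical illustrations, not part of the LINT specification; the thresholds dh and dl act as a hysteresis band in the full protocol:

```python
class Node:
    """Minimal stand-in for a MANET node (hypothetical illustration)."""
    def __init__(self, power_db, neighbors):
        self.power_db = power_db      # current operational transmit power (dB)
        self.neighbors = neighbors    # neighbor IDs learned from routing traffic

def lint_round(node, dd, step_db=1.0):
    """One periodic LINT check: steer the node degree toward dd."""
    d = len(node.neighbors)
    if d > dd:
        node.power_db -= step_db      # too many neighbors: reduce power
    elif d < dd:
        node.power_db += step_db      # too few neighbors: increase power
    # d == dd: degree acceptable, no action is taken
    return node.power_db
```

Each node runs such a round on a timer, so the topology converges toward the configured degree without any global coordination.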
On the other hand, selective backbone construction for
topology control [6] is a backbone based approach. In this
approach a chain of connected nodes are constructed. All
the other nodes in MANET should be neighbor of a node
that participates in construction of the backbone. The
backbone construction in SBC starts from a small number
of seed nodes and propagates outwards to sweep the
network. When a node selects its neighbors to include in
the backbone, it also transfers the topology information it
knows so far to these neighbors. Thus, the neighbors can
make more intelligent coordinator selection decisions
based upon more topology information and avoid
redundancy. When choosing coordinators, SBC
simultaneously considers the energy requirement,
movement and location of nodes to maximize energy
conservation, and ability to maintain good networking
performance. The problem with this scheme is its high
topology maintenance cost.
In this paper the Cluster Based Routing Protocol (CBRP)
[7-9] is given particular emphasis because the protocol
divides the nodes of the ad hoc network into a number of
clusters.
3. CBRP Overview
Cluster Based Routing Protocol (CBRP) [7-9] is a routing
protocol designed for use in mobile ad hoc networks. The
protocol divides the nodes of the ad hoc network into a
number of overlapping or disjoint 2-hop-diameter clusters
in a distributed manner. A cluster head is elected for each
cluster to maintain cluster membership information.
Inter-cluster routes are discovered dynamically using the
cluster membership information kept at each cluster head.
By clustering nodes into groups,
the protocol efficiently minimizes the flooding traffic
during route discovery and speeds up this process as well
[10]. Furthermore, the protocol takes into consideration
the existence of uni-directional links and uses these links
for both intra-cluster and inter-cluster routing.
3.4 Routing
Routing in CBRP is based on source routing. A RREQ is
flooded in the network to discover the route. Thanks to the
clustering approach, very few nodes are disturbed: only the
cluster heads are flooded. If node S seeks a route to node
R, node S sends out a RREQ with a recorded source route
initially listing only itself. Any node forwarding this
packet adds its own ID to the RREQ. Each node forwards
a RREQ only once and never forwards it to a node that
already appears in the recorded route. The source unicasts
the RREQ to its cluster head. Each cluster head unicasts
the RREQ, through the corresponding gateway, to each of
its bi-directionally linked neighboring clusters that has not
already appeared in the recorded route. This procedure
continues until the target is found or another node can
supply the route. When the RREQ reaches the target, the
target may choose to memorize the reversed route to the
source. It then copies the recorded route into a Route
Reply packet and sends it back to the source.
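The recorded-route discovery described above can be sketched as a breadth-first exploration over the graph of cluster heads and gateways. The adjacency structure and node names below are hypothetical illustrations, not CBRP message formats:

```python
from collections import deque

def discover_route(graph, src, dst):
    """Sketch of CBRP-style route discovery with a recorded source route.

    graph maps each node (source, cluster heads, gateways, target) to the
    neighbors it forwards a RREQ to. A RREQ carries the recorded route,
    and a node already on the route is never visited again, mirroring
    the loop-free forwarding rule.
    """
    queue = deque([[src]])            # each entry is a recorded route
    while queue:
        route = queue.popleft()
        node = route[-1]
        if node == dst:
            return route              # target copies the route into a RREP
        for nxt in graph.get(node, []):
            if nxt not in route:      # forward at most once, avoid loops
                queue.append(route + [nxt])
    return None                       # no route found
```

Because only cluster heads and gateways appear in the graph, the number of nodes touched during discovery stays small compared to a full network flood.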
4.3. Methodology
The simulation has been run for 100 seconds. The results
are averaged over 5 randomly generated nodal topologies.
The performance of DEMAC is compared with CBRP
considering number of cluster head changes, throughput,
delay and routing overhead with respect to maximum
speed of the node.
Table: Simulation parameters

Parameter    Meaning                 Value
N            Number of nodes         50
m x n        Size of the scenario    500 x 500
Max Speed    Maximum speed           5, 10, 15, 20, 25 (m/s)
Tx           Transmission range      90 m
P.T          Pause time              0.0 sec
References
[1] Kenneth J. Supowit, The relative neighbourhood graph with
application to minimum spanning trees, Journal of the ACM,
Vol. 30, No. 3, 1983, pp. 428-448.
[2] Ning Li, Jennifer C. Hou and Lui Sha, Design and Analysis
of an MST-Based Topology Control Algorithm, in Proc. IEEE
INFOCOM, 2003.
[3] Nicos Christofides, Graph Theory: An Algorithmic
Approach, Academic Press, 1975.
[4] L. Hu, Topology control for multihop packet radio
networks, IEEE Transactions on Communications, Vol. 41,
No. 10, 1993, pp. 1474-1481.
[5] R. Ramanathan and R. Rosales-Hain, Topology control of
multihop wireless networks using transmit power adjustment,
in Proc. IEEE INFOCOM, 2000.
[6] Haitao Liu and Rajiv Gupta, Selective Backbone
Construction for Topology Control in Ad Hoc Networks,
IEEE, 2004, pp. 41-50.
[7] Chien-Chung Shen, Chavalit Srisathapornphat, Rui Liu,
Assistant Professor, Department of Computer Science & Engineering, Vaish College of Engineering,
Rohtak (Haryana), India
Assistant Professor, Department of Computer Science & Engineering, Vaish College of Engineering,
Rohtak (Haryana), India
Abstract
Today the Internet is a worldwide interconnected computer
network that transmits data by packet switching based on the
TCP/IP protocol suite, with TCP as the main protocol of the
transport layer. The performance of TCP has been studied by
many researchers, who try to analytically characterize the
throughput of TCP's congestion control mechanism. Internet
routers were widely believed to need large memory spaces.
Commercial routers today have huge packet memories, often
storing millions of packets, under the assumption that large
memory spaces lead to good statistical multiplexing and hence
efficient use of expensive long-haul links.
In this paper, we summarize this work, present experimental
results with large memory space sizes, and give a qualitative
analysis of the results. Our conclusion is that the round-trip time
(RTT) grows not linearly but quadratically when the memory
space of the bottleneck is large enough. Our goal is to estimate
the average queue length of the memory space and develop a
TCP model based on RTT and the average queue length.
1. Introduction
Traffic across the Internet is increasing, so congestion is
becoming an important issue for network applications.
When congestion occurs, packet round-trip time increases,
loss probability rises, and throughput decreases. A
transport protocol must deal with this congestion in order
to make the best use of the network capacity. TCP adopts
a window-based congestion control mechanism.
Traditionally, experimental study and measurement have
been the tools of choice for checking the performance of
various aspects of TCP. But as the amount of non-TCP
traffic (such as multimedia traffic) keeps increasing in
today's Internet, non-TCP flows should share the
bandwidth with TCP flows in a friendly manner,
The long-term steady-state send rate is defined as

B = lim(t→∞) Bt = lim(t→∞) Nt / t

where Nt is the number of packets transmitted in the interval [0, t].
The resulting approximation of the TCP send rate is

B(p) ≈ min( Wmax/RTT ,  1 / [ RTT·√(2bp/3) + T0·min(1, 3√(3bp/8))·p·(1 + 32p²) ] )

which, for small loss probabilities p, reduces to the well-known square-root law

B(p) ≈ (1/RTT)·√(3/(2bp))
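The send-rate model above can be evaluated numerically. A minimal sketch, assuming the standard PFTK-style parameter meanings (b packets acknowledged per ACK, T0 the retransmission timeout, Wmax the receiver window limit); function names are illustrative:

```python
import math

def tcp_send_rate(p, rtt, b=2, t0=1.0, w_max=None):
    """Approximate TCP send rate B(p) in packets/second.

    p     - loss event probability (0 < p < 1)
    rtt   - round-trip time in seconds
    b     - packets acknowledged per ACK
    t0    - initial retransmission timeout in seconds
    w_max - receiver window limit in packets (None = unconstrained)
    """
    denom = (rtt * math.sqrt(2.0 * b * p / 3.0)
             + t0 * min(1.0, 3.0 * math.sqrt(3.0 * b * p / 8.0))
             * p * (1.0 + 32.0 * p ** 2))
    rate = 1.0 / denom
    if w_max is not None:
        rate = min(rate, w_max / rtt)   # window-limited regime
    return rate

def sqrt_law(p, rtt, b=2):
    """Small-p simplification: B(p) ~ (1/RTT) * sqrt(3 / (2bp))."""
    return math.sqrt(3.0 / (2.0 * b * p)) / rtt
```

For p = 1% and RTT = 100 ms this gives roughly 71 packets/s, below the ~87 packets/s of the square-root law, as expected since the timeout term only reduces the rate.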
5. Experimental Work
Table: Simulation settings

Memory Space Size        40~900
Bottleneck Delay         35 ms
Bottleneck Bandwidth     15 Mb
Packet Size              1000
Queue Algorithm          Drop Tail
Bandwidth of Branch      128 Mb
Delay of Branch          2 ms
Experiment No.   TCP Connections   TFRC Connections
1                16                16
2                24                24
3                32                32
4                64                64
[Figure 4: TCP and TFRC curves]
[Figure 2: TCP and TFRC curves]

6. Analysis

[Figure 3: TCP and TFRC curves]
7. Related Work
A survey on TCP performance in heterogeneous networks
is given in [1]. It considers the different characteristics of
a path crossed by TCP traffic, focusing on the
bandwidth-delay product, round-trip time (RTT),
congestion losses, and bandwidth asymmetry. It presents
the problems and the different proposed solutions. The
model used in this paper was proposed in [7]. TFRC [3]
uses it to calculate the sending rate of the sender, but only
experimental results with small memory sizes were given.
References
[1] Chadi Barakat, Eitan Altman, and Walid Dabbous, On TCP
Performance in a Heterogeneous Network: A Survey, IEEE
Communications Magazine, January 2000.
Abstract
This research deals with a vital issue in the computing world:
the software management processes that examine the area of
software development through the development models known
as software development life cycles. It presents five development
models, namely waterfall, iteration, V-shaped, spiral and
extreme programming. These models have advantages as well as
disadvantages. The main objective of this research is therefore
to present these different models of software development and
compare them, showing the features and defects of each model.
Keywords: Software Management Processes, Software
Development, Development Models, Software Development Life
Cycle, Comparison between five models of Software Engineering.
1. Introduction
[Figure: relationship between computer science theories, computer functions, and client problems]
The pure waterfall lifecycle consists of several
non-overlapping stages, as shown in the following figure.
The model begins with establishing system requirements
and software requirements and continues with architectural
design, detailed design, coding, testing, and maintenance.
The waterfall model serves as a baseline for many other
lifecycle models.
1. Specification.
2. Design.
3. Validation.
4. Evolution.
[Figure: pure waterfall stages - System Requirements, Software Requirements, Architectural Design, Detailed Design, Coding, Testing, Maintenance]
3. Five Models
A programming process model is an abstract
representation that describes the process from a particular
perspective. There are a number of general models for
software processes, such as the waterfall model,
evolutionary development, formal systems development
and reuse-based development. This research will view the
following five models:
1. Waterfall model.
2. Iteration model.
3. V-shaped model.
4. Spiral model.
5. Extreme model.
These models are chosen because their features
correspond to most software development programs.
Fig. 3 Waterfall model [2]: Requirements Definition, System and Software Design, Implementation and Unit Testing, Integration and System Testing, Operation and Maintenance.
The following list details the steps for using the waterfall
model:
1. System requirements: Establishes the components
for building the system, including the hardware
requirements, software tools, and other necessary
components. Examples include decisions on
hardware, such as plug-in boards (number of
channels, acquisition speed, and so on), and decisions
on external pieces of software, such as databases or
libraries.
2. Coding: Implements the detailed design
specification.
Advantages:
1. Easy to understand and implement.
2. Widely used and known (in theory!).
3. Reinforces good habits: define-before-design,
design-before-code.
4. Identifies deliverables and milestones.
5. Document driven: URD, SRD, etc.; published
documentation standards, e.g. PSS-05.
6. Works well on mature products and weak teams.
Disadvantages:
1. Idealized; doesn't match reality well.
2. Doesn't reflect the iterative nature of exploratory
development.
3. Unrealistic to expect accurate requirements so
early in the project.
4. Software is delivered late in the project, delaying
discovery of serious errors.
5. Difficult to integrate risk management.
6. Difficult and expensive to make changes to
documents ("swimming upstream").
7. Significant administrative overhead, costly for small
teams and projects [6].
Pure Waterfall phases:
1. Concept.
2. Requirements.
3. Architectural design.
4. Detailed design.
5. Coding and development.
6. Testing and implementation.
Table 1: Strengths & Weaknesses of Pure Waterfall

Strengths: Minimizes planning overhead since it can be done
up front. Structure minimizes wasted effort, so it works well
for technically weak or inexperienced staff.
Weaknesses: Inflexible. Backing up to address mistakes is
difficult.
Modified Waterfall
[Figure: V-shaped model - Requirements, High Level Design and Low Level Design on the left arm, paired respectively with System Test Planning/System Testing, Integration Test Planning/Integration Testing and Unit Test Planning/Unit Testing on the right arm, meeting at Implementation]
Advantages:
1. High amount of risk analysis.
2. Good for large and mission-critical projects.
3. Software is produced early in the software life cycle.

Disadvantages:
1. Can be a costly model to use.
2. Risk analysis requires highly specific expertise.
3. The project's success is highly dependent on the risk
analysis phase.
4. Doesn't work well for smaller projects [7].
under which the system would produce win-lose or
lose-lose outcomes for some stakeholders.
3. Identify and Evaluate Alternatives: Solicit
suggestions from stakeholders, evaluate them with respect
to stakeholders' win conditions, synthesize and negotiate
candidate win-win alternatives, analyze, assess and resolve
win-lose or lose-lose risks, and record commitments and
areas to be left flexible in the project's design record and
life cycle plans.
4. Cycle through the Spiral: Elaborate the win conditions,
evaluate and screen alternatives, resolve risks, accumulate
appropriate commitments, and develop and execute
downstream plans [8].
Advantages:
1. Lightweight methods suit small-to-medium-size projects.
2. Produces good team cohesion.
3. Emphasises the final product.
4. Iterative.
5. Test-based approach to requirements and quality
assurance.

Disadvantages:
1. Difficult to scale up to large projects where
documentation is essential.
2. Needs experience and skill if it is not to degenerate into
code-and-fix.
3. Programming pairs is costly.
REFERENCES
[1] Ian Sommerville, "Software Engineering", Addison
Wesley, 7th edition, 2004.
[2] CTG. MFA 003, "A Survey of System Development
Process Models", Models for Action Project: Developing
Practical Approaches to Electronic Records Management
and Preservation, Center for Technology in Government,
University at Albany / SUNY, 1998.
[3] Steve Easterbrook, "Software Lifecycles", University
of Toronto, Department of Computer Science, 2001.
[4] National Instruments Corporation, "Lifecycle Models",
2006, http://zone.ni.com.
[5] JJ Kuhl, "Project Lifecycle Models: How They Differ
and When to Use Them", 2002, www.businessesolutions.com.
[6] Karlm, "Software Lifecycle Models", KTH, 2006.
[7] Rlewallen, "Software Development Life Cycle
Models", 2005, http://codebeter.com.
[8] Barry Boehm, "Spiral Development: Experience,
Principles, and Refinements", edited by Wilfred J.
Hansen, 2000.
Nabil Mohammed Ali Munassar was born in Jeddah, Saudi
Arabia in 1978. He studied Computer Science at the University of
Science and Technology, Yemen, from 1997 to 2001, receiving
the Bachelor degree in 2001. He studied for a Master of
Information Technology at Arab Academic, Yemen, from 2004 to
2007. He is now a 3rd-year Ph.D. student in CSE at Jawaharlal
Nehru Technological University (JNTU), Hyderabad, A. P., India.
He is working as an Associate Professor in the Computer Science
& Engineering College at the University of Science and
Technology, Yemen. His areas of interest include Software
Engineering, System Analysis and Design, Databases and Object
Oriented Technologies.
Dr. A. Govardhan received the Ph.D. degree in Computer Science
and Engineering from Jawaharlal Nehru Technological University
in 2003, M.Tech. from Jawaharlal Nehru University in 1994 and
B.E. from Osmania University in 1992. He is working as
Principal of Jawaharlal Nehru Technological University, Jagitial.
He has published around 108 papers in various national and
international journals/conferences. His research interests
include Databases, Data Warehousing & Mining, Information
Retrieval, Computer Networks, Image Processing, Software
Engineering, Search Engines and Object Oriented Technologies.
Abstract
The support-confidence framework is misleading when searching
for statistically meaningful relationships in market basket data.
The alternative is to find strongly correlated item pairs from the
basket data. However, strongly-correlated-pairs queries suffer
from the problem of setting a suitable threshold. To overcome
this, the top-k pairs finding problem has been introduced. Most
existing techniques are multi-pass and computationally expensive.
This work reports an efficient technique for finding the k most
strongly correlated item pairs from a transaction database without
generating any candidate sets. The proposed technique uses a
correlogram matrix to compute the support count of all 1- and
2-itemsets in a single scan over the database. From the
correlogram matrix the positive correlation values of all item
pairs are computed and the top-k correlated pairs are extracted.
The simplified logic structure makes the implementation of the
proposed technique attractive. We experimented with real and
synthetic transaction datasets, compared the performance of the
proposed technique with its counterparts (TAPER, TOP-COP and
Tkcp), and found it satisfactory.
Keywords: Association mining, correlation coefficient,
correlogram matrix, top-k correlated item pairs.
1. Introduction
Traditional support and confidence measures [17] are
insufficient for filtering out uninteresting association
rules [1]. It has been well observed that item pairs with a
high support value may not be statistically highly
correlated; similarly, a highly correlated item pair may
exhibit a low support value. To tackle this weakness,
correlation analysis can be used to provide an alternative
framework for finding statistically interesting
relationships. It also helps to improve the understanding of
the meaning of some association rules. Xiong et al.
introduced the notion of strongly correlated item pairs in
their work on TAPER [2,15], which retrieves all strongly
correlated item pairs from a transaction database based on
a user-specified threshold θ. A number of techniques have
already been proposed [3, 12, 13, 15, 16] to handle this
problem. However, setting up an appropriate value for θ is
2. Background
Association mining [1] is a well studied problem in data
mining. Starting from market basket data analysis, its
spectrum of applications now spreads across domains such
as machine learning, soft computing, computational
biology and so on. The standard association mining
technique extracts all subsets of items satisfying a
minimum support criterion. Unlike traditional association
mining, the all-pairs-strongly-correlated query finds
statistical relationships between pairs of items in a
transaction database. The problem can be defined as
follows.
Definition 1: Given a user-specified minimum correlation
threshold θ and a market basket database with a set
I = {I1, I2, I3, ..., IN} of N distinct items, where each of the
T transactions in database D is a subset of I, an
all-strong-pairs correlation query finds the collection of
all item pairs (Ii, Ij) with correlations above the threshold
θ. Formally, it can be defined as:
SC(D, θ) = { (Ii, Ij) | Ii, Ij ∈ I, Ii ≠ Ij, φ(Ii, Ij) ≥ θ }    (1)
φ(Ii^k, Ij^k) ≥ φ(Ii^(k+1), Ij^(k+1))    (2)
Next we discuss Pearson correlation measure in order to
compute the correlation coefficient between each item pair.
φ(Ii, Ij) = [ sup(Ii, Ij) − sup(Ii)·sup(Ij) ] / √[ sup(Ii)(1 − sup(Ii))·sup(Ij)(1 − sup(Ij)) ]    (3)
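The Pearson φ coefficient for an item pair can be computed directly from supports. A minimal sketch; the helper functions are illustrative, not the paper's implementation:

```python
import math

def support(transactions, *items):
    """Fraction of transactions containing all the given items."""
    n = len(transactions)
    return sum(1 for t in transactions if all(i in t for i in items)) / n

def phi(transactions, a, b):
    """Pearson's phi correlation for the item pair (a, b)."""
    sa, sb = support(transactions, a), support(transactions, b)
    sab = support(transactions, a, b)
    return (sab - sa * sb) / math.sqrt(sa * (1 - sa) * sb * (1 - sb))
```

On the example database used in this section, the pair {4, 5} comes out near 0.77, matching the correlation table.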
Transactions (Items):
1,2,4,5,6
2,4
2,3,6
1,2,4,5
1,3,6
2,3
1,3
1,2,3,4,5

Pair     Support   Corr
{1,2}    0.37      -0.44
{1,3}    0.37      -0.66
{1,4}    0.37      0.25
{1,5}    0.37      0.6
{1,6}    0.25      0.06
{2,3}    0.37      -0.44
{2,4}    0.5       0.57
{2,5}    0.37      0.44
{2,6}    0.25      -0.14
{3,4}    0.12      -0.77
{3,5}    0.12      -0.46
{3,6}    0.25      0.06
{4,5}    0.37      0.77
{4,6}    0.12      -0.25
{5,6}    0.12      -0.06

Output (k = 7): {4,5}, {1,5}, {2,4}, {2,5}, {1,4}, {1,6}, {3,6}
3. Related Works
Extraction of top-k correlated pairs from large transaction
databases has gained considerable interest recently. The
top-k problem is basically an alternative formulation of the
all-pairs strongly-correlated query problem. Very few
techniques have been proposed so far to address the
problem of answering top-k strongly correlated pairs
queries. A brief discussion of the key techniques is
presented below.
3.1 TAPER
TAPER [15] is a candidate-generation-based technique for
finding all strongly correlated item pairs. It consists of two
steps: filtering and refinement. In the filtering step, it
applies two pruning techniques. The first uses an upper
bound of the correlation coefficient φ as a coarse filter.
The upper bound upper(φ(X,Y)) of the correlation
coefficient for a pair {X, Y} with sup(X) ≥ sup(Y) is:

upper(φ(X,Y)) = √( sup(Y) / sup(X) ) · √( (1 − sup(X)) / (1 − sup(Y)) )
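The bound is cheap to evaluate from the two supports alone, which is what makes it usable as a coarse filter before computing any exact φ. A small sketch (function names are illustrative):

```python
import math

def phi_upper(sup_x, sup_y):
    """Support-based upper bound on phi; assumes sup_x >= sup_y."""
    return math.sqrt(sup_y / sup_x) * math.sqrt((1.0 - sup_x) / (1.0 - sup_y))

def prunable(sup_a, sup_b, theta):
    """True if the pair can be skipped without computing its exact phi."""
    hi, lo = max(sup_a, sup_b), min(sup_a, sup_b)
    return phi_upper(hi, lo) < theta
```

For sup(X) = 0.5 and sup(Y) = 0.375 the bound is about 0.775; it is tight exactly when every transaction containing Y also contains X.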
3.2 TOP-COP
TOP-COP [11] is an upper-bound-based algorithm for
finding top-k strongly correlated item pairs and an
extended version of TAPER. TOP-COP exploits a 2-D
monotone property of the upper bound of the correlation
coefficient φ to prune non-potential item pairs, i.e. pairs
which do not satisfy the correlation threshold θ. The 2-D
monotone property is as follows: for a pair of items (X, Y)
with sup(X) > sup(Y) and item Y fixed, upper(φ(X,Y)) is
monotonically increasing with decreasing support of item
X. Based on this property, a diagonal traversal technique
combined with a refine-and-filter strategy is used to
efficiently mine the top-k strongly correlated pairs.
Discussion
Like TAPER, TOP-COP is also a candidate-generation-based
technique. The 1-D monotone property used in
TAPER provides a one-dimensional pruning window for
eliminating non-potential item pairs. Moving one step
further, TOP-COP exploits the 2-D monotone property,
3.3 Tkcp
Tkcp [14] is an FP-tree [4] based technique for finding
top-k strongly correlated item pairs. The top-k strongly
correlated item pairs are generated without any candidate
generation. Tkcp includes two sub-processes: (i)
construction of the FP-tree, and (ii) computation of the
correlation coefficient of each item pair using the support
counts from the FP-tree, and extraction of all the top-k
strongly correlated item pairs based on the correlation
coefficient value φ. The efficiency of the FP-tree based
algorithm can be justified as follows: (i) the FP-tree is a
compressed representation of the original database, (ii) the
algorithm scans the database only twice, and (iii) the
support values of all item pairs are available in the
FP-tree.
Discussion
Although the algorithm is based on the efficient FP-tree
data structure, it suffers from two significant
disadvantages.
(i) Tkcp constructs the entire FP-tree with an initial
support threshold of zero. The time taken to construct
such a huge FP-tree is quite large.
(ii) Moreover, it also requires a large space to store the
entire FP-tree in memory, particularly when the number of
items is very large.
The techniques discussed above either generate a large
number of candidates or build a large tree. They also need
multiple passes over the entire database, which becomes
expensive when the database contains a large number of
transactions. The next section discusses an efficient
one-pass top-k correlated-pairs extraction technique that
addresses the shortcomings of the algorithms reported
above.
[Figure: correlogram matrix for the example database - the diagonal cell of item 3 holds the frequency of itemset {3}, and the off-diagonal cell (4, 5) holds the co-occurrence frequency of the item pair {4, 5}]
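The correlogram-matrix idea — one scan filling the diagonal with 1-itemset counts and the upper triangle with pair counts, then computing φ per pair — can be sketched as follows. This mirrors the idea only; it is not the authors' k-SCOPE code, and the function names are illustrative:

```python
import heapq
import itertools
import math

def topk_correlated(transactions, items, k):
    """One database scan fills a correlogram of 1- and 2-itemset counts;
    phi is then computed per pair and the k largest values returned."""
    idx = {item: i for i, item in enumerate(items)}
    m = len(items)
    cm = [[0] * m for _ in range(m)]            # diagonal + upper triangle
    n = len(transactions)
    for t in transactions:                      # the single scan
        present = sorted(idx[x] for x in set(t) if x in idx)
        for i in present:
            cm[i][i] += 1                       # support count of {item}
        for i, j in itertools.combinations(present, 2):
            cm[i][j] += 1                       # co-occurrence count
    scored = []
    for i, j in itertools.combinations(range(m), 2):
        si, sj, sij = cm[i][i] / n, cm[j][j] / n, cm[i][j] / n
        if 0 < si < 1 and 0 < sj < 1:           # phi undefined otherwise
            corr = (sij - si * sj) / math.sqrt(
                si * (1 - si) * sj * (1 - sj))
            scored.append((corr, (items[i], items[j])))
    return heapq.nlargest(k, scored)
```

Because the matrix holds every needed support count after the single pass, no candidate sets are generated and the database is never read again.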
Table 1: Synthetic datasets

Dataset          No of Transactions   No of Items   Avg size of transaction   No of Patterns
T10I400D100K     100,000              400           10                        20
T10I600D100K     100,000              600           10                        20
T10I800D100K     100,000              800           10                        20
T10I1000D100K    100,000              1000          10                        20
T10P1000D100K    100,000              1000          10                        1000
6. Performance Evaluation
The performance of k-SCOPE is evaluated in comparison
with its counterparts, tested on synthetic as well as
real-life datasets. Several synthetic datasets were
generated for testing according to the specifications given
in Table 1. The synthetic datasets were created with the
data generator in the ARMiner software
(http://www.cs.umb.edu/~laur/ARMiner/), which follows
the basic spirit of the well-known IBM synthetic data
generator for association rule mining. The size of the data
(i.e. number of transactions), the number of items, and the
average size and number of unique patterns in transactions
are the major parameters in the synthetic data generation.
We also used the real-life Mushroom dataset from the UCI
ML repository (http://www.ics.uci.edu/~mlearn/MLRepository.html)
and Pumsb from IBM. Pumsb is often used as a benchmark
for evaluating the performance of association mining
algorithms on dense data sets. The Pumsb data set
corresponds to binarized versions of a census data set from
IBM (available at http://fimi.cs.helsinki.fi/data/) and is
used for the experiments (see Table 2).
We used Java for the implementation of k-SCOPE and
modified TAPER. We used the code of TOP-COP as provided
Table 2: Real-life datasets

Data Set (Binarized)   No of Transactions   No of Items   Source
Mushroom               8124                 128           UCI
Pumsb                  49046                2113          IBM Almaden
7. Conclusion
An efficient correlogram-matrix-based technique,
k-SCOPE, has been presented in this paper to extract the
top-k strongly correlated item pairs from a transaction
database.
References
[1] J. Han and M. Kamber, Data Mining: Concepts and
Techniques, Morgan Kaufmann Publishers, San Francisco,
CA, 2006.
[2] H. Xiong, S. Shekhar, P-N. Tan, V. Kumar, Exploiting a
Support-based Upper Bound of Pearson's Correlation
Coefficient for Efficiently Identifying Strongly Correlated
Pairs, in Proc. of SIGKDD'04, pp. 334-343, 2004.
[3] Z. He, S. Deng, and X. Xu, An FP-Tree Based Approach for
Mining All Strongly Correlated Item Pairs, LNAI 3801, pp.
735-740, 2005.
[4] J. Han, J. Pei, J. Yin, Mining Frequent Patterns without
Candidate Generation, in Proc. of SIGMOD'00, pp. 1-12,
2000.
[5] H. T. Reynolds, The Analysis of Cross-classifications, The
Free Press, New York, 1977.
[6] C. Borgelt, An Implementation of the FP-growth Algorithm,
Workshop on Open Source Data Mining Software (OSDM'05,
Chicago, IL), pp. 1-5, 2005.
[7] S. Roy and D. K. Bhattacharyya, Efficient Mining of Top-K
Strongly Correlated Item Pairs using One Pass Technique,
in Proc. of ADCOM, pp. 416-41, IEEE, 2008.
[8] W. P. Kuo et al., Functional Relationships Between Gene
Pairs in Oral Squamous Cell Carcinoma, in Proc. of the
AMIA Symposium, pp. 371-375, 2003.
[9] D. K. Slonim, From Patterns to Pathways: Gene Expression
Data Analysis Comes of Age, Nature Genetics Supplement,
Vol. 32, pp. 502-508, 2002.
[10] A. J. Butte and I. S. Kohane, Mutual Information Relevance
Networks: Functional Genomic Clustering Using Pairwise
Entropy Measurements, Pac. Symp. Biocomput., pp. 418-429,
2000.
[11] H. Xiong et al., Top-k φ Correlation Computation,
INFORMS Journal on Computing, Vol. 20, No. 4, pp.
539-552, Fall 2008.
[12] L. Jiang et al., Tight Correlated Item Sets and Their
Efficient Discovery, LNCS, Vol. 4505, Springer-Verlag,
2007.
[13] S. Li, L. Robert, L. S. Dong, Efficient Mining of Strongly
Correlated Item Pairs, in Proc. of SPIE, the International
Society for Optical Engineering, ISSN 0277-786X.
[14] Z. He, X. Xu, X. Deng, Mining Top-k Strongly Correlated
Item Pairs without Minimum Correlation Threshold, Intl.
Journal of Knowledge-based and Intelligent Engineering
Systems, Vol. 10, IOS Press, pp. 105-112, 2006.
[15] H. Xiong et al., TAPER: A Two-Step Approach for
All-Strong-Pairs Correlation Query in Large Databases,
IEEE Trans. on Knowledge and Data Engineering, Vol. 18,
No. 4, pp. 493-508, 2006.
Abstract
The past decades have seen a growing interest in underwater
wireless communication research, but the field faces many
challenges, especially from the physical processes that affect this
communication technique.
Despite this, underwater wireless networks seem to be a good
solution for water quality monitoring in pisciculture tanks, since
this service otherwise requires a lot of time and manpower. The
Amazon region has many resources for increasing pisciculture
development, so the idea of automating this service is very
promising.
In this paper we analyze the performance of such a network by
simulation, in order to check the viability of underwater wireless
networks for pisciculture activity in the Amazon.
Keywords: underwater wireless, Amazon, pisciculture, acoustic
network, simulation.
1. Introduction
The Amazon is known worldwide for its biodiversity and
water availability. The Amazon River basin is the largest
in the world, occupying an approximate area of
7,008,370 km², with nearly 60% located in Brazil [1].
These water resources are important mainly for fishing
and navigation.
An estimated three thousand species of fish live in this
region [2], which also suffers a high rate of deforestation
due to conversion of forest to pasture land and agriculture.
These facts are highly relevant today because of the high
demand for alternative livelihood activities that require
little deforestation. One such activity is fish farming, or
pisciculture, as the cultivation of fish (mainly freshwater
fish) is called; it may be ornamental or sustainable and
generates jobs and income for the population. This makes
the activity a great social and economic option for the
region.
However, this activity needs supporting technology for
water quality monitoring at the farms, as water quality has
a great influence on the growth and survival of fish [3]
and is highly vulnerable to pollution caused by industrial
and urban waste and by the use of pesticides and
fertilizers.
4. Case Study
4.1 Scenario Description
The scenario used in this simulation is based on a pisciculture tank of 4,500 m² and 1.5 m depth, since such tanks can vary from a few square meters to many
4.2 Results
From the simulation results, graphs were generated to illustrate the model characteristics and analyze
Figure 6. Graphic 4: Throughput
5. Conclusions
This paper presented a performance analysis for the proposed use of underwater wireless networks in pisciculture tanks for water quality monitoring.
The results obtained by simulation demonstrate the viability of the proposal: since the scenario used has dimensions considered large for this type of activity, in smaller scenarios the results tend to be even better.
Because the data to be collected require neither high transmission power nor high bandwidth, the proposal appears suitable for automating water monitoring in pisciculture, an activity that otherwise demands a lot of time and manpower because the measurements are performed more than once a day.
References
"... Simulator." [Online]. Available: http://www.dei.unipd.it/wdyn/?IDsezione=3966. [Accessed: Apr. 14, 2010].
Università degli Studi di Padova, "dei80211mr: a new 802.11 implementation for NS-2." [Online]. Available: http://www.dei.unipd.it/wdyn/?IDsezione=5090. [Accessed: Apr. 14, 2010].
Università degli Studi di Padova, "Underwater Communications." [Online]. Available: http://www.dei.unipd.it/wdyn/?IDsezione=5216. [Accessed: May 12, 2010].
Università degli Studi di Padova, Department of Information Engineering, "SIGNET." [Online]. Available: http://telecom.dei.unipd.it/labs/read/3/. [Accessed: May 12, 2010].
E. Carballo, A. V. Eer, T. V. Schie, and A. Hilbrands, Piscicultura de água doce em pequena escala, Agromisa Foundation, 2008.
Abstract
For a long time, geospatial information was printed on paper maps whose contents were produced once, for specific purposes and scales. These maps are characterized by their portability, good graphic quality, high image resolution, and good placement of symbols and labels. They were generated manually by cartographers, whose work was hard and fastidious. Today, computers are used to generate maps as required, a process called cartographic generalization. The purpose of cartographic generalization is to represent a particular situation adapted to the needs of its users, with adequate legibility of the real situation and perceptual congruity with the representation. Of interest are those representations which, to some degree, vary from the real situation in nature. In this paper, a simple approach is presented for the simplification of contours, roads, and building ground plans that are represented as 2D line, square, and polygon segments. As it is important to preserve the overall characteristics of the buildings, the lines are geometrically simplified with regard to geometric relations; the same holds for contour and road data. An appropriate transformation and visualization of contour and building data is also presented.
1. Introduction
In the natural environment, human senses perceive globally, without details. Only when one has a particular interest does he or she observe details. This is a natural process; otherwise an abundance of detail would lead to confusion. For similar reasons, in the process of cartographic generalization many details may be omitted because they are of little interest to the user in that context, or they are merged together for the sake of map space. The concept of generalization is ubiquitous in nature, and so it is in cartography. It is basically a process of compilation of map content. The actual quantitative and qualitative basis of cartographic generalization is determined by the map purpose and scale, symbols, features of the represented objects, and other factors. One application of cartographic generalization is simplifying and representing map objects for display on low-resolution devices such as mobile phones and GPS systems.
4. Simplification
When a map is represented graphically and the representation scale is reduced, some area features become too insignificant to be represented; such objects can be regularly or irregularly shaped. In this paper a simple approach is presented to simplify contour lines and building plans so as to make the map more accurate and understandable; simplification is defined for both contour lines and buildings.
Vertex Simplification: Line simplification is also referred to as vertex simplification. Often a polyline has too much resolution for an application, such as visual displays of geographic map boundaries or detailed animated figures in games or movies. That is, the vertices representing the object boundaries are too close together for the resolution of the application. For example, on a computer display, successive vertices of the polyline may fall on the same screen pixel, so that successive edge segments start, stay at, and end at the same displayed point. The whole figure may even have all its vertices mapped to the same pixel, so that it appears simply as a single point in the display. Different algorithms for reducing the points in a polyline to
Figure: an original polyline with vertices V0-V7, showing kept and discarded vertices, and the resulting reduced polyline.
Algorithm:
Step [1]: Generate the map.
Step [2]: Get the tolerance value from the user.
Step [3]: Calculate the number of objects or buildings on the screen.
Step [4]: Call the Douglas-Peucker recursive simplification routine to simplify the polyline.
Step [5]: Generate the simplified map.
A sample contour data set from the Survey of India (SoI), stored in the Open Geospatial Consortium (OGC) Geography Markup Language (GML) format, is used for the simplification of lines and buildings. Using the steps above, the map can be simplified for a given level of tolerance.
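The Step [4] routine is the standard Douglas-Peucker recursion. A minimal, self-contained sketch is given below; it is a generic implementation (not the project's production code), and the point-to-segment distance helper is our own:

```python
import math

def perp_distance(pt, a, b):
    """Perpendicular distance from point pt to the chord through a and b."""
    (px, py), (ax, ay), (bx, by) = pt, a, b
    dx, dy = bx - ax, by - ay
    chord = math.hypot(dx, dy)
    if chord == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dx * (ay - py) - dy * (ax - px)) / chord

def douglas_peucker(points, tolerance):
    """Recursively discard vertices closer than `tolerance` to the chord."""
    if len(points) < 3:
        return list(points)
    # Find the vertex farthest from the chord joining the endpoints.
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perp_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= tolerance:
        return [points[0], points[-1]]   # all inner vertices discarded
    left = douglas_peucker(points[:index + 1], tolerance)
    right = douglas_peucker(points[index:], tolerance)
    return left[:-1] + right             # avoid duplicating the split vertex

polyline = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(polyline, 1.0))
```

The tolerance argument corresponds to the user-supplied value from Step [2]; endpoints are always preserved, so building corners survive simplification.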
Feature   Code           Major Code  Minor Code  Category                 Condition
Road      RD             11          1100        Highway                  Metalled
                         11          1300        Motorway                 Metalled
                         11          5300        Motorway                 Unmetalled
                         11          6100        Pack-track, plains       Unmetalled
                         11          6410        Cart-track, plains       Unmetalled
                         11          6500        Foot-path, plains        Unmetalled
                         15          3000        Motorway                 Metalled
                         11          6300        Track follows streambed  Unmetalled
Building  Ntdb:00000001  37          2100        Residential block        Village/Town
                         37          1100        Residential hut          Temporary
                         37          1200        Residential hut          Permanent
                         37          1400        Residential hut, oblong  Permanent
                         37          3020        Religious                Chhatri
                         37          3040        Religious                Idgah
                         37          3100        Religious                Temple
5. Conclusion
Many approaches have been proposed for vertex as well as polygon simplification. The technique proposed here modifies an existing technique to arrive at a more efficient model. The final map is more accurate and understandable.
Acknowledgments
This research is being carried out under the project "Cartographic Generalization of Map Objects", sponsored by the Department of Science & Technology (DST), India. The project is currently underway at the Central Electronics Engineering Research Institute (CEERI), Pilani, India. The authors would like to thank the Director of CEERI for his active encouragement and support, and DST for the financial support.
References
[1] Robert Weibel and Christopher B. Jones, "Computational Perspectives on Map Generalization", GeoInformatica, vol. 2, pp. 307-314, Springer Netherlands, November/December 1998.
Abstract
This paper investigates ontology. Ontology exhibits enormous potential for making software more efficient, adaptive, and intelligent, and is recognized as one of the areas that will bring the next breakthrough in software development. An ontology specifies a rich description of the terminology, concepts, and properties, explicitly defining the concepts involved. Since understanding concepts and terms is one of the difficulties in modelling diagrams, this paper suggests an ontology aiming to identify some heavily used modelling diagram concepts to make them easier to work with.
Keywords: ontology, modelling, concept, requirement
1. Introduction
In systems engineering and software engineering,
requirements analysis encompasses all of the tasks that go
into the instigation, scoping and definition of a new or
altered system. Requirements analysis is an important part
of the system design process, whereby requirements
engineers and business analysts, along with systems
engineers or software developers, identify the needs or
requirements of a client. Once the client's requirements
have been identified, the system designers are then in a
position to design a solution [1].
The requirements phase is based on robust conceptual models. Ontologies are a promising means of achieving these conceptual models, since they can serve as a basis for comprehensive information representation and communication.
During the requirements stage, many modelling diagrams can be used for similar or different systems, depending on the multiple views of the developers and the types of the systems. Authors of requirements use different terminology, and hence the same term is applied to different concepts and different terms are used to denote
Table: diagram name, diagram definition, concepts, and concept definition.
5. Ontological Analysis
We agree that ontological analysis and evaluation is one of several approaches that should be used to improve modelling diagrams. We evaluate the results by a percentage technique that depends on frequency.
We chose the 30 highest-frequency concepts from the results shown in Table (4.4) to limit our study, since the frequency increased steadily over the first thirty concepts.
To emphasize the importance of the thirty highest-frequency concepts in our results, we apply the percentage technique as follows:
Identify all the selected modelling diagrams.
Classify the concepts for each modelling diagram into a category.
Count the concepts for each category.
Take the thirty highest-frequency concepts, then find and remark whether any of those concepts exist in each category.
Count the remarked concepts for each category.
Define the proportion.
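The counting steps above can be sketched as follows; the diagram names, concept lists, and top-frequency set are illustrative stand-ins for the real data of Table (4.4):

```python
# Hypothetical concept lists per modelling diagram (stand-ins for the real data).
diagram_concepts = {
    "Activity": ["Action", "Actor", "Constraint", "Swimlane", "Transition"],
    "Use Case": ["Use Case", "Actor", "Include", "Extend", "System"],
}
# Stand-in for the thirty highest-frequency concepts of Table (4.4).
top_concepts = {"Actor", "Action", "System", "Include"}

for diagram, concepts in diagram_concepts.items():
    remarked = [c for c in concepts if c in top_concepts]  # remark existing concepts
    proportion = len(remarked) / len(concepts)             # define the proportion
    print(f"{diagram}: {len(remarked)}:{len(concepts)} = {proportion:.1%}")
```

With the real concept lists, the same loop reproduces the ratios reported in Table (5.1).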
Table (5.1) shows the results of the previous steps. For the first diagram, the Activity diagram, we identify in the third column a collection of its important concepts, such as action, actor, and constraint. In the fourth column we set the number of all its concepts identified in column three. We use the category of the highest-frequency concepts outlined in Table (4.4) to check whether those specific concepts exist in the collection of Activity concepts shown in column three. After that, we count how many of the important concepts exist and write the number in column five. The proportion between the number of Activity concepts (column four) and the number of specific existing concepts (column five) is 7:34, presented in column six. The proportion 7:34 corresponds to 20.6%, shown in the last column of the table.
The other modelling diagrams are analyzed in the same way as the Activity diagram.
Table (5.1): Evaluation of the results by the percentage technique

Diagram    Concepts (Con.)                            All Con.  No. Con.  Proportion  %
Activity   Action, Actor, Constraint, Nodes,          34        7         7:34        20.6
           Expansion Region, Exception Handlers,
           State, Symbol, State, Transition,
           Control, Thread, Rendezvous, Swimlane
Use Case   Use Case, Description, Name,               27        10        10:27       37.0
           Requirement, Points, System, Visibility,
           Component, Boundary, Class, Property,
           Core Element, Association, Extend,
           Generalization, Include, Diagram,
           Element, System
BWW       Definition of the BWW concept                       Our ontology    Equivalent?
Thing     The elementary unit in the BWW ontological model.   Object, Entity  Yes
          The real world is made up of things.
Property  Things possess properties. A property is modeled    Attribute       Yes
          via a function that maps the thing into some
          value.
Class     A set of things that can be defined by their        Class           Yes
          possessing a particular set of properties.
TOTAL concepts: 30 (BWW), 30 (our ontology), 25 (equivalent)
7. Conclusions
The purpose of this paper is to enhance the existing modelling diagrams used during the requirements phase by building an ontology that defines terms and classifies them, in order to reduce the ambiguity of modelling diagram concepts and to let system developers and users understand concepts and terms without complexity and conflicts.
Many models can be implemented in various ways, and there are many different concepts that developers and customers must learn. The differences in modelling the real world split developers into different schools; each school may focus on different aspects of the world, showing that developers understand the world differently. Ontology can enhance the existing modelling techniques used during the requirements phase; in addition, an ontology includes definitions, classifications, and formalization of terms.
8. References
[1] Barry Boehm, "A view of 20th and 21st century software engineering", Proceedings of the 28th International Conference on Software Engineering, Shanghai, 2006, ISBN 1-59593-375-1, ACM, pp. 12-29.
[2] Leo Obrst, "Ontologies for Semantically Interoperable Systems", CIKM'03, 2003.
[3] Milton, S. K., and Kazmierczak, E., "Enriching the Ontological Foundations of Modelling in Information Systems", Proceedings of the Workshop on Information Systems Foundations: Ontology, Semiotics, and Practice (Ed. Dampney, C.), 2000, pp. 55-65.
[4] Stephen J. Mellor, "A Comparison of the Booch Method and Shlaer-Mellor OOA/RD", Project Technology, Inc., 1993.
[5] Recker, Jan C., and Indulska, Marta, "An Ontology-Based Evaluation of Process Modeling with Petri Nets", IBIS - International Journal of Interoperability in Business Information Systems, Vol. 2, No. 1, 2007, pp. 45-64.
[6] Malcolm Shroff and Robert B. France, "Towards a Formalization of UML Class Structures in Z", Computer Software and Applications Conference (COMPSAC '97), Proceedings, pp. 646-651.
[7] Johannes Koskinen, Jari Peltonen, Petri Selonen, Tarja Systa, and Kai Koskimies, "Model processing tools in UML", 23rd International Conference on Software Engineering, 2001, pp. 819-820.
[8] Yeol Song and Kristin Froehlich, "Entity-Relationship Modeling: A Practical How-to Guide", IEEE Potentials, Vol. 13, No. 5, 1994-1995, pp. 29-34.
[9] Kevin L. Mills and Hassan Gomaa, "Knowledge-Based Automation of a Design Method for Concurrent Systems", IEEE Transactions on Software Engineering, Vol. 28, No. 3, 2003.
[10] Federico Vazques, "Identification of complete data flow diagrams", ACM SIGSOFT Software Engineering Notes, Vol. 19, Issue 3, 1994, pp. 36-40.
[11] Luciano Baresi and Mauro Pezzè, "Formal Interpreters for Diagram Notations", ACM Transactions on Software
Wand-Weber Model", Softw. Syst. Model., 2002, pp. 43-67.
1 Department of Management Information Systems, College of Applied Studies & Community Services, King Faisal University, Hofuf, Al-Hassa 31982, Saudi Arabia
2 Department of Information Systems, College of Computer Sciences & IT, King Faisal University, Hofuf, Al-Hassa 31982, Saudi Arabia
Abstract
Fuzzy logic and proportional-integral-derivative (PID) controllers are compared for use in direct current (DC) motor positioning systems. A simulation study of the PID position controller is performed for armature-controlled (fixed field) and field-controlled (fixed armature current) DC motors. The fuzzy rules and the inferencing mechanism of the fuzzy logic controller (FLC) are evaluated using conventional rule lookup tables that encode the control knowledge in rule form. The performance assessment of the studied position controllers is based on transient response and error integral criteria. The results obtained with the FLC are superior not only in rise time, speed fluctuation, and percent overshoot but also in the structure of the controller output signal, which is remarkable in terms of hardware implementation.
1. Introduction
Lotfi Zadeh, the father of fuzzy logic, observed that many sets in the world that surrounds us are defined by non-distinct boundaries. Zadeh decided to extend two-valued logic, defined by the binary pair {0, 1}, to the whole continuous interval [0, 1], thereby introducing a gradual transition from falsehood to truth [1].
Fuzzy control is a control method based on fuzzy logic. Just as fuzzy logic can be described simply as "computing with words rather than numbers", fuzzy control can be described simply as "control with sentences rather than equations". A fuzzy controller can include empirical rules, which is especially useful in operator-controlled plants. A comprehensive review of the classical design and implementation of the fuzzy logic controller can be found in the literature [2], [3], [4]. A fuzzy IF-THEN rule-based system consists of the following modules [5], [6]:
Figure: block diagram of the discrete PID-like FLC, with error e(kT), sum of error se(kT), and change of error de(kT) as inputs (formed with a delay z^-1 and sample period T) and controller output U(k).
U(k) = K_P e(k) + K_I se(k) + K_D de(k)        (1)

that is similar to the ideal PID control algorithm:

u(t) = K_P e(t) + K_I ∫ e(t) dt + K_D de(t)/dt        (2)
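Equation (1) maps directly onto a small discrete controller. A minimal sketch follows, with hypothetical gains and sample period (the paper's tuned values are not reproduced here); the running sum and backward difference follow the sampled-error definitions given later:

```python
class DiscretePID:
    """Discrete controller implementing U(k) = Kp*e(k) + Ki*se(k) + Kd*de(k)."""

    def __init__(self, kp, ki, kd, T):
        self.kp, self.ki, self.kd, self.T = kp, ki, kd, T
        self.se = 0.0       # running sum of the error, se(kT)
        self.prev_e = 0.0   # e((k-1)T)

    def update(self, setpoint, position):
        e = setpoint - position                  # e(kT)
        de = (e - self.prev_e) / self.T          # de(kT): backward difference
        self.se += self.T * self.prev_e          # se(kT) = se((k-1)T) + T*e((k-1)T)
        self.prev_e = e
        return self.kp * e + self.ki * self.se + self.kd * de

pid = DiscretePID(2.0, 0.1, 0.5, 0.01)   # hypothetical gains and sample period
print(pid.update(1.0, 0.0))
```

Note the large first output: the backward-difference derivative produces the usual "derivative kick" on a setpoint step.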
Fig. 2 The general response of the second-order and third-order systems.
Figure/table residue: the fuzzy rule table over the linguistic values N (negative), Z (zero), and P (positive) of the error and the sum of the error, partitioned into the regions Z1-Z5, and the triangular membership functions (negative, zero, positive) defined over the error universe.
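The rule evaluation itself can be sketched with triangular N/Z/P memberships. The universe scaling, rule entries, and output singletons below are illustrative assumptions, not the paper's tuned rule base:

```python
def tri(x, a, b, c):
    """Triangular membership with peak at b on [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def memberships(x):
    # N, Z, P over a normalized universe [-1, 1] (shoulders clamped).
    n = 1.0 if x <= -1 else tri(x, -2, -1, 0)
    z = tri(x, -1, 0, 1)
    p = 1.0 if x >= 1 else tri(x, 0, 1, 2)
    return {"N": n, "Z": z, "P": p}

# Illustrative rule table: (error, sum-of-error) -> output singleton label.
RULES = {("N", "N"): "N", ("N", "Z"): "N", ("N", "P"): "Z",
         ("Z", "N"): "N", ("Z", "Z"): "Z", ("Z", "P"): "P",
         ("P", "N"): "Z", ("P", "Z"): "P", ("P", "P"): "P"}
SINGLETON = {"N": -1.0, "Z": 0.0, "P": 1.0}

def flc(error, sum_error):
    """Min for rule firing strength, weighted average of output singletons."""
    me, ms = memberships(error), memberships(sum_error)
    num = den = 0.0
    for (le, ls), out_label in RULES.items():
        w = min(me[le], ms[ls])
        num += w * SINGLETON[out_label]
        den += w
    return num / den if den else 0.0

print(flc(0.5, 0.0))
```

Zero error and zero error sum fire only the (Z, Z) rule, giving zero output, as a position controller at its setpoint should.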
IAE = ∫ |e(t)| dt,    ITAE = ∫ t |e(t)| dt
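On sampled data both criteria reduce to simple sums; a sketch using rectangular integration and a synthetic error trace:

```python
def iae(errors, T):
    """IAE = integral of |e(t)| dt, approximated by a rectangular sum."""
    return sum(abs(e) for e in errors) * T

def itae(errors, T):
    """ITAE = integral of t*|e(t)| dt, weighting late errors more heavily."""
    return sum(k * T * abs(e) for k, e in enumerate(errors)) * T

errors = [1.0, 0.5, 0.25, 0.0]
print(iae(errors, 0.1))    # approx. 0.175
print(itae(errors, 0.1))   # approx. 0.01
```

Because ITAE multiplies by elapsed time, slowly decaying errors are penalized far more than the initial transient.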
Armature-controlled DC motors:
    θ(s)/U(s) = K_m / [s (T_m s + 1)]

Field-controlled DC motors:
    θ(s)/U(s) = K_f / [s (T_f s + 1)(T_m s + 1)]

DC motors, whether armature-controlled or field-controlled, are used in our simulation. The block diagram of such systems is shown in Fig. 5: the error e(t) is sampled to e*(t), fed to the PID-like FLC producing U(t), passed through a zero-order hold to U*(t), and applied to the DC motor to produce the output position. The control objective for both types of DC motors is to reach a specified motor position using an appropriate input drive voltage.
A zero-order hold device keeps the controller output constant during each sampling interval. The PID controller inputs are defined as follows:

    e(kT)  = setpoint(t) - position(t), evaluated at t = kT
    se(kT) = se((k-1)T) + T e((k-1)T)
    de(kT) = [e(kT) - e((k-1)T)] / T
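Assuming the single-time-constant model θ(s)/U(s) = K/[s(T_m s + 1)], the sampled closed loop can be simulated with Euler integration. All gains and constants below are hypothetical, not the paper's motor parameters:

```python
# Closed-loop simulation of theta(s)/U(s) = K/(s*(Tm*s + 1)) under the discrete
# PID of Eq. (1), using Euler integration. All numeric values are hypothetical.
K, Tm, T = 1.0, 0.5, 0.01       # motor gain, mechanical time constant, sample period
kp, ki, kd = 8.0, 0.5, 1.0      # hypothetical controller gains
se = prev_e = 0.0               # sum of error, previous error
w = theta = 0.0                 # motor angular velocity and position
setpoint = 1.0

for _ in range(10000):          # 100 s of simulated time
    e = setpoint - theta
    de = (e - prev_e) / T                   # de(kT)
    se += T * prev_e                        # se(kT) = se((k-1)T) + T*e((k-1)T)
    u = kp * e + ki * se + kd * de          # Eq. (1); held by the zero-order hold
    prev_e = e
    # Euler step of Tm*dw/dt = -w + K*u and dtheta/dt = w
    w += T * (-w + K * u) / Tm
    theta += T * w

print(round(theta, 3))
```

With these sketch values the position converges to the setpoint; the transient-response and error-integral metrics of the paper can then be read off the recorded trajectory.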
Armature-controlled DC motor system:
                      Tr (Sec)  Ts (Sec)  Os (%)  IAE     ITAE
Conventional PID      -         12        2.29    160.48  631.83
PID-like FLC          -         -         2.16    126.93  590.69

Field-controlled DC motor system:
                      Tr (Sec)  Ts (Sec)  Os (%)  IAE     ITAE
Conventional PID      -         14        7.03    246.59  1490.1
PID-like FLC          -         -         4.95    178.29  856.61

Improvement of the PID-like FLC over the conventional PID:
                      Tr    Ts    Os    IAE   ITAE
Armature-controlled   43%   58%   6%    21%   7%
Field-controlled      38%   43%   30%   28%   43%
4. Conclusions
The design and implementation of armature-controlled and field-controlled DC motor systems using both a conventional PID controller and a PID-like FLC have been presented. Comparison of the experimental results shows that the PID-like FLC performs better than the conventional PID controller. The results indicate that, even without knowing the details of the controlled plants, we were able to construct a well-performing fuzzy logic controller based on experience with the position controller.
Acknowledgments
The authors would like to express their appreciation to the Deanship of Scientific Research at King Faisal University for supporting this research.
References
[1] J. Jantzen, "Tutorial on Fuzzy Logic", Technical University of Denmark, Department of Automation, Technical report no. 98-E 868, 19 Aug 1998. URL: http://www.iau.dtu.dk/
[2] J. Jantzen, "Design of Fuzzy Controllers", Technical University of Denmark, Department of Automation, Technical report no. 98-E 864, 19 Aug 1998. URL: http://www.iau.dtu.dk/
[3] C. C. Lee, "Fuzzy Logic in Control Systems: Fuzzy Logic
Controller-Part I", IEEE Transactions on Systems, Man, and
Cybernetics, Vol. 20, No. 2, pp. 404-418, 1990.
[4] C. C. Lee, "Fuzzy Logic in Control Systems: Fuzzy Logic
Controller-Part II", IEEE Transactions on Systems, Man, and
Cybernetics, Vol. 20, No. 2, pp. 419-435, 1990.
[5] R. Isermann, "On Fuzzy Logic Applications for Automatic Control, Supervision, and Fault Diagnosis", IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, Vol. SMC-28, No. 2, pp. 221-235, 1998.
[6] R. R. Yager and D. P. Filev, "Essentials of Fuzzy Modeling
and Control", John Wiley & Sons, 1994.
[7] T. E. Marlin, "Process Control: Designing Process and
Control Systems for Dynamic Performance", McGraw-Hill,
1995.
3 Department of Computer Engg., Zakir Husain College of Engg. & Technology, AMU, Aligarh, U.P., India
Abstract
With the rapid development of networked multimedia environments, digital data can now be distributed much faster and more easily. To maintain privacy and security, cryptography alone is not enough. In recent years, steganography has therefore become an attractive area for network communications. In this paper, an attempt is made to develop a methodology that calculates the variance of the secret message (which the sender wishes to send) and accordingly creates a carrier file. This carrier file can be sent over any network (secure or unsecure) without raising any doubt in an attacker's mind. A practical implementation has been done on the Microsoft platform, and experimental results show the feasibility of the proposed techniques.
Keywords: steganography, cryptography, image file, stego file.
1. Introduction
The growing use of the Internet creates the need to store, send, and receive personal information in a secure manner. For this, we may adopt an approach that transforms the data into a different form, so that the resulting data can be understood only if it can be returned to its original form. This technique is known as encryption. However, a major disadvantage of this method is that the existence of the data is not hidden: given enough time, the unreadable encrypted data may be converted back into its original form.
A solution to this problem is achieved by a technique named with the Greek word steganography, meaning "hidden writing". The main purpose of steganography is to hide data in a cover medium so that others will not notice it [10].
2. Related Works
The most suitable cover medium for steganography is the image, for which numerous methods have been designed. The main reasons are the large redundant space and the possibility of hiding information in the image without attracting the attention of the human visual system. In this respect, a number of techniques have been developed [1,7] using features like:
Substitution
Masking and Filtering
Transform Technique
3. Proposed Methodology
It is the principal concept in steganography that one has to conceal the secret data in a carrier file such that the combination of the carrier and the embedded information never raises any doubt. This carrier file can then be sent over any communication medium for secure communication.
To achieve this, we propose a model for the communication channel, divided into two parts. The first part (the sender part) takes the secret data (which the sender wishes to send), analyzes it, creates an image file accordingly, and sends it to the receiver end. The second part (the recipient part) receives the image (the stego image), retrieves the secret data, and displays it. The basic model of stego image file creation is shown in Figure 1.
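The sender-side analysis step can be sketched as follows; how the variance is mapped to carrier-file parameters is specific to the paper's method, so the depth-selection policy below is purely illustrative:

```python
def message_variance(message: bytes) -> float:
    """Population variance of the message's byte values."""
    n = len(message)
    mean = sum(message) / n
    return sum((b - mean) ** 2 for b in message) / n

secret = b"attack at dawn"
var = message_variance(secret)
# Illustrative policy: higher byte variance -> use a 24-bit carrier, else 8-bit.
carrier_depth = 24 if var > 100 else 8
print(var, carrier_depth)
```

A flat (low-variance) message embeds inconspicuously in a small carrier; a high-variance message suggests a richer carrier image.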
8-bit images
24-bit images.
Integer I, J, Y, A, L, M, Z;
Integer Red, Green, Blue;
Input I, J, Y, L;
M = 1;
For ( A = 1; A <= L; A++ )
{
    Z = I + (J - 1) * Y;
    If ( Z == A ) Then
    {
        GetRGB (&Red, &Green, &Blue, Stego_buffer[A]);
        Decode (Secret_buffer[M], Red);
        Decode (Secret_buffer[M], Green);
        Decode (Secret_buffer[M], Blue);
        A = A + 2;
        M = M + 1;
    }
}
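The Decode calls above pull secret bits back out of the R, G, and B channels. The same idea can be sketched end to end as a least-significant-bit scheme; the helper names and the one-bit-per-channel layout are our own illustrative choices:

```python
def embed_bit(channel: int, bit: int) -> int:
    """Replace the least significant bit of an 8-bit channel value."""
    return (channel & ~1) | bit

def extract_bit(channel: int) -> int:
    return channel & 1

def embed_byte(pixels, byte):
    """Spread the 8 bits of `byte` over the R, G, B channels (MSB first)."""
    bits = [(byte >> (7 - i)) & 1 for i in range(8)]
    out, i = [], 0
    for (r, g, b) in pixels:
        if i < 8: r = embed_bit(r, bits[i]); i += 1
        if i < 8: g = embed_bit(g, bits[i]); i += 1
        if i < 8: b = embed_bit(b, bits[i]); i += 1
        out.append((r, g, b))
    return out

def extract_byte(pixels):
    bits = [bit for px in pixels for bit in map(extract_bit, px)][:8]
    return sum(bit << (7 - i) for i, bit in enumerate(bits))

cover = [(200, 13, 77), (54, 90, 120), (33, 66, 99)]
stego = embed_byte(cover, ord("A"))
print(extract_byte(stego) == ord("A"))
```

Each channel changes by at most one intensity level, which is what keeps the embedding below the threshold of the human visual system.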
References
[1] G. Sahoo and R. K. Tiwari, "Designing an Embedded Algorithm for Data Hiding using Steganographic Technique by File Hybridization", IJCSNS International Journal of Computer Science and Network Security, Vol. 8, No. 1, January 2008.
[2] Donovan Artz, "Digital Steganography: Hiding Data within Data", IEEE Internet Computing, pp. 75-80, May-June 2001.
[3] Mitchell D. Swanson, Bin Zhu, and Ahmed H. Tewfik, "Transparent Robust Image Watermarking", IEEE 0-7803-3258-X/96, 1996.
[4] M. M. Amin, M. Salleh, S. Ibrahim, M. R. Katmin, and M. Z. I. Shamsuddin, "Information Hiding using Steganography", IEEE 0-7803-7773, March 2003.
[5] Lisa M. Marvel and Charles T. Retter, "A Methodology for Data Hiding using Images", IEEE 0-7803-4506-1/98, 1998.
[6] Bret Dunbar, "A Detailed Look at Steganographic Techniques and their use in an Open-Systems Environment", SANS Institute, January 18, 2002.
[7] C. Cachin, "An Information-Theoretic Model for Steganography", in Proceedings of the 2nd Information Hiding Workshop, vol. 1525, pp. 303-318, 1998.
[8] F. A. P. Petitcolas, R. J. Anderson, and M. G. Kuhn, "Information Hiding - A Survey", Proceedings of the IEEE, pp. 1062-1078, July 1999.
[9] Venkatraman S., Ajith Abraham, and Marcin Paprzycki, "Significance of Steganography on Data Security", IEEE 0-7695-2108-8, 2004.
[10] N. F. Johnson and S. Jajodia, "Exploring Steganography: Seeing the Unseen", Computer, vol. 31, no. 2, Feb. 1998, pp. 26-34.
[11] Ross J. Anderson and Fabien A. P. Petitcolas, "On the Limits of Steganography", IEEE Journal on Selected Areas in Communications, Vol. 16, No. 4, May 1998.
[12] K. B. Raja, C. R. Chowdary, Venugopal K. R., and L. M. Patnaik, "A Secure Image Steganography using LSB, DCT, and Compression Techniques on Raw Images", IEEE 0-7803-9588-3/05.
[13] S. Craver, "On public-key steganography in the presence of an active warden", in Second International Workshop on Information Hiding, Springer-Verlag, 1998.
[14] A. Allen, Roy (October 2001), "Chapter 12: Microsoft in the 1980's", in A History of the Personal Computer: The People and the Technology (1st edition), Allan Publishing, pp. 12-13, ISBN 0-9689108-0-7. http://www.retrocomputing.net/info/allan/eBook12.pdf
[15] Tsung-Yuan Liu and Wen-Hsiang Tsai, "A New Steganographic Method for Data Hiding in Microsoft Word Documents by a Change Tracking Technique", IEEE Transactions on Information Forensics and Security, Vol. 2, No. 1, March 2007, pp. 24-30.
[16] Dekun Zou and Y. Q. Shi, "Formatted text document data hiding robust to printing, copying and scanning", IEEE International Symposium on Circuits and Systems, Vol. 5, 2005, pp. 4971-4974.
[17] A. Castiglione, A. De Santis, and C. Soriente, "Taking advantage of a disadvantage: Digital forensics and steganography using document metadata", The Journal of Systems and Software, 80 (2007), pp. 750-764.
[18] Sahoo, G., and Tiwari, R. K. (2010), "Some new methodologies for secured data coding and transmission", Int. J. Electronic Security and Digital Forensics, Vol. 3, No. 2, pp. 120-137.
Abstract
A new approach to using data mining tools for customer complaint management is presented in this paper. The association rule mining technique is applied to discover the relationship between different groups of citizens and different kinds of complaints. The data are citizens' complaints about the performance of the municipality of Tehran, the capital of Iran. Analyzing these rules makes it possible for the municipality managers to find the causes of complaints and thus facilitates engineering changes accordingly. The idea of contrast association rules is also applied, to identify the attributes characterizing patterns of complaint occurrence among various groups of citizens. The results would enable the municipality to optimize its services.
Keywords: association rule, customer complaint management, e-government, data mining.
1. Introduction
With the advent of information and communications technology (ICT), many governments have been promoting the use of information systems to deliver services and information to citizens effectively [1], [2]. Some researchers have studied the adoption of IT and technological innovations in the public sector [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13]. The use of such electronic means in government is defined as e-government, which facilitates providing services and helps governments become more citizen-oriented [1], [2].
One of the topics often included in this domain is the adoption of Customer Relationship Management (CRM) in the public sector. King (2007) [14] introduced CRM as a key e-government enabler that uses ICT to collect and store data, which can then be used to discover valuable knowledge about customers. Schellong (2005) [15] introduced the concept of CiRM as a part of New Public Management (NPM) that is included in the area
2. Methodology
In this section, we discuss association rules and the Apriori algorithm, which are used in analyzing the customer data.
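As an illustrative sketch (not the authors' implementation), the two measures that drive Apriori-style rule mining, support and confidence, can be computed over complaint records as follows; the attribute=value items below are invented toy data:

```python
def support(itemset, transactions):
    """Fraction of transactions that contain every item of the itemset."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """conf(A -> B) = supp(A union B) / supp(A)."""
    return (support(set(antecedent) | set(consequent), transactions)
            / support(antecedent, transactions))

# Toy complaint records: each record is a set of attribute=value items.
data = [
    {"CR=10", "S=traffic"},
    {"CR=10", "S=traffic"},
    {"CR=10", "S=ICT"},
    {"CR=22", "S=traffic"},
]
s = support({"CR=10", "S=traffic"}, data)       # 2 of 4 records
c = confidence({"CR=10"}, {"S=traffic"}, data)  # supp 0.5 / supp 0.75
```

A rule such as "CR = 10 implies S = traffic" is reported when both measures exceed user-chosen thresholds.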
3. Case study
In this section, association rule mining is applied to discover the relationship between different citizens and the types of their complaints in the case of the Tehran municipality.
Subject ID | Message Time        | Citizen Region | Citizen Gender | Citizen Education
301        | 2008-11-25 15:38:50 | 4              | Male           | Master
299        | 2008-10-13 13:57:13 | 22             | Male           | Bachelor
1040       | 2009-10-20 08:40:00 |                | Female         | Student
1038       | 2010-02-24 11:12:00 |                | Male           | Student
ID | Attribute         | Definition
1  | Subject ID        | The subject of the citizen call
2  | Message Time      | The time of the citizen call
3  | Citizen Region    | The geographical location where the citizen lives
4  | Citizen Gender    | Male or Female
5  | Citizen Education | The degree of citizen education, such as BSc, MSc, PhD
[Table of association rules (IDs 1-10) with columns ID, Antecedent, Consequent, support S (%), and confidence C (%). The antecedents are CR = 10, CR = 11, and CR = 22 with Se = spring (CR = citizen region, Se = season); the recoverable consequents are S = delinquency of employees and managers, S = transportation and traffic, and S = ICT.]
ID | Gender  | Antecedent           | Consequent                     | S (%) | C (%)
1  | Females | CR = 13 and CE = BSc | S = social-cultural            | 1.4   | 80
2  | Males   | CR = 13 and CE = BSc | S = social-cultural            | 0.31  | 15.6
3  | Males   | CR = 22              | S = transportation and traffic | 1.2   | 43
4  | Females | CR = 22              | S = transportation and traffic | 9.6   |
ID | Education     | Antecedent            | Consequent                                | S (%) | C (%)
1  | Graduated     | CR = 14 and CG = male | S = delinquency of employees and managers | 0.89  | 66.7
2  | Non-graduated | CR = 14 and CG = male | S = delinquency of employees and managers | 0.89  | 10.8
3  | Non-graduated | CR = 12 and CG = male | S = the urban service management system   | 0.89  | 80
4  | Graduated     | CR = 12 and CG = male | S = the urban service management system   | 0.21  | 15
Table 6: contrast rules in different citizens' groups

ID | Segment     | Antecedent             | Consequent                              | S (%)    | C (%)
1  | 49          | CE = BSc and CG = male | S = the urban service management system | 9.01     | 35
2  | 2,5,6,7,8,9 | CE = BSc and CG = male | S = the urban service management system | 6.44     | 34.2
3  |             | CE = BSc and CG = male | S = the urban service management system | 3.18     | 21.4
4  |             | CE = BSc and CG = male | S = the urban service management system | Very low | Less than 1
4. Conclusion
Using association rule mining in customer complaint
management is the main purpose of this paper. The data of
citizens' complaints on Tehran municipality were
analyzed. Using this technique made it possible to find the
primary factors that cause complaints in different
geographical regions in different seasons of the year.
The idea of contrast association rules was also applied to discover the variables that influence complaint occurrence. To accomplish this objective, citizens were grouped according to demographic and cultural characteristics, and the contrast association rules were extracted.
The results show that there is a strong relationship between citizen gender and education and the patterns of complaint occurrence. Given the focus on cultural characteristics, it is notable that some segments are alike while others differ. Applying this approach to CiRM offers an understandable way of clustering the data, using each feature's impact to find similar clusters.
Acknowledgments
We are thankful to our colleagues in Data Mining
Laboratory in Computer Engineering Department at Iran
University of Science and Technology for their
cooperation and support. This work is also partially
supported by Data and Text Mining Research Group at
Computer Research Center of Islamic Sciences, NOOR co.
Tehran, Iran.
References
[1]
[7]
[8]
[9]
[10]
[11]
[12]
[13]
[14]
[15]
[16]
[17] R. Silva and L. Batista, "Boosting government reputation
[18]
[19]
[20]
[21]
[22]
[23]
[24]
[25]
[26]
[27]
[28]
[29]
studies and its applications. Her current research focuses on using data mining in citizen relationship management.
Abstract
In existing studies of third-party authentication, message transformation offers limited security against attacks such as man-in-the-middle, as well as limited efficiency. In this approach, we present a Quantum Key Distribution Protocol (QKDP) to safeguard security in larger networks, combining the merits of classical cryptography and quantum cryptography. Two three-party QKDPs are proposed, one implemented with implicit user authentication and the other with explicit mutual authentication, which include the following:
1.
2.
3.
1. Introduction
Computer networks are typically a shared resource used
by many applications for many different purposes.
Sometimes the data transmitted between application
processes is confidential, and the applications would
prefer that others be unable to read it. For example, when
purchasing a
3. QKDPs Contributions
As mentioned, quantum cryptography easily resists replay and passive attacks, whereas classical cryptography enables efficient key verification and user authentication. By integrating the advantages of both classical and quantum cryptography, this work presents two QKDPs with the following contributions:
4. The Preliminaries
Two interesting properties of quantum physics, quantum measurement and the no-cloning theorem, are introduced in this section to provide the necessary background for the discussion of QKDPs.
6.2.1 TC
1. If (rTA||RTA)i = 0 and (KTA)i = 0, then (QTA)i is (1/√2)(|0⟩ + |1⟩). (3)
If (rTA||RTA)i = 1 and (KTA)i = 0, then (QTA)i is (1/√2)(|0⟩ - |1⟩). (4)
(5)
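A minimal sketch of this basis-dependent encoding follows. The key-bit = 1 case, whose equation was lost from the text above, is assumed here to use the rectilinear basis, as in BB84-style protocols:

```python
import math

# Amplitude vectors over the computational basis {|0>, |1>}.
ZERO = (1.0, 0.0)
ONE = (0.0, 1.0)
PLUS = (1 / math.sqrt(2), 1 / math.sqrt(2))    # (1/sqrt(2))(|0> + |1>)
MINUS = (1 / math.sqrt(2), -1 / math.sqrt(2))  # (1/sqrt(2))(|0> - |1>)

def encode_qubit(data_bit, key_bit):
    """Choose the encoding basis with the secret key bit.

    key_bit == 0 -> diagonal basis, as in Eqs. (3) and (4);
    key_bit == 1 -> rectilinear basis (an assumption here, since the
    corresponding equation was lost from the text).
    """
    if key_bit == 0:
        return PLUS if data_bit == 0 else MINUS
    return ZERO if data_bit == 0 else ONE
```

Without the key, an eavesdropper does not know which basis each qubit was prepared in, so a measurement in the wrong basis yields a random result.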
In item 2 of Users, only Tom (or Tin), with the secret key KTA (or KTB), is able to obtain SK||UA||UB (or SK||UB||UA) by measuring the qubits QTA (or QTB) and computing h(KTA, rTA) ⊕ RTA (or h(KTB, rTB) ⊕ RTB). Hence, Tom (or Tin) alone can verify the correctness of the ID concatenation UA||UB (or UB||UA). (6)
6.2.2 Users
Let (·) be the advantage in breaking the UCB assumption used in 3AQKDP.
8. Conclusion
This study proposed two three-party QKDPs to
demonstrate the advantages of combining classical
cryptography with quantum cryptography. Compared with
classical three-party key distribution protocols, the
proposed QKDPs easily resist replays and passive attacks.
This proposed scheme efficiently achieves key
References
[1] G. Li, "Efficient network authentication protocols: Lower bounds and optimal implementations," Distributed Computing, Vol. 9, No. 3, 1995.
[2] J. T. Kohl, "The evolution of the Kerberos authentication service," European Conference Proceedings, pp. 295-313, 1991; B. C. Neuman and T. Ts'o, "Kerberos: An authentication service for computer networks," IEEE Communications, Vol. 32, No. 9, pp. 33-38, 1994.
[3] W. Stallings, Cryptography and Network Security: Principles and Practice, Prentice Hall, 2003.
[4] N. Linden, S. Popescu, B. Schumacher, and M. Westmoreland, "Reversibility of local transformations of multiparticle entanglement," quant-ph/9912039; W. Dür, J. I. Cirac, and R. Tarrach, "Separability and distillability of multiparticle quantum systems," Phys. Rev. Lett. 83, 3562 (1999).
[5] Ll. Masanes, R. Renner, M. Christandl, A. Winter, and J. Barrett, "Unconditional security of key distribution from causality constraints," quant-ph/0606049.
[6] C. H. Bennett, G. Brassard, C. Crépeau, and M.-H. Skubiszewska, "Practical quantum oblivious transfer," Lecture Notes in Computer Science 576, 351 (1991).
[7] C. H. Bennett, P. W. Shor, J. A. Smolin, and A. V. Thapliyal, "Entanglement-assisted capacity of a quantum channel and the reverse Shannon theorem," Phys. Rev. Lett. 83, 3081 (1999).
[8] P. W. Shor, "Equivalence of additivity questions in quantum information theory," Commun. Math. Phys. 246, 453 (2004).
[9] M. B. Hastings, "A counterexample to additivity of minimum output entropy," Nature Physics 5, 255 (2009).
mining and Data Structures. He has also attended many conferences and completed certification courses that support his career growth.
Fault-Tolerant Mobile Agent System Using Witness Agents in a Two-Dimensional Mesh Network
Ahmad Rostami1, Hassan Rashidi2, Majidreza Shams Zahraie3
1
Abstract
Mobile agents are computer programs that act autonomously on behalf of a user or owner and travel through a network of heterogeneous machines, so fault tolerance is important along their itinerary. In this paper, existing methods of fault tolerance in mobile agents, designed for a linear network topology, are described. These methods use three kinds of agents that cooperate to detect and recover from server and agent failures: the actual agent, which performs its owner's program; the witness agent, which monitors the actual agent and the witness agent after itself; and the probe, which is sent by a witness agent to recover the actual agent or a witness agent. The communication mechanism among these agents is message passing. We then introduce our witness-agent approach for fault-tolerant mobile agent systems in a Two-Dimensional Mesh (2D-Mesh) network. Our approach minimizes witness dependency in this network, and we present its algorithm.
Keywords: mobile agent system, mesh network
1. Introduction
Mobile agents are autonomous objects capable of
migrating from one server to another server in a computer
network[1] . Mobile agent technology has been considered
for a variety of applications [2] , [3] , [4] such as systems
and network management [5] , [6] , mobile computing [7] ,
information retrieval [8] , and e-commerce [9] .
Failures may happen throughout the mobile agent life cycle [10]. The failures in a mobile agent system may lead to a partial or complete loss of the agent, so the fault
but not the least, the lost agent due to the failure should be
recovered when a server failure happens. However, an
agent has its internal data which may be lost due to the
failure. Therefore, we have to check-point the data of an
agent as well as rollback the computation when necessary.
A permanent storage to store the check-pointed data in the
server is required. Moreover, messages are logged in the
log of the server in order to perform rollback of
executions. The overall design of the server architecture is
shown in Fig. 1.
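The check-pointing and message-logging described above can be sketched as follows; this is a minimal single-process illustration with hypothetical names, not the paper's implementation:

```python
import copy

class Server:
    """Minimal sketch of a server with stable storage and a message log."""

    def __init__(self):
        self.stable_storage = None  # last check-pointed agent data
        self.message_log = []       # messages logged for rollback

    def checkpoint(self, agent_data):
        # Persist a deep copy so later in-place changes cannot corrupt it.
        self.stable_storage = copy.deepcopy(agent_data)

    def log_message(self, msg):
        self.message_log.append(msg)

    def rollback(self):
        # Restore the last check-pointed state and replay logged messages.
        restored = copy.deepcopy(self.stable_storage)
        for msg in self.message_log:
            restored["inbox"].append(msg)
        return restored
```

On a failure, the agent's computation resumes from the restored state instead of being lost.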
[Protocol detail: the witness agent Wi notifies Wi-1 of the actual agent's arrival and departure via the messages msg_arrive^i and msg_leave^i.]
[Protocol detail: lost msg_arrive^i or msg_leave^i messages are detected and handled, and each witness Wi additionally sends msg_alive^i messages to Wi-1 so that witness failures can be detected.]
3. Mesh Network
In a mesh network, the nodes are arranged into a k-dimensional lattice. Communication is allowed only between neighboring nodes; hence interior nodes communicate with 2k other nodes [19]. A k-dimensional mesh has n1 × n2 × … × nk nodes, where ni is the size of the ith dimension. Fig. 4 illustrates a 4×4 2D-Mesh with 16 nodes.
In a mesh network, two nodes (x1, x2, x3, …, xk) and (x'1, x'2, x'3, …, x'k) are connected by an edge if there exists an i such that |xi - x'i| = 1 and xj = x'j for j ≠ i.
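The edge condition above is equivalent to the nodes being at Manhattan distance 1, which can be checked in a few lines (an illustrative sketch, not from the paper):

```python
from itertools import product

def is_edge(a, b):
    """Two mesh nodes share an edge iff they differ by 1 in exactly one
    coordinate and agree in all others (Manhattan distance 1)."""
    return len(a) == len(b) and sum(abs(x - y) for x, y in zip(a, b)) == 1

def neighbors(node, dims):
    """All neighbors of `node` in a mesh whose side lengths are `dims`."""
    return [p for p in product(*(range(n) for n in dims)) if is_edge(node, p)]
```

In a 4×4 2D-Mesh, an interior node has 2k = 4 neighbors while a corner node has only 2.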
When the actual agent has finished its task on a newly visited node and its data are stored on the stable storage, the new witness sends a message, denoted msg^{i,j}_NewWitnessNumber, to the owner. When the owner receives the message, it checks the new witness indices (i.e., the new witness location in the 2D-Mesh, identified by its row and column numbers) against the existing elements of the witnesses array, from the last to the first, to find its neighbor¹. For example, assume there is a node, indicated by W[e], that is adjacent to the new witness; the owner then sends a message, denoted msg^{i,j}_PointNewWitness, to that witness so that it points to the new witness.
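The owner's last-to-first scan over the witnesses array can be sketched as follows (function and variable names are hypothetical):

```python
def find_adjacent_witness(witnesses, new_witness):
    """Scan the witnesses array from last to first and return the index of
    the first witness adjacent to the new witness in the 2D-Mesh."""
    (i2, j2) = new_witness
    for e in range(len(witnesses) - 1, -1, -1):
        (i1, j1) = witnesses[e]
        if abs(i1 - i2) + abs(j1 - j2) == 1:  # mesh adjacency
            return e
    return None
```

Scanning from the end exploits the observation that the new witness is usually a neighbor of the most recently appended entries.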
4. Proposed Scheme
In our scheme the assumptions are as follows:
- The network topology is 2D-Mesh. Each agent in the 2D-Mesh network is identified by a pair (i, j), where i is the row number and j is the column number.
- The agent itinerates dynamically in the network, i.e., it doesn't have any specified path list.
- No failure happens for the owner of the actual agent; replication can solve that problem.
1 In most of the cases considered, the new witness agent was a neighbor of the last elements of the array.
[Protocol detail: the owner handles the messages msg^{i,j}_NewWitnessNumber and msg^{i,j}_PointNewWitness as described above.]
1 Whereas W[Index] is adjacent to the newly visited node, the comparison should be started from W[Index-1].
Mesh network.
- Hyper-Cube topology: the approach can also be considered in Hyper-Cube networks.
- The topology can be considered without a coordinator holding the array; that is, the array can be part of the agent's data.
References
[1]
Abstract
Biometric systems are subject to a variety of attacks, and the stored biometric template attack is very severe compared to all the others. Providing security to biometric templates is therefore an important issue in building a reliable personal identification system. Multibiometric systems are more resistant to spoof attacks than their unibiometric counterparts, and soft biometrics are ancillary information about a person. This work provides security and revocability to iris and retinal templates using a combined user and soft biometric based password-hardened multimodal biometric fuzzy vault. Password hardening provides security and revocability to biometric templates. The eye biometrics, namely iris and retina, have certain merits compared to fingerprint, and iris and retina capturing cameras can be mounted on a single device to improve user convenience. The security of the vault is measured in terms of min-entropy.
Keywords: Biometric template security, min-entropy, fuzzy
vault, retina, iris, revocability, Soft Biometrics
1. Introduction
1.1 Merits of eye biometrics - Iris and Retina
The iris is the colored ring surrounding the pupil of the eye, and a retinal scan captures the pattern of blood vessels in the eye. Retina and iris as biometrics have certain merits compared to other biometrics: they are very difficult to spoof, and retinal patterns do not change with age. When a person is dead, the lens will not converge the image that falls on the retina. Retina and iris are internal organs and are less susceptible to either intentional or unintentional modification, unlike fingerprint. The retina is located deep within one's eyes and is highly unlikely to be altered by any environmental or temporal conditions. Both
The union of the chaff point set hides the genuine point set from the attacker, and hiding the genuine point set secures the secret data S and the user biometric template T. The vault is unlocked with the query template T', which is represented by another unordered set U'. The user has to separate a sufficient number of points from the vault V by comparing U' with V. By using an error correction method, the polynomial P can be successfully reconstructed if U' substantially overlaps with U, and the secret S gets decoded; if there is no substantial overlap between U and U', the secret key S is not decoded. The construct is called fuzzy because the vault gets decoded even for very near values of U and U', and the secret key S can still be retrieved. The fuzzy vault construct is therefore well suited to biometric data, which show inherent fuzziness, hence the name fuzzy vault as proposed by Juels and Sudan [2]. The security of the fuzzy vault depends on the infeasibility of the polynomial reconstruction problem. The vault performance can be improved by adding more chaff points C to the vault.
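A toy sketch of the lock/unlock idea follows, hiding the secret as P(0) and skipping the error-correction step; real constructions such as [2] work over a finite field with Reed-Solomon decoding:

```python
import random
from fractions import Fraction

def lock(secret, template, degree=2, seed=7):
    """Toy fuzzy vault: hide `secret` as P(0) of a random polynomial P of the
    given degree; genuine points (x, P(x)) come from the template features,
    while chaff points deliberately lie off the curve."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(1, 1000) for _ in range(degree)]
    P = lambda x: sum(c * x ** k for k, c in enumerate(coeffs))
    vault = [(x, P(x)) for x in template]
    used = set(template)
    while len(vault) < len(template) + 20:        # chaff hides the genuine set
        x = rng.randrange(1, 10 ** 6)
        if x not in used:
            used.add(x)
            vault.append((x, P(x) + rng.randrange(1, 999)))  # off the curve
    rng.shuffle(vault)
    return vault

def unlock(vault, query, degree=2):
    """Select vault points whose x-values match the query template and
    interpolate P(0) exactly with Lagrange's formula over the rationals."""
    qset = set(query)
    pts = [(x, y) for x, y in vault if x in qset][:degree + 1]
    if len(pts) < degree + 1:
        return None                               # not enough matching points
    total = Fraction(0)
    for i, (xi, yi) in enumerate(pts):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(pts):
            if j != i:
                term *= Fraction(-xj, xi - xj)    # Lagrange basis L_i(0)
        total += term
    return int(total)
```

A query that matches enough genuine x-values recovers the secret; a query that matches too few does not.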
2. Related Work
Karthik Nandakumar et al. [5] used the idea of password transformation for fingerprint and generated transformed
3. Proposed Method
3.1 Extraction of Bifurcation Feature point from
Retina
Thinning and joining morphological operations are performed on the retinal texture. The proposed work uses the idea of Li Chen [15] for extracting the bifurcation structure from the retina.
[Fig. 3: Iris Minutiae Extraction and Password Transformation (red: permuted points, blue: transformed points). Retina figure panels: (b) retinal vascular tree, (c) thinned and joined image, (d) highlighted bifurcation features.]
Parameter          | Iris | Retina | Total
Genuine points (r) | 28   | 30     | 58
Chaff points (c)   | 280  | 300    | 580
Total points (t)   | 308  | 330    | 638
Eye Color     | Character Code Used
Amber         | A
Blue          | E
Brown         | B
Gray          | G
Green         | N
Hazel         | H
Purple/violet | P
The security of the iris and retina vault is tabulated in Table IV. In order to decode a polynomial of degree n, (n + 1) points are required. The security of the fuzzy vault can be increased by increasing the degree of the polynomial: a polynomial of lesser degree can be easily reconstructed by the attacker, while a polynomial of higher degree increases security but requires a lot of computational effort, consumes more memory, and slows the system down. Such polynomials are, however, hard to reconstruct.
In the case of a vault with polynomial degree n, if the adversary uses a brute-force attack, the attacker has to try a total of C(t, n+1) combinations of n+1 elements each, while only C(r, n+1) combinations are required to decode the vault. Hence, for an attacker to decode the vault, it takes C(t, n+1)/C(r, n+1) evaluations. The guessing entropy for an 8-character ASCII password falls in the range of 18 to 30 bits, and this entropy is added to the vault entropy. The security analysis of the combined password-hardened iris fuzzy vault is shown in Table IV.
Providing security [9, 10, 11, 12] to biometric templates is crucial due to the severe attacks targeted against biometric systems. This work attempts to provide multibiometric template security utilizing a hybrid template protection mechanism against stored biometric template attacks, which are the worst of all attacks on a biometric system.
where
r = number of genuine points in the vault,
c = number of chaff points in the vault,
t = total number of points in the vault (r + c),
n = degree of the polynomial.
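As a sketch (not from the paper), the evaluation count and its min-entropy follow directly from these quantities; the degree n = 8 used below for the iris vault is inferred from the reported combination counts, which match C(308, 9) and C(28, 9):

```python
from math import comb, log2

def vault_min_entropy(r, c, n):
    """Brute-force evaluations C(t, n+1) / C(r, n+1), and the corresponding
    min-entropy in security bits, where t = r + c is the total vault size."""
    t = r + c
    evaluations = comb(t, n + 1) / comb(r, n + 1)
    return evaluations, log2(evaluations)

# Iris vault: r = 28 genuine points, c = 280 chaff points, degree n = 8.
evals, bits = vault_min_entropy(28, 280, 8)
```

This reproduces the roughly 33 security bits reported for the iris vault before the password's guessing entropy is added.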
Table II shows the possible eye colors and Table III shows
the structure of the sample combined user and soft
biometric passwords.
Table III: Structure of the sample combined user and soft biometric passwords

User password (5 characters, 40 bits) | Combined password (64 bits)
FUZZY                                 | FUZZY155BM
TOKEN                                 | TOKEN170GF
VAULT                                 | VAULT146AM

Table IV: Security Analysis of Combined User and Soft Biometric based Password Hardened Multimodal Fuzzy Vault

Vault type               | Degree of polynomial | Min-entropy of the vault (security bits) | Total combinations tried | Combinations required to decode the vault | Evaluations to decode the vault | Min-entropy + guessing entropy of the password (security bits)
Iris                     | 8                    | 33                                       | 6.1088 x 10^16           | 6.9069 x 10^6                             | 8.8445 x 10^9                   | 51 to 63
Retina                   | 8                    | 33                                       | 1.1457 x 10^17           | 1.4307 x 10^7                             | 8.0079 x 10^9                   | 51 to 63
Combined iris and retina | 13                   | 51                                       | 1.8395 x 10^28           | 1.0143 x 10^13                            | 1.8136 x 10^15                  | 68 to 81
5. Conclusions
To resist identity theft and security attacks, it is very important to have a reliable identity management system. Biometrics play a vital role in human authentication. However, biometric-based authentication systems are vulnerable to a variety of attacks, and template attacks are more serious than the others. A single template protection scheme is not sufficient to resist these attacks; a hybrid scheme provides better security than its single counterpart. The fuzzy vault, a crypto-biometric scheme, is modified by adding the password hardening idea (salting) to impart more resistance towards attacks. Multibiometric fuzzy vaults can be implemented and again salted using passwords to achieve more security in terms of min-entropy. The only disadvantage of biometric authentication compared to traditional password-based authentication is non-revocability. The retina has certain advantages compared to other biometrics and is suitable for high-security applications. Soft biometrics are ancillary information about a person and, when combined with a user password, give better results. It is very difficult for an attacker to gain the biometric features, the soft biometric components, and the user password at the same time. The user password can be changed for generating revocable
References
[1] U. Uludag, S. Pankanti, and A. K. Jain, "Fuzzy vault for fingerprints," Proceedings of the International Conference on Audio- and Video-Based Person Authentication, 2005.
[2] A. Juels and M. Sudan, "A fuzzy vault scheme," Proceedings of the IEEE International Symposium on Information Theory, 2002.
level. She is a life member of many professional organizations like
CSI, ISTE, AACE, WSEAS, ISCA, and UWA. She is currently the
Principal Investigator of 5 major projects under UGC and DRDO.
Abstract
Information retrieval systems (e.g., web search engines) are critical for overcoming information overload. A major deficiency of existing retrieval systems is that they generally lack user modeling and are not adaptive to individual users, resulting in inherently non-optimal retrieval performance [1]. Sources of these problems include the lack of support for query refinement. Web search engines typically provide search results without considering user interests or context, which in turn increases the overhead on the search engine server. To address these issues we propose a novel interactive guided online/offline search mechanism. The system allows the user to choose normal or combinational search [5] of the query string and to store the best search results for that query string. The proposed system also provides an option for offline search, which searches the bundle of stored results. Earlier systems that implemented offline search required downloading and installing the stored bundle of search results before use. The proposed system is an interactive web-based search facility, both offline and online; it does not require installing the bundle of saved search results for offline searching, as search results are added to the bundle interactively as chosen by the user. The system is very likely to return the best possible result, as it uses combinational search, and the results of a combinational search can be stored and searched again offline. Experiments revealed that combinational search of the keywords in a query yields a variety of results, so the bundle of stored results consists of the best possible results, as chosen by the user. This enhances the system's offline searching capabilities, which in turn reduces the burden on the search engine server.
Keywords: Web search Engine, Meta-search engine,
Information retrieval, Google, Retrieval Models, Offline
Search, Combination Search, Google API, JSON
1. Introduction
One of the most pressing issues with today's explosive
growth of the Internet is the so-called resource discovery
2. Related Work
A great deal of work has been done on guided search, for example GuideBeam [10], which is the result of research carried out by the DSTC (Distributed Systems Technology Centre) at the University of Queensland in Brisbane, Australia. GuideBeam works based on a principle called "rational monotonicity" that
3. Architecture
[Architecture figure: the user interacts with a JSP and servlet container, which queries the Google AJAX Search service (google.com) and maintains a bundle of saved results.]
3.3 JSON
JSON (an acronym for JavaScript Object Notation) is a lightweight, text-based open standard designed for human-readable data interchange. It is derived from the JavaScript programming language for representing simple data structures and associative arrays, called objects. Despite its relationship to JavaScript, it is language-independent, with parsers available for virtually every programming language.
In JSON, a string is a sequence of zero or more Unicode characters wrapped in double quotes, using backslash escapes.
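As a brief illustration, parsing and re-serializing such data is straightforward; the response layout below is hypothetical, loosely modeled on a search API's JSON and not taken from the paper:

```python
import json

# A hypothetical search-engine response in JSON form.
raw = '''{
  "responseData": {
    "results": [
      {"title": "Java programming", "url": "https://example.com/java"},
      {"title": "Programming in Java", "url": "https://example.com/prog"}
    ]
  }
}'''

response = json.loads(raw)                  # JSON text -> Python objects
titles = [r["title"] for r in response["responseData"]["results"]]
bundle = json.dumps(response, indent=2)     # re-serialize for offline storage
```

Storing the serialized text is all the "bundle of saved results" needs, since it can be parsed again for offline search.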
Figure 8: Combination Search (part 1), displaying results for search query "java programming"
Figure 9: Combination Search (part 2), displaying results for search query "programming java"
Later, the offline search yields the best results and thus reduces repeat searching by the user, which again reduces the burden on the search engine server.
References
[1] Xuehua Shen, Bin Tan, and ChengXiang Zhai, "Implicit User Modeling for Personalized Search," CIKM '05, October 31 - November 5, 2005, Bremen, Germany.
[2] Orland Hoeber and Xue Dong Yang, "Interactive Web Information Retrieval Using WordBars," Proceedings of the 2006 IEEE/WIC/ACM International Conference on Web Intelligence.
[3] M. Schwartz, A. Emtage, B. M e, and B. Neumann, "A Comparison of Internet Resource Discovery Approaches," Computer Systems.
[4] G. Nunberg, "As Google goes, so goes the nation," New York Times, May 2003.
[5] Ding Choon Hoong and Rajkumar Buyya, "Guided Google: A Meta Search Engine and its Implementation using the Google Distributed Web Services."
[6] The Technology behind Google, http://searchenginewatch.com/searchday/02/sd0812googletech.html
[7] http://code.google.com/apis/ajaxsearch/
[8] Google Groups (web-apis), http://groups.google.com/groups?group=google.public.webapis
[9] www.json.org
[10] GuideBeam, http://www.guidebeam.com/aboutus.html
[11] Peter Bruza and Bernd van Linder, "Preferential Models of Query by Navigation," Chapter 4 in Information Retrieval: Uncertainty & Logics, The Kluwer International Series on Information Retrieval, Kluwer Academic Publishers, 1999. http://www.guidebeam.com/preflogic.pdf
[12] Softnik Technologies, Google API Search Tool, http://www.searchenginelab.com/common/products/gapis/docs/
[13] Google API Proximity Search (GAPS), http://www.staggernation.com/gaps/readme.html
[14] Sree Harsha Totakura, S. Venkata Nagendra, Rohit Indukuri, and V. Vijayasherly, "COS: A Frame Work for Clustered Off-line Search."
[15] Luiz André Barroso, Jeffrey Dean, and Urs Hölzle, "Web Search for a Planet: The Google Cluster Architecture," Google, 2003.
[16] Mitch Wagner, "Google Bets The Farm On Linux," June 2000, http://www.internetwk.com/lead/lead060100.htm
[17] http://json-schema.org
[18] M. B. Rosson and J. M. Carroll, Usability Engineering:
Freescale Semiconductor
Noida, 201301, India
3
USIT, GGSIPU
Delhi, 110006, India
Abstract
This paper presents different types of analysis, such as noise, voltage, read margin, and write margin, of a Static Random Access Memory (SRAM) cell for high-speed applications. The design is based on a 0.18 µm CMOS process technology. Static Noise Margin (SNM) is the most important parameter for memory design. SNM, which affects both read and write margin, is related to the threshold voltages of the NMOS and PMOS devices of the SRAM cell; that is why we have analyzed SNM together with the read margin, the write margin, and the threshold voltage. Because supply voltage scaling is often used to meet the demands of high-speed SRAM cell operation, we have also analyzed the Data Retention Voltage. We obtained different types of curves from which the transistor sizing of the SRAM cell for high-speed applications can be analyzed straightforwardly.
Keywords: Static Noise Margin, SRAM, VLSI, CMOS.
1. Introduction
This paper introduces how the speed of the SRAM cell depends on different types of noise analysis. Both the cell ratio and the pull-up ratio are important because they are the only parameters in the hands of the design engineer [1]. Technology is getting more complex day by day; presently, 32 nm CMOS process technology is used in industry, so the process should be carefully selected in the design of the memory cell. There are a number of design criteria that must be taken into consideration. The two basic criteria we have adopted are that the data read operation should not be destructive and that the static noise margin should be in the acceptable range [3]. To meet the demands of high-speed SRAM cell operation, supply voltage scaling is often used. Therefore, the
Fig. 2a Circuit of two cross-coupled inverters with BL and BLB [1].
4. Write Margin
The write margin is defined as the minimum bit-line voltage required to flip the state of an SRAM cell [1]. Its value and variation are a function of the cell design, the SRAM array size and process variation.
Five static approaches for measuring the write margin already exist [5]. We first calculated the write margin using the existing bit-line sweeping method and then compared it with the Static Noise Margin.
The write margin is directly proportional to the pull-up ratio: it increases as the pull-up ratio increases. The SRAM cell inverters must therefore be designed carefully before the write margin is calculated during the write operation. The pull-up ratio, in turn, depends entirely on the sizes of the transistors.
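The two ratios that govern these margins reduce to simple W/L arithmetic. A minimal C sketch (the transistor sizes used in the test below are illustrative values, not taken from the paper's 0.18 µm design):

```c
/* Transistor geometry: channel width and length in micrometres. */
typedef struct { double w; double l; } Transistor;

/* Cell ratio: (W/L of the pull-down/driver NMOS) divided by
 * (W/L of the access NMOS). Governs the read margin. */
double cell_ratio(Transistor driver, Transistor access) {
    return (driver.w / driver.l) / (access.w / access.l);
}

/* Pull-up ratio: (W/L of the pull-up/load PMOS) divided by
 * (W/L of the access NMOS). Governs the write margin. */
double pull_up_ratio(Transistor load, Transistor access) {
    return (load.w / load.l) / (access.w / access.l);
}
```

For example, a 0.54/0.18 driver with a 0.27/0.18 access transistor gives a cell ratio of 2.0, inside the 1 to 2.5 range recommended in the conclusion.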
5. Read Margin
Based on the voltage transfer characteristics (VTCs), we define the read margin to characterize the SRAM cell's read stability. We calculate the read margin based on the transistor current model [7]. Experimental results show that the read margin accurately captures the SRAM cell's read stability as a function of the transistor threshold voltage and power-supply voltage variations. Fig. 4c below shows the read margin of the SRAM cell during the read operation.
Table 1: Cell ratio (CR) vs. SNM (180 nm technology)

CR    SNM (mV)
0.8   205
1.0   209
1.2   214
1.4   218
1.6   223
Table 2: DRV vs. SNM (180 nm technology)

DRV (V)   SNM (mV)
1.8       200
1.6       195
1.4       191
1.2       188
1.0       184
0.8       180
0.6       178

Fig. 5b The graphical representation of DRV vs. SNM of the SRAM cell.
Fig. 5c The graphical representation of Read Margin vs. SNM of the SRAM cell.
Table 3: Read Margin vs. SNM (180 nm technology)

CR    Read Margin   SNM (mV)
1.0   0.393         205
1.2   0.398         209
1.4   0.401         214
1.6   0.404         218
1.8   0.407         223
2.0   0.409         225
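The trend in Table 3 is close to linear in the cell ratio, so SNM at intermediate cell ratios can be estimated by interpolation. A small C helper, sketched under the assumption that SNM varies linearly between the measured points (values outside the measured range are clamped to the nearest point):

```c
/* SNM (mV) as a function of cell ratio, linearly interpolated between
 * the measured points of Table 3 (180 nm technology). */
double snm_from_cr(double cr) {
    static const double crs[]  = {1.0, 1.2, 1.4, 1.6, 1.8, 2.0};
    static const double snms[] = {205.0, 209.0, 214.0, 218.0, 223.0, 225.0};
    const int n = 6;
    if (cr <= crs[0])     return snms[0];      /* clamp below the table */
    if (cr >= crs[n - 1]) return snms[n - 1];  /* clamp above the table */
    for (int i = 1; i < n; i++) {
        if (cr <= crs[i]) {
            double t = (cr - crs[i - 1]) / (crs[i] - crs[i - 1]);
            return snms[i - 1] + t * (snms[i] - snms[i - 1]);
        }
    }
    return snms[n - 1]; /* unreachable */
}
```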
Table 4: Threshold Voltage vs. SNM
7. Conclusion
In this paper we have analysed the Static Noise Margin and compared it with the Data Retention Voltage, the write margin, the read margin and the threshold voltage during read/write operation. We analysed both the read margin (for read stability) and the write margin (for write ability) against the Static Noise Margin. The Static Noise Margin affects both the read margin and the write margin, which depend on the cell ratio and the pull-up ratio, respectively. The cell ratio should lie in the range 1 to 2.5, and for the pull-up ratio the W/L of the load transistor should be greater than 3 to 4 times that of the access transistor.
This work concerns the reliability of SRAM circuits and systems. We considered the four major parameters (SNM, DRV, RM and WM) of the SRAM cell. Finally, we can say that all of these analyses depend directly on the sizing of the transistors.
Abstract
This paper proposes a new direction for the evaluation of techniques for solving data mining tasks: statistics, visualization, clustering, decision trees, association rules and neural networks. The new approach has succeeded in defining new criteria for the evaluation process, and it has obtained valuable results based on what each technique is, the environment in which each technique is used, the advantages and disadvantages of each technique, the consequences of choosing any of these techniques to extract hidden predictive information from large databases, and the methods of implementation of each technique. Finally, the paper presents some valuable recommendations in this field.
1. Introduction
Extracting useful information from data is far harder than collecting it. Many sophisticated techniques, such as those developed in the multidisciplinary field of data mining, are therefore applied to the analysis of datasets. One of the most difficult tasks in data mining is determining which of the multitude of available techniques is best suited to a given problem. Clearly, a more generalized approach to information extraction would improve the accuracy and cost effectiveness of using data mining techniques. This paper therefore proposes a new direction based on evaluating six techniques for solving data mining tasks: statistics, visualization, clustering, decision trees, association rules and neural networks. The aim of this new approach is to study those techniques and their processes and to evaluate them on the basis of their suitability to a given problem, their advantages and disadvantages, the consequences of choosing each technique, and their methods of implementation [5].
4.1.2 The Environment of Using the Statistical Technique
4.1.6 Implementation Process of the Statistical Technique
4.3.2 The Environment of Using the Clustering Technique
4.3.6 Implementation Process of the Clustering Technique
4.4.4 The Disadvantages of the Decision Trees Technique
5. Conclusion
In this paper we described the processes of selected techniques from the data mining point of view. It has been found that all data mining techniques accomplish their goals, but each technique has its own characteristics and specifications that determine its accuracy, proficiency and preference. We argued that new research solutions are needed for the problem of categorical data mining techniques, and we presented our ideas for future work. Data mining has proven itself as a
Abstract
The purpose of cartographic generalization is to represent a
particular situation adapted to the needs of its users, with
adequate legibility of the representation and perceptional
congruity with the real situation. In this paper, a simple approach is presented for the selection process of building ground plans that are represented as 2D line, square and polygon segments. It is based on a simple selection process from the field of computer graphics. Since it is important to preserve the overall characteristics of the buildings, the lines are simplified with regard to geometric relations. These characteristics allow for easy recognition of buildings even on the small displays of mobile devices. Such
equipment has become a tool for our everyday life in the form of
mobile phones, personal digital assistants and GPS assisted
navigation systems. Although the computing performance and
network bandwidth will increase further, such devices will
always be limited by the rather small display area available for
communicating the spatial information. This means that an
appropriate transformation and visualization of building data as
presented in this paper is essential.
Keywords: Cartographic Generalization, GIS, Map, Object,
Selection.
1. Introduction
In a natural environment, human senses perceive globally, without details. Only when one has a particular interest does one observe details; this is a normal process, since otherwise an abundance of details would lead to confusion. For similar reasons, many details are omitted in the process of cartographic generalization. Generalization is a natural process present everywhere, and thus also in cartography. Generalization in the compilation of chart content is called cartographic generalization, and it is one of the most important processes in modern cartography, involved in all cartographic representations. The quantitative and qualitative basis of cartographic generalization is determined by the purpose and scale of the chart, its symbols, the features of the represented objects, and other factors. Besides, in mathematical cartography, the main chart characteristic is based on the reduction of
2. Related Works
Cartography is a very old scientific discipline, and generalization dates back to the times of the first map representations. This paper focuses on the automation of the generation of scale-dependent representations. There are many research proposals in this domain, and many different techniques have been proposed: pragmatic
4. Conclusions
The basic object elimination technique, which removes map objects simply on the basis of their sizes, affects the map's legibility. A modification of the existing technique is proposed that takes into consideration the density of the area surrounding the object to be deleted. Besides, ignorance rules have been defined to retain
References
[1] Robert Weibel and Christopher B. Jones, "Computational Perspective of Map Generalization", GeoInformatica, vol. 2, pp. 307-314, Springer Netherlands, November/December 1998.
[2] K. E. Brassel and Robert Weibel, "A Review and Framework of Automated Map Generalization", International Journal of Geographical Information Systems, vol. 2, pp. 229-244, 1988.
[3] M. Sester, "Generalization Based on Least Squares Adjustment", International Archives of Photogrammetry and Remote Sensing, vol. 33, Netherlands, 2000.
[4] Mathias Ortner, Xavier Descombes, and Josiane Zerubia, "Building Outline Extraction from Digital Elevation Models Using Marked Point Processes", International Journal of Computer Vision, Springer, vol. 72, pp. 107-132, 2007.
[5] P. Zingaretti, E. Frontoni, G. Forlani, and C. Nardinocchi, "Automatic Extraction of LIDAR Data Classification Rules", 14th International Conference on Image Analysis and Processing, pp. 273-278, 10-14 Sept.
[6] K.-H. Anders and M. Sester, "Parameter-Free Cluster Detection in Spatial Databases and its Application to Typification", in IAPRS, vol. 33, ISPRS, Amsterdam, Holland, 2000.
[7] J. Bobrich, "Ein neuer Ansatz zur kartographischen Verdrängung auf der Grundlage eines mechanischen Federmodells", Vol. C455, Deutsche Geodätische Kommission, München, 1996.
[8] G. Bundy, C. Jones, and E. Furse, "Holistic Generalization of Large-Scale Cartographic Data", Taylor & Francis, pp. 106-119, 1995.
[9] D. Burghardt and S. Meier, "Cartographic Displacement Using the Snakes Concept", in W. Förstner and L. Plümer, eds, Smati 97: Semantic Modelling for the Acquisition of Topographic Information from Images and Maps, Birkhäuser, Basel, pp. 114-120, 1997.
[10] D. Douglas and T. Peucker, "Algorithms for the Reduction of the Number of Points Required to Represent a Digitized Line or its Caricature", The Canadian Cartographer, 10(2), pp. 112-122, 1973.
Abstract
This article presents the design and implementation of a wireless control system for a robot, driven from a computer through the LPT (parallel) interface in conjunction with an Arduino + X-bee. The X-bee is an electronic device that uses the Zigbee protocol, which allows a simple implementation with low power consumption and lets the robot be controlled wirelessly, with freedom of movement. The implementation uses two Arduinos communicating wirelessly through X-bee modules. The first Arduino + X-bee is connected to the computer, receives its signals and sends them over the wireless link to the Arduino + X-bee mounted on the robot. This second module receives and processes the signals to control the movement of the robot. The novelty of this work lies in the autonomy of the robot, designed to be incorporated into applications that use mini-robots, which require small size without compromising freedom of movement.
Keywords: Integrated Circuit, Parallel Port, ATmega168 Microcontroller, Arduino, X-bee, Zigbee.
1. Introduction
Many wireless products are currently being investigated and put on the market because of the large development of this type of communication technology. Mobile robotics is no stranger to this fact; the benefits of joining wireless communication technologies with the developments in mobile robots are clear.
Nowadays, a large number of universities work with these devices, including many people without a background in electronics, who use them, thanks to their simplicity, in artistic, textile, musical and botanical projects, among others. Their range of application is very varied, spanning robotics, sensors, audio, monitoring, navigation and action systems. Examples include TiltMouse (an accelerometer that turns an Arduino into a tilt mouse), Arduino Pong (a ping-pong game programmed on an Arduino) and the optical tachometer, among others [1].
Power supply:
VIN: The voltage input to the Arduino board when an external power supply is used (instead of the 5 V from the USB cable or another regulated power supply); i.e., voltage can be supplied through this pin.
5V: Regulated supply used to power the microcontroller and other components of the board. It can come from the VIN input via the integrated voltage regulator, or be provided by a USB cable or any other regulated power supply [8].
Memory: The ATmega168 has 16 KB of memory for storing sketches (the name used for an Arduino program: the unit of code that is loaded and run on an Arduino board), of which 2 KB are reserved for the bootloader. It also has 1 KB of SRAM (Static Random Access Memory) and 512 bytes of EEPROM (Electrically Erasable Programmable Read-Only Memory).
Inputs and Outputs: Each of the 14 digital pins of the Arduino Diecimila can be configured either as input or output using the functions pinMode(), digitalWrite() and digitalRead(), which operate at 5 V. Each pin can provide or receive a maximum current of 40 mA and has internal pull-up resistors (switched off by default) of 20 to 50 kOhm. In addition, some of the pins have special functions:
Serial, 0 (RX) and 1 (TX): Used to receive (RX) and transmit (TX) TTL serial data; they are connected to the corresponding pins of the FTDI USB-to-serial-TTL chip.
PWM 3, 5, 6, 9, 10 and 11: Generate an 8-bit PWM output signal with the function analogWrite().
LED 13: There is an on-board LED connected to pin 13; when the pin is set to HIGH the LED lights, and when it is set to LOW the LED turns off.
Reset: When this pin is set to LOW, it resets the microcontroller. It is normally used when the reset button is made inaccessible by a shield that hides it.
Communication: The Arduino can communicate with a computer, with another Arduino or with other microcontrollers. The ATmega168 implements UART (Universal Asynchronous Receiver Transmitter) TTL serial communication on pins 0 (RX) and 1 (TX), and an FT232RL FTDI chip embedded on the board converts this serial communication into USB, using the FTDI drivers to provide a virtual COM port to communicate with the computer. The Arduino
Fig. 3. Main components of the Arduino: ATmega168 microcontroller, FTDI chip, PWM pins 3, 5, 6, 9, 10 and 11, 5V, VIN and Reset.
[Figure: X-bee network topology, with the Arduino + X-bee sender of data and several End devices.]
3. Design
This section presents the architecture of the system and the
description of the main components.
[Figure: system architecture. The computer's parallel (LPT) port connects through a 74LS245 chip to the Arduino + X-bee sender module (5 V supply); the Arduino + X-bee receiver module (9 V battery) drives an L293 on a protoboard.]
Fig. 6. Parallel port of the PC: data, control, status and ground lines.
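On a standard PC the base I/O address of LPT1 is 0x378; the data register sits at the base address, the status register at base+1 and the control register at base+2, which is why the control program writes to 0x378. A minimal C sketch of this register map:

```c
/* Conventional base I/O address of the first PC parallel port (LPT1). */
#define LPT1_BASE 0x378

/* Register offsets relative to the base address. */
enum { LPT_DATA = 0, LPT_STATUS = 1, LPT_CONTROL = 2 };

/* I/O address of a given parallel-port register. */
unsigned lpt_reg(unsigned base, int reg) {
    return base + (unsigned)reg;
}
```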
4. Implementation
Next, the hardware used and how it was implemented are described in detail.
The data lines of the parallel port cable were located with the help of a multimeter, and the plastic insulation was removed, leaving a small tip of copper exposed in order to make the connection to the 74LS245 chip (figure 13).
Fig. 13. LPT data cable and its connection to the 74LS245 chip.
Control keys and their actions: Forward, Back, Left, Right, Pause and Exit.
op2 = getch();
switch (op2) {
    case 72:                       /* up-arrow scan code: move forward */
        (oup32)(0x378, 1);         /* set data bit 0 on the parallel port */
        Sleep(100);
        (oup32)(0x378, tmp_CToff); /* restore the idle value */
        break;
    case 80:                       /* down-arrow scan code: move back */
        (oup32)(0x378, 2);         /* set data bit 1 on the parallel port */
        Sleep(100);
        (oup32)(0x378, tmp_CToff);
        break;
}
// Read the three data bits forwarded from the parallel port
bit1 = digitalRead(2);
bit2 = digitalRead(3);
bit3 = digitalRead(4);
// Encode the 3-bit combination as a single command character
if (bit1 == 0) {
  if (bit2 == 0) {
    if (bit3 == 0) { Serial.write('/'); }
    else           { Serial.write('+'); }
  } else           { Serial.write(')'); }
} else {
  if (bit2 == 0)   { Serial.write('('); }
  else             { Serial.write('*'); }
}
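The nested conditionals amount to a three-bit decode table. Restated as a plain C function that can be compiled and tested on a desktop (the command characters are the ones used by the receiver sketch; the combination bit1=1, bit3=1 is not distinguished by bit3, exactly as in the original logic):

```c
/* Map the three parallel-port bits to the command character that the
 * receiver writes to the serial line (same logic as the sketch above). */
char decode_command(int bit1, int bit2, int bit3) {
    if (bit1 == 0) {
        if (bit2 == 0)
            return (bit3 == 0) ? '/' : '+';
        return ')';
    }
    return (bit2 == 0) ? '(' : '*';
}
```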
5. Conclusions
In this work a robot was developed that is controlled through the parallel port of a computer with an Arduino + X-bee implementation, with the result that the robot can be guided with a certain freedom and autonomy without using wires; such a robot can be used, for example, to enter confined spaces when needed.
References
[1] http://electrolabo.com, http://www.ladyada.net, http://www.Arduino.cc, http://www.instructables.com. Last accessed June 30, 2010.
[2] Alonso Álvarez, Jaime. Desarrollo de un robot móvil controlado desde Internet: Comunicaciones Wi-Fi. Universidad Pontificia Comillas ICAI, Madrid. June 2005.
[3] Barro Torres, Santiago; Escudero Cascón, Carlos. Embedded Sensor System for Early Pathology Detection in Building Construction. Department of Electronics and Systems, University of A Coruña, Spain.
[4] Boonsawat, Vongsagon, et al. X-bee Wireless Sensor Networks for Temperature Monitoring. School of Information, Computer, and Communication Technology, Sirindhorn International Institute of Technology, Thammasat University, Pathum-Thani, Thailand.
[5] Díaz del Dedo, Luis; Pérez García, Luis. Diseño, Construcción y Desarrollo de la Mecánica, Electrónica y Software de Control del Robot Cuadrúpedo R4P v3i. Universidad Europea de Madrid, Villaviciosa de Odón, Madrid. September 2008.
[6] Nadales Real, Christian. Control de un Quadrotor mediante la plataforma Arduino. Escuela Politécnica Superior de Castelldefels. July 2009.
[7] Baik, Kyungjae. BoRam: Balancing Robot using Arduino and Lego. September 2008.
[8] General information on the Arduino. http://Arduino.cc/es/Main/ArduinoBoardDiecimila Last accessed July 2, 2010.
[9] http://www.digi.com/products/wireless/pointmultipoint/X-bee-series1-module.jsp#specs Last accessed July 2, 2010.
[10] Zigbee Standards Organization, Zigbee 2007. http://www.zigbee.org/Products/TechnicalDocumentsdownload/tabid/237/default.aspx Last accessed June 5, 2010.
[11] Getting started with X-bee. http://www.scribd.com/doc/13069890/Primeros-PasosCon-Arduino-y-X-bee Last accessed May 10, 2010.
[12] http://mundobyte.files.wordpress.com/2007/12/lpt.jpg Last accessed May 12, 2010.
[13] http://www.atmel.com/dyn/resources/prod_documents/doc2545.pdf Last accessed April 20, 2010.
[14] IEEE 802.15.4-2003, IEEE Standard for Local and Metropolitan Area Networks: Specifications for Low-Rate Wireless Personal Area Networks. 2003.
[15] http://www.datasheetcatalog.net/es/datasheets_pdf/L/2/9/3/L293.shtml Last accessed April 15, 2010.
[16] http://cfievalladolid2.net/tecno/cyr_01/robotica/sistema/motores_servo.htm Last accessed April 3, 2010.
Christian Hernández Pool. Currently studying the seventh semester in computer science at the Autonomous University of Yucatan (UADY). He has participated in the project fair held at the institution as well as in software development projects.
Raciel Omar Poot Jiménez. Received a degree in Computer Engineering from the Autonomous University of Yucatan (UADY) in 2008. He has been a full-time teacher at the Autonomous University of Yucatan since 2008 at the department of electronics in Tizimín, México. He has participated in electronic engineering development projects. He is currently giving courses on electronic interfaces and programming languages in the professional programs at the UADY.
Lizzie Edmea Narváez-Díaz. Received a degree in Computer Science from the Autonomous University of Yucatán (UADY) in 1997. She received a Master of Computer Science degree from Monterrey Technological Institute (ITESM), Campus Cuernavaca, in 2007. She has been a full-time teacher at the Autonomous University of Yucatán since 2000 in the Networking department in Tizimín, México. She has participated in software engineering development projects. She is currently giving courses on wireless networks in the professional programs at the UADY.
Erika Rossana Llanes-Castro. Received a degree in Computer Science from the Autonomous University of Yucatan (UADY) in 2002. She is about to receive a Computer Science degree from Monterrey Technological Institute (ITESM), Campus Estado de México. She has been a full-time academic technician at the Autonomous University of Yucatan since 2002 in the department of computing in Tizimín, México. She has participated in software engineering development projects. She is currently giving courses on programming mobile devices in the professional programs at the UADY.
Victor Manuel Chi-Pech. Obtained his degree in Computer Science from the Autonomous University of Yucatan (UADY) in 1996 and his M.Sc. degree in Wireless Networks from Monterrey Technological Institute (ITESM), Campus Cuernavaca, in 2007. Victor Chi has worked since 2000 at the Autonomous University of Yucatan as a full-time professor. He has participated in software engineering development projects. He is currently giving courses on wireless networks and software engineering in the professional programs at the UADY.
Research Scholar, Department of Computer Science & Engineering, Suresh Gyan Vihar University, Jaipur, Rajasthan 302004, India
Abstract
A supply chain consists of various components/entities such as suppliers, manufacturers, factories, warehouses and distribution agents. These entities are involved in supplying the raw materials and components that are assembled in a factory to produce a finished product. With the increasing importance of computer-based communication technologies, communication networks are becoming crucial in supply chain management. Given the objectives of the supply chain (to have the right products in the right quantities, at the right place, at the right moment and at minimal cost), supply chain management is situated at the intersection of different professional sectors. This is particularly the case in construction, since a building needs the incorporation of a number of industrial products for its fabrication. This paper focuses on ongoing development and research activities on Multi Agent Systems (MAS) for supply chain management and provides a review of the main approaches to supply chain communications as used mainly in manufacturing industries.
KEYWORDS: Information exchanges, Multi Agent System, knowledge sharing, supply chain management, JADE.
1. Introduction
A supply chain is a worldwide network of suppliers, factories, warehouses, distribution centers, and retailers through which raw materials are acquired, transformed, and delivered to customers. In recent years, a new software architecture for managing the supply chain at the tactical and operational levels has emerged. It views the supply chain as composed of a set of intelligent software agents, each responsible for one or more activities in the supply chain and each interacting with other agents in the planning and execution of their responsibilities. Supply chain management is the most effective approach to optimizing working capital levels, streamlining accounts receivable processes, and eliminating excess costs linked to payments.
[Figure: JADE agent model. A Customer Agent holds a ContentManager, a Content_Language (Codec) and an Ontology built from Concept, Predicate and AgentAction elements, and registers a request behaviour in setup() via addBehaviour().]
2. Literature Survey
Today, the best companies in a broad range of industries are implementing supply chain management solutions to improve business performance and free cash resources for growth and innovation; analysts estimate that such efforts can improve working capital levels by 25% [2]. Supply chain management is about managing the physical flow of product and the related flows of information from purchasing through production, distribution and delivery of the
[Figure: supply chain stages Supplier, Manufacturer, Distributor and Retailer, simulated with a Statistics swarm.]
[Figure: agents of the system: Production Agent, Inventory Management, Production Planning, Cost Management Agent, Work Flow Agent and Order Management.]
[Figure: blackboard architecture in which the Client Agent and the Factory Agent create and handle problem, warehouse and return information.]
they found that the trust mechanism reduced the average cycle time and raised the in-time order fulfillment rate, as the premium paid for better quality and shorter cycle time. Charles M. Macal et al. gave a new approach [5] to modeling systems comprised of interacting autonomous agents; they described the foundations of agent-based modeling and simulation (ABMS), identified ABMS toolkits and development methods, illustrated them through a supply chain example, and provided thoughts on the appropriate contexts for ABMS versus conventional modeling techniques. William E. Walsh et al. highlighted some issues that must be understood to make progress in modeling supply chain formation [3]; they described some difficulties that arise from resource contention and suggested that market-based approaches can be effective in solving them. Mario Verdicchio et al. considered commitment as a concept [17] that underlies the whole multi-agent environment, that is, an inter-agent state reflecting a business relation between two companies that are represented by software agents. Michael N. Huhns et al. found that supply chain problems cost companies [8] between 9 and 20 percent of their value over a six-month period; the problems range from part shortages to poorly utilized plant capacity. Qing Zhang et al. provide a review of the coordination of operational information in supply chains [12], in which the essentials for information coordination are indicated. Vivek Kumar et al. gave a solution for the construction, architecture, coordination and design of agents; their paper integrates bilateral negotiation, an order monitoring system and a production planning and scheduling multi-agent system. Ali Fuat Guneri et al. presented the supply chain management process [16], in which the firm that selects the best supplier gains a competitive advantage over other companies. As supplier selection is an important issue involving multiple-criteria decision making, the supplier selection problem includes both tangible and intangible factors.
[Figure: Malaca aspect model. An AspectScope enumeration (AGENT_SCOPE, PROTOCOL_SCOPE, CONVERSATION_SCOPE) with role and role-instance attributes, and Coordination, Representation and Distribution aspects tying coordinated roles, interaction protocols, encoding formats and action components together.]
VB/VL
P/L
G/
H
and
P.
important
problem
in
the
commercial world and can be
improved by greater automated
support. The problem is salient to
the
MAS
community
and
deserving of continued research.
Evaluation
of
Modeling
Techniques for
Agent-Based
Systems
Walsh
Michael
Wellman
Uncertain
3.
VG/V
H
Onn
and
Sturm
Fuzzy refraction
201
Shehory
Arnon
0.25
S.No.
1.
supply Chain
Models in Hard
Disk
Drive
Manufacturing
Robert de Souza
and Heng Poh
Khong
2.
Modeling supply
chain Formation
in
Multiagent
System
William
E.
4.
Effects
of
Information
Sharing
on
Supply
Chain
Performance in
Electronic
Commerce
Fu-ren
Lin,
Sheng-hsiu
Huang,
and
Sheng-cheng Lin
IJCSI International Journal of Computer Science Issues, Vol. 7, Issue 5, September 2010
ISSN (Online): 1694-0814
www.IJCSI.org
5.
Commitments
for Agent-Based
Supply
Chain
Management
Mario
Verdicchio and
Marco
Colombetti
6.
Building
Holonic Supply
Chain
Management
Systems: An eLogistics
Application for
the Telephone
Manufacturing
Industry
MihaelaUlieru
and
MirceaCobzaru
7.
A
Multiagent
Systems
Approach
for
Managing
Supply-Chain
Problems: new
202
8.
A Multi-Agent
Architecture for
a
Dynamic
Supply
Chain
Management
Jos Alberto R.
P.
Sardinha1,
Marco
S.
Molinaro2,
Patrick
M.
Paranhos2,
Pedro
M.
Cunha2,
Ruy L. Milidi2,
Carlos J. P. de
Lucena2
9.
How to Model
With
Agents
Proceedings of
the 2006 Winter
Simulation
Conference
Charles
M.
Macal
and
Michael J. North
IJCSI International Journal of Computer Science Issues, Vol. 7, Issue 5, September 2010
ISSN (Online): 1694-0814
www.IJCSI.org
10.
11.
New
Multi
Attributes
Procurement
Auction
for
AgentBased
Supply
Chain
Formation
Rasoul Karimi,
Caro Lucas and
Behzad Moshiri
Multi-Agent
Decision
Support System
for
Supply
Chain
Management
Yevgeniya
Kovalchuk
12.
Double-agent
Architecture for
Collaborative
Supply
Chain
Formation
Yang Hang and
Simon Fong
13,
Essentials
for
Information
Coordination in
Supply
Chain
203
Systems
impact
on
supply
chain
performance, and the policy of
information sharing
14. Effects of Trust Mechanisms on Supply Chain Performance Using Multiagent Simulation and Analysis (Fu-ren Lin, Yu-wei Song and Yi-peng Lo)
15. A Multiagent Conceptualization for Supply-Chain Management (Vivek Kumar, Amit Kumar Goel, Prof. S. Srinivisan)
16. An integrated fuzzy-lp approach for a supplier selection problem in supply chain management (Ali Fuat Guneri, Atakan Yucel and Gokhan Ayyildiz)
Multi-agent computational environments are suitable for studying classes of coordination issues involving multiple autonomous or semi-autonomous optimizing agents, where knowledge is distributed and agents communicate through messages.

The multi-agent simulation system Swarm is employed to simulate and analyze the buyer-seller correlation in sharing information among business partners in supply chains.

The deeper the information-sharing level, the higher the in-time order fulfillment rate and the shorter the order cycle time, as information sharing may reduce the demand uncertainty that firms normally encounter. Finally, firms that share information with trading partners tend to transact with a reduced set of suppliers.

The paper presents a solution for the construction, architecture, coordination and design of agents. It integrates bilateral negotiation, an order monitoring system, and a production planning and scheduling multi-agent system.

The wide adoption of the Internet as an open environment and the increasing popularity of machine-independent programming languages, such as Java, make the widespread adoption of multi-agent technology a feasible goal.

A hierarchical multiple model based on fuzzy set theory is expressed, and fuzzy positive and negative ideal solutions are used to find each supplier's closeness coefficient. Finally, a linear programming model based on the coefficients of suppliers, buyers' budgeting, and suppliers' quality and capacity constraints is developed, and order quantities are assigned to each supplier.
17. Malaca: A component and aspect-oriented agent architecture, Information and Software Technology (Mercedes Amor and Lidia Fuentes)
18. A material supplier selection model for property developers using Fuzzy Principal Component Analysis, Automation in Construction (Ka-Chi Lam, Ran Tao and Mike Chun-Kit Lam)
3. Conclusion
A multi-agent system is a loosely coupled network of software agents that interact to solve problems beyond the individual capacities or knowledge of any single problem solver. The general goal of MAS is to create systems that interconnect separately developed agents, thus enabling the ensemble to function beyond the capabilities of any singular agent in the set-up. This research demonstrates that agent technology is suitable for solving communication concerns in a distributed environment. The agents in a multi-agent system try to solve the entire problem through collaboration with each other and arrive at preferable answers for complex problems. As further work, it is recommended to extend this model to multiple retailers and even multiple distributors, and to apply an auction mechanism between them.
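The recommended extension hinges on an auction mechanism between agents. The following is a minimal sealed-bid sketch of that idea; all class names, pricing rules and numbers are illustrative, not taken from the paper.

```python
# Minimal sketch of a sealed-bid auction between a retailer agent and
# several distributor agents. Names and pricing rules are illustrative.

class DistributorAgent:
    def __init__(self, name, unit_cost, margin):
        self.name = name
        self.unit_cost = unit_cost
        self.margin = margin

    def bid(self, quantity):
        # Each distributor independently prices the requested quantity.
        return self.name, quantity * self.unit_cost * (1 + self.margin)

class RetailerAgent:
    def run_auction(self, distributors, quantity):
        # Collect sealed bids and award the order to the cheapest one.
        bids = [d.bid(quantity) for d in distributors]
        return min(bids, key=lambda b: b[1])

retailer = RetailerAgent()
distributors = [DistributorAgent("D1", 10.0, 0.20),
                DistributorAgent("D2", 9.0, 0.35),
                DistributorAgent("D3", 11.0, 0.10)]
winner, price = retailer.run_auction(distributors, 100)
print(winner, price)  # D1 wins: 100 * 10.0 * 1.20 = 1200.0
```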
References
[1] Ali Fuat Guneri, Atakan Yucel, Gokhan Ayyildiz, "An integrated fuzzy-lp approach for a supplier selection problem in supply chain management", Expert Systems with Applications 36 (2009) 9223–9228.
[2] C. Iglesias, M. Garijo, J. Centeno-Gonzalez, and V. J. R., "Analysis and Design of Multiagent Systems using MAS-CommonKADS", presented at Agent Theories, Architectures, and Languages, 1998.
[3] Charles M. Macal and Michael J. North, "Tutorial on Agent-Based Modeling and Simulation Part 2: How to Model With Agents", Proceedings of the 2006 Winter Simulation Conference.
[4] Fu-ren Lin, Sheng-hsiu Huang, and Sheng-cheng Lin, "Effects of Information Sharing on Supply Chain Performance in Electronic Commerce", IEEE Transactions on Engineering Management, Vol. 49, No. 3, August 2002.
[5] Fu-ren Lin, Yu-wei Song and Yi-peng Lo, "Effects of Trust Mechanisms on Supply Chain Performance Using Multi-agent Simulation and Analysis", Proceedings of the First Workshop on Knowledge Economy and Electronic Commerce.
[6] José Alberto R. P. Sardinha, Marco S. Molinaro, Patrick M. Paranhos, Pedro M. Cunha, Ruy L. Milidiú, Carlos J. P. de Lucena, "A Multi-Agent Architecture for a Dynamic Supply Chain Management", American Association for Artificial Intelligence, 2006.
[7] Ka-Chi Lam, Ran Tao, Mike Chun-Kit Lam, "A material supplier selection model for property developers using Fuzzy Principal Component Analysis", Automation in Construction 19 (2010) 608–618.
[8] Mario Verdicchio and Marco Colombetti, "Commitments for Agent-Based Supply Chain Management", ACM SIGecom Exchanges, Vol. 3, No. 1, 2002.
[9] Mercedes Amor, Lidia Fuentes, "Malaca: A component and aspect-oriented agent architecture", Information and Software Technology 51 (2009) 1052–1065.
[10] Mihaela Ulieru, Senior Member, IEEE, and Mircea Cobzaru, "Building Holonic Supply Chain Management Systems: An e-Logistics Application for the Telephone Manufacturing Industry", IEEE Transactions on Industrial Informatics, vol. 1, no. 1, February 2005.
[11] Onn Shehory and Arnon Sturm, "Evaluation of Modeling Techniques for Agent-Based Systems", AGENTS'01, February 11-13, 2001, Montréal, Quebec, Canada.
[12] Qing Zhang and Wuhan, "Essentials for Information Coordination in Supply Chain Systems", Asian Social Science, Vol. 4, No. 10, Oct 2008.
Abstract
Patient information is an important component of patient privacy in any health care system, and its handling bears on the overall quality of care each patient receives. The main commitment of any health care system is to improve the quality of patient care and the privacy of patient information.
There are many organizational units or departments in a hospital, and good coordination among them is necessary. Even the available health care automation software does not provide such coordination: it is limited to work within the hospital and does not provide interconnectivity with other hospitals, blood banks, etc. Thus, hospitals cannot share information in spite of good facilities and services.
Today there is a need for a computing environment in which patients can be treated on the basis of their previous medical history in an emergency, at any time and in any place. Pervasive and ubiquitous computing and the Semantic Web can be a boon in this field. For this, a ubiquitous health care computing environment needs to be developed that combines the Semantic Web with the traditional hospital environment. This paper builds on ubiquitous and pervasive computing and the Semantic Web, attempting to remove these problems so that a hospital may become a smart hospital in the near future. It is an effort to develop a knowledge-base ontological description for a smart hospital.
Keywords: Ontology, Smart Hospital, knowledge-base,
Pervasive and ubiquitous, Semantic Web, RDF.
1. Introduction
Many changes and developments in the health care environment over the last decade are due to new technologies such as mobile and wireless computing. On the one hand, the main aim of a hospital is to provide better services and facilities to the patient; on the other, proper care of the patient makes the hospital successful. Along with this, hospitals add many new facilities and services to the existing ones in one place for their patients. Yet even with all facilities and services in the same place, the hospital system is not able to provide sufficient care to the patient at any place and at any time. The major problems with health care environments are related to the information flow and the storage of patient data and other entities of the health care system. These problems are categorized below.
2. Concept of Ontology
There has been much development in ontology [4][7][9] research over the last decade, and many thinkers have given the term various definitions. An ontology is a set of primitive concepts that can be used to represent a whole domain or part of it; it defines a set of objects, the relations between them, and subsets of those objects in the respective domain. It is also a man-made framework that supports the modeling process of a domain in such a way that a collection of terms and their semantic interpretation is provided.
In artificial intelligence [11], an ontology is "an explicit specification of a conceptualization", where the ontology defines:
3. Smart Hospital
The smart hospital [14] is a hospital that is able to share its domain knowledge with the same or other domains and to fulfill the requirements of the ubiquitous and pervasive computing [5] environment.
The smart hospital offers a number of advantages.

Fig. 1 Top/Upper level of the SH ontology
[Class diagram: Guardian (name, address) responsibleFor Patient (name, address, patientnumber, insurance); Patient has a MedicalHistory (dateRange) of TreatmentInstances (date, medications, procedures); Outpatient and Inpatient are subclasses of Patient.]
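The class model above can be written down as subject-predicate-object triples, the data model underlying RDF Schema. This is a plain-Python sketch with illustrative identifiers; in practice a library such as rdflib would manage the graph.

```python
# The hospital class model expressed as RDF-Schema-style triples.
# Identifiers are illustrative; a real system would use full URIs.

schema = [
    ("Guardian",          "rdf:type",        "rdfs:Class"),
    ("Patient",           "rdf:type",        "rdfs:Class"),
    ("Outpatient",        "rdfs:subClassOf", "Patient"),
    ("Inpatient",         "rdfs:subClassOf", "Patient"),
    ("MedicalHistory",    "rdf:type",        "rdfs:Class"),
    ("TreatmentInstance", "rdf:type",        "rdfs:Class"),
    ("responsibleFor",    "rdfs:domain",     "Guardian"),
    ("responsibleFor",    "rdfs:range",      "Patient"),
]

def subclasses_of(graph, cls):
    # All classes declared as direct subclasses of `cls`.
    return {s for s, p, o in graph if p == "rdfs:subClassOf" and o == cls}

print(subclasses_of(schema, "Patient"))  # {'Outpatient', 'Inpatient'}
```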
6. Create instances
The last step is to create the individual instances of classes in the hierarchy. Defining an individual instance of a class requires choosing a class,
[Class diagrams showing Guardian, Patient (with subclasses Outpatient and Inpatient), MedicalGroup, MedicalHistory, TreatmentInstance, Employee (with subclasses Doctor and Nurse), Hospital, Building and Location, together with their attributes (name, address, patient number, employee no./ss number, specialty, main location, identifier, characteristics, size, medications, procedures) and associations (responsibleFor, assignedTo, worksAt, primary, consulting) with multiplicities.]
8. Implementation
For querying, SPARQL is used: a W3C recommendation toward a standard query language for the Semantic Web. It focuses on querying RDF graphs at the triple level. It can be used to query an RDF Schema to filter out individuals with specific characteristics.
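SPARQL works at the triple level: a query is a set of triple patterns in which variables match anything. The stdlib sketch below mimics matching a single pattern over illustrative data; a real deployment would hand the query string to a SPARQL engine.

```python
# Triple-pattern matching, the core operation a SPARQL engine performs.
# Data and identifiers are illustrative.

triples = [
    ("patient42", "rdf:type",  "Outpatient"),
    ("patient42", "insurance", "ACME Health"),
    ("patient7",  "rdf:type",  "Inpatient"),
]

def match(triples, s=None, p=None, o=None):
    # None plays the role of a SPARQL variable (matches anything).
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# Analogue of: SELECT ?s WHERE { ?s rdf:type Outpatient }
outpatients = [s for s, _, _ in match(triples, p="rdf:type", o="Outpatient")]
print(outpatients)  # ['patient42']
```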
9. Conclusion
9.1 Result Discussion
This paper uses RDF, with RDF-based queries and an RDF database as the background. An RDF query is simpler than other database queries. The reasons behind this are:
[8] Alani, H., Kim, S., Millard, D. E., Weal, M. J., Hall, W., Lewis, P. H., and Shadbolt, N. R., 2003, Automatic Ontology-Based Knowledge Extraction from Web Documents, IEEE Intelligent Systems, Vol. 10, No. 1, pp. 14–21.
Future Directions
This paper basically uses semantic technology with the open standards RDF and XML. The Semantic Web is an extension of the Web, and research on it is ongoing. Different stores keep different types of data on the web, which places a heavy load on the network. This project is an effort based on RDF as a background database, which saves time and space; in the near future, improvements to this technology and to this project may bring a boon to the health care environment.
References
[1]
Sowa,
Abstract
In present times, multimedia protection is becoming increasingly
jeopardized. Therefore numerous ways of protecting information
are being utilized by individuals, businesses, and governments.
In this paper, we survey Salsa20 as a method for protecting the distribution of digital images in an efficient and secure way. We performed a series of tests and comparisons to justify Salsa20's efficiency for image encryption. These tests included
visual testing, key space analysis, histogram analysis,
information entropy, encryption quality, correlation analysis,
differential analysis, sensitivity analysis and performance
analysis. Simulation experiment has validated the effectiveness
of the Salsa20 scheme for image encryption.
Keywords: Salsa20, image encryption, test, comparison.
1. Introduction
Along with the fast progression of data exchange in
electronic way, it is important to protect the confidentiality
of image data from unauthorized access. Security breaches
may affect users' privacy and reputation, so data encryption is widely used to ensure security in open networks such as the Internet. Due to the substantial
increase in digital data transmission via public channels,
the security of digital images has become more prominent
and attracted much attention in the digital world today.
The extension of multimedia technology in our society has
promoted digital images to play a more significant role
than the traditional texts, which demand serious protection
of users' privacy for all applications. Each type of data has
its own features; therefore, different techniques should be
used to protect confidential image data from unauthorized
access. Most of the available encryption algorithms are
used for text data. However, due to large data size and real
time requirement, it is not reasonable to use traditional
encryption methods.
Thus, a major recent trend is to minimize the
computational requirements for secure multimedia
2. Symmetric Cryptography
Symmetric encryption is the oldest branch in the field of
cryptology, and is still one of the most important ones
today. Symmetric cryptosystems rely on a shared secret
between communicating parties. This secret is used both
as an encryption key and as a decryption key. Generally,
symmetric encryption systems with secure key are divided
into two classes: stream ciphers and block ciphers. Stream
ciphers encrypt individual characters of a plaintext
message, using a time variant encryption function, as
opposed to block ciphers that encrypt groups of characters
of a plaintext message using a fixed encryption function
[6]. Nowadays, the boundaries between block ciphers and
stream ciphers are becoming blurred. So, it is difficult to
tell whether a symmetric cipher is a stream or block cipher.
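The stream-cipher principle described above can be illustrated with a toy cipher that encrypts byte by byte. The keystream generator here is an illustrative stand-in and is deliberately not cryptographically secure; real stream ciphers such as Salsa20 derive the keystream from a secret key.

```python
# Toy illustration of a stream cipher: each plaintext byte is XORed
# with one keystream byte. NOT secure -- the "keystream" is a plain
# linear congruential generator used only to show the principle.

def toy_keystream(seed, n):
    x, out = seed, []
    for _ in range(n):
        x = (1103515245 * x + 12345) % 256
        out.append(x)
    return out

def xor_bytes(data, stream):
    # Encryption and decryption are the same operation.
    return bytes(d ^ k for d, k in zip(data, stream))

plain = b"attack at dawn"
ks = toy_keystream(7, len(plain))
cipher = xor_bytes(plain, ks)
assert xor_bytes(cipher, ks) == plain  # XORing twice restores plaintext
```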
Stream ciphers are among the most important encryption systems, with major applications in military,
2.1 Salsa20
As a response to the lack of efficient and secure stream
ciphers, ECRYPT manages and coordinates a multiyear
effort called eSTREAM to identify new stream ciphers
suitable for widespread adoption. Salsa20, one of the
Quarterround Operations

(y0, y1, y2, y3) = quarterround(x0, x1, x2, x3), where
y1 = x1 ⊕ ((x0 + x3) <<< 7)
y2 = x2 ⊕ ((y1 + x0) <<< 9)
y3 = x3 ⊕ ((y2 + y1) <<< 13)
y0 = x0 ⊕ ((y3 + y2) <<< 18)

Here + is addition mod 2^32, ⊕ is XOR, and <<< is left rotation. The 16-word state x0, ..., x15 is viewed as a 4x4 matrix of words.

Rowround Operations

(y0, y1, y2, y3) = quarterround(x0, x1, x2, x3)
(y5, y6, y7, y4) = quarterround(x5, x6, x7, x4)
(y10, y11, y8, y9) = quarterround(x10, x11, x8, x9)
(y15, y12, y13, y14) = quarterround(x15, x12, x13, x14)

Columnround Operations

(y0, y4, y8, y12) = quarterround(x0, x4, x8, x12)
(y5, y9, y13, y1) = quarterround(x5, x9, x13, x1)
(y10, y14, y2, y6) = quarterround(x10, x14, x2, x6)
(y15, y3, y7, y11) = quarterround(x15, x3, x7, x11)
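The three maps above transcribe directly into code. The sketch below follows the quarterround, rowround and columnround definitions and checks one quarterround test vector from Bernstein's Salsa20 specification.

```python
# Direct transcription of Salsa20's quarterround, rowround and
# columnround, using 32-bit modular addition, XOR and left rotation.

def rotl32(v, c):
    return ((v << c) | (v >> (32 - c))) & 0xffffffff

def quarterround(x0, x1, x2, x3):
    y1 = x1 ^ rotl32((x0 + x3) & 0xffffffff, 7)
    y2 = x2 ^ rotl32((y1 + x0) & 0xffffffff, 9)
    y3 = x3 ^ rotl32((y2 + y1) & 0xffffffff, 13)
    y0 = x0 ^ rotl32((y3 + y2) & 0xffffffff, 18)
    return y0, y1, y2, y3

def rowround(x):
    y = [0] * 16
    y[0], y[1], y[2], y[3]     = quarterround(x[0], x[1], x[2], x[3])
    y[5], y[6], y[7], y[4]     = quarterround(x[5], x[6], x[7], x[4])
    y[10], y[11], y[8], y[9]   = quarterround(x[10], x[11], x[8], x[9])
    y[15], y[12], y[13], y[14] = quarterround(x[15], x[12], x[13], x[14])
    return y

def columnround(x):
    y = [0] * 16
    y[0], y[4], y[8], y[12]  = quarterround(x[0], x[4], x[8], x[12])
    y[5], y[9], y[13], y[1]  = quarterround(x[5], x[9], x[13], x[1])
    y[10], y[14], y[2], y[6] = quarterround(x[10], x[14], x[2], x[6])
    y[15], y[3], y[7], y[11] = quarterround(x[15], x[3], x[7], x[11])
    return y

# Test vector from the Salsa20 specification:
print([hex(v) for v in quarterround(0x00000001, 0, 0, 0)])
# ['0x8008145', '0x80', '0x10200', '0x20500000']
```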
χ² = Σ_{k=1}^{256} (v_k − 256)² / 256,   (5)

Fig. 2. Visual test result: Figures (a) and (b) depict the plain-image and the plain-image histogram, respectively. Figures (c), (e) and (g) show cipher-… [panels (a)–(h); the remainder of the caption is not recoverable]

H(m) = Σ_{i=0}^{2^N − 1} P(m_i) · log₂(1 / P(m_i)),   (6)
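Equations (5) and (6) can be checked with a short script on a synthetic, uniformly random 256x256 8-bit "image"; the data below is illustrative, not from the paper's experiments.

```python
# Histogram chi-square statistic (5) and information entropy (6)
# on a synthetic 256x256 8-bit image with uniformly random pixels.
import math
import random

random.seed(0)
pixels = [random.randrange(256) for _ in range(256 * 256)]

hist = [0] * 256
for p in pixels:
    hist[p] += 1

expected = len(pixels) / 256            # = 256 for a 256x256 image
chi2 = sum((v - expected) ** 2 / expected for v in hist)

probs = [v / len(pixels) for v in hist if v]
entropy = sum(p * math.log2(1 / p) for p in probs)

print(round(entropy, 4))  # close to the ideal value of 8 bits
```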
EQ = Σ_{L=0}^{255} |H_L(C) − H_L(P)| / 256,   (7)
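Equation (7) amounts to an average absolute difference between the gray-level histograms of the cipher-image C and the plain-image P; a minimal sketch with illustrative toy histograms:

```python
# Encryption quality (7): average absolute difference between the
# 256-bin gray-level histograms of cipher-image and plain-image.

def encryption_quality(hist_c, hist_p):
    assert len(hist_c) == len(hist_p) == 256
    return sum(abs(c - p) for c, p in zip(hist_c, hist_p)) / 256

# Toy check: identical histograms give EQ = 0.
flat = [256] * 256
print(encryption_quality(flat, flat))  # 0.0
```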
that Salsa20 achieves a better encryption quality in the 8th
ciphering round compared to the other variants.
Salsa20/r:      r = 8    r = 12   r = 20
Entropy value:  7.9969   7.9970   7.9971
r_xy = Cov(x, y) / √(D(x) · D(y)),   (8)

D(x) = (1/N) Σ_{j=1}^{N} (x_j − E(x))²,   (9)

Cov(x, y) = (1/N) Σ_{j=1}^{N} (x_j − E(x)) (y_j − E(y)),   (10)

where E(x) = (1/N) Σ_{j=1}^{N} x_j.
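Equations (8)-(10) translate directly into code; x and y would hold the gray values of adjacent pixel pairs sampled from an image. The values below are illustrative.

```python
# Correlation coefficient of adjacent pixels, eqs. (8)-(10).
import math

def corr(x, y):
    n = len(x)
    ex = sum(x) / n
    ey = sum(y) / n
    dx = sum((v - ex) ** 2 for v in x) / n                     # D(x), eq. (9)
    dy = sum((v - ey) ** 2 for v in y) / n
    cov = sum((a - ex) * (b - ey) for a, b in zip(x, y)) / n   # eq. (10)
    return cov / math.sqrt(dx * dy)                            # r_xy, eq. (8)

# Perfectly correlated neighbours, as is typical of plain-images:
print(round(corr([1, 2, 3, 4], [2, 3, 4, 5]), 4))  # 1.0
```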
Correlation coefficients of adjacent pixels:

              Vertical   Horizontal   Diagonal
Plain-image   0.9986     1.0000       0.9988
Salsa20/8     0.0383     0.0430       0.0117
Salsa20/12    0.0021     0.0348       0.0195
Salsa20/20    0.0030     0.0204       0.0653
MAE = (1/(W·H)) Σ_{j=1}^{H} Σ_{i=1}^{W} |C(i, j) − P(i, j)|.   (11)

NPCR = (Σ_{i,j} D(i, j) / (W·H)) × 100%,   (12)

where D(i, j) is defined as:

D(i, j) = 0 if C1(i, j) = C2(i, j);  D(i, j) = 1 if C1(i, j) ≠ C2(i, j).   (13)

UACI = (1/(W·H)) [ Σ_{i,j} |C1(i, j) − C2(i, j)| / 255 ] × 100%.   (14)
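The measures (11)-(14) can be sketched as follows; the two 2x2 "images" are purely illustrative, not taken from the experiments.

```python
# MAE (11), NPCR (12)-(13) and UACI (14) for 8-bit images stored as
# lists of rows. C1 and C2 play the role of two cipher-images.

def mae(c, p):
    w, h = len(c[0]), len(c)
    return sum(abs(c[i][j] - p[i][j])
               for i in range(h) for j in range(w)) / (w * h)

def npcr(c1, c2):
    w, h = len(c1[0]), len(c1)
    diff = sum(c1[i][j] != c2[i][j] for i in range(h) for j in range(w))
    return diff / (w * h) * 100.0

def uaci(c1, c2):
    w, h = len(c1[0]), len(c1)
    total = sum(abs(c1[i][j] - c2[i][j]) / 255
                for i in range(h) for j in range(w))
    return total / (w * h) * 100.0

c1 = [[0, 255], [10, 20]]
c2 = [[0, 0], [10, 20]]   # differs from c1 in a single pixel
print(npcr(c1, c2), round(uaci(c1, c2), 2))  # 25.0 25.0
```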
[Figure: encryption result panels (a)–(l); details not recoverable]

Method       MAE
Salsa20/8    73.2289
Salsa20/12   73.0922
Salsa20/20   72.6794

[Table fragment: NPCR and UACI results; Salsa20/8: NPCR = 99.59%; remaining values (0.0015%, 0.0006%) cannot be reliably placed]
4. Conclusion
In this paper, an efficient implementation of the Salsa20 scheme is introduced for digital image encryption. The encryption system has different variants according to the number of ciphering rounds. Salsa20 has a large key space that is resistant to all kinds of brute-force attacks. Theoretical and experimental research results showed that the scheme resists statistical attacks. The uniformity was justified by the chi-square test. It is shown that the Salsa20 hash function generates uniform cipher-images. Information entropy test results indicate that the cipher-image histogram distribution of the encryption scheme is so even that the measured entropy is almost equal to the ideal value. So, the surveyed encryption
encryption quality showed that Salsa20/8 has a better
encryption quality than the other two variants. Correlation
analysis showed that correlation coefficients between
adjacent pixels in the plain-image are significantly
decreased after applying encryption function. Comparison
between correlation coefficients of different cipher rounds
showed that the least correlation occurs at the 12th round
of cipher. To quantify the difference between encrypted
image and corresponding plain-image, three measures
were used: MAE, NPCR and UACI.
The MAE
experiment result showed that Salsa20/8 has the biggest
MAE value among variants of Salsa20. Moreover, the
MAE value decreases as the number of rounds increases.
This research was supported by the Iran Telecommunication Research Center (ITRC) under Grant no. 18885/500.
References
[1] A. Akhshani, S. Behnia, A. Akhavan, H. Abu Hassan, and Z. Hassan, "A Novel Scheme for Image Encryption Based on 2D Piecewise Chaotic Maps", Optics Communications 283, pp. 3259–3266, 2010.
[2] A. Jolfaei and A. Mirghadri, "An Applied Imagery Encryption Algorithm Based on Shuffling and Baker's Map", Proceedings of the 2010 International Conference on Artificial Intelligence and Pattern Recognition (AIPR-10), Florida, USA, pp. 279–285, 2010.
[3] A. Jolfaei and A. Mirghadri, "A Novel Image Encryption Scheme Using Pixel Shuffler and A5/1", Proceedings of the 2010 International Conference on Artificial Intelligence and Computational Intelligence (AICI'10), Sanya, China, 2010.
[4] A. Jolfaei and A. Mirghadri, "An Image Encryption Approach Using Chaos and Stream Cipher", Journal of Theoretical and Applied Information Technology, vol. 19, no. 2, 2010.
[5] M. Sharma and M. K. Kowar, "Image Encryption Techniques Using Chaotic Schemes: a Review", International Journal of Engineering Science and Technology, vol. 2, no. 6, pp. 2359–2363, 2010.
[6] R. A. Rueppel, "Stream Ciphers", Contemporary Cryptology: the Science of Information Integrity, vol. 2, pp. 65–134, 1992.
[7] D. J. Bernstein, "The Salsa20 Stream Cipher", Proceedings of Symmetric Key Encryption Workshop (SKEW 2005), Workshop Record, 2005.
[8] D. J. Bernstein, "Salsa20 Design", 2005, http://cr.yp.to/snuffle/design.pdf.
[9] D. J. Bernstein, "Salsa20 Specification", 2005, http://cr.yp.to/snuffle/design.pdf.
[10] P. L'Ecuyer and R. Simard, "TestU01: A C Library for Empirical Testing of Random Number Generators", ACM Transactions on Mathematical Software, vol. 33, no. 4, Article 22, 2007.
Computer Science Department, Faculty of Sciences Dhar el mahraz, University Sidi Mohammed Ben Abdellah
Fez, 30000, Morocco
Abstract
A crucial requirement for context-aware service provisioning is the dynamic retrieval of, and interaction with, local resources, i.e., resource discovery. The high degree of dynamicity and heterogeneity of mobile environments requires rethinking and/or extending traditional discovery solutions to support more intelligent service search and retrieval, personalized to the user's context conditions. Several research efforts have recently emerged in the field of service discovery that, based on semantic data representation and technologies, allow flexible matching between user requirements and service capabilities in open and dynamic deployment scenarios. Our research work aims at providing suitable mechanisms for answering mobile requests by taking into account user contexts (preferences, profiles, physical location, temporal information).
This paper proposes an ontology, called ONeurolog, to capture valuable semantic knowledge in the Neurology domain in order to assist users (doctor, patient, administration) when querying Neurology knowledge bases in a mobile environment. The increasing diffusion of portable devices with wireless connectivity enables new pervasive scenarios, where users require tailored service access according to their needs, position, and execution/environment conditions.
Keywords: Neurology, ontology, context-aware, semantic web,
query answering, mobile environment.
1. Introduction
1.1 General context
In context-aware information provisioning scenarios, it is crucial to enable the dynamic retrieval of the knowledge available in the vicinity of the user's current point of attachment, while minimizing user involvement in information selection. Data and knowledge discovery in pervasive environments, however, is a complex task, as it requires facing several technical challenges at the state of the art, such as user/device mobility, (possibly unpredictable) variations in service availability and environment conditions, and terminal heterogeneity.
The patient cannot walk, stand, sit, or hold up his/her head. He/she does not speak but is able to understand, and establishes contact with people who are very familiar.
2. Related work
There are several research streams in personalized
recommendation. One stream aims at improving the
accuracy of algorithms used in recommendation systems
[5]. The second stream is focused on the interaction
between a recommendation system and its customers. For
instance, some studies investigated the persuasive effect of
recommendation messages [7], developed better data
collection mechanisms [8], and enhanced awareness about
privacy issues [9]. Furthermore, a few studies focused on
the effect of moderating factors such as user characteristics
and product features on the performance of
recommendation [10].
A typical personalization process includes three steps:
understanding customers through profile building,
delivering personalized offering based on the knowledge
about the product and the customer, and measuring
personalization impact [11]. Montaner, et al. [12]
simplified the process into two stages after analyzing 37
different systems: profile generation and maintenance, and
profile exploitation.
One key to the performance of personalized
recommendation is the nature of the mechanism it uses to
build customer profiles. In previous research, a number of
different algorithms have been proposed. These systems are classified based on various characteristics. For example, Beyah et al. [13] divided recommendation systems into four types: collaborative filtering (people-to-people correlations), social filtering, non-personalized recommendation systems, and attribute-based recommendations.
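The first of these types, collaborative filtering via people-to-people correlations, can be sketched as a cosine-similarity nearest-neighbour lookup; all users, items and ratings below are illustrative.

```python
# Minimal "people-to-people correlations" sketch: recommend items
# liked by the most similar user. Data is illustrative.
import math

ratings = {
    "alice": {"i1": 5, "i2": 3, "i3": 4},
    "bob":   {"i1": 5, "i2": 3, "i4": 5},
    "carol": {"i1": 1, "i2": 5, "i4": 2},
}

def cosine(u, v):
    common = set(u) & set(v)
    num = sum(u[i] * v[i] for i in common)
    den = math.sqrt(sum(x * x for x in u.values())) * \
          math.sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

def recommend(user):
    others = [(cosine(ratings[user], ratings[o]), o)
              for o in ratings if o != user]
    _, nearest = max(others)
    # Items the nearest neighbour rated that the user has not seen yet.
    return [i for i in ratings[nearest] if i not in ratings[user]]

print(recommend("alice"))  # ['i4'] -- bob is the most similar user
```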
Wei et al. [14] classified recommendation systems into six approaches based on the type of data and technique used to arrive at recommendation decisions. These approaches are
3. General architecture
3.1 Framework architecture
In this section, we further describe the components of the application, which are listed as follows (Fig. 1).
4. Preliminaries
4.1 The Semantic Model and Formalization of User Contexts
The semantic model is based on the use of ONeurolog,
an Ontology of Neurology science, and a formal
specification of user contexts. Consequently, two different
kinds of knowledge are to be managed by such a mobile system:
4.2 Neurology
The field of computer consultation has passed through
three historical phases. In the first, attempts were made to
improve on human diagnostic performance by rejecting the
Cerebral palsy
Epilepsy
Sleep disorders
[Flowchart fragment: request additional tests, orientation CLQ, etiological treatment, monitoring depending on the …]
References
Figure 6. Example of category Emergency (recovery position, Mayo cannula, gastric probe)

[1] http://www.acgme.org/acWebsite/downloads/RRC_progReq/180neurology07012007.pdf. Last seen 11/04/2010.
[2] http://en.wikipedia.org/wiki/Neurology. Last seen 11/04/2010.
[3] Robu I, Robu V, Thirion B. An introduction to the Semantic Web for health sciences librarians. J Med Libr Assoc 2006;94(2):198–205.
[4] Berners-Lee T, Hall W, Hendler J, Shadbolt N, Weitzner DJ. Computer science. Creating a science of the Web. Science 2006;313(5788):769–71.
[5] H.J. Ahn, Utilizing popularity characteristics for product recommendation, International Journal of Electronic Commerce 11 (2) (2006) 59–80.
[6] S.S. Weng, M.J. Liu, Feature-based recommendation for one-to-one marketing, Expert Systems with Applications 26 (4) (2004) 493–508.
[7] S.N. Singh, N.P. Dalal, Web home pages as advertisements, Communications of the ACM 42 (8) (1999) 91–98.
[8] B.P.S. Murthi, S. Sarkar, The role of the management sciences in research on personalization, Management Science 49 (10) (2003) 1344–1362.
[9] N.F. Awad, M.S. Krishnan, The personalization privacy paradox: an empirical evaluation of information transparency and the willingness to be profiled online for personalization, MIS Quarterly 30 (1) (2006) 13–28.
[10] L.P. Hung, A personalized recommendation system based on product taxonomy for one-to-one marketing online, Expert Systems with Applications 29 (2) (2005) 383–392.
[11] G. Adomavicius, A. Tuzhilin, Personalization techniques: a process-oriented perspective, Communications of the ACM 48 (10) (2005) 83–90.
[12] M. Montaner, B. Lopez, J.L. de la Rosa, A taxonomy of recommender agents and the Internet, Artificial Intelligence Review 19 (2003) 285–330.
[13] G. Beyah, P. Xu, H.G. Woo, K. Mohan, D. Straub, Development of an instrument to study the use of recommendation systems, Proceedings of the Ninth Americas Conference on Information Systems, Tampa, FL, USA, 2003, pp. 269–279.
[14] C.P. Wei, M.J. Shaw, R.F. Easley, A survey of recommendation systems in electronic commerce, in: R.T. Rust, P.K. Kannan (Eds.), e-Service: New Directions in Theory and Practice, M.E. Sharpe Publisher, 2002.
[15] J.B. Schafer, J.A. Konstan, J. Riedl, E-commerce recommendation applications, Data Mining and Knowledge Discovery 5 (1) (2001) 115–153.
[16] H. Sakagami, T. Kamba, Learning personal preferences on online newspaper articles from user behaviors, Computer Networks and ISDN Systems 29 (1997) 1447–1455.
[17] K. Lang, et al., NewsWeeder: learning to filter netnews, Proceedings of the 12th International Conference on Machine Learning, Tahoe City, California, 1995, pp. 331–339.
[18] D.W. Oard, J. Kim, Implicit feedback for recommender systems, Proceedings of AAAI Workshops on Recommender Systems, Madison, WI, 1998, pp. 81–83.
[19] T.P. Liang, H.J. Lai, Discovering user interests from web browsing behavior: an application to Internet news services, Proceedings of the 35th Annual Hawaii International Conference on System Sciences, Big Island, Hawaii, USA, 2002, pp. 203–212.
[20] M. Balabanović, Y. Shoham, Fab: content-based, collaborative recommendation, Communications of the ACM 40 (3) (March 1997) 66–72.
[21] M.J. Pazzani, A framework for collaborative, content-based and demographic filtering, Artificial Intelligence Review 13 (5–6) (1999) 393–408.
[22] R.J. Mooney, L. Roy, Content-based book recommending using learning for text categorization, Proceedings of the Fifth ACM Conference on Digital Libraries, San Antonio, Texas, United States, 2000, pp. 195–204.
[23] G. Linden, B. Smith, J. York, Amazon.com recommendations: item-to-item collaborative filtering, IEEE Internet Computing 7 (1) (2003) 76–80.
[24] J.W. Kim, K.K. Lee, M.J. Shaw, H.L. Chang, M. Nelson, R.F. Easley, A preference scoring technique for personalized advertisements on Internet storefronts, Mathematical and Computer Modeling 44 (2006) 3–15.
[25] R. Mukherjee, N. Sajja, S. Sen, A movie recommendation system: an application of voting theory in user modeling, User Modeling and User-Adapted Interaction 13 (2003) 5–33.
[26] R. Garfinkel, R. Gopal, A. Tripathi, F. Yin, Design of a shopbot and recommender system for bundle purchases, Decision Support Systems 42 (2006) 1974–1986.
[27] K.J. Mock, V.R. Vemuri, Information filtering via hill climbing, WordNet, and index patterns, Information Processing & Management 33 (5) (1997) 633–644.
[28] D. Billsus, M.J. Pazzani, A personal news agent that talks, learns and explains, Proceedings of the Third Annual Conference on Autonomous Agents, Seattle, Washington, USA, 1999, pp. 268–275.
[29] H.J. Lai, T.P. Liang, Y.C. Ku, Customized Internet News Services Based on Customer Profiles, Proceedings of the Fifth International Conference on Electronic Commerce, Pittsburgh, PA, USA, 2003, pp. 225–229.
[30] W. Fan, M.D. Gordon, P. Pathak, Effective profiling of consumer information retrieval needs: a unified framework and empirical comparison, Decision Support Systems 40 (2005) 213–233.
[31] T.P. Liang, H.J. Lai, Y.C. Ku, Personalized content recommendation and user satisfaction: theoretical synthesis and empirical findings, Journal of Management Information Systems 23 (3) (2006) 45–70.
[32] Studer, R., Benjamins, V. R., & Fensel, D. (1998). Knowledge engineering: Principles and methods. Data and Knowledge Engineering, 25(1–2), 161–197.
[33] Baader, F., Calvanese, D., McGuinness, D., Nardi, D., & Patel-Schneider, P. F. (2003). The Description Logic Handbook: Theory, implementation, and applications. Cambridge University Press.
[34] W. Kiessling, Foundations of preferences in database systems, in: Proceedings of the 29th International Conference on Very Large Data Bases, Hong Kong, China, 2002.
[35] IHMC CMaps, http://cmap.ihmc.us/. Last seen 2009.
Abstract
Multi-agent systems are among the most promising paradigms for the development of distributed software systems. They include the mechanisms and functions required to support interaction, communication and coordination between the various components of such a system. In distributed testing, the test activity is performed by a set of parallel testers called PTCs (Parallel Test Components). The difficulty lies in writing the coordination and cooperation procedures between the PTCs. In this context, this paper combines adaptable mobile agents with multi-agent systems technology to enhance distributed testing. The long-term objective of this work is a platform for conformance testing of distributed applications.
Keywords: Distributed Testing, Multi-Agent Systems, Mobile Agent, Actor, Mobile Actor.
1. Introduction
Agent-based software engineering has become a key issue in modern system design. It provides high-level abstractions and services for developing, integrating and managing distributed system applications. Component-based software engineering has promised, and indeed delivered, significant improvements in software development.
In recent years, products, models, architectures and frameworks have suggested several key issues that will contribute to the success of open distributed systems. In practice, however, the development of distributed systems remains complex. The design process must take into account: the
2. Distributed Test
The principle of testing is to apply input events to the IUT (Implementation Under Test) and to compare the observed outputs with the expected results. A set of input events and expected outputs is commonly called a test case, and it is generated from the specification of the IUT. We consider a test to pass if its execution conforms to the specification.
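As an illustration of this principle, a test case can be modeled as a list of (input event, expected output) pairs and a verdict computed by replaying it against the IUT. Everything below is an invented sketch, not the paper's platform:

```python
# Minimal sketch of conformance testing: apply each input event of a
# test case to the IUT and compare the observed output with the expected
# one. The IUT is modeled as a plain function from input to output event.

def run_test_case(iut, test_case):
    """test_case: list of (input_event, expected_output) pairs.
    Returns 'pass' if every observed output matches, else 'fail'."""
    for stimulus, expected in test_case:
        observed = iut(stimulus)
        if observed != expected:
            return "fail"
    return "pass"

# A toy IUT: echoes requests as acknowledgements.
def toy_iut(event):
    return event.replace("?", "!ack_")

test_case = [("?connect", "!ack_connect"), ("?data", "!ack_data")]
print(run_test_case(toy_iut, test_case))  # pass
```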
2.1 Architecture
The basic idea of the proposed architecture is to coordinate parallel testers, called PTCs (Parallel Test Components), using a communication service in conjunction with the IUT. Each tester interacts with the IUT through a PCO (Point of Control and Observation) port, and communicates with the other testers through a multicast channel (Figure 1).
Running wf1, wf2 and wf3 should give the result shown in Figure 3(a), but the execution of our prototype produces the incorrect result shown in Figure 3(b).
3.2 JavAct
JavAct [20] is an environment for developing concurrent and distributed Java programs according to the actor model. Compared with traditional low-level mechanisms such as sockets, RMI and Web services or SOAP, it offers a high level of abstraction. It provides primitives for creating actors, changing their behaviors, locating them, and communicating by message passing. JavAct has been designed to be minimal in terms of code, and therefore maintainable at low cost. Applications run on a domain constituted by a set of places, which can change dynamically; a place is a Java virtual machine (Figure 4).
The JavAct library provides tools for creating actors and changing their own interface, and also for distribution and mobility, static and dynamic (self-)adaptation, and (local or distant) communications.
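JavAct itself is a Java library; as a language-neutral sketch of the primitives just listed (actor creation, behavior change, message sending), here is a minimal mailbox-based actor. All names are invented for the illustration:

```python
from collections import deque

class Actor:
    """Minimal actor: a mailbox plus a current behavior. become() changes
    the behavior, in the spirit of JavAct's behavior-change primitive."""
    def __init__(self, behavior):
        self.mailbox = deque()
        self.behavior = behavior

    def send(self, msg):
        self.mailbox.append(msg)

    def become(self, behavior):
        self.behavior = behavior

    def step(self):
        # Process one pending message with the current behavior.
        if self.mailbox:
            self.behavior(self, self.mailbox.popleft())

log = []

def polite(actor, msg):
    log.append("hello " + msg)
    actor.become(terse)   # dynamic behavior change

def terse(actor, msg):
    log.append(msg)

a = Actor(polite)
a.send("world"); a.send("again")
a.step(); a.step()
print(log)  # ['hello world', 'again']
```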
end Algorithm 1
If the IUT has n ports, Algorithm 1 computes, from a complete test sequence of the IUT, the n related local test sequences and List-Of-Sender, the sending order of the n testers. The function Port gives the port corresponding to a given message. The function Ports is defined by Ports(y) = {k | there exists a in y such that k = Port(a)} for a set y of messages.
The local test sequences are basically projections of the complete test sequence onto the port alphabets. Lines 1 and 2 add, respectively, !xi to the sequence w_port(xi) and the tester port(xi) to List-Of-Sender. The loop of Line 3 adds the reception of the messages belonging to yi to the appropriate sequences.
Coordination messages are added to the projections to obtain the same controllability and observability as when using the complete test sequence in the centralized method. In this case, we add ?C and !C to the appropriate local test sequences: ?C is added to w_h and List-Of-Sender, where h is the tester sending xi+1; !C is added to the sequence of a tester receiving a message belonging to yi, if yi is non-empty (Lines 5 and 6); otherwise !C is added to the sequence of the tester sending xi (Line 4).
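The projection step (without the ?C/!C coordination-message insertion) can be sketched as follows; the event encoding is invented for the illustration:

```python
# Sketch of projecting a complete test sequence onto per-port local test
# sequences. Each event is (direction, message, port): '!' marks a sending
# by the tester, '?' a reception. The insertion of coordination messages
# (?C/!C) described above is omitted here.

def project(complete_sequence, ports):
    local = {p: [] for p in ports}
    senders = []                      # sending order of the testers
    for direction, message, port in complete_sequence:
        local[port].append(direction + message)
        if direction == "!":
            senders.append(port)      # the tester on this port sends
    return local, senders

seq = [("!", "x1", "p1"), ("?", "y1", "p2"), ("?", "y2", "p3"), ("!", "x2", "p2")]
local, senders = project(seq, ["p1", "p2", "p3"])
print(local)    # {'p1': ['!x1'], 'p2': ['?y1', '!x2'], 'p3': ['?y2']}
print(senders)  # ['p1', 'p2']
```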
4. Agent Testers: Each tester executes its local test sequence in the following way.
For a communication with the IUT:
- If the message is a reception, the tester waits until its AET informs it of the reception.
- If the message is a sending, the tester awaits the arrival of the AMRI and tests:
1- If the sending is of the form !M{}: the agent tester instructs the execution tester to send the message, and sends out the AMRI to collect the messages that will be observed following this sending. Before each new round, the collection box of the AMRI is initialized.
2- If the sending is of the form !M{xi1, yi1} (the message may be sent only if the messages xi1 and yi1 have been observed), the tester searches the information collected by the AMRI during the last round:
a- If xi1 and yi1 are in the collection box of the AMRI, the tester sends the message and initializes the AMRI for a new round.
5. Conclusion
This article presents an architecture, a model and a method for distributed testing that guarantee the principles of coordination and synchronization between the various components of a distributed testing application. We exploited the concepts of mobile agents and actor agents, which make it possible to propose software architectures able to support dynamic adaptation and to reduce the number of messages between the various components of the distributed test. The implementation, the introduction of the notion of time into the test sequences, and the testing of applications such as Web services are the prospects for our approach.
References
[1] M. Benattou and J.M. Bruel, "Active Objects for Coordination in Distributed Testing", International Conference on Object-Oriented Information Systems, Montpellier, France, 2002.
[2] O. Rafiq, L. Cacciari, M. Benattou, "Coordination Issues in Distributed Testing", Proceedings of the Fifth International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA'99), pp. 793-799, USA, 1999, CSREA Press.
[3] A. Khoumsi, "A Temporal Approach for Testing Distributed Systems", IEEE Transactions on Software Engineering, vol. 28, no. 11, pp. 1085-1103, November 2002.
[4] B. Chen, H. Cheng, J. Palen, "Integrating mobile agent technology with multi-agent systems for distributed traffic detection and management systems", Transportation Research Part C: Emerging Technologies, Volume 17, Issue 1, February 2009, pp. 1-10.
[5] G. Bernard, L. Ismail, "Apport des agents mobiles à l'exécution répartie", Technique et science informatiques, Hermès, Paris, 2002, Vol. 21, No. 6/2002, pp. 771-796.
Lab. IRIT, University of Toulouse, February 5th, 2008, France.
Abstract
The main objective of the design methodology of a Fuzzy Knowledge-Base System (FKBS) is to predict the risk of Diabetic Nephropathy in terms of the Glomerular Filtration Rate (GFR). In this paper, the controllable risk factors Hyperglycemia, Insulin, Ketones, Lipids, Obesity, Blood Pressure and Protein/Creatinine ratio are considered as input parameters, and the stage of renal disorder is the output parameter. The input triangular membership functions are Low, Normal, High and Very High, and the output triangular membership functions are s1, s2, s3, s4 and s5. As renal complications are now the leading causes of diabetes-related morbidity and mortality, an FKBS is designed to perform optimal control of high-risk controllable risk factors by acquiring and interpreting medical experts' knowledge. Fuzzy logic is used to incorporate the available knowledge into an intelligent control system based on medical experts' knowledge and clinical observations. The proposed FKBS is validated with MatLab and is used as a tracking system with accuracy and robustness. The FKBS captures the uncertainty present in the risk factors of Diabetic Nephropathy, resolves renal failure with optimal results and protects patients from End Stage Renal Disorder (ESRD).
3.2
3.2.2 Knowledgebase
[Table: fuzzy rule base mapping the linguistic input values L (Low), N (Normal), H (High) and VH (Very High) of Insulin, Obesity, BP (sys/dias), Lipids (LDL/HDL), Ketones and P/C ratio to output stages of renal disorder/GFR among S1-S4; input patterns such as LLL, LLN, LLH, LLVH, LLLN, LLLH and LLLVH yield stages between S1 and S4.]
agg(b) = max{ min(1/4, s3(b)), min(1/2.41, s2(b)), min(1/2.41, s1(b)) }
3.2.4 Defuzzification Interface
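A minimal sketch of this max-min aggregation followed by centroid defuzzification is given below; the triangular membership functions and the output universe are illustrative stand-ins, not the paper's calibrated ones:

```python
# Mamdani-style aggregation agg(b) = max over rules of min(strength, s_k(b)),
# followed by centroid (center-of-gravity) defuzzification on a discretized
# output universe. The triangles and the [0, 60] universe are illustrative.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

s1 = lambda y: tri(y, 0.0, 15.0, 30.0)
s2 = lambda y: tri(y, 15.0, 30.0, 45.0)
s3 = lambda y: tri(y, 30.0, 45.0, 60.0)

# Firing strengths as in the aggregation formula above.
rules = [(1 / 4, s3), (1 / 2.41, s2), (1 / 2.41, s1)]

def agg(y):
    return max(min(strength, mf(y)) for strength, mf in rules)

# Centroid defuzzification over a discretized universe [0, 60].
ys = [i * 0.1 for i in range(601)]
num = sum(y * agg(y) for y in ys)
den = sum(agg(y) for y in ys)
crisp = num / den
print(round(crisp, 2))  # crisp output between the s1 and s3 peaks
```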
4. Results
5. Conclusion
References
[1] Sarah Wild, Gojka Roglic, Anders Green, Richard Sicree, and Hilary King, "Global Prevalence of Diabetes: Estimates for the Year 2000 and Projections for 2030", Diabetes Care 27, pp. 1047-1053, 2004.
[2] Constantinos Koutsojannis and Ioannis Hatzilygeroudis, "FESMI: A Fuzzy Expert System for Diagnosis and Treatment of Male Impotence", Knowledge-Based Intelligent Systems for Health Care, pp. 1106-1113, 2004.
[3] Rahim F, Deshpande A, Hosseini A, "Fuzzy Expert System for Fluid Management in General Anaesthesia", Journal of Clinical and Diagnostic Research, pp. 256-267, 2007.
[4] Muhammad Zubair Asghar, Abdur Rashid Khan, Muhammad Junaid Asghar, "Computer Assisted Diagnosis for Red Eye", International Journal on Computer Science and Engineering, Volume 1(3), pp. 163-170, 2009.
[5] Nazmi Etik, Novruz Allahverdi, Ibrahim Unal Sert and Ismail Saritas, "Fuzzy expert system design for operating room air-condition control systems", Expert Systems with Applications: An International Journal, Volume 36, Issue 6, pp. 9753-9758, 2009.
[6] Hamidreza Badeli, Mehrdad Sadeghi, Elias Khalili Pour, Abtin Heidarzadeh, "Glomerular Filtration Rate Estimates
Abstract
This paper is based on the reengineering of education institutes [1,2] in such a way that the coupling risk is lower than in existing systems. Here, we measure the complexity (based on the coupling factor) of modules during the reengineering of the module design. Coupling [3,4] is one of the properties with the most influence on maintenance, as it has a direct effect on maintainability.
In general, when any module is reengineered [5], one of the goals of OO software designers is to keep the coupling of the system as low as possible. Classes of the system that are strongly coupled are the most likely to be affected by changes and bugs in other classes. As the coupling between the classes of the system increases, error density increases.
The work described in this paper measures coupling not only through the classes of the system, but also through the packages [6] that are included during the reengineering of the module design. Coupling between packages is the manner and degree of interdependence between them. Theoretically, every package is a stand-alone unit, but in reality packages depend on many other packages, as they either require services from other packages or provide services to other packages. Thus, coupling between packages cannot be completely avoided, but can only be controlled. The coupling between packages is an important factor that affects the quality and other external attributes of software, e.g., reliability, maintainability, reusability, fault tolerance, etc.
In this paper, some measures are proposed for the measurement of coupling at the package level [7] in order to achieve good-quality software systems.
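The paper's own measures are developed in the sections that follow; to give the flavor of package-level coupling measurement, here is a sketch of the classic afferent/efferent coupling and instability metrics (Ca, Ce, I = Ce/(Ca+Ce)), which are not the paper's proposed metrics. The dependency graph is invented:

```python
# Afferent coupling Ca(p): number of packages that depend on p.
# Efferent coupling Ce(p): number of packages that p depends on.
# Instability I(p) = Ce / (Ca + Ce); 0 = maximally stable, 1 = unstable.

def coupling_metrics(deps):
    """deps maps each package to the set of packages it uses."""
    ce = {p: len(used) for p, used in deps.items()}
    ca = {p: 0 for p in deps}
    for p, used in deps.items():
        for q in used:
            ca[q] += 1
    inst = {p: ce[p] / (ca[p] + ce[p]) if ca[p] + ce[p] else 0.0
            for p in deps}
    return ca, ce, inst

deps = {"ui": {"core"}, "db": {"core"}, "core": set()}
ca, ce, inst = coupling_metrics(deps)
print(ca["core"], ce["ui"], inst["core"], inst["ui"])  # 2 1 0.0 1.0
```

A heavily depended-upon package such as "core" should stay stable (I near 0), while leaf packages such as "ui" may be unstable without harming maintainability.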
Keywords: Education Institute, Coupling, Package, Metrics, System Representation and Definition
Introduction
Due to the increasing demand for software maintenance, reengineering techniques are today one of the best choices to fulfil the requirements of the public and private sectors. When an existing system is reengineered, the complexity of modules is determined at an early stage (at design time); because of this, the software designer can easily determine the resources required during the reengineering of any project. This paper is based on the measurement of the complexity of modules, using different types of coupling metrics, at design time.
The backbone of any software system is its design. It is the skeleton on which the flesh (the code) is supported. In determining the complexity [9] of the modules, we use the OO paradigm, a very popular concept in today's software development environment, often heralded as the silver bullet for solving software problems.
Literature Overview
Hammer and Champy [10] (1993) define business process reengineering as the fundamental rethinking and radical redesign of business processes to achieve dramatic improvement in critical, contemporary measures
Conclusion
The goal of this paper was to provide certain types of package-level metrics that can be used to determine the coupling of modules during the reengineering of the module design, and to help the designer reduce the coupling among modules, thereby increasing user productivity, improving vendor independence, enhancing scalability and increasing manageability. Due to the higher productivity, more economic value is created for each unit of cost; this is different from cost reduction. The paper specifically defines different types of sub-module complexity; once the coupling is clearly determined, it is very beneficial for future reengineering of the modules, because requirements inevitably change after a specific period of time, and whenever needed the modules can then easily be reengineered further.
References
21. Briand, L., Wüst, J., and Lounis, H., "Using Coupling Measurement for Impact Analysis in Object-Oriented Systems", in Proc. of the IEEE International Conference on Software Maintenance, Aug. 30 - Sept. 3, 1999, pp. 475-482.
22. Briand, L. C., Devanbu, P., and Melo, W. L., "An investigation into coupling measures for C++", in Proc. of the International Conference on Software Engineering (ICSE'97), Boston, MA, May 17-23, 1997, pp. 412-421.
23. Chidamber, S. R. and Kemerer, C. F., "Towards a Metrics Suite for Object Oriented Design", in Proceedings of OOPSLA'91, 1991, pp. 197-211.
24. D.L. Parnas, "On the Criteria To Be Used in Decomposing Systems into Modules", Communications of the ACM, Vol. 15, No. 12, Dec. 1972, pp. 1053-1058.
Lecturer, Department of Information Science and Engineering, PES Institute of Technology, 100-ft Ring Road,
BSK-III stage, Bangalore-560085, Karnataka-INDIA
2
Prof and Head of Department of Information Science and Engineering, New Horizon College of Engineering,
Panathur post, marathahalli, Bangalore, Karnataka-INDIA
Abstract
To cater to the needs of diverse application domains, three basic feature sets called profiles are established in the H.264 standard: the Baseline, Main, and Extended profiles. The Baseline profile is designed to minimize complexity and provide high robustness and flexibility for use over a broad range of network environments and conditions; the Main profile is designed with an emphasis on compression coding efficiency; and the Extended profile is designed to combine the robustness of the Baseline profile with a higher degree of coding efficiency and greater network robustness. The H.264 bitstream is organized as a series of NAL units, which are first processed by the NAL unit decoder module. The NAL unit decoder does the job of NAL unit separation and parsing of the header information. The output data elements of the NAL unit decoder are entropy decoded and reordered to produce a set of quantized coefficients by the variable length decoder module. The quantized coefficients are then rescaled and inverse transformed to give the difference macroblock by the transformation and quantization module. Using the header information decoded from the bitstream, the motion compensation and picture construction module produces distorted macroblocks, which are then filtered using the deblocking filter to create decoded macroblocks. The output is stored as a YUV file, which can be played using any YUV player/viewer. Thus the visual input from the sensors of the robot can be compressed before being delivered to the actuators on the scene.
Keywords: Deblocking Filter, Motion Compensation, Macroblock modes, NAL units
[Figure: H.264 profile features: the Baseline profile covers I and P slices, CAVLC, FMO, ASO and redundant slices; the Main profile adds B slices, CABAC and weighted prediction; the Extended profile adds SP and SI slices and data partitioning.]

1. Introduction
4. Macroblock Layer
Each slice consists of macroblocks (or, when using
interlaced encoding, macroblock pairs) of 16x16
pixels. The encoder may choose between a
multitude of encoding modes for each macroblock.
4.1 Macroblock modes for I slices
In H.264, I slice uses a prediction/residual scheme:
Already decoded macroblocks of the same frame
may be used as references for intra prediction
M =
| 1  1  1  1 |
| 2  1 -1 -2 |
| 1 -1 -1  1 |
| 1 -2  2 -1 |

B = M B Mᵀ      (1)
M =
| 1  1  1  1 |
| 1  1 -1 -1 |
| 1 -1 -1  1 |
| 1 -1  1 -1 |      (2)
Chroma residuals are always transformed in one group per macroblock. Thus, there are 4 chroma blocks per macroblock and channel. A separable 2x2 transform is used:

M =
| 1  1 |
| 1 -1 |      (3)
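As a numerical check of the 4x4 core transform of equation (1), the following sketch applies the integer matrix with plain list arithmetic (the quantization/scaling stage is omitted, as in the equation):

```python
# Apply the 4x4 H.264 integer core transform Y = M * X * M^T (without the
# quantization/scaling stage) to a block X, using plain list arithmetic.

M = [[1, 1, 1, 1],
     [2, 1, -1, -2],
     [1, -1, -1, 1],
     [1, -2, 2, -1]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def core_transform(X):
    return matmul(matmul(M, X), transpose(M))

# A flat (constant) block puts all of its energy in the DC coefficient.
X = [[1] * 4 for _ in range(4)]
Y = core_transform(X)
print(Y[0][0])  # 16  (DC term; every other coefficient is 0)
```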
7. SYSTEM DESIGN

7.1 Introduction
Design is the first step in moving from the problem domain towards the solution domain. A design can be object oriented or function oriented. In this paper the function-oriented method is followed, which consists of module definition with each module supporting a functional abstraction. As shown in Fig. 2 below, the sensor image output can be compressed by the H.264 standard before being delivered to the actuators on the scene.
[Figure 2: Sensor input of robot -> compressed memory model -> actuators of robot]
[Figure 3: H.264 baseline decoder block diagram: the H.264 file/media stream enters the NAL unit decoder; its output goes to the variable length decoder (quantized coefficients), then through inverse quantization and inverse transformation (difference macroblock), then to motion compensation and picture construction (distorted macroblocks), and finally to the deblocking filter (decoded macroblocks), producing the output file in YUV.]
Edges of the decoded macroblocks/frames should be filtered out to eliminate blocking artifacts and give an impression of finer-quality video.
[Flowcharts: (1) The player creates the decoder configuration file, writes user data into it, launches the decoder, waits until decoding is finished, reads the status file, plays the output, and finally deletes the configuration and status files. (2) The decoder opens the configuration file, reads the input until end of stream, writes to the status file, and closes the configuration file. (3) Depending on the type of coding, the decoder decodes VLC/UVLC, CAVLC, or CABAC coded data elements.]
[Flowcharts: slice decoding reads quantized samples from the variable length decoder, scans the 4x4 luma DC coefficients for luma samples, and decodes the macroblocks one by one until the end of the slice; for each macroblock the decoder checks whether it is IPCM coded and whether it is intra coded.]
9. Conclusion
The H.264 standard is found to increase coding efficiency compared to previous video coding standards (such as MPEG-4 Part 2 and MPEG-2) for a wide range of bit rates and resolutions. Thus the visual input from the sensors of the robots can be compressed by this technique before being delivered to the actuators on the scene.
H.264, because of its higher compression, is definitely going to be the prominent standard in the near future. It will make possible new applications that could never have been dreamed of before. The higher performance of H.264 is mainly due to the tree-structured motion compensation, which has also made the implementation of the standard simpler than many previous standards.
The increased compression performance of H.264 will enable existing applications like video conferencing, streaming video over the internet, and digital television on satellite and cable to offer better-quality video at lower cost, and will allow new video applications that were previously impractical because of economics or technology. High-definition television on DVD, video on mobile phones and video conferencing over low-bandwidth connections will become practical and it
1
Former Sc G (DRDO), Prof. and Dean R and D,
Dept. of CSE,Nitte Meenakshi Institute of Technology, Bangalore, India
Abstract
Image registration is an important process in high-level image interpretation systems developed for civilian and defense applications. Registration is carried out in two phases: feature extraction and feature correspondence. The basic building block of a feature-based image registration scheme involves matching feature points extracted from a sensed image to their counterparts in the reference image. Features may be control points, corners, junctions or interest points. The objective of this study is to develop a methodology for the automatic iterative convergence of transformation parameters between successive pairs of aerial or satellite images. In this paper we propose an iterative image registration approach to compute accurate and stable transformation parameters that can handle image sequences acquired under varying environmental conditions. The iterative registration procedure was initially tested using satellite images with known transformation parameters (translation, rotation, scaling), and the same method was further tested with images obtained from an aerial platform.
Keywords: Corner Detector
2 Methodology
1. Introduction
Image registration is one of the important components of image applications such as image fusion, panoramic mosaicing, high-resolution reconstruction, change detection, etc. The accomplishment of these high-level tasks relies on the image registration method that is used to geometrically align the sequence of aerial images [12].
3 Registration Process
The detector builds, at each point, the weighted gradient (structure) matrix

G = Σ w·G(r; σ) | fx²    fx·fy |
                | fx·fy  fy²   |

where fx and fy are the partial derivatives of the image along x and y.
Fig. 2(a)-(d) Input images and feature-detected images using the KLT corner detector
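The KLT corner measure built from these gradient terms can be sketched as follows: a point is retained as a corner when the minimum eigenvalue of the 2x2 gradient (structure) matrix is large. The windowed sums and the numeric inputs below are invented for the illustration:

```python
import math

# KLT-style corner score: the minimum eigenvalue of the 2x2 structure
# matrix [[Sxx, Sxy], [Sxy, Syy]], where Sxx is the windowed sum of fx^2,
# Sxy of fx*fy, and Syy of fy^2. A corner is declared when lambda_min
# exceeds a threshold.

def min_eigenvalue(sxx, sxy, syy):
    trace_half = (sxx + syy) / 2.0
    delta = math.sqrt(((sxx - syy) / 2.0) ** 2 + sxy ** 2)
    return trace_half - delta

# Strong gradients in both directions -> large lambda_min (corner).
print(min_eigenvalue(2.0, 0.0, 1.0))  # 1.0
# Gradient in one direction only -> lambda_min = 0 (edge, not a corner).
print(min_eigenvalue(2.0, 0.0, 0.0))  # 0.0
```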
Hl =
| s·cosθ   -s·sinθ   Tx |
| s·sinθ    s·cosθ   Ty |
| 0         0         1 |
In the above homography matrix, Hl(0, 0) is s·cosθ and Hl(1, 0) is s·sinθ, where s and θ represent the scaling and rotation between the sequence of images (I1 and I2). Hl(2, 0) and Hl(3, 0) represent the translation along the x (Tx) and y (Ty) directions, respectively. An example of image
matching is shown in Fig. 5(a)-(b). The transformation
parameters for the above images are Tx = -68.71, Ty = 14.73, R = 3.59, S = 1.03.
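The recovery of the scaling and rotation from such a similarity homography can be sketched as below (function names are invented; the S and R values are the ones quoted above):

```python
import math

# Build a similarity homography from scale s, rotation theta (degrees) and
# translation (tx, ty), then recover the parameters from its entries:
# s = hypot(H[0][0], H[1][0]), theta = atan2(H[1][0], H[0][0]).

def similarity(s, theta_deg, tx, ty):
    t = math.radians(theta_deg)
    return [[s * math.cos(t), -s * math.sin(t), tx],
            [s * math.sin(t),  s * math.cos(t), ty],
            [0.0, 0.0, 1.0]]

def decompose(H):
    s = math.hypot(H[0][0], H[1][0])
    theta = math.degrees(math.atan2(H[1][0], H[0][0]))
    return s, theta, H[0][2], H[1][2]

# Scaling S = 1.03 and rotation R = 3.59 degrees, as quoted in the text.
H = similarity(1.03, 3.59, -68.71, 14.73)
s, theta, tx, ty = decompose(H)
print(round(s, 2), round(theta, 2))  # 1.03 3.59
```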
The registration error between the matched point grids is computed as the root-mean-square coordinate difference over the l × m grid:

error = sqrt( (1/(l·m)) Σᵢ Σⱼ [ (x(i, j) − x′(i, j))² + (y(i, j) − y′(i, j))² ] ), i = 1..l, j = 1..m,

where (x, y) and (x′, y′) are the coordinates of corresponding points in the two images.
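As an illustration, a root-mean-square error of this kind over matched point lists can be computed as follows (the point coordinates are invented):

```python
import math

# Root-mean-square registration error between two grids of matched points:
# sqrt( (1/n) * sum over matches of ((x - x')^2 + (y - y')^2) ).

def rms_error(pts_a, pts_b):
    """pts_a, pts_b: equal-length lists of (x, y) matched coordinates."""
    n = len(pts_a)
    total = sum((xa - xb) ** 2 + (ya - yb) ** 2
                for (xa, ya), (xb, yb) in zip(pts_a, pts_b))
    return math.sqrt(total / n)

a = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
b = [(1.0, 0.0), (10.0, 1.0), (0.0, 10.0)]
print(round(rms_error(a, b), 4))  # 0.8165
```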
Transformation parameters per iteration:

Iteration | Translation in x (Tx) | Translation in y (Ty) | Rotation (R) | Scaling (S)
1 | -58.26 | -14.22 | 3.36 | 1.01
2 | -65.86 | 5.11 | 5.51 | 1.01
3 | -65.86 | 5.11 | 5.51 | 1.01
4 | -65.86 | 5.11 | 5.51 | 1.01
5 | -65.86 | 5.11 | 5.51 | 1.01
4 Experimental Study
The iterative registration procedure was tested on satellite images with known transformation parameters. The transformation parameters finally converge and were found to be stable.
Fig. 8 shows all the intermediate results generated using the satellite images with known transformation parameters. In the iteration procedure, the transformation parameters are found to stabilize after a few iterations. Finally, the module was tested with images obtained from an aerial platform. The results obtained were found to be satisfactory. The experimental results for all the iterations are tabulated in Table 2.
Table 2: Transformation parameters per iteration

Iteration | Translation in x (Tx) | Translation in y (Ty) | Rotation (R) | Scaling (S)
1 | -0.65 | 88.50 | 19.97 | 1.00
2 | -1.70 | 88.09 | 20.18 | 1.00
3 | 1.085 | 89.31 | 20.16 | 0.993
4 | 1.085 | 89.31 | 20.16 | 0.993
5 | 1.085 | 89.31 | 20.16 | 0.993
Iteration | Translation in x (Tx) | Translation in y (Ty) | Rotation (R) | Scaling (S)
1 | -6.19 | 74.09 | 1.10 | 1.01
2 | -7.10 | 72.83 | 1.35 | 1.02
3 | -5.61 | 73.51 | 1.46 | 1.01
4 | -5.44 | 74.68 | 1.32 | 1.01
5 | -5.20 | 74.29 | 1.38 | 1.01
5 Conclusion
The paper suggests a novel approach to compute a stable homography between a pair of aerial images
Department of Electronics and Communication Engineering, DAV Institute of Engineering and Technology, Jalandhar,
Punjab, India
3
Abstract
Gigabit Ethernet passive optical network (GEPON) is becoming the technology of choice for meeting increasing bandwidth demands in optical networks. This study establishes the analysis, simulation and performance evaluation of a GEPON FTTH system (with and without dispersion compensation) carrying triple play, with video broadcast at 1550 nm and voice over IP and high-speed internet at 1490 nm, in a 10 Gbps downstream data-link configuration with 1:16 splitting at distances of 20 km and 30 km.
Keywords: FTTH, PON, DCF, GEPON
1. Introduction
In today's information age, there is a rapidly growing demand for transporting information from one place to another. Optical communication systems have proven to be suitable for moving massive amounts of information over long distances at low cost. As internet traffic capacity grows, this increase has led to future capacity upgrades. Fiber optic cables are made of glass fiber that can carry data at speeds exceeding 2.5 gigabits per second (Gbps). Today almost all long-haul, high-capacity information transport needs are fulfilled by optical communication systems [1-2]. For the next generation of optical communication systems, fiber-to-the-home (FTTH) system design using a passive optical network (PON) is required to improve transmission performance. GEPON is a perfect combination of Ethernet technology and passive optical network technology.
1.1 Passive Optical Networks (PON)
PON is classified into APON (ATM PON), EPON (Ethernet PON) and GPON (Gigabit-capable PON) on the basis of the protocol method. APON provides transmission at 622 Mbps and uses the ATM (Asynchronous Transfer Mode) protocol. Later, APON was renamed BPON (Broadband PON), since the original name suggested that APON provides only ATM service. Passive optical
Figure 2: Data/Voice [figure lost in extraction]

1) With DCF

Eye Diagram
Figure 3 (a) Data/Voice eye diagram with DCF at 20 km; (b) Data/Voice eye diagram with DCF at 30 km
BER Values

Table 1 BER value with DCF

DISTANCE | BER WITHOUT FEC | BER WITH (RS255,239) | BER WITH (RS_CONCAT_CODE)
20KM | 2.9369e-022 | 1.0000e-122 | 1.0000e-122
30KM | 4.3978e-005 | 5.5379e-018 | 1.0000e-122

Q Factor

Table 2 Q factor with DCF

DISTANCE=20KM | DISTANCE=30KM
9.6317e+000 | 3.9216e+000

2) Without DCF

Eye Diagram

BER Values

Table 3 BER value without DCF

DISTANCE | BER WITHOUT FEC | BER WITH (RS255,239) | BER WITH (RS_CONCAT_CODE)
20KM | 1.7541e-007 | 5.4987e-035 | 9.4557e-076
30KM | 1.3067e-002 | 1.0599e-002 | 7.1805e-003

Q Factor

DISTANCE=20KM | DISTANCE=30KM
4.8729e+000 | 1.4625e+000
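The reported BER and Q-factor values are consistent with the standard Gaussian-noise relation BER = (1/2)·erfc(Q/√2); for example, the 30 km with-DCF Q factor of 3.9216 reproduces the reported without-FEC BER of about 4.4e-5. A quick check (this is the textbook relation, not something stated in the paper):

```python
import math

def q_to_ber(q):
    """Gaussian-noise approximation linking Q factor and BER:
    BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2))

# Q = 3.9216 (30 km, with DCF) reproduces the reported
# without-FEC BER of roughly 4.4e-5.
ber_30km = q_to_ber(3.9216)
```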
IJCSI International Journal of Computer Science Issues, Vol. 7, Issue 5, September 2010
ISSN (Online): 1694-0814
www.IJCSI.org
271
[9] Jong-Won Kim, Broadband Communications Department, "An Optimized ATM-PON Based FTTH Access Network", International Conference on Information, Communications and Signal Processing, ICICS '09.
[10] Jaehyoung Park, Geun Young Kim, Hyung Jin Park, and Jin Hee Kim, "FTTH deployment status & strategy in Korea", FTTH & U-City Research Team, Network Infra Laboratory.
[11] Ethernet in the First Mile, IEEE Standard 802.3ah, 2004.

Figure 5: Video [figure lost in extraction]

Monika Gupta is pursuing an M.Tech and is a lecturer (ECE) at SSIET, Derabassi.
4. Conclusion
From the simulation results, we conclude that the BER and Q factor are much better when dispersion compensation is used. Also, as the distance increases from 20 to 30 km, the variation in BER and Q factor is large: the BER increases and the Q factor decreases.
References
[1]
[2]
[3]
[4]
[5]
[6]
[7]
[8]
Abstract
The purpose of this research is to find alternative solutions for complex medical diagnosis, in particular the detection of heart disease, where human knowledge is apprehended in a general fashion. Successful application examples have previously shown that human diagnostic capability is significantly worse than that of a neural diagnostic system. This paper describes a new system for the detection of heart disease based on a feed-forward neural network architecture and a genetic algorithm. Hybridization is applied by training the neural network with a genetic algorithm, and it is shown experimentally that the proposed learning is more stable than back propagation. A detailed analysis is given of the genetic algorithm's behavior and its relationship with the learning performance of the neural network. The effect of tournament selection is analyzed to understand in more detail what happens internally. We hope that, with the proposed system, the design of a diagnosis system for heart disease detection will become easy, cost-effective, reliable and efficient.
Keywords: Heart disease, Neural network, Genetic algorithm, Tournament selection.
1. Introduction
Coronary artery disease causes severe disability and more deaths than any other disease, including cancer. Coronary artery disease is due to atheromatous narrowing and subsequent occlusion of the coronary vessels. It manifests as angina, silent ischemia, unstable angina, myocardial infarction, arrhythmias, heart failure and sudden death.
Corresponding authors: 1 K.S. Kavitha, 3 Manoj Kumar Singh.
This research was completed at Manuro Tech Research, Bangalore, India.
3. Previous work
Numerous works in the literature related to heart disease diagnosis have motivated our research. The need to effectively identify information (contextual, non-obvious data valuable for decision making) from a large collection of data has been steadily increasing. This is an interactive and iterative process encompassing several subtasks and decisions, and is known as Knowledge Discovery from Data. The central process
[Table] Complications in diagnosis by doctors:
- Time-dependent performance
- Replication of experts not possible
- Knowledge upgradation is difficult
- Difficult to establish multi-variable relations
- Not possible to deliver quantitative results
Advantage (of an automated system): active intelligent diagnosis
5. Genetic Algorithms
5.1 Fundamental concept
Basic ideas on Genetic algorithms were first
developed by John Holland, and are mainly used as
search and optimization methods. Given a large
solution space, one would like to pick out the point
which optimizes an object function while still
fulfilling a set of constraints. In network planning, a
solution point could be a specific link topology, a
routing path structure, or a detailed capacity
assignment with minimum costs. Genetic algorithms
are based on the idea of natural selection. In nature,
the properties of an organism are determined by its
genes. Starting from a random first generation with
all kinds of possible gene structures, natural selection suggests that over time, individuals with "good" genes survive whereas "bad" ones are rejected.
Genetic algorithms try to copy this principle by
coding the possible solution alternatives of a problem
as a genetic string. The genes can be bits, integers, or
any other type from which a specific solution can be
deduced. It is required that all solution points can be
represented by at least one string. On the other hand,
a specific gene string leads to exactly one solution. A
set of a constant number of gene strings, each
characterizing one individual, is called a generation.
Since the different strings have to be evaluated and
compared to each other, the notion of fitness is
introduced. The fitness value correlates to the quality
of a particular solution. Instead of working with the
actual solution itself, genetic algorithms operate on
the respective string representation. The following
three basic operators are applied :(i) Reproduction
(ii)Crossover (iii) Mutation. The reproduction
process creates a new generation, starting from an
existing generation; strings are reproduced with a
probability respective to their fitness value. Strings
which represent solutions with good properties have a
higher chance to survive than strings depicting
solution points with bad characteristics. This principle is also known as "survival of the fittest".
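The three operators described above can be sketched as a minimal generational loop. Everything below (the bit-string encoding, operator rates, and the one-max toy fitness) is illustrative, not the paper's actual configuration:

```python
import random

def evolve(fitness, pop, generations=50, pc=0.7, pm=0.01):
    """Minimal generational GA: fitness-proportional reproduction,
    single-point crossover and bit-flip mutation on bit strings."""
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]
        # Reproduction: strings survive in proportion to their fitness
        pop = [random.choices(pop, weights=scores, k=1)[0] for _ in pop]
        next_gen = []
        for a, b in zip(pop[::2], pop[1::2]):
            if random.random() < pc:          # single-point crossover
                cut = random.randrange(1, len(a))
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            next_gen += [a, b]
        # Mutation: flip each bit with small probability pm
        pop = [''.join('10'[int(c)] if random.random() < pm else c for c in ind)
               for ind in next_gen]
    return max(pop, key=fitness)

# One-max toy problem: fitness grows with the number of 1-bits
random.seed(0)
pop = [''.join(random.choice('01') for _ in range(16)) for _ in range(20)]
best = evolve(lambda s: s.count('1') + 1, pop)
```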
Fitness = 1/error;
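In the proposed system the chromosome is the network's weight vector and, as stated above, fitness = 1/error. A sketch of the tournament selection analyzed in the paper (the `error` function and the population sizes here are hypothetical stand-ins, not the paper's network):

```python
import random

def tournament_select(population, fitness, k=2):
    """Tournament selection: pick k individuals at random and
    keep the fittest; repeat to fill the next mating pool."""
    pool = []
    for _ in population:
        contestants = random.sample(population, k)
        pool.append(max(contestants, key=fitness))
    return pool

# Each individual is a candidate weight vector for the network;
# fitness = 1/error, as in the paper.
def error(weights):   # hypothetical stand-in for the network's training error
    return sum((w - 0.5) ** 2 for w in weights) + 1e-9

random.seed(1)
population = [[random.random() for _ in range(4)] for _ in range(10)]
pool = tournament_select(population, lambda w: 1.0 / error(w))
```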
7. Experimental setup
7.1 Data set
The data set was taken from the publicly available UCI repository [49]. It contains 270 patient records, each patient's condition being defined by 13 parameters; 150 records were taken as the training data set and the remaining 120 as the test data set.
OVERALL / (+VE) RESULT / (-VE) RESULT

TR | FR | TPV | FPV | TNV | FNV
95.3333 | 4.6667 | 8.5714 | 91.428 | 98.750 | 1.250
95.3333 | 4.6667 | 7.1429 | 92.857 | 97.500 | 2.500
95.3333 | 4.6667 | 7.1429 | 92.857 | 97.500 | 2.500
53.3333 | 46.666 | 0 | 100.0 | 100.0 | 0
94.0000 | 6.0000 | 7.1429 | 92.857 | 95.000 | 5.000
53.3333 | 46.666 | 0 | 100.0 | 100.0 | 0
53.3333 | 46.666 | 0 | 100.0 | 100.0 | 0
94.0000 | 6.0000 | 7.1429 | 92.857 | 95.000 | 5.000
95.3333 | 4.6667 | 8.5714 | 91.428 | 98.750 | 1.250
94.6667 | 5.3333 | 8.5714 | 91.428 | 97.500 | 2.500
(+VE) RESULT / (-VE) RESULT

TPV | FP | TNV | FN
90.000 | 10.000 | 95.000 | 5.000
88.571 | 11.428 | 96.250 | 3.750
88.571 | 11.428 | 91.250 | 8.750
8.5714 | 91.428 | 97.500 | 2.500
88.571 | 11.428 | 92.500 | 7.500
8.5714 | 91.428 | 97.500 | 2.500
8.5714 | 91.428 | 96.250 | 3.750
90.000 | 10.000 | 97.500 | 2.500
8.5714 | 91.428 | 97.500 | 2.500
90.000 | 10.000 | 97.500 | 2.500

(+VE) RESULT / (-VE) RESULT

TPV | FPV | TNV | FNV
74.000 | 26.000 | 14.2857 | 85.714
78.000 | 22.000 | 17.1429 | 82.857
78.000 | 22.000 | 21.4286 | 78.571
72.000 | 28.000 | 12.8571 | 87.142
78.000 | 22.000 | 21.4286 | 78.571
74.000 | 26.000 | 12.8571 | 87.142
78.000 | 22.000 | 15.7143 | 84.285
68.000 | 32.000 | 10.0000 | 90.000
78.000 | 22.000 | 20.0000 | 80.000
76.000 | 24.000 | 12.8571 | 87.142

(+VE) RESULT / (-VE) RESULT

TPV | FPV | TNV | FNV
70.00 | 30.00 | 8.5714 | 91.428
82.00 | 18.00 | 17.1429 | 82.857
82.00 | 18.00 | 14.2857 | 85.714
0 | 100.0 | 100.0 | 0
76.00 | 24.00 | 12.8571 | 87.142
0 | 100.0 | 100.00 | 0
0 | 100.0 | 100.00 | 0
80.00 | 20.00 | 12.8571 | 87.142
70.00 | 30.00 | 10.0 | 90.000
72.0 | 28.00 | 90.00 | 10.00
No. of best chromosomes

Parent | Offspring
235 | 265
222 | 201
485 | 476
181 | 164
778 | 744
138 | 123
225 | 248
236 | 224
156 | 154
736 | 702
[Figure: two parent chromosomes, each a string of weight groups (w11 w12 w13 w14, ... and w1a w1b w1c w1d, ...), exchange segments at the crossover point to produce offspring; figure lost in extraction]

Fig. (7) Crossover operation
[Figure: in each offspring, randomly selected weight groups are replaced during mutation; figure lost in extraction]

Fig. (8) Mutation operation
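The operations depicted in Figs. 7 and 8 amount to exchanging and perturbing segments of the flattened weight vectors. A minimal sketch (the cut point, mutation rate and perturbation scale are illustrative choices, not the paper's settings):

```python
import random

def crossover(parent_a, parent_b):
    """Single-point crossover on flat weight vectors: exchange the tails,
    as depicted for the chromosome strings in Fig. 7."""
    cut = random.randrange(1, len(parent_a))
    return (parent_a[:cut] + parent_b[cut:],
            parent_b[:cut] + parent_a[cut:])

def mutate(weights, rate=0.1, scale=0.5):
    """Perturb each weight with probability `rate` (hypothetical scheme),
    as depicted for randomly chosen weight groups in Fig. 8."""
    return [w + random.uniform(-scale, scale) if random.random() < rate else w
            for w in weights]

random.seed(2)
a = [0.1] * 8   # stand-ins for one network's flattened weights
b = [0.9] * 8
c1, c2 = crossover(a, b)
c1 = mutate(c1)
```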
9. Conclusions
A new insight into the problems and requirements associated with health-care solutions has been presented, together with the case for artificial neural networks in developing intelligent solutions. Most existing solutions use a feed-forward architecture with back propagation as the learning algorithm; because of its tendency to become trapped in local minima, problems may appear at upgrade time and results may be inconsistent. To overcome this problem, a genetic algorithm has been applied for training. An untold and unseen side of tournament selection has also been uncovered, which gives a new way to understand this selection method. The primary intent of our research is to design and develop a model and an efficient approach for the detection of heart disease that can be used in real-world applications as a computer-aided diagnostic tool. Building on the presented research, in the future we intend to develop an expert system for heart disease detection.
References:
[1] "Hospitalization for Heart Attack, Stroke, or Congestive Heart Failure among Persons with Diabetes", Special Report: 2001-2003, New Mexico.
[2] Chen, J., Greiner, R., "Comparing Bayesian Network Classifiers", in Proc. of UAI-99, pp. 101-108, 1999.
[3] "Heart Disease", http://chineseschool.netfirms.com/heart-disease-causes.html
[4] Reddy, K. S., "India wakes up to the threat of cardiovascular diseases", J. Am. Coll. Cardiol., 2007, 50, 1370-1372.
[5] WHO India, http://www.whoindia.org/EN/Index.htm (accessed on 11 February 2008).
[18] Joshi, R. et al., "Chronic diseases now a leading cause of death in rural India - mortality data from the Andhra Pradesh Rural Health Initiative", Int. J. Epidemiol., 2006, 35, 1522-1529.
[19] Sally Jo Cunningham and Geoffrey Holmes,
"Developing innovative applications in agriculture using
data mining", In the Proceedings of the Southeast Asia
Regional Computer Confederation Conference, 1999.
[20] Frank Lemke and Johann-Adolf Mueller, "Medical
data analysis using self-organizing data mining
technologies,"Systems Analysis Modelling Simulation ,
Vol. 43 , no. 10 ,pp: 1399 - 1408, 2003.
[21] Tzung-I Tang, Gang Zheng, Yalou Huang, Guangfu
Shu,Pengtao Wang, "A Comparative Study of Medical
Data Classification Methods Based on Decision Tree and
System Reconstruction Analysis", IEMS Vol. 4, No. 1, pp.
102-108,June 2005.
[22] Frawley and Piatetsky-Shapiro, Knowledge Discovery
in Databases: An Overview. The AAAI/MIT Press, Menlo
Park, C.A, 1996.
[23] Hsinchun Chen, Sherrilynne S. Fuller, Carol
Friedman, and William Hersh, "Knowledge Management,
Data Mining,and Text Mining In Medical Informatics",
Chapter 1, pgs 3-34
[24] S Stilou, P D Bamidis, N Maglaveras, C Pappas ,
Mining association rules from clinical databases: an
intelligent diagnostic process in healthcare. Stud Health
Technol Inform 84: Pt 2. 1399-1403, 2001.
[25] T. Syeda-Mahmood, F. Wang, D. Beymer, A. Amir, M. Richmond, S. N. Hashmi, "AALIM: Multimodal Mining for Cardiac Decision Support", Computers in Cardiology, pages 209-212, Sept. 30 - Oct. 3, 2007.
[26] Anamika Gupta, Naveen Kumar, and Vasudha Bhatnagar, "Analysis of Medical Data using Data Mining and Formal Concept Analysis", Proceedings of World Academy of Science, Engineering and Technology, Volume 6, June 2005.
[27] Sellappan Palaniappan, Rafiah Awang, "Intelligent Heart Disease Prediction System Using Data Mining Techniques", IJCSNS International Journal of Computer Science and Network Security, Vol. 8, No. 8, August 2008.
[28] Andreeva P., M. Dimitrova and A. Gegov, "Information Representation in Cardiological Knowledge Based System", SAER'06, 23-25 Sept., 2006.
[29] Latha Parthiban and R.Subramanian, "Intelligent Heart
Disease Prediction System using CANFIS and Genetic
Algorithm", International Journal of Biological, Biomedical
and Medical Sciences 3; 3, 2008
[30] Hian Chye Koh and Gerald Tan ,"Data Mining
Applications in Healthcare", Journal of healthcare
information management, Vol. 19, Issue 2, Pages 64-72,
2005.
[31] L. Goodwin, M. VanDyne, S. Lin, S. Talbert ,Data
mining issues and opportunities for building nursing
knowledgeJournal of Biomedical Informatics, vol:36, pp:
379-388,2003.
[32] Heon Gyu Lee, Ki Yong Noh, Keun Ho Ryu, Mining
Biosignal Data: Coronary Artery Disease Diagnosis using
Linear and Nonlinear Features of HRV, LNAI 4819:
Emerging Technologies in Knowledge Discovery and Data
Mining, pp. 56-66, May 2007.
[48] Margaret R. Kraft, Kevin C. Desouza, Ida Androwich (2002), "Data Mining in Healthcare Information Systems: Case Study of a Veterans Administration Spinal Cord Injury Population", Proceedings of the 36th Hawaii International Conference on System Sciences (HICSS'03).
[49] Heart attack dataset from
http://archive.ics.uci.edu/ml/datasets/Heart+Disease.
[50] J. M. Garibaldi, E. C. Ifeachor, "Application of Simulated Annealing Fuzzy Model Tuning to Umbilical Cord Acid-Base Interpretation", IEEE Transactions on Fuzzy Systems, Vol. 7, No. 1, pp. 72-84, 1999.
[51] J. W. Huang, Y. Lu, A. Nayak and R. J. Roy, "Depth of Anesthesia Estimation and Control", IEEE Transactions on Biomedical Engineering, Vol. 46, No. 1, pp. 71-81, 1999.
[52] D. A. Cairns, J. H. L. Hansen and J. E. Riski, "A Noninvasive Technique for Detecting Hypernasal Speech Using a Nonlinear Operator", IEEE Transactions on Biomedical Engineering, Vol. 43, No. 1, pp. 35-45, 1996.
[53] K. P. Adlassnig and W. Scheithauer, "Performance evaluation of medical expert systems using ROC curves", Computers and Biomedical Research, Vol. 22, No. 4, pp. 297-313, 1989.
[54] L. G. Koss, M. E. Sherman, M. B. Cohen, A. R. Anes, T. M. Darragh, L. B. Lemos, B. J. McClellan, D. L. Rosenthal, S. Keyhani-Rofagha, K. Schreiber, P. T. Valente, "Significant Reduction in the Rate of False Negative Cervical Smears With Neural Network-Based Technology (PAPNET Testing System)", Human Pathology, Vol. 28, No. 10, pp. 1196-1203, 1997.
[55] S. Andreassen, A. Rosenfalck, B. Falck, K. G. Olesen, S. K. Andersen, "Evaluation of the diagnostic performance of the expert EMG-assistant MUNIN", Electroencephalography and Clinical Neurophysiology, Vol. 101, pp. 129-144, 1996.
[56] B. V. Ambrosiadou, D. G. Goulis, C. Pappas, "Clinical evaluation of the DIABETES expert system for decision support by multiple regimen insulin dose adjustment", Computer Methods and Programs in Biomedicine, Vol. 49, pp. 105-115, 1996.
[57] H. D. Cheng, Y. M. Lui, R. I. Freimanis, "A novel approach to microcalcification detection using fuzzy logic technique", IEEE Transactions on Medical Imaging, Vol. 17, No. 3, pp. 442-450, 1998.
[58] R. D. F. Keith, S. Beckley, J. M. Garibaldi, J. A. Westgate, E. C. Ifeachor, K. R. Green, "A multicentre comparative study of 17 experts and an intelligent computer system for managing labour using the cardiotocogram", British Journal of Obstetrics and Gynaecology, Vol. 102, pp. 688-700, 1995.
[59] J. A. Swets, "Measuring the Accuracy of Diagnostic Systems", Science, Vol. 240, pp. 1285-1293, 1988.
[60] K. Clarke, R. O'Moore, R. Smeets, J. Talmon, J. Brender, P. McNair, P. Nykanen, J. Grimson, B. Barber, "A methodology for evaluation of Knowledge-based systems in medicine", Artificial Intelligence in Medicine, Vol. 6, pp. 107-121, 1994.
[61] R. Engelbrecht, A. Rector, W. Moser, "Verification and Validation", in Assessment and Evaluation of Information Technologies, E. M. S. J. van Gennip, J. L. Talmon, Eds., IOS Press, pp. 51-66, 1995.
K.V. Ramakrishnan is presently working as a professor & dean of C.M.R. Institute of Technology, Bangalore, India.

Manoj Kr. Singh currently holds the post of director at Manuro Tech Research. He is also actively associated with industry as a technology consultant and guides several research scholars. He has an R&D background in nanotechnology, VLSI CAD, evolutionary computation, neural networks, advanced signal processing, etc.
Abstract
HSV gives more accurate and more robust tracking results than grayscale or RGB images, so a simple HSV histogram-based color model is used to develop our system. First, a background registration technique is used to construct a reliable background image. The moving object region is then separated from the background region by comparing the current frame with the constructed background image. This paper presents a novel human motion detection algorithm based on detected regions. The approach first obtains a motion image through the acquisition and segmentation of video sequences. In situations where object shadows appear in the background region, a pre-processing median filter is applied to the input image to reduce the shadow effect before major blobs are identified. The second step generates the set of blobs from the detected varied regions in each image of the sequence.
Key words: Median filter, object tracking, background subtraction, rgb2hsv
1. Introduction
Moving object tracking is one of the challenging tasks in computer vision problems such as visual surveillance and human-computer interaction. The act of tracking in video surveillance is becoming an important task, especially in monitoring large-scale environments such as public and security-sensitive areas. In the field of video surveillance, an object of interest is identified and then monitored or tracked. People are typically the object of interest in video surveillance applications, for example when walking through a secluded or security-sensitive area. There is now increasing interest in monitoring people in public areas, e.g. shopping malls. When tracking objects of interest in a wide or public area, additional parameters are required to improve performance, such as the color of clothing [1], the path and velocity of the tracked object [2,3] and modeling set colors for tracked persons [4]. To obtain robust tracking of a target, a number of tracking methods are typically employed to overcome problems such as occlusion [1, 4, 5] and noise in surveillance videos. Tracking is performed over a sequence of video frames and consists of two main stages: isolation of objects from the background in each frame and association
3. Proposed system
The proposed algorithm consists of five stages: image acquisition, RGB to HSV conversion, bitwise XOR operation, preprocessing and blob identification. Figure 1 shows the process flow of the proposed human motion detection algorithm; each of these stages is described in detail below. We extract features in the RGB color space. Two feature variables, chromaticity and brightness distortion, are used to classify the foreground.

[Figure 1 flow: Frame separation -> Current frame / Reference frame -> RGB to HSV -> Round off -> Bit XOR -> Median filter -> Area removal -> Blob identifier -> Object count -> Frame segmentation]
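The Bit XOR and blob identification stages of Figure 1 can be sketched on quantized frames as follows. This is a simplified stand-in: a plain 4-connected flood fill replaces the paper's median-filter and area-removal steps:

```python
def detect_blobs(reference, current):
    """Bitwise-XOR change mask between two quantized HSV frames
    (2-D integer grids), then connected-component counting as a
    stand-in for blob identification."""
    h, w = len(reference), len(reference[0])
    mask = [[1 if reference[y][x] ^ current[y][x] else 0 for x in range(w)]
            for y in range(h)]
    blobs, seen = 0, [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                blobs += 1                    # new blob found; flood-fill it
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and mask[cy][cx] and not seen[cy][cx]:
                        seen[cy][cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return blobs

ref = [[0] * 6 for _ in range(4)]
cur = [[0] * 6 for _ in range(4)]
cur[1][1] = cur[1][2] = 3    # one moving region
cur[3][5] = 7                # a second, separate region
```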
H = θ,          if B ≤ G
H = 360° - θ,   if B > G          (1)

where θ = cos⁻¹ { [(R - G) + (R - B)] / ( 2 [(R - G)² + (R - B)(G - B)]^(1/2) ) }

S = 1 - [3/(R + G + B)] min(R, G, B)          (2)

V = (1/3)(R + G + B)          (3)
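Equations (1)-(3) can be transcribed directly. The clamping of the arccos argument and the guard against a zero denominator for gray pixels are implementation details added here, not part of the paper:

```python
import math

def rgb_to_hsv(r, g, b):
    """Direct transcription of Eqs. (1)-(3): H from the arccos form,
    S as 1 - 3*min/(R+G+B), V as the mean of the channels.
    Inputs are floats in [0, 1]; H is returned in degrees."""
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b)) or 1e-12  # guard for gray
    theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    h = theta if b <= g else 360.0 - theta                  # Eq. (1)
    s = 1.0 - 3.0 * min(r, g, b) / ((r + g + b) or 1e-12)   # Eq. (2)
    v = (r + g + b) / 3.0                                   # Eq. (3)
    return h, s, v
```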
[Table: no. of objects - detection performance per test sequence]

TP | TN | FP | FN | Sensitivity | Specificity
6716 | 407169 | 821 | 821 | 0.8911 | 0.9980
5143 | 409147 | 614 | 614 | 0.8933 | 0.9985
10060 | 401764 | 1365 | 1365 | 0.8805 | 0.9966
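The sensitivity and specificity columns follow the standard definitions and can be reproduced from the TP/TN/FP/FN counts in the table:

```python
def sensitivity_specificity(tp, tn, fp, fn):
    """Standard definitions: sensitivity = TP/(TP+FN),
    specificity = TN/(TN+FP), as reported in the results table."""
    return tp / (tp + fn), tn / (tn + fp)

# First row of the table: reproduces 0.8911 and 0.9980
sens, spec = sensitivity_specificity(6716, 407169, 821, 821)
```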
5. Conclusions

Fig 4 (a) Original image (b) Foreground image (c) Tracked image (d) Ground truth
References
[1] Bird, N. D., et al., "Detection of loitering individuals in public transportation areas", IEEE Transactions on Intelligent Transportation Systems, 2005.
[2] Bodor, R., B. Jackson, and N. P. Papanikolopoulos, "Vision-Based Human Tracking and Activity Recognition", in 11th Mediterranean Conference on Control and Automation, 2003.
[3] Niu, W., et al., "Human activity detection and recognition for video surveillance", in IEEE International Conference on Multimedia and Expo, 2004.
[4] Iocchi, L. and R. C. Bolles, "Integrated Plan-View Tracking and Color-based Person Models for Multiple People Tracking", in International Conference on Image Processing, 2005.
[5] Li, J., C. S. Chua, and Y. K. Ho, "Color based multiple people tracking", in 7th International Conference on Control, Automation, Robotics and Vision, ICARCV 2002.
[6] C. Yunqiang and R. Yong, "Real time object tracking in video sequences", Signals and Communications Technologies, Interactive Video, 2006, Part II, pp. 67-88.
[7] K. Nummiaro, E. Koller-Meier, and L. Van Gool, "A color-based particle filter", in Proc. First International Workshop on Generative-Model-Based Vision, 2002, pp. 53-60.
--- (1)
minimize  (1/2) wᵀw + (1/N) Σᵢ₌₁ᴺ vᵢ   --- (3)

subject to  yᵢ(wᵀφ(xᵢ) + b) ≥ ρ - vᵢ and vᵢ ≥ 0, i = 1,..., N; ρ ≥ 0   --- (4)
--- (4)
Self Organizing Maps: A self organizing map (SOM) is a type of artificial neural network that is trained using unsupervised learning to produce a low dimensional (typically two dimensional), discretized representation of the input space of the training samples, called a map. Self organizing maps differ from other artificial neural networks in that they use a neighborhood function to preserve the topological properties of the input space. This makes SOMs useful for visualizing low dimensional views of high dimensional data, akin to multidimensional scaling. SOMs operate in two modes: training and mapping. Training builds the map using input examples; it is a competitive process, also called vector quantization. Mapping automatically classifies a new input vector. The self organizing map consists of components called nodes or neurons. Associated with each node is a weight vector of the same dimension as the input data vectors and a position in the map space. The usual arrangement of nodes is a regular spacing in a hexagonal or rectangular grid. The self organizing map describes a mapping from a higher dimensional input space to a lower dimensional map space. Algorithm for Kohonen's SOM: (1) assume output nodes are connected in an array; (2) assume that the network is fully connected, i.e. all nodes in the input layer are connected to all nodes in the output layer; (3) use the competitive learning algorithm.
c = arg minᵢ ‖x - wᵢ‖   --- (5)

--- (6)
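The competitive training described above can be sketched for a toy one-dimensional map. The node count, decay schedules and rectangular neighborhood below are illustrative choices, not the paper's configuration:

```python
import random

def train_som(data, n_nodes=5, epochs=100, lr=0.5):
    """Toy 1-D Kohonen SOM on scalar inputs: find the winning node
    (nearest weight, as in Eq. (5)), then pull the winner and its
    grid neighbours towards the input."""
    random.seed(3)
    w = [random.random() for _ in range(n_nodes)]
    for t in range(epochs):
        alpha = lr * (1 - t / epochs)                    # decaying learning rate
        radius = max(1, int(n_nodes / 2 * (1 - t / epochs)))  # shrinking neighborhood
        for x in data:
            c = min(range(n_nodes), key=lambda i: abs(x - w[i]))  # winner
            for i in range(n_nodes):
                if abs(i - c) <= radius:                 # neighborhood function
                    w[i] += alpha * (x - w[i])
    return w

# Two input clusters; the map should spread its weights between them
weights = train_som([0.1] * 20 + [0.9] * 20)
```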
a_k = (1/L) Σ_{m=1}^{L} x_m e^{-jk(2π/L)m}   --- (7)

b_k = (1/L) Σ_{m=1}^{L} y_m e^{-jk(2π/L)m}   --- (8)

r(n) = [ a(n)² + b(n)² ]^(1/2)   --- (9)
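Equations (7)-(9) can be computed directly from a boundary coordinate sequence; for a unit circle, essentially all of the descriptor energy lands in r(1):

```python
import cmath, math

def fourier_descriptors(xs, ys, k_max):
    """Eqs. (7)-(9): coefficients a_k, b_k of the boundary coordinate
    sequences and the magnitude r(n) used as the shape descriptor."""
    L = len(xs)
    a, b, r = {}, {}, {}
    for k in range(k_max + 1):
        a[k] = sum(x * cmath.exp(-1j * k * (2 * math.pi / L) * m)
                   for m, x in enumerate(xs, start=1)) / L
        b[k] = sum(y * cmath.exp(-1j * k * (2 * math.pi / L) * m)
                   for m, y in enumerate(ys, start=1)) / L
        r[k] = math.sqrt(abs(a[k]) ** 2 + abs(b[k]) ** 2)
    return a, b, r

# Boundary of a unit circle sampled at L points
L = 64
xs = [math.cos(2 * math.pi * m / L) for m in range(1, L + 1)]
ys = [math.sin(2 * math.pi * m / L) for m in range(1, L + 1)]
a, b, r = fourier_descriptors(xs, ys, 3)
```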
y_i = 1 / (1 + exp(- Σ_j y_j w_ij))   --- (11)
[Bar chart (y-axis 0-120) comparing the recognition performance of SVM, SOM, FNN, RBN and RCS; figure lost in extraction]
w_ij(n+1) = w_ij(n) + η δ_j y_i + a [w_ij(n) - w_ij(n-1)]   --- (12)

where (n+1), (n) and (n-1) index the next, present and previous iterations, respectively. The parameter a, between 0 and 1, is a learning-rate-like term similar to the step size in gradient search algorithms; it determines the effect of past weight changes on the current direction of movement in weight space. δ_j is an error term for node j.
δ_j = (d_j - y_j) y_j (1 - y_j)   --- (13)

If node j is an internal (hidden) node, then:

δ_j = y_j (1 - y_j) Σ_k δ_k w_jk   --- (14)
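Equations (13) and (14) translate directly into code (the node values and weights below are illustrative):

```python
def output_delta(d, y):
    """Eq. (13): error term for an output node with sigmoid activation."""
    return (d - y) * y * (1 - y)

def hidden_delta(y, downstream):
    """Eq. (14): error term for a hidden node; `downstream` is a list of
    (delta_k, w_jk) pairs for the nodes it feeds."""
    return y * (1 - y) * sum(dk * w for dk, w in downstream)

d_out = output_delta(1.0, 0.5)             # (1 - 0.5) * 0.5 * 0.5 = 0.125
d_hid = hidden_delta(0.5, [(d_out, 0.8)])  # 0.25 * (0.125 * 0.8) = 0.025
```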
Fonts | Error | Efficiency
1 | 0.001 | 91%
more | 0.2 | 88%
1 | 0.02 | 91%
more | 0.1 | 88%
1 | 0.06 | 90%
more | 0.2 | 87%
1 | 0.04 | 88%
more | 0.1 | 90%
1 | 0 | 97%
more | 0.0001 | 95%
ACKNOWLEDGEMENT
The researchers would like to thank Ms. Yasodha for her assistance in the data collection and manuscript preparation for this article. Ms. Yasodha is currently a student in the Department of Computer Science at the Anna University of Technology, Chennai.
REFERENCES
[12]
[13]
[14]
[15]
[16]
AUTHORS PROFILE
C. Sureshkumar received the M.E. degree in Computer Science and Engineering from K.S.R. College of Technology, Thiruchengode, Tamilnadu, India in 2006. He is pursuing the Ph.D. degree at Anna University Coimbatore and is about to submit his thesis on handwritten Tamil character recognition using neural networks. He is currently working as HOD and Professor in the Department of Information Technology at JKKN College of Engineering, Tamil Nadu, India. His current research interests include document analysis, optical character recognition, pattern recognition and network security. He is a life member of ISTE.

Dr. T. Ravichandran received a Ph.D. in Computer Science and Engineering in 2007 from Periyar University, Tamilnadu, India. He is working as Principal at Hindustan Institute of Technology, Coimbatore, Tamilnadu, India, specialised in the field of Computer Science. He has published many papers on computer vision applied to automation, motion analysis, image matching, image classification and view-based object recognition, as well as management-oriented empirical and conceptual papers, in leading journals and magazines. His present research focuses on statistical learning and its application to computer vision, image understanding and problem recognition.
A. Halim ZAIM
M. Ali AYDIN
Ö. Can TURNA
Abstract
Nowadays, fiber optic networks, which make transmission possible at high capacities, are becoming widespread. Network problems are partially reduced by the capabilities these networks provide, such as telephone, wide area network, Internet and video conferencing on a single fiber line. Within optical networks, optical burst switching, a new technology, stands out for its benefits. Optical burst switching (OBS) is a promising solution for all-optical networks. In this paper, a program is developed which simulates the signaling channel of an OBS switch, for signaling messages that use signaling protocols while travelling from a source node to a destination node through intermediate OBS switches. In this study, models for the inter-arrival time of the signaling messages and the processing time in the service are used, and these models are compared with other well-known models (an M/M/1/K queuing model and a model using self-similar traffic as the arrival process).
Keywords: Optical Networks, Optical Burst Switching, Queuing Models, OBS Switches.
1. Introduction
Optical Burst Switches (OBSs) are designed to meet increasing bandwidth demands [1,2]. This increase in bandwidth demand has led to the development of optical fibers, which give the opportunity to carry high bit-rates. Since even the high bit-rates obtained with optical fibers cannot provide enough bandwidth for future network requirements, dense wavelength division multiplexing (dWDM) became the solution for providing higher bandwidth.
OBS seems to be the answer to the increasing bandwidth demand problem. OBS is designed as an intermediate solution between optical circuit switching and packet switching and looks like the best candidate to carry IP traffic over dWDM networks.
In this study, we concentrate on the signaling and control plane of the OBS switch; the data plane is out of scope.
2. Problem Definition
2. Problem Definition
In this paper, we analyze an OBS switch that contains input and output buffers. The messages traversing this OBS switch are signaling and control messages; therefore, they undergo OEO conversions at each switch along their path. The rate of the data channel is assumed to be 2.4 Gbps, while the rate of the signaling channel is 155 Mbps. The signaling message size is 1 Kb, and the burst size is variable, taking values of 32 Kb, 64 Kb and 128 Kb. The aim of this study is to calculate the dropping probabilities at the input and output buffers with fixed buffer sizes. Once we model the system with this approach, we can also analyze the input and output buffer sizes required for fixed dropping probabilities.
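For the M/M/1/K baseline model mentioned in the abstract, the dropping probability has a closed form that can serve as a sanity check on the simulation. This is the standard queueing result, with ρ the offered load and K the system capacity, not a formula taken from the paper:

```python
def mm1k_loss(rho, K):
    """Blocking (dropping) probability of an M/M/1/K queue:
    P_K = (1 - rho) * rho**K / (1 - rho**(K+1)) for rho != 1,
    and 1/(K+1) when rho == 1."""
    if rho == 1.0:
        return 1.0 / (K + 1)
    return (1.0 - rho) * rho ** K / (1.0 - rho ** (K + 1))
```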
[Eq. (1), relating the data-channel rate (RD), signaling message size (SS), burst size (SB) and signaling-channel rate (RS) per unit time, lost in extraction]
4. Comparing Results
In this section, we give three graphs for each model, one for each burst size. In each graph, the output buffer size varies from 10 to 700 while the input buffer sizes are fixed.
Fig. 4.1 Packet dropping probability in input and output buffers for
load=0.4 (M/M/1/K)
Fig. 4.4 Packet dropping probability in input and output buffers for
load=0.4 (Self-Similar)
Fig. 4.2 Packet dropping probability in input and output buffers for
load=0.24 (M/M/1/K)
Fig. 4.5 Packet dropping probability in input and output buffers for
load=0.24 (Self-Similar)
Fig. 4.3 Packet dropping probability in input and output buffers for
load=0.12 (M/M/1/K)
Fig. 4.6 Packet dropping probability in input and output buffers for
load=0.12 (Self-Similar)
Fig. 4.7 Packet dropping probability in input and output buffers for load=0.4 (MMPP/C2/1/K)
Fig. 4.8 Packet dropping probability in input and output buffers for load=0.24 (MMPP/C2/1/K)
Fig. 4.9 Packet dropping probability in input and output buffers for load=0.12 (MMPP/C2/1/K)
5. Conclusions
Fig.5.1 Comparing buffer sizes for each model under the 0.3 dropping
probability
References
[1] I. Baldine, G.N. Rouskas, H.G. Perros, and D. Stevenson, Jumpstart: a just-in-time signaling architecture for WDM burst-switched networks, IEEE Communications Magazine, 2002, Volume: 40, Issue: 2, pp. 82-89.
[2] A.H. Zaim, I. Baldine, M. Cassada, G. Rouskas, H. Perros,
and D. Stevenson, JumpStart just-in-time signaling protocol: a
formal description using extended finite state machines, Optical
Engineering, 2003, Volume:42 pp. 568-585.
[3] L. XU, H.G. Perros, and G. Rouskas, Techniques for Optical
Packet Switching and Optical Burst Switching, IEEE
Communications Magazine, 2001, pp.136-142.
[4] J. Wei, and R. Mcfarland, Just-in-time Signalling for WDM
Optical Burst Switching Networks, Journal of Lightwave Tech.,
2000, Vol. 18, No. 12, pp. 2019-2037.
[5] C. Qiao, and M. Yoo, Optical Burst Switching (OBS) A
New Paradigm for an Optical Internet, Journal of High Speed
Nets., 1999, Vol. 8, No. 1, pp. 69-84.
[6] I. Iliadis, and W.E. Denzel, Performance of Packet Switches
With Input and Output Queueing, in Proc. ICC '90, Atlanta, GA,
Apr. 1990, pp. 747-753.
Abstract
Digital watermarking techniques have been widely researched to solve important issues in the digital world, such as copyright protection, copy protection and content authentication. Several robust watermarking schemes based on vector quantization (VQ) have been presented. In this paper, we present a new digital image watermarking method for color images based on a SOFM vector quantizer. This method utilizes the codebook partition technique, in which the watermark bit is embedded into the selected VQ encoded block. The main feature of this scheme is that the watermark exists both in the VQ compressed image and in the reconstructed image. Watermark extraction can be performed without the original image. Since the watermark is hidden inside the compressed image, considerable transmission time and storage space can be saved when the compressed data are transmitted over the Internet. Simulation results demonstrate that the proposed method is robust against various image processing operations without sacrificing compression performance or computational speed.
Keywords: self organizing feature map, digital watermarking, vector quantization, codebook partition, color image compression.
1. Introduction
With the rapid development of multimedia and the fast growth of the Internet, the need for copyright protection, ownership verification, and related safeguards for digital data is receiving more and more attention. Among the solutions to these issues, digital watermarking techniques are currently the most popular [1][5]. Digital watermarking techniques provide effective image authentication to protect intellectual property rights. The digital watermark represents the copyright information of the protected image.
Digital watermarking is the process of embedding secret information (i.e., a watermark) into digital data (audio, video or digital images), which enables one to establish ownership or identify a buyer. A digital watermark can be a logo, a label, or a random sequence. In general, there are two types of digital watermarks: visible and invisible.
2. SOFM based VQ
The codebook plays an important role in every compression technique. The codebook is used in both the compression and decompression stages, and is generated by means of SOFM based VQ [14], [15]. A vector quantizer Q of dimension k and size S can be defined as a mapping from data vectors in k-dimensional Euclidean space R^k into a finite subset C of R^k. Thus
Q : R^k → C
where C = {y_1, y_2, ..., y_S} is the set of S reconstruction vectors, called a codebook of size S, and each y_i ∈ C is called a code vector or codeword. For each y_i, i ∈ I ≡ {1, 2, ..., S} is called the index of the code vector, and I is the index set.
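The mapping Q: R^k → C amounts to a nearest-codeword search at encoding time. A minimal sketch of this lookup (the SOFM training that produces the codebook is not shown; the helper name and sample values are ours):

```python
import numpy as np

def vq_encode(blocks: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map each k-dimensional vector to the index of its nearest codeword
    in squared-Euclidean distance, i.e. the quantizer Q: R^k -> C."""
    # distance between every input vector and every codeword
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
idx = vq_encode(np.array([[0.1, 0.2], [0.9, 0.8]]), codebook)
```

The returned indices are what the encoder transmits in place of the raw image blocks.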
3. Codebook Partition
Our watermarking technique depends mainly on the codebook partition to complete the watermark embedding process. Let the initial codebook trained by SOFM be CB, such that each codeword CW in CB contains a*a elements. After the codebook partition process, each division contains two codewords, numbered 0 and 1, respectively: codeword 0 is used to embed watermark bit 0, and codeword 1 corresponds to watermark bit 1. The two codewords most similar to each other are classified into the same division, with their index values recorded in the codebook. The degree of similarity between one codeword and another is determined by calculating the mean square error (MSE). The MSE of two codewords CWx and CWy is defined by Eq. (1):
MSE(CWx, CWy) = (1/(a*a)) Σ_{i=1}^{a*a} (CWx_i − CWy_i)^2   (1)
For example, if a codebook contains 256 codewords, then after performing the codebook partition there are 128 divisions in total, each with two codewords.
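The pairing of mutually similar codewords can be sketched as a greedy MSE pairing. The paper does not spell out the exact pairing procedure, so treat this as one plausible reading rather than the authors' algorithm:

```python
import numpy as np

def partition_codebook(codebook: np.ndarray):
    """Greedily pair each codeword with its closest unpaired codeword in MSE.

    Returns a list of (index_of_codeword_0, index_of_codeword_1) divisions;
    codeword 0 of a division embeds watermark bit 0, codeword 1 embeds bit 1.
    """
    remaining = list(range(len(codebook)))
    divisions = []
    while remaining:
        i = remaining.pop(0)
        # unpaired codeword with the smallest MSE relative to codeword i
        j = min(remaining, key=lambda r: np.mean((codebook[i] - codebook[r]) ** 2))
        remaining.remove(j)
        divisions.append((i, j))
    return divisions

divs = partition_codebook(np.arange(8, dtype=float).reshape(4, 2))
```

For a 256-codeword codebook this yields the 128 two-codeword divisions described above.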
4. Watermark Embedding
After the codebook partition, we can start the watermark
embedding process, which contains the following steps:
Step 1: Segment the base image (BI) into non-overlapping
blocks. For each block, search the CB generated by SOFM
algorithm for the closest codeword and record the index
value of this codeword.
Step 2: Use K as the seed of the pseudorandom number
generator (PRNG) and randomly pick out M*M (size of
watermark image (WI)) indices generated in Step 1. Map
each watermark pixel to a chosen index in sequence.
Step 3: In the classified codebook divisions, for every index value selected in the previous step, search for the corresponding division and codeword number and record them. When doing the recording, use S bits to keep track of the division number and one bit to write down the codeword number, where S = log2(n/2) and n is the codebook size. For example, if a codebook has 256 codewords and every division has two codewords, we need 7 bits (i.e., log2(256/2) = log2 128 = 7) to locate the division and just one bit to indicate which of the two is the codeword. That is, in this example, S is 7. To be more specific, the index for the codeword numbered 1 in the first division is 00000001, where the first part 0000000 locates the division and the last bit 1 indicates that the codeword is numbered 1.
Step 4: For each recorded item, replace its corresponding
number bit with the pixel value of the watermark. For
example, suppose the watermark bit 1 is embedded in the
recorded item 00000010, the embedding result will be
00000011 with the last bit to be replaced by the watermark
bit.
Step 5: Find the index values corresponding to the new recorded items, and then compress the image with all of the indices. Let the compressed watermarked image be BI′.
The size of the digital watermark that can be embedded in
base image depends on the size of base image and the size
of the divided block.
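The core of Steps 3-4 is a single bit replacement on the (S+1)-bit recorded item: the S-bit division number is kept and the lowest bit (the codeword number) is overwritten with the watermark bit. A minimal sketch:

```python
def embed_bit(index: int, watermark_bit: int) -> int:
    """Replace the codeword-number bit (the lowest bit) of an (S+1)-bit
    recorded item with the watermark bit, keeping the division number intact."""
    return (index & ~1) | (watermark_bit & 1)

# The paper's example: recorded item 00000010 with watermark bit 1 -> 00000011
assert embed_bit(0b00000010, 1) == 0b00000011
```

Extraction is the inverse: the receiver simply reads the lowest bit of each selected recorded item.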
5. Watermark Retrieval
To perform the watermark retrieval procedure, both the
sender and the receiver possess the same codebook for the
transmitted watermarked image. The receiver can recover
the compressed image with the indices and retrieve the
embedded watermark using the codebook. Watermark
retrieval is done through the following steps:
Step 1: If the watermark is extracted from the VQ
decoded image go to Step 2, otherwise go to Step 3.
Step 2: Segment the decompressed watermarked image
into blocks of the same size with a*a pixels. With the help
of the secret key K to perform PRNG(K), select the blocks
where the watermark pixels were embedded and pick out
6. Performance Analysis
In this paper, the digital watermarking technique completes the watermark embedding process in the color image through the R channel during VQ encoding. The embedded watermark nevertheless still exists in the VQ decoded image. Subjectively, human eyes can evaluate image quality; however, such judgment is easily influenced by factors like the expertise of the viewers, the experimental environment, and so on. To evaluate image fidelity objectively, several well-known measures are adopted in this paper, described as follows.
The quality of the watermarked compressed image relative to the original base image is calculated through the peak signal-to-noise ratio (PSNR), defined as Eq. (2):
PSNR = 10 · log10(255^2 / MSE)   (2)
where MSE is the mean-square error between the original image and the distorted one. Basically, the higher the PSNR, the less distortion there is between the host image and the distorted one.
Normalized correlation (NC) is used to judge the similarity between the original watermark WI and the extracted watermark WI′. The NC is defined as Eq. (3):
NC = (Σ_i WI_i · WI′_i) / (Σ_i WI_i^2)   (3)
In principle, if the NC value is closer to 1, the extracted
watermark is getting more similar to the embedded one.
The bit-correct rate (BCR) is computed as Eq. (4):
BCR = (number of correctly extracted bits / m) × 100%   (4)
The mean absolute error (MAE) is computed as Eq. (5):
MAE = (1/m) Σ_{i=1}^{m} |WI_i − WI′_i|   (5)
where m denotes the length of the signature. Note that the quantitative index MAE is exploited to measure the similarity between two binary images.
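The four measures can be sketched directly from their definitions; the equations were partially lost in the source, so these follow the standard forms (PSNR against a 255-level image, NC and MAE over binary watermarks), which match the surrounding prose:

```python
import numpy as np

def psnr(original, distorted):
    """Peak signal-to-noise ratio in dB for 8-bit images (Eq. 2)."""
    mse = np.mean((original.astype(float) - distorted.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def nc(w, w_ext):
    """Normalized correlation of binary watermarks (Eq. 3)."""
    return (w * w_ext).sum() / (w * w).sum()

def bcr(w, w_ext):
    """Bit-correct rate in percent (Eq. 4)."""
    return 100.0 * (w == w_ext).mean()

def mae(w, w_ext):
    """Mean absolute error between two binary images (Eq. 5)."""
    return np.abs(w.astype(float) - w_ext.astype(float)).mean()

w = np.array([1, 0, 1, 1])
w_ext = np.array([1, 0, 0, 1])
scores = (nc(w, w_ext), bcr(w, w_ext), mae(w, w_ext))
```

These are the quantities tabulated in Section 7 for the various attacks.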
7. Experimental Results
Table: PSNR (dB), CR (bpp) and encoding time (min) for the test images, SOFM versus LBG.
Image (256x256)   SOFM: PSNR  CR     Time   LBG: PSNR  CR     Time
Rice              33.73       4.462  14.88  31.92      5.544  22.62
Owl               26.59       4.482  14.45  25.95      5.806  22.41
Mattface          32.22       4.448  13.89  31.61      5.731  24.72
Bird              34.22       4.451  14.01  32.96      5.556  22.63
Peppers           32.50       4.462  14.06  31.78      5.695  26.04
Fig 2. Performance of SOFM over LBG for the images Rice, Mattface and Peppers: (a) PSNR (dB), (b) CR (bpp), (c)-(d) encoding time (min).
Attack Method        NC      MAE     BCR (%)
Wiener Filter (3x3)  0.9474  0.1411  85.89
Wiener Filter (5x5)  0.8026  0.3384  66.16
Wiener Filter (7x7)  0.7541  0.4197  58.03
Median Filter (3x3)  0.9281  0.2280  77.19
Median Filter (5x5)  0.7995  0.3694  63.06
Median Filter (7x7)  0.7352  0.4436  55.64
[Figures: NC versus mask size (3x3, 5x5, 7x7) for the Wiener and median filters, and PSNR (dB) versus codebook size (128, 256, 512, 1024).]
Attack Method                                   NC      MAE     BCR (%)
Gaussian Filter                                 0.9934  0.0127  98.7305
Sharpening                                      0.8347  0.1729  82.7148
Blurring                                        0.8036  0.4043  59.5703
Salt & pepper noise                             0.9413  0.0508  94.9219
Gaussian noise                                  0.9673  0.0325  96.7529
25 rows & columns cropped in all sides          0.8235  0.0935  90.6494
Interception and deleted part filled with gray  —       0.0198  98.0225
Enhancement                                     0.7245  0.3406  65.9424
Quantization level  NC      BCR (%)  PSNR (dB, with JPEG attack)
128                 1       100      31.7808
64                  1       96.0449  31.7164
32                  0.9291  83.3740  31.3043
8                   0.9107  64.4287  30.1549
4                   0.8061  51.3184  28.0327
8. Conclusion
A watermarking scheme based on SOFM for color images has been proposed in this paper to enhance the watermarking ability of the VQ system. The simulation results illustrate that our technique offers better imperceptibility, stronger robustness, faster encoding time, and easy implementation. This technique embeds the watermark in an image that has already been compressed, which saves time and space when the image is transmitted over a network. Experimental results demonstrate that the proposed method is suitable for intellectual property protection applications on the Internet.
References
[1] A. Bros and I. Pitas, Image watermarking using DCT domain constraints, IEEE Int. Conf. Image Processing (ICIP'96), Vol. 3, pp. 231-234, 1996.
[2] R. Barnett, Digital watermarking: applications, techniques and challenges, IEE Electronics & Communication Engineering Journal, Vol. 11, No. 4, pp. 173-183, 1999.
J. Anitha was born in Nagercoil, Tamilnadu, India, on July 14, 1983. She obtained her B.E degree in Information Technology from Manonmaniam Sundaranar University, Tirunelveli, Tamilnadu, in 2004. She received her Masters degree in Computer Science and Engineering from Manonmaniam Sundaranar University, Tirunelveli, Tamilnadu, in 2006. Currently she is pursuing a Ph.D in the area of image processing. Her areas of interest are compression, neural networks and watermarking. She is a life member of the Computer Society of India.
S. Immanuel Alex Pandian was born in Tirunelveli, Tamilnadu, India, on July 23, 1977. He obtained his B.E degree in Electronics and Communication Engineering from Madurai Kamaraj University, Madurai, Tamilnadu, in 1998. He received his Masters degree in Applied Electronics from Anna University, Chennai, Tamilnadu, in 2005. Currently he is pursuing a Ph.D in the area of video processing. His areas of interest are image processing, computer communication and neural networks.
2 Faculty of Management Sciences, International Islamic University, Islamabad, PAKISTAN
Abstract
A longitudinal study has been conducted to explore the challenges confronting E-Government implementation in public sector organizations of Pakistan. The tremendous advancement of Information and Communication Technologies (ICT) has strongly influenced work processes and brought change into the administrative setup of government bureaucracy. Basic information about government organizations, but not all of their relevant business processes, is made available online to the stakeholders concerned. However, implementation of E-Government is a big challenge, as it requires tremendous change in the existing work processes. The case study encompasses challenges in areas such as technological infrastructure, organizational aspects, and collaboration with other organizations. The study was conducted in a public sector organization in Pakistan during the implementation of E-Government; a second study of the same organization focused on how the challenges were overcome by introducing various strategies, and to what extent those strategies proved fruitful. The findings of the study show that the implementation of E-Government is quite difficult where basic ICT infrastructure and financial resources are not available in organizations. It is recommended that E-Government cannot be managed properly unless these challenges are addressed and managed well.
1. Introduction
E-Government has emerged as a revolutionary mechanism for the management of public sector organizations on a global basis. High-level services, accelerated processing, increased transparency and low-cost output are the mega products of E-Government. These objectives can be met through the adoption of Information and Communication Technologies in various functional units. Wimmer, Codagnone, & Ma [22]
2. Literature Review
The term E-Government refers to the use of information and communication technology (ICT) to enhance the range and quality of public services to citizens and business while making government more efficient, accountable, and transparent [19]. E-Government means making services available to citizens electronically. It may provide an opportunity for citizens to interact with the government for the services they require. ICT plays an important role in the easy provision of services by the government to citizens. The government should treat its citizens as customers and provide services through the Internet and networks. E-Government is concerned not only with providing public services but also with delivering value-added information to citizens. It also enables government organizations to work together efficiently.
Internet use and the benefits gained by advanced countries are pressuring the governments of developing countries to bring their information online. This may require governments to transform themselves and start using the modern practices adopted by the developed countries [20]. One author describes the challenge thus: many governments faced the challenge of transformation and the need to modernize administrative practices and management systems, as cited in [16].
3. Research Methodology
To conduct this research, the case study method has been employed to explore the challenges confronting E-Government implementation in the public sector of Pakistan. The time span of this longitudinal study is three years, 2007-2010. During this period, we studied how the organization overcame these challenges. The relevant authorities were contacted to gather information on the challenges faced by the organization.
4.5 Collaboration
5.5 Collaboration
Collaboration is necessary for government to achieve full integration of e-services across administrative boundaries. We are lacking in collaboration with other organizations to share information with each other; we have discussed the matter with other organizations but they are not taking interest. I think collaboration is possible if government took initiative to establish a collaborating governing body. (Manager Application)
Collaboration helps the organizations to share infrastructure, manpower, resources, and knowledge with each other.
6.5 Collaboration
Integration is fundamental to the implementation of E-Government. Collaboration among different organizations is compulsory for government so that resources can be shared. The findings of the first study show that no collaboration exists among the organizations. The organization overcame this challenge through discussion with other organizations to share knowledge, resources and infrastructure. To reap the full benefits of E-Government, it is recommended that a
Conclusion
Organizations are facing pressure to improve the quality of services to citizens. Quality of services can be improved through the successful implementation of E-Government. This implementation is a challenge for organizations and could be addressed through external funding at all stages of the project, i.e. before, during and after implementation. This conclusion can be summarized in the following points:
- Organizations are facing pressure to improve the quality of services to citizens.
- Error-free services may increase the confidence of citizens.
- The level of services could be enhanced through external funding.
- Financial and technical backup would always be required at all stages of project implementation.
- Information and resource sharing is mandatory to achieve seamless integration.
References
[1] Allen, A. B., Juillet, L., Paquet, G., & Roy, J. (2001). E-Governance and Government Online in Canada: Partnerships, People and Prospects. Government Information Quarterly, 93-104.
[2] Baoling, L. (2005). On the barriers to the development of E-Government in China. 7th International Conference on Electronic Commerce. ACM.
Nasim Qaisar
Nasim Qaisar is a Lecturer in the Department of Business Administration, Federal Urdu University of Arts, Science & Technology, Islamabad, PAKISTAN. His areas of interest are Business Intelligence and Information Systems. He holds a masters in Information Technology from International Islamic University, Islamabad, PAKISTAN.
Abstract
This paper presents strategies for optimizing planting areas. The three strategies considered for preparing field lining, namely 1) the 60° line-direction, 2) selecting the best line-direction for a single block, and 3) selecting the best line-directions for many separate blocks, may lead to different numbers of trees. Thus, an application named Lining-Layout Planning by Intelligent Computerized System (LLP-ICS) is introduced to choose the best strategy. Because there are many possible solutions with ambiguous results, a novel Genetic Algorithm (GA) for lining-layout was applied to suggest the optimal solution intelligently, focusing on two approaches: 1) assigning determined random values to the genes of the chromosome, and 2) avoiding repetition of the same optimal block solution. The aim of this study was to suggest the best strategy among the various area coordinates tested. In addition, the capability of the application with the novel GA was also examined. The results indicate that LLP-ICS produces a consistent solution with feasible results, and that the smaller number of repetition processes reduces the computation time.
Keywords: Optimization, Genetic algorithm, Lining-layout, Optimal solution
1. Introduction
In implementing the lining layout planning (LLP) strategy for planting area optimization discussed in Section 2, the four criteria in land use planning described by Steward [2], namely involvement of the stakeholders, complexity of the problem, use of a geographical information system and use of an interactive support system, are considered as part of the challenge. In LLP optimization, the decisions of the managerial department depend on the demands for tree quality and the ease of managing the field; consequently the optimal solution is not simply the one with the maximum number of trees. However, a huge number of possible solutions are generated for both determining the location of the blocks and selecting the
2. Optimization Strategy
A strategy called Lining Layout Planning (LLP) attempts to optimize land use by dividing an area into blocks and then assigning a line-direction within each determined block. Figure 1 shows that the two main tasks in the LLP strategy are determining the block division, followed by identifying the best line-direction for every determined block. The unpredictable tree density produced by different line-directions, various area coordinates and a variety of shape coordinates requires a computerized system to find the best lining layout. Preliminary observation revealed that the current practice (CP) of the 60° line-direction in preparing field lining does not necessarily produce the optimal number of trees. The optimization strategy in Figure 1 shows that an area can consist of one block or many blocks. The block division requires a GA to find the optimal combination of blocks with no unused area, as discussed in Section 3.1. The divisions with both one and many blocks will be assigned the best line-direction; the calculation process is initially
[Figure: GA process flow: Initialize Population, Selection, Crossover, Mutation]
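The GA steps shown in the flow above (initialize population, selection, crossover, mutation) can be illustrated with a generic sketch. The fitness function, rates and bit-string encoding here are placeholders, not the paper's actual block/line-direction chromosome:

```python
import random

def genetic_search(fitness, genome_len, pop_size=30, generations=200,
                   cx_rate=0.8, mut_rate=0.05, alleles=(0, 1)):
    """Generic GA loop: initialize, then repeat selection, crossover, mutation."""
    pop = [[random.choice(alleles) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # binary tournament selection
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick()[:], pick()[:]
            if random.random() < cx_rate:  # one-point crossover
                cut = random.randrange(1, genome_len)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                for i in range(genome_len):  # allele-reset mutation
                    if random.random() < mut_rate:
                        child[i] = random.choice(alleles)
                nxt.append(child)
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

# Toy objective: maximize the number of 1-bits in the genome.
best = genetic_search(lambda g: sum(g), genome_len=12)
```

In the paper's setting, the fitness would instead score a candidate block division by its resulting tree count, with the control mechanism rejecting repeated optimal block solutions.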
[Figure: Control Mechanism]
Table: Number of trees obtained by the current practice (CP) and by line planning (LP) for various areas.
Area coordinate (x4, y4)  Scale (m)  Area size (Hect)  Number of trees (CP)  Number of trees (LP)
4, 4                      25         1                 133                   138
4, 4                      50         4                 550                   567
7, 7                      50         12.25             1695                  1747
7, 8                      30         5.04              690                   718
7, 8                      100        56                7876                  8054
Fig. 9. Line-direction with Tree Number Interface
[Table: detailed experiment results listing shape coordinates, block coordinates, tree numbers per block, total number of trees (LLP), computation time (sec), and repetition/iteration numbers for each run.]
[Table continued for the remaining experiments on larger areas.]
324
bS=1
only 9,203 (
bS
bS=N
bS=1
trLD ) iterations.
bS
Optimal
Block
Solution
(bS)
Shape
Coordinates
(Genes)
341113
Tree
Number and
LineDirection
Iteration
(trLD)
2842
333114
3186
331341
3175
Repetition of
Same Optimal
Solution
( OS)
1
7. Conclusion
The novel GA is capable of reducing the computational time needed to analyze the optimal lining-layout strategy. This novel GA can serve as a basic algorithmic approach for optimizing planting areas in other industries. The land-optimization decision in the lining-layout planning strategy is not always easily reconciled with users' perceptions. For this reason, the LLP-ICS developed here for implementing lining-layout optimization makes extensive use of coordinates representing an area as an interface between the computer model and the users. This study therefore makes the following contributions:
1. The LLP-ICS is the first attempt at application development to facilitate tree-planting planners. The LLP-ICS assists planners in deciding the best implementation quickly.
2. It promotes a new optimization strategy focusing on block division and line-direction.
3. The proposed strategy gives an indicator for improving the number of trees.
This study covers square and rectangular areas, but the task becomes more complicated for trapezoidal areas, in which a variety of area coordinates must be determined early in the process. Thus, a further study should be conducted incorporating mathematical formulation and area coordinate representation, so that the use of various types of land area will make a more significant contribution.
Acknowledgments
This research is registered under the Fundamental Research Grant Scheme (FRGS) with grant number FRGS/03-04-10-873FR/F1 and is fully funded by the Ministry of Higher Education (MOHE), Malaysia. The authors express their great appreciation and thanks to MOHE, Malaysia and Universiti Teknologi MARA (UiTM), Malaysia for sponsoring one of the authors in PhD research. The authors also wish to thank Universiti Putra Malaysia (UPM), which provided the facilities and an appropriate environment for carrying out this research. Last but not least, the authors would like to express their appreciation to the Federal Land Development Authority (FELDA), Malaysia and the University Agriculture Park Department of UPM for their very good support and cooperation in acquiring information.
References
Abstract
Neural networks have been used in numerous meteorological applications, including weather forecasting. They are found to be more powerful than traditional expert systems in the classification of meteorological patterns: they perform pattern classification by learning from examples without explicitly stated rules, and, being nonlinear, they solve complex problems better than linear techniques. A weather forecasting problem, rainfall estimation, has been studied using different neural network architectures, namely an Electronic Neural Network (ENN) model and an opto-electronic neural network model. The percentages of correct rainfall estimates of the neural network models and of the meteorological experts are compared, and the results of the ENN are compared with those of the opto-electronic neural network.
Keywords - Back propagation, convergence, neural network, opto-electronic neural network, rainfall estimation.
1. Introduction
Rain is one of nature's greatest gifts. It is a major concern to identify any trends for rainfall to deviate from its periodicity, which would disrupt the economy of a country. In the present study, rainfall is estimated based on temperature, air pressure, humidity, cloudiness, precipitation, wind direction, wind speed, etc., consolidated from meteorological experts and documents [1,2].
3. Methodology
[Equations (3.1)-(3.3), (3.7) and (3.8) were not recovered; the labeled quantities are:]
Sigmoid output: (3.4)
Current output: (3.5)
Light output: Loutputj = j × Ioutputj (3.6)
[Figure: percentage of correct rainfall estimation (0-100%) by Expert I, Expert II, the electronic neural network and the opto-electronic neural network for the categories No rain, Moderate rain and Heavy rain.]
Category       No. of patterns  Expert I  Expert II  Electronic Neural Network  Opto-electronic Neural Network
No rain        12               83        75         83                         75
Moderate rain  12               75        58         58                         76
Heavy rain     12               75        92         92                         92
Overall                         77        75         78                         81
[Figure: bar chart comparing the ANN and OPTO models on a 0-5000 scale.]
5. Conclusion
Weather forecasting has been experimented with
successfully using the ENN and opto-electronic
neural network models. The accuracy of the
results obtained with the two models is
compared with that of two meteorological
experts, and the opto-electronic neural
network is reported to perform better than the
ENN. This study should encourage researchers
to use neural network models for weather
forecasting. Efforts are in progress to reduce
the time consumed by the back-propagation
training algorithm of neural networks. As new
models emerge and their sophistication
increases, optical implementations of these
models are expected to continue to show
advantages over other approaches. Dynamic
inclusion of new factors may be incorporated
to improve adaptability.
References
[1] W. Dabberdt, Weather for Outdoorsmen: A Complete Guide to Understanding and Predicting Weather in Mountains and Valleys, on the Water, and in the Woods, Scribner, New York, 1981.
[2] C. Ronald Kahn, Gordan C. Weir Joslins, Weather Forecasting, A Waverly Company.
[3] S. Prasath and R. K. Gupta, "Weather Forecasting using Artificial Neural Network", in Proc. National Conference on Neural Networks and Fuzzy Systems, Anna University, Madras, 1995, pp. 81-88.
[4] K. V. Prema, "A Multi Layer Neural Network Classifier", Journal of Computer Society of India, Vol. 35, No. 1, Jan-Mar 2005.
[5] Philip D. Wasserman, Neural Computing: Theory and Practice, Van Nostrand Reinhold, New York.
[6] B. Yegnanarayana, Artificial Neural Networks, Prentice Hall Inc.
Bibliography
A. C. Subhajini received her M.S. degree in
Computer Science and Information Technology
from Bharathidasan University, Trichy, India,
and her M.Phil. from Azhakappa University,
Madurai, India. She is now working as a
Lecturer in the Department of Software
Engineering, NIU, Tamilnadu, India. She has
7.5 years of teaching experience and is
pursuing her doctorate at Mother Theresa
Womens University, India. Her research
interest is in neural networks.

V. Joseph Raj is an Associate Professor in the
Department of Computer Engineering, European
University of Lefke, TRNC, Turkey. He obtained
his postgraduate degree from Anna University,
Madras, India, and his PhD in Computer Science
from Manonmaniam Sundaranar University,
Tirunelveli, India. He is guiding PhD scholars
of various Indian universities. His research
interests include neural networks, image
processing, networks, and biometrics. He has a
great flair for teaching and has twenty years
of teaching experience. He has published 25
research papers at international and national
levels.
Abstract
In this paper we propose the design of a single inset-fed printed
antenna based on a simple modified transmission line model. The
developed model is simple and accurate, and takes into account all
antenna characteristics and the feed system. To test the
proposed model, the obtained results are compared to those
obtained by the moments method (Agilent Momentum software).
Using this transmission line approach, the resonant frequency,
input impedance, and return loss can be determined simultaneously.
The paper reports several simulation results that confirm the
validity of the developed model. The obtained results are then
presented and discussed.
Keywords: Printed inset fed antenna, transmission line model,
moments method (Momentum).
1. Introduction
Microstrip antennas received considerable attention in the
1970s, although the first designs and theoretical models
appeared in the 1950s. They are suitable for many mobile
applications: handheld devices, aircraft, satellite, missile,
etc. They have been extensively investigated in the
literature [1-5].
Conventional microstrip antennas in general have a
conducting patch printed on a grounded microwave
substrate, and have the attractive features of low profile,
light weight, easy fabrication, and conformability to
mounting hosts [10]. Traditional feeding techniques
include the use of directly, electromagnetically, or
The input resistances of the three sections are obtained from the transmission line model as

R_in,a = (1 / (2 (G_1a + G_12a))) [ cos^2(beta_g L_a) + ((G_1a^2 + B_1a^2) / Y_c^2) sin^2(beta_g L_a) - (B_1a / Y_c) sin(2 beta_g L_a) ]    (1)

R_in,b = (1 / (2 (G_1b + G_12b))) [ cos^2(beta_g L_b) + ((G_1b^2 + B_1b^2) / Y_c^2) sin^2(beta_g L_b) - (B_1b / Y_c) sin(2 beta_g L_b) ]    (2)

R_in,c = (1 / (2 (G_1c + G_12c))) [ cos^2(beta_g L_c) + ((G_1c^2 + B_1c^2) / Y_c^2) sin^2(beta_g L_c) - (B_1c / Y_c) sin(2 beta_g L_c) ]    (3)

[Figure residue for Fig. 1: patch sections (L_a, W_a), (L_b, W_b), (L_c, W_c); feed line (L_m, W_f); slot width S; probe feed; equivalent circuit with R_ina, R_inb, R_inc and a 50 Ohm input.]
The expressions of G1 and B1 are given by the relations
below [9]:

G_1a,b,c = (W_a,b,c / (120 lambda_0)) [ 1 - (1/24) (k_0 h)^2 ]    (4)

B_1a,b,c = (W_a,b,c / (120 lambda_0)) [ 1 - 0.636 ln(k_0 h) ]    (5)

The conductance of a single slot can also be obtained by
using the field expression derived from the cavity model. In
general, the conductance is defined by

G_1 = 2 P_rad / V_0^2    (6)

By using the electric field one can calculate the radiated
power:

P_rad = (V_0^2 / (2 pi eta_0)) Int_0^pi [ sin^2((k_0 W / 2) cos theta) / cos^2 theta ] sin^3 theta dtheta    (7)
Fig. 1 (a) Inset fed antenna. (b) Equivalent circuit of the proposed
antenna.
I_1 = Int_0^pi [ sin^2((k_0 W / 2) cos theta) / cos^2 theta ] sin^3 theta dtheta    (9)
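Equation (9) is straightforward to evaluate numerically. The sketch below uses a simple midpoint rule; the conversion G1 = I1 / (120 pi^2) is the standard transmission-line-model relation, assumed here rather than taken from the recovered text, and the dimensions are hypothetical.

```python
import math

def slot_conductance(W, lam0, n=2000):
    """Evaluate I1 (Eq. 9) by the midpoint rule, then convert it to the
    slot conductance via the standard relation G1 = I1 / (120 * pi**2)."""
    k0 = 2 * math.pi / lam0
    h = math.pi / n
    I1 = 0.0
    for i in range(n):
        th = (i + 0.5) * h
        c = math.cos(th)
        # integrand: sin^2((k0 W / 2) cos(theta)) / cos^2(theta) * sin^3(theta)
        I1 += (math.sin(0.5 * k0 * W * c) / c) ** 2 * math.sin(th) ** 3 * h
    return I1 / (120 * math.pi ** 2)

# hypothetical slot: W = 30 mm, free-space wavelength ~68.2 mm (~4.4 GHz)
G1 = slot_conductance(W=0.03, lam0=0.0682)
```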
The resonant frequencies of the three sections are

f_ra = 1 / (2 (L_a + 2 dL_a) sqrt(eps_r) sqrt(mu_0 eps_0)) = v_0 / (2 (L_a + 2 dL_a) sqrt(eps_r))

f_rb = 1 / (2 (L_b + 2 dL_b) sqrt(eps_r) sqrt(mu_0 eps_0)) = v_0 / (2 (L_b + 2 dL_b) sqrt(eps_r))    (10)

f_rc = 1 / (2 (L_c + 2 dL_c) sqrt(eps_r) sqrt(mu_0 eps_0)) = v_0 / (2 (L_c + 2 dL_c) sqrt(eps_r))

and the corresponding lengths are

L_a = v_0 / (2 f_ra sqrt(eps_reff)),  L_b = v_0 / (2 f_rb sqrt(eps_reff)),  L_c = v_0 / (2 f_rc sqrt(eps_reff))    (11)

[Figure residue: simulated S11 [dB] (0 to -18) versus frequency, 3.5-5.5 GHz; curves: TLM model, Momentum.]

Fig. 3 Simulated input antenna return loss.

[Figure residue: simulated VSWR (0.00-21.00) versus frequency, 4.13-5.25 GHz; curves: TLM model, Momentum.]

Fig. 4 Simulated input antenna VSWR.
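As a rough numerical companion to Eqs. (10) and (11), the sketch below evaluates the standard effective-permittivity and fringing-length-extension formulas for a rectangular patch. The dimensions and substrate are illustrative assumptions, not the antenna of this paper.

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum (m/s)

def patch_resonant_freq(L, W, h, er):
    """Resonant frequency of a rectangular patch via the transmission-line
    model: effective permittivity, fringing length extension dL, then
    f_r = c / (2 * (L + 2*dL) * sqrt(eps_reff))."""
    e_reff = (er + 1) / 2 + (er - 1) / 2 * (1 + 12 * h / W) ** -0.5
    dL = 0.412 * h * ((e_reff + 0.3) * (W / h + 0.264)) / \
         ((e_reff - 0.258) * (W / h + 0.8))
    return C0 / (2 * (L + 2 * dL) * math.sqrt(e_reff))

# hypothetical FR-4 patch: L = 20 mm, W = 25 mm, h = 1.6 mm, er = 4.4
f = patch_resonant_freq(L=20e-3, W=25e-3, h=1.6e-3, er=4.4)
```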
[Figure residue: phase [deg] of S11 (-200 to 50) versus frequency, 3.4-5.6 GHz; curves: TLM model, Momentum.]

[Figure residue: S11 [dB] (-25 to -5) versus frequency, 4.0-7.0 GHz; curves: TLM model, Momentum.]
One notices that the phase is null for the two
models in spite of the observed shift. The impedance locus
of the antenna array from 3.3 to 5.4 GHz is illustrated on
the Smith chart in Fig. 6.
[Figure residue: computed VSWR (0-40) versus frequency, 4.0-7.0 GHz; curves: TLM model, Momentum.]

Fig. 9 Computed VSWR.

[Figure residue: phase [deg] (-200 to 100) versus frequency, 4.2-6.2 GHz; curves: TLM model, Momentum.]
5. Conclusion
In this paper, a highly flexible and computationally efficient
transmission line model is developed to analyze the inset-fed
antenna. The results so far show that the transmission
line model can be successfully used to design inset-fed
antenna arrays; even though the model is conceptually
simple, it still produces accurate results in a relatively
short computing time. The results obtained highlight an
excellent agreement between the transmission line model
and the moments method. A comparison of the results
produced by the final model with the moments-method data
showed the validity of the proposed model. This allows the
analysis of very large arrays even on a rather small
computer. Based on these characteristics, the proposed
antenna array can be useful for EMC applications.
References
[1] R. A. Sainati, CAD of Microstrip Antennas for Wireless Applications, Artech House, 1996.
[2] S. Gao and A. Sambell, "A Simple Broadband Printed Antenna", Progress in Electromagnetics Research, PIER 60, 2006, pp. 119-130.
[3] P. M. Mendes, M. Bartek, J. N. Burghartz and J. H. Correia, "Design and Analysis of a 6 GHz Chip Antenna on Glass Substrates for Integration with RF/Wireless Microsystems", IEEE Proceedings, 2003.
[4] S.-H. Wi et al., "Wideband microstrip patch antenna with U-shaped parasitic elements", IEEE Trans. Antennas Propag., Vol. 55, No. 4, 2007, pp. 1196-1199.
[5] D. M. Pozar and D. H. Schaubert, Microstrip Antennas: The Analysis and Design of Microstrip Antennas and Arrays, New York: IEEE Press, 1995.
[6] M. Abri, N. Boukli-hacene, F. T. Bendimerad and E. Cambiaggio, "Design of a Dual Band Ring Printed
Abstract
Multilevel image thresholding is an important operation in many
image-analysis applications, and selecting correct thresholds is a
critical issue. In this paper, a Bacterial Foraging (BF) algorithm
based on a Tsallis objective function is presented for multilevel
thresholding in image segmentation. Experiments verifying the
efficiency of the proposed method and comparing it to the Genetic
Algorithm (GA) are presented. The experimental results show that
the proposed method gives the best performance in multilevel
thresholding. The method is also computationally efficient and
more stable, and can be applied to a wide class of computer vision
applications, such as character recognition, watermarking, and
segmentation of a wide variety of medical images.
Keywords: Multilevel thresholding, Bacterial foraging
algorithm, Tsallis objective function, image segmentation.
1. Introduction
Image segmentation is a process of dividing an image into
different regions such that each region is nearly
homogeneous, whereas the union of any two regions is not.
It serves as a key step in image analysis and pattern recognition
and is a fundamental step toward low-level vision, which
is significant for object recognition and tracking, image
retrieval, face detection, and other computer-vision-related
applications [1]. Many segmentation techniques have been
proposed in the literature. Among the existing techniques,
thresholding is one of the most popular ones due to its
simplicity, robustness and accuracy [1-3].
The Otsu and Kapur methods were proved to be two of the best
thresholding methods with respect to the uniformity and shape
measures [4, 5]. However, in many cases it is required to
determine threshold levels depending on the scene to obtain
consistent segmentation results. Multilevel thresholding
techniques were therefore developed. Most
2. Tsallis entropy based thresholding method

In this section, a new thresholding method is proposed
based on the entropy concept. This method is similar to
the maximum entropy sum method of Kapur et al. [3];
however, the Tsallis non-extensive entropy concept is used
for customizing information theory.
Let there be L gray levels in a given image, in the range
{0, 1, 2, ..., (L-1)}. Then one can define P_i = h(i)/N,
(0 <= i <= L-1), where h(i) denotes the number of pixels with
gray level i and N denotes the total number of pixels in the
image, equal to Sum_{i=0}^{L-1} h(i).
The optimal thresholds maximize the multilevel Tsallis criterion

[t_1, t_2, ..., t_m] = argmax [ S_q^A(t) + S_q^B(t) + S_q^C(t) + ... + S_q^m(t) + (1 - q) S_q^A(t) S_q^B(t) S_q^C(t) ... S_q^m(t) ]    (2)

where

S_q^A(t) = (1 - Sum_{i=0}^{t_1-1} (P_i / P^A)^q) / (q - 1),  with P^A = Sum_{i=0}^{t_1-1} P_i

S_q^B(t) = (1 - Sum_{i=t_1}^{t_2-1} (P_i / P^B)^q) / (q - 1),  with P^B = Sum_{i=t_1}^{t_2-1} P_i

S_q^C(t) = (1 - Sum_{i=t_2}^{t_3-1} (P_i / P^C)^q) / (q - 1),  with P^C = Sum_{i=t_2}^{t_3-1} P_i

...

S_q^m(t) = (1 - Sum_{i=t_m}^{L-1} (P_i / P^m)^q) / (q - 1),  with P^m = Sum_{i=t_m}^{L-1} P_i
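Read operationally, Eq. (2) scores a candidate threshold vector. A minimal sketch (plain Python; the histogram and the value of q are illustrative assumptions) evaluates the criterion and brute-forces the best single threshold:

```python
def tsallis_objective(hist, thresholds, q=0.8):
    """Evaluate the multilevel Tsallis criterion of Eq. (2): the sum of
    per-class Tsallis entropies plus the (1 - q) * product coupling term."""
    N = float(sum(hist))
    p = [h / N for h in hist]
    bounds = [0] + list(thresholds) + [len(hist)]
    entropies = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        P = sum(p[lo:hi])
        if P == 0:
            return float("-inf")  # empty class: invalid threshold vector
        S = (1.0 - sum((pi / P) ** q for pi in p[lo:hi])) / (q - 1.0)
        entropies.append(S)
    prod = 1.0
    for S in entropies:
        prod *= S
    return sum(entropies) + (1.0 - q) * prod

# brute-force search for the best single threshold on a toy bimodal histogram
hist = [8, 9, 7, 1, 0, 1, 6, 9, 8, 5]
best_t = max(range(1, len(hist)), key=lambda t: tsallis_objective(hist, [t]))
```

In the paper the search over threshold vectors is performed by BF, PSO or GA rather than exhaustively.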
Else n = Ns.
g) Go to the next bacterium (i + 1) till all the bacteria
undergo chemotaxis.

5. Reproduction:
a) For the given k and l, and for each i = 1, 2, 3, ..., S, let

J_health^i = Sum_{j=1}^{N} J(i, j, k, l)

be the health of the i-th bacterium.

7. Elimination and dispersal: for each bacterium a random
number (rand) is drawn; if it is below the elimination
probability the bacterium gets dispersed to a random
location, else it remains at its current location.
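The reproduction step above can be sketched compactly: each bacterium's health is its accumulated cost over the chemotactic steps, and the healthier half (lower total cost) survives and is duplicated. The positions and cost values below are toy data, not the paper's experiments.

```python
def reproduction(positions, costs_per_step):
    """BF reproduction step: health of bacterium i is the sum of its cost
    values over the chemotactic steps; the healthier half (lower total
    cost) survives and is duplicated in place of the weaker half."""
    health = [sum(steps) for steps in costs_per_step]  # J_health^i
    order = sorted(range(len(positions)), key=lambda i: health[i])
    best_half = [positions[i] for i in order[: len(positions) // 2]]
    return best_half + [list(p) for p in best_half]  # duplicate survivors

pop = [[0.1], [0.9], [0.4], [0.7]]
new_pop = reproduction(pop, [[5, 6], [1, 1], [9, 9], [2, 2]])
# healths are 11, 2, 18, 4 -> bacteria 2 and 4 survive and are duplicated
```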
Table 1: Objective values and their optimal threshold values by using BF, PSO and GA methods

Test image | m | Objective value: BF / PSO / GA | Optimal thresholds (BF)
LENNA | 2 | 0.8889 / 0.8889 / 0.8889 | 120,164
LENNA | 3 | 1.296278 / 1.296268 / 1.296247 | 81,124,178
LENNA | 4 | 1.654271 / 1.654255 / 1.654208 | 85,124,161,193
LENNA | 5 | 1.995787 / 1.995773 / 1.995717 | 76,108,136,164,193
PEPPER | 2 | 0.8889 / 0.8889 / 0.8889 | 82,154
PEPPER | 3 | 1.296278 / 1.296274 / 1.296262 | 86,118,190
PEPPER | 4 | 1.654264 / 1.654248 / 1.654225 | 71,121,161,197
PEPPER | 5 | 1.995771 / 1.995766 / 1.995739 | 70,109,139,169,197
BABOON | 2 | 0.8889 / 0.8889 / 0.8889 | 91,147
BABOON | 3 | 1.296284 / 1.296274 / 1.296202 | 111,148,188
BABOON | 4 | 1.654266 / 1.654262 / 1.654241 | 75,114,146,175
BABOON | 5 | 1.995744 / 1.995737 / 1.995708 | 78,106,136,157,179
HUNTER | 2 | 0.8889 / 0.8889 / 0.8889 | 94,137
HUNTER | 3 | 1.296270 / 1.296267 / 1.296227 | 82,118,171
HUNTER | 4 | 1.654258 / 1.654255 / 1.654240 | 71,110,142,182
HUNTER | 5 | 1.995766 / 1.995720 / 1.995713 | 65,93,123,150,182
CAMERAMAN | 2 | 0.8889 / 0.8889 / 0.8889 | 120,154
CAMERAMAN | 3 | 1.296189 / 1.296180 / 1.296141 | 78,128,178
CAMERAMAN | 4 | 1.654190 / 1.654183 / 1.654177 | 91,123,156,211
CAMERAMAN | 5 | 1.995674 / 1.995669 / 1.995663 | 70,107,134,158,200
AIRPLANE | 2 | 0.8889 / 0.8889 / 0.8889 | 72,153
AIRPLANE | 3 | 1.296223 / 1.296204 / 1.296180 | 99,143,193
AIRPLANE | 4 | 1.654277 / 1.654262 / 1.654243 | 68,103,135,182
AIRPLANE | 5 | 1.995795 / 1.995784 / 1.995768 | 61,94,121,150,185
MAP | 2 | 0.881206 / 0.881206 / 0.881206 | 114,176
MAP | 3 | 1.273982 / 1.267481 / 1.232429 | 84,142,198
MAP | 4 | 1.587902 / 1.585544 / 1.579716 | 73,113,156,203
MAP | 5 | 1.828422 / 1.818369 / 1.788800 | 75,112,147,174,206
LIVING ROOM | 2 | 0.888881 / 0.888881 / 0.888881 | 81,144
LIVING ROOM | 3 | 1.296281 / 1.296275 / 1.296255 | 89,143,197
LIVING ROOM | 4 | 1.654263 / 1.654247 / 1.654244 | 67,107,145,186
LIVING ROOM | 5 | 1.995743 / 1.995701 / 1.995627 | 72,111,139,164,199
HOUSE | 2 | 0.888761 / 0.888761 / 0.888761 | 87,145
HOUSE | 3 | 1.296092 / 1.296090 / 1.296052 | 88,133,199
HOUSE | 4 | 1.653630 / 1.653586 / 1.653581 | 67,105,146,189
HOUSE | 5 | 1.994217 / 1.993744 / 1.993426 | 66,95,121,155,200
BUTTERFLY | 2 | 0.888825 / 0.888825 / 0.888825 | 97,136
BUTTERFLY | 3 | 1.296202 / 1.296190 / 1.296168 | 99,135,197
BUTTERFLY | 4 | 1.653424 / 1.652617 / 1.652564 | 95,120,144,189
BUTTERFLY | 5 | 1.994823 / 1.991453 / 1.989359 | 89,114,141,170,213
Table 2: PSNR value, CPU time and standard deviation value obtained by BF, PSO and GA methods
Test image | m | PSNR (dB): BF / PSO / GA | CPU time: BF / PSO / GA | Standard deviation: BF / PSO / GA
LENNA | 2 | 15.2419 / 15.2419 / 15.2419 | 3.0218 / 3.6810 / 3.9219 | 0.0000 / 0.0000 / 0.0000
LENNA | 3 | 17.4715 / 17.1425 / 16.9455 | 3.5327 / 4.0357 / 4.3906 | 1.6827e-006 / 2.5418e-006 / 3.8999e-006
LENNA | 4 | 19.5070 / 19.4324 / 19.0207 | 4.0310 / 4.7523 / 4.8438 | 3.4304e-006 / 1.3306e-005 / 1.9104e-005
LENNA | 5 | 20.9916 / 20.5637 / 19.8703 | 4.5275 / 4.9900 / 5.2854 | 4.5355e-006 / 1.6797e-005 / 2.7208e-005
PEPPER | 2 | 12.9108 / 12.9108 / 12.9108 | 3.0531 / 3.5394 / 3.9844 | 0.0000 / 0.0000 / 0.0000
PEPPER | 3 | 16.6563 / 16.0269 / 15.5628 | 3.2310 / 3.5473 / 3.9919 | 2.8014e-006 / 7.3578e-006 / 2.0199e-005
PEPPER | 4 | 19.2433 / 16.7109 / 16.3735 | 4.1089 / 4.4063 / 5.0938 | 1.6217e-005 / 7.0094e-005 / 1.7406e-004
PEPPER | 5 | 20.4910 / 20.2089 / 19.7642 | 4.5213 / 4.8484 / 5.2314 | 2.0208e-004 / 6.3010e-004 / 1.1678e-003
BABOON | 2 | 13.1404 / 13.1404 / 13.1404 | 3.1028 / 3.5021 / 3.8906 | 0.0000 / 0.0000 / 0.0000
BABOON | 3 | 18.1076 / 17.0809 / 16.7728 | 3.7452 / 4.2591 / 4.4422 | 2.9078e-006 / 9.3397e-006 / 1.2993e-005
BABOON | 4 | 17.5204 / 17.1462 / 17.1583 | 3.9303 / 4.3365 / 4.5156 | 3.4997e-006 / 7.2225e-006 / 1.3714e-005
BABOON | 5 | 18.7616 / 18.2718 / 17.2903 | 4.8614 / 5.4188 / 5.8281 | 9.7325e-006 / 1.1321e-005 / 1.8993e-005
HUNTER | 2 | 11.3848 / 11.3848 / 11.3848 | 3.0106 / 3.6970 / 3.9797 | 0.0000 / 0.0000 / 0.0000
HUNTER | 3 | 14.5772 / 14.5135 / 14.0724 | 3.5624 / 4.0130 / 4.3906 | 4.6660e-007 / 1.8965e-006 / 1.0060e-005
HUNTER | 4 | 16.2874 / 15.4496 / 14.1926 | 4.1200 / 4.6875 / 4.7031 | 1.8203e-006 / 4.2172e-006 / 1.0886e-005
HUNTER | 5 | 17.3380 / 16.6426 / 15.6197 | 4.4226 / 5.0009 / 5.4688 | 5.4613e-005 / 1.2255e-004 / 9.3619e-004
CAMERAMAN | 2 | 10.6258 / 10.6258 / 10.6258 | 2.5690 / 3.0021 / 3.6482 | 0.0000 / 0.0000 / 0.0000
CAMERAMAN | 3 | 15.6856 / 14.9951 / 14.5900 | 3.1250 / 3.7658 / 4.3906 | 4.7916e-006 / 5.4543e-006 / 8.4892e-006
CAMERAMAN | 4 | 16.7835 / 15.9187 / 14.9756 | 3.9253 / 4.6188 / 4.8594 | 3.6715e-005 / 7.5181e-005 / 1.1024e-004
CAMERAMAN | 5 | 17.8802 / 17.2393 / 16.6026 | 4.3906 / 5.1343 / 5.6026 | 6.6163e-005 / 1.0319e-004 / 7.7199e-004
AIRPLANE | 2 | 13.7290 / 13.7290 / 13.7290 | 2.9632 / 3.3159 / 3.8921 | 0.0000 / 0.0000 / 0.0000
AIRPLANE | 3 | 15.8742 / 15.5913 / 14.6681 | 3.3310 / 3.7625 / 4.1358 | 8.3154e-007 / 3.1114e-006 / 6.9412e-006
AIRPLANE | 4 | 16.3276 / 15.6294 / 14.9701 | 3.9259 / 4.8750 / 5.2656 | 9.5166e-007 / 2.6305e-006 / 9.2004e-006
AIRPLANE | 5 | 17.6049 / 17.6077 / 16.1579 | 4.7410 / 5.2813 / 5.6077 | 5.1122e-006 / 3.3007e-005 / 6.3861e-005
MAP | 2 | 16.6045 / 16.6045 / 16.6045 | 2.7942 / 3.3221 / 3.6563 | 0.0000 / 0.0000 / 0.0000
MAP | 3 | 18.4286 / 18.0419 / 16.2161 | 3.2771 / 3.7969 / 4.1563 | 5.6090e-007 / 1.0167e-006 / 4.6714e-006
MAP | 4 | 20.6499 / 19.7997 / 19.7340 | 3.6104 / 4.0213 / 4.5744 | 5.0556e-004 / 1.1493e-003 / 3.9730e-003
MAP | 5 | 22.1638 / 21.8968 / 21.5746 | 3.9885 / 4.5873 / 4.9810 | 6.5988e-004 / 8.1623e-003 / 1.6169e-002
LIVING ROOM | 2 | 13.1208 / 13.1208 / 13.1208 | 3.1406 / 3.6250 / 3.9531 | 0.0000 / 0.0000 / 0.0000
LIVING ROOM | 3 | 17.1198 / 16.9810 / 16.5873 | 3.5769 / 3.9139 / 4.3417 | 1.6980e-006 / 6.9103e-005 / 7.0160e-004
LIVING ROOM | 4 | 19.2320 / 18.8655 / 18.5189 | 3.9139 / 4.3964 / 4.7602 | 4.3245e-006 / 8.4404e-006 / 2.2951e-005
LIVING ROOM | 5 | 21.3385 / 20.9931 / 20.5597 | 4.0251 / 4.6421 / 5.1715 | 4.3515e-005 / 9.3293e-005 / 1.8187e-004
HOUSE | 2 | 12.9865 / 12.9865 / 12.9865 | 2.9117 / 3.2563 / 3.7656 | 0.0000 / 0.0000 / 0.0000
HOUSE | 3 | 14.0213 / 13.8104 / 13.6918 | 3.3437 / 3.8884 / 4.2736 | 2.5025e-006 / 4.3646e-005 / 6.9786e-005
HOUSE | 4 | 16.8884 / 16.4428 / 16.1794 | 3.8074 / 4.4620 / 4.8655 | 3.7689e-006 / 8.7702e-005 / 1.1385e-004
HOUSE | 5 | 17.5635 / 16.7719 / 16.5772 | 4.5114 / 4.9437 / 5.4353 | 7.5181e-005 / 9.5166e-005 / 1.2255e-004
BUTTERFLY | 2 | 13.0516 / 13.0516 / 13.0516 | 3.1406 / 3.7344 / 4.1406 | 0.0000 / 0.0000 / 0.0000
BUTTERFLY | 3 | 18.1337 / 17.8316 / 17.2964 | 3.5746 / 4.1980 / 4.5607 | 1.4899e-006 / 4.8520e-005 / 8.5774e-004
BUTTERFLY | 4 | 20.0356 / 18.9792 / 18.8382 | 4.0356 / 4.6370 / 5.0254 | 1.9529e-005 / 6.7992e-004 / 1.3908e-005
BUTTERFLY | 5 | 21.9096 / 21.4406 / 20.2055 | 4.5154 / 5.0291 / 5.5607 | 6.4439e-005 / 9.1016e-004 / 5.1122e-003
The PSNR value is calculated as follows:

PSNR = 20 log10 (255 / RMSE)

where

RMSE = sqrt( (1 / (M N)) Sum_{i=1}^{M} Sum_{j=1}^{N} [ I(i, j) - I'(i, j) ]^2 )
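The PSNR and RMSE relations above translate directly to code; a minimal sketch for 8-bit grayscale images represented as nested lists:

```python
import math

def psnr(original, segmented):
    """PSNR (dB) between two equal-sized 8-bit images, per the formulas
    above: RMSE is the root mean squared pixel difference and
    PSNR = 20 * log10(255 / RMSE)."""
    n = len(original) * len(original[0])
    se = sum((a - b) ** 2
             for row_a, row_b in zip(original, segmented)
             for a, b in zip(row_a, row_b))
    rmse = math.sqrt(se / n)
    if rmse == 0:
        return float("inf")  # identical images
    return 20 * math.log10(255.0 / rmse)

val = psnr([[10, 20], [30, 40]], [[12, 20], [30, 44]])
```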
5. Conclusion
Segmented images of Tsallis-BF with m = 3 and m = 5 are
shown in Figures 2 and 3 respectively. The segmentation
is better when m = 5 is chosen than when m = 3.
The PSNR value, CPU time and the standard deviation
value obtained by the different methods are listed in Table 2.
References
[1] N. Pal, and S. Pal, A review on image segmentation
[2]
[3]
[4]
[5]
Abstract
1. Introduction
Machine learning is a domain that has been developed
intensively in recent decades. One of its main sub-domains is
supervised learning, which forms decision theories or functions
to accurately assign unlabeled (test) instances to different
pre-defined classes. Depending on how a learner reacts to the
test instances, we have eager learning and lazy learning
(Friedman et al. 1996). Eager learning methods construct a
general, explicit description of the target function when
training examples are provided. Instance-based learning methods
simply store the training examples, and generalizing beyond
these examples is postponed until a new instance must be
classified. Each time a new query instance is encountered, its
relationship to the previously stored examples is examined in
order to assign a value for the new instance.
One of the best approaches that work under instance-based
learning in ensemble classifiers is lazy bagging (LB), by
Xingquan Zhu et al. (2008), which builds bootstrap replicate
bags based on the characteristics of test instances.
Although lazy bagging has great success in obtaining a more
accurate classifier, diversity is reduced, because lazy
learning suffers from reduced diversity. In order to make the
ensemble more effective, there should be some sort of
diversity between the classifiers (Kuncheva, 2005). Two
classifiers are diverse if they make different errors on new
data points.
Stacking is a simple yet useful approach to this problem in
order to achieve classifier diversity (Wolpert 1992). Under
stacking we use a different individual classifier in each bag
for classifying the test instance. Stacking learns a function
that combines the predictions of the individual classifiers,
so different types of base classifiers are used in order to
get more accuracy in ensembles (Seewald 2003).
Fig. 1 LS diagram: LS waits until a test instance arrives, then makes a set
with k instances out of the NN subset and another set with N - k
instances out of the original dataset. Afterwards the class label of the
test instance is decided by majority vote over the outputs of the base
learners.
The idea is that for each test instance x_i we add a small
number of its kNN into the bootstrap bags, from which the
base classifiers are trained. Different types of base
classifiers are used for labeling the new instance in
ensemble stacking. We name this method lazy stacking (LS),
a method towards a more diverse ensemble classifier. By doing
so, we expect to increase the base classifiers' classification
diversity, leading to more accurate classification of x_i.
2. Related work
Friedman et al. (1996) proposed a lazy decision tree
algorithm which built a decision tree for each test instance,
and their results indicated that lazy decision trees performed
better than traditional C4.5 decision trees on average, and
most importantly, significant improvements could be
observed occasionally. Friedman et al. (1996) further
concluded building a single classifier that is good for all
predictions may not take advantage of special characteristics
of the given test instance. In short, while eager learners try
to build an optimal theory for all test instances, lazy learners
endeavor to find locally optimal solutions for each
particular test instance.
K-nearest neighbors algorithm is a key element in lazy
learning. The kNN is one of the most thoroughly analyzed
algorithms in machine learning, due in part to its age and in
part to its simplicity. Cover and Hart (1967) present early
theoretical results, and Duda and Hart (1973) provide a
good overview.
3. Lazy Stacking
In this paper, we propose LS, a stacking framework with
lazy local learning for building an ensemble of lazy
classifiers.
Lazy stacking applies lazy local learning to the nearest
neighbors of a test instance, which produces more accurate
base classifiers than applying a global learner. Lazy
learners suffer from reduced diversity because they form a
decision theory especially tailored to the test instance; by
choosing a different classifier in stacking over the whole
training set, the accuracy of the joint lazy and stacked
learners can be increased. The increase in performance of LS
can mainly be attributed to the diversity of our model,
outlined in the next section.
IJCSI International Journal of Computer Science Issues, Vol. 7, Issue 5, September 2010
ISSN (Online): 1694-0814
www.IJCSI.org
346
Procedure LazyStacking()
Learning:
For i from 1 to L:
  find the kNN of the test instance x from the training set T, and use
  them when forming each bag; this helps LS build base classifiers with
  less variance when classifying x.
End For
Classification:
  obtain each base classifier's prediction y_i and return Y = argmax ...    (1)
[Most of the individual step details and Eq. (1) were lost in extraction.]
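Because the extracted procedure is fragmentary, the sketch below is a reconstruction of the idea described in the text rather than the authors' exact pseudo-code: each bag is seeded with the test instance's k nearest neighbours plus a bootstrap sample of the remaining data, a different simple base learner is used per bag, and the final label is a majority vote. The two base learners (1-NN and nearest centroid) are stand-ins, not the classifiers used in the paper.

```python
import random
from collections import Counter

def knn_indices(x, X, k):
    """Indices of the k nearest training points (squared Euclidean)."""
    d = [(sum((a - b) ** 2 for a, b in zip(x, xi)), i) for i, xi in enumerate(X)]
    return [i for _, i in sorted(d)[:k]]

def one_nn(bag, x):
    """Base learner 1: label of the nearest point inside the bag."""
    return min(bag, key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p[0])))[1]

def nearest_centroid(bag, x):
    """Base learner 2: label of the nearest class centroid inside the bag."""
    by_cls = {}
    for xi, yi in bag:
        by_cls.setdefault(yi, []).append(xi)
    cents = {c: [sum(v) / len(v) for v in zip(*pts)] for c, pts in by_cls.items()}
    return min(cents, key=lambda c: sum((a - b) ** 2 for a, b in zip(x, cents[c])))

def lazy_stack_predict(x, X, y, k=3, n_bags=4, seed=0):
    rng = random.Random(seed)
    nn = knn_indices(x, X, k)
    learners = [one_nn, nearest_centroid]
    votes = []
    for b in range(n_bags):
        rest = [i for i in range(len(X)) if i not in nn]
        boot = [rng.choice(rest) for _ in range(len(rest))]  # bootstrap sample
        bag = [(X[i], y[i]) for i in nn + boot]              # seeded with kNN
        votes.append(learners[b % len(learners)](bag, x))
    return Counter(votes).most_common(1)[0][0]

X = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
y = ["a", "a", "a", "b", "b", "b"]
pred = lazy_stack_predict([0.2, 0.1], X, y)
```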
Dis(x_i, x_l) = sqrt( (x_i - x_l) . (x_i - x_l) )    (2)
Table 1: Main characteristics of the datasets.

Dataset | #Classes | #Attributes | #Instances
Audiology | 24 | 70 | 126
Balance | - | - | 625
4. Experimental Results
To further assess the algorithm's performance, we compare
the accuracies of LS and several benchmark methods, including
C4.5, kNN, TB, and LB, on 12 real-world datasets from the
UCI data repository (Blake & Merz 1998). We implemented the
C4.5, kNN, TB, LB and LS predictors using the WEKA data
mining tool.
To measure the performance of the proposed algorithms, we
employed 10-time 5-fold cross-validation for each dataset and
assessed performance based on the average accuracy over the
10 trials.
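The 10-time 5-fold protocol can be sketched generically: shuffle, split into k folds, train on the other k-1 folds, and average accuracy over the repeats. The majority-label learner below is a stand-in for the real classifiers, and the data are toy values.

```python
import random

def repeated_kfold_accuracy(X, y, fit_predict, repeats=10, k=5, seed=0):
    """Average accuracy over `repeats` runs of k-fold cross-validation,
    reshuffling the instance order before each run."""
    rng = random.Random(seed)
    idx = list(range(len(X)))
    accs = []
    for _ in range(repeats):
        rng.shuffle(idx)
        folds = [idx[i::k] for i in range(k)]
        correct = 0
        for f in folds:
            train = [i for i in idx if i not in set(f)]
            preds = fit_predict([X[i] for i in train], [y[i] for i in train],
                                [X[i] for i in f])
            correct += sum(p == y[i] for p, i in zip(preds, f))
        accs.append(correct / len(X))
    return sum(accs) / repeats

# stand-in learner: always predict the majority training label
def majority(Xtr, ytr, Xte):
    lab = max(set(ytr), key=ytr.count)
    return [lab] * len(Xte)

acc = repeated_kfold_accuracy([[i] for i in range(20)],
                              ["a"] * 15 + ["b"] * 5, majority)
```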
Table 1 (continued):

Dataset | #Classes | #Attributes | #Instances
Bupa | - | - | 345
Car | - | - | 1,728
Ecoli | - | - | 336
Glass | - | 10 | 214
Hayes | - | - | 132
Horse | - | 23 | 368
Kr-vs-kp | - | 37 | 3,196
Labor | - | 17 | 57
Sonar | - | 61 | 208
To compare LS with the other benchmark methods we use 12 real-world
datasets from the UCI data repository [13]. The main
characteristics of these datasets are summarized in Table 1.
Table 2: Classification accuracy (and standard deviation) on 12 datasets selected from the UCI data repository. For each dataset, the accuracy of the method
with the highest mean accuracy is marked in bold face.
Dataset | KNN | C4.5 | TB | Lazy Bagging | Lazy Stacking
Audiology | 54.42±2.47 | 76.15±2.04 | 78.54±2.71 | 81.75±1.98 | 83.30±1.76
Labor | 84.45±2.57 | 78.95±3.88 | 84.04±4.25 | 86.64±3.51 | 90.56±2.71
Balance | 89.28±0.98 | 65.74±0.85 | 74.66±0.72 | 77.23±1.04 | 86.24±0.48
Pima | 75.00±1.05 | 73.65±1.21 | 75.17±0.81 | 76.23±0.68 | 76.32±1.08
Bupa | 63.47±2.21 | 64.20±2.90 | 69.17±1.88 | 70.23±2.04 | 70.28±1.38
Car | 78.67±0.64 | 91.37±0.86 | 92.58±0.80 | 93.21±0.48 | 94.78±0.27
Hayes | 64.36±2.68 | 71.32±3.1 | 73.50±2.37 | 75.76±1.18 | 81.07±2.52
Horse | 81.61±1.26 | 85.12±0.67 | 84.10±0.94 | 84.96±0.63 | 84.35±1.01
Glass | 64.95±1.87 | 66.92±2.65 | 72.62±1.88 | 74.31±1.62 | 74.55±1.77
Sonar | 73.18±6.75 | 73.08±3.63 | 75.91±2.21 | 80.05±1.78 | 84.66±1.87
Table 2 (continued):

Dataset | KNN | C4.5 | TB | Lazy Bagging | Lazy Stacking
Ecoli | 86.01±1.07 | 82.56±1.17 | 83.72±1.06 | 83.30±1.11 | 85.57±0.69
Kr-vs-kp | 89.64±0.15 | 99.29±0.12 | 99.36±0.08 | 99.69±0.08 | 99.71±0.12
Dataset | Lazy Bagging | Lazy Stacking | LS - LB | p-value
Balance | 77.23±1.04 | 86.24±0.48 | 9.01 | <0.001
Sonar | 80.05±1.78 | 84.66±1.87 | 4.61 | <0.001
Lymph | 80.41±2.02 | 85.01±1.04 | 4.60 | <0.001
Labor | 86.64±3.51 | 90.56±2.71 | 3.92 | 0.018
Ecoli | 83.30±1.11 | 85.57±0.69 | 2.27 | 0.048
Hayes | 75.76±1.18 | 81.07±2.52 | 5.31 | 0.001
Bupa | 70.23±2.04 | 70.28±1.38 | 0.05 | 0.045
Audiology | 81.75±1.98 | 83.30±1.76 | 1.55 | 0.014
Glass | 74.31±1.62 | 74.55±1.77 | 0.24 | 0.042
Pima | 76.23±0.68 | 76.32±1.08 | 0.09 | 0.015
Kr-vs-kp | 99.69±0.08 | 99.71±0.12 | 0.02 | 0.218
Horse | 84.96±0.63 | 84.35±1.01 | -0.61 | 0.440
References
[1] D. Aha, D. Kibler, and M. Albert, "Instance-based learning
algorithms", Machine Learning, Vol. 6, 1991, pp. 37-66.
Abstract
Testing is an important activity in software development.
Unfortunately, testing is still done manually by most of
the industry due to the high cost and complexity of automation.
Automated testing can reduce the cost of software
significantly. Automated software test data generation is
an activity that, in the course of software testing,
automatically generates test data for the software under
test. Most automated test data generation uses a
constraint solver to generate test data, but a solver cannot
generate test data when the constraints are not solvable.
Although a method can be found that generates test data even
if the constraints are unsolvable, it is poor in terms of
code coverage.
In this paper, we propose a test data generation method to
improve test coverage and to avoid the unsolvable-constraints
problem. Our method uses the individual constraints and the
same or dependent variables to create a path table which
holds information about the paths traversed by various input
test data. To generate unique test data for all the linearly
independent feasible paths, we create equivalence classes
from the path table on the basis of the paths traversed by
the various input test data. The input data are taken based
on individual constraints or boundary values. Our results are
compared with the cyclomatic complexity and the number of
possible infeasible paths, and the comparison shows the
effectiveness of our method.
Keywords: Independent feasible path, scalability,
equivalence class.
1. Introduction
Automated testing is a good way to cut down the time and cost
of software development. It is seen that for large software
projects, 40% to 70% of development time is spent on testing;
therefore automation is very much necessary. Test
automation is a process of writing computer programs that
3. Our Approach
3.1 Steps of Our Approach
To improve the coverage of the sample program, which is the
major drawback of the method suggested by [5], we propose a
method based on the variables involved in the constraints.
The flow graph of our method is shown in Figure 1.
Source Program
Constraint Collector
Constraints
Dependent Variable Collector
Dependent Variables
Variables Equivalence Class Generator
Equivalence Class of Variables
Input Data Generator
Input Data
Path Table Algorithm
Path for Test Data
Test Data
Fig. 1 Steps of our approach.
3.1.4
[Figure residue: control-flow graph of sample program 1: nodes 1-11, edges a-n, with branch conditions w1 != 5, w1 == 5, w1 < 5, w1 > 5, w1 + w2 >= 8, w1 + w2 < 8, w1 + w2 + w3 >= 12, n >= 30, n < 30.]

Class | Constraint
1 | w1 == 5, w1 > 5
2 | w1 + w2 >= 8
3 | w1 + w2 + w3 >= 12
The eight truth assignments over the constraints of sample program 1:

w1 == 5 | w1 + w2 >= 8 | w1 + w2 + w3 >= 12
TRUE | TRUE | TRUE
TRUE | TRUE | FALSE
TRUE | FALSE | FALSE
TRUE | FALSE | TRUE
FALSE | TRUE | TRUE
FALSE | TRUE | FALSE
FALSE | FALSE | TRUE
FALSE | FALSE | FALSE

and likewise with w1 > 5 in place of w1 == 5:

w1 > 5 | w1 + w2 >= 8 | w1 + w2 + w3 >= 12
TRUE | TRUE | TRUE
TRUE | TRUE | FALSE
TRUE | FALSE | FALSE
TRUE | FALSE | TRUE
FALSE | TRUE | TRUE
FALSE | TRUE | FALSE
FALSE | FALSE | TRUE
FALSE | FALSE | FALSE

[Figure residue: control-flow graph of the grading sample program: nodes 1-8, edges a-m, with branch conditions Marks <= 100, Marks > 100, Marks < 50, grade != ..., grade == ....]
5. else go to step 13
6. Path = Path + Current node
7. if Current node has only one child node then
8. Current node = Child node
9. Otherwise, if constraint at the node is true then
10. Current node = Left Child node
11. Else Current node = Right child node
12. go to step 3
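The steps above amount to walking the control-flow graph: append the current node to the path, follow a single child directly, or branch on the node's constraint. The sketch below runs this walk over a toy graph; the node names and constraints are illustrative, not the paper's sample program.

```python
def traverse(graph, constraints, values, start, end):
    """Walk a control-flow graph per the algorithm above: append the
    current node to the path; follow the single child, or evaluate the
    node's constraint and take the left (true) or right (false) child."""
    path, node = [], start
    while True:
        path.append(node)
        if node == end:
            return path
        children = graph[node]
        if len(children) == 1:
            node = children[0]
        else:
            left, right = children
            node = left if constraints[node](values) else right

# toy graph: node 'b' branches on w1 == 5, node 'd' on w1 + w2 >= 8
graph = {"a": ["b"], "b": ["e", "d"], "d": ["g", "n"],
         "e": ["n"], "g": ["n"], "n": []}
constraints = {"b": lambda v: v["w1"] == 5,
               "d": lambda v: v["w1"] + v["w2"] >= 8}
p = traverse(graph, constraints, {"w1": 4, "w2": 5}, "a", "n")
```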
Table 4: Equivalence classes of sample program 3

Class | Constraint
1 | Marks > 100, Marks >= 50 and grade != ...
2 | b > c & a == b

[Figure residue: control-flow graphs of the sample programs: nodes 1-13, edges a-n, with branch conditions a > b / a <= b, a > c / a <= c, b > c / b <= c, a == b / a != b, b == c / b != c, x > y / x < y / x == y.]

1. Path = NULL
4. Experimental Results
Thus the final test data are the data taken from each
equivalence class.
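The equivalence-class construction described above, grouping inputs by the path they cover and keeping one representative per class, is a one-pass dictionary build. The rows below are illustrative values, not the full path table:

```python
def equivalence_classes(rows):
    """Group (input, path) rows into equivalence classes keyed by the
    covered path; one representative input per class becomes the
    generated test datum."""
    classes = {}
    for inputs, path in rows:
        classes.setdefault(tuple(path), []).append(inputs)
    # representative = first input seen for each distinct path
    return {path: members[0] for path, members in classes.items()}

rows = [
    ((5, 3, 4), ["a", "b", "e", "l", "n"]),
    ((5, 2, 6), ["a", "b", "e", "l", "n"]),
    ((4, 4, 4), ["a", "b", "d", "g", "k", "l", "m", "n"]),
]
reps = equivalence_classes(rows)
```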
w1 | w2 | w3 | Path covered
5 | 3 | 4 | a,b,e,l,n
5 | 3 | 3 | a,b,e,l,n
5 | 2 | 6 | a,b,e,l,n
5 | 2 | 3 | a,b,e,l,n
4 | 4 | 4 | a,b,d,g,k,l,m,n
4 | 4 | 3 | a,b,d,g,k,l,m,n
4 | 3 | 6 | a,b,d,g,i,j,l,m,n
4 | 3 | 4 | a,b,d,g,i,l,m,n
6 | 2 | 4 | a,b,d,e,l,m,n
6 | 2 | 3 | a,b,d,e,l,m,n
6 | 1 | 5 | a,b,d,e,l,m,n
6 | 1 | 4 | a,b,d,e,l,m,n
4 | 4 | 4 | a,b,d,g,k,l,m,n
4 | 4 | 3 | a,b,d,g,k,l,m,n
4 | 3 | 6 | a,b,d,g,i,l,m,n
4 | 3 | 4 | a,b,d,g,l,m,n
Class | Path covered | Value
1 | a,b,c,d,e,l,n | w1=5, w2=3, w3=4
2 | a,b,c,d,g,k,l,m,n | w1=4, w2=4, w3=4
3 | a,b,d,g,i,j,l,m,n | w1=4, w2=3, w3=6
4 | a,b,d,g,i,l,n | w1=4, w2=3, w3=4
5 | a,b,d,e,l,m,n | w1=6, w2=2, w3=4
x | y | Path covered
4 | 4 | a,b,c,d,e,f,m
4 | 8 | a,b,c,d,e,f,g,h,i,l,f
8 | 4 | a,b,c,d,e,f,g,j,k,l,f
10 | 5 | a,b,c,d,e,f,g,h,i,l,f
5 | 10 | a,b,c,d,e,f,g,j,k,l,f
10 | 10 | a,b,c,d,e,f,m
Class | Path covered | Value
1 | a,b,c,d,e,f,m | x=4, y=4
2 | a,b,c,d,e,f,g,h,i,l,f | x=8, y=4
3 | a,b,c,d,e,f,g,j,k,l,f | x=4, y=8
[Table residue: input values Marks = 100, 99, 101, 49 with covered paths a,b,c,d,e,f,i,j,l,m; a,b,c,d,e,f,i,j,k,l,m; a,b,c,d,e,k,m; a,b,c,d,e,f,g,h,k,l,m, and their grouping into equivalence classes 1-3.]
Class | Condition | Path covered
A | w1 == 5, n >= 30 | a,b,d,e,l,m,n
B | w1 == 5, n < 30 | a,b,d,e,l,n
C | w1 > 5, n >= 30 | a,b,d,e,l,m,n
D | w1 > 5, n < 30 | a,b,d,e,l,n
E | w1 != 5, w1 < 5, w1+w2 >= 8, n >= 30 | a,b,d,g,k,l,m,n
F | w1 != 5, w1 < 5, w1+w2 >= 8, n < 30 | a,b,d,g,k,l,n
G | w1 != 5, w1 < 5, w1+w2 < 8, w1+w2+w3 >= 12, n >= 30 | a,b,d,g,i,j,l,m,n
H | w1 != 5, w1 < 5, w1+w2 < 8, w1+w2+w3 >= 12, n < 30 | a,b,d,g,i,j,l,n
I | w1 != 5, w1 < 5, w1+w2 < 8, w1+w2+w3 < 12, n >= 30 | a,b,d,g,i,l,m,n
J | w1 != 5, w1 < 5, w1+w2 < 8, w1+w2+w3 < 12, n < 30 | a,b,d,g,i,l,n
Class | Condition   | Path covered
A     | x==y        | a,b,c,d,e,f,m
B     | x!=y, x>y   | a,b,c,d,e,f,g,h,i,l,f
C     | x!=y, x<y   | a,b,c,d,e,f,g,j,k,l,f
path. Our method, in contrast, generates test data for all feasible linearly independent paths, and hence gives more accurate results than cyclomatic complexity.
We have described an approach for test data generation with lower computational cost. The main issue in test data generation with this method is how to take inputs for finding the paths. We tested our input test data algorithm with 5 different types of programs. Our experimental results show that this method can generate reliable test data at a lower cost, but it is not optimum. The disadvantage of our method is the reliability of the inputs while extracting the paths: since a program may have many infeasible paths, we cannot determine whether the number of equivalence classes covers the minimum number of tests to be covered, as cyclomatic complexity does. Our algorithm for solving path constraints is encouraging, because the number of equivalence classes is less than or equal to the number of paths. In future work we will research methods of input selection that cover every feasible path of a program. The method should be tested with more examples for accuracy, and its scalability should be improved by including more data types, such as dynamic data structures and strings.
f. while(x!=y){
g. if(x>y)
h. {
i. x=x-y; }
j. else
k. y=y-x;
l. }
m. gcd=x;
}
Sample Program-3
void main()
{
a. int marks;
b. char grade[4];
c. scanf("%d", &marks);
d. grade = "";
e. if(marks<=100)
f. if(marks<50)
g. {
h. grade="Fail"; }
i. else
j. grade="Pass";
k. if (grade!="")
l. UPDATE THE STUDENT RECORD WITH STUDENT ID;
m. printf("%s", grade);
}
Sample Program-4
int tritype (int a, int b, int c)
{
a. if (a > b)
b. swap (a , b);
c. if (a > c)
d. swap (a , c);
e. if (b > c)
f. swap (b , c);
g. if (a = = b)
h. if (b = = c)
i. type = EQUILATERAL;
j. else
k. type = ISOSCELES;
l. else if (b = = c)
m. type = ISOSCELES;
n. return type;
}
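The path-based equivalence classes tabulated earlier can be illustrated with a small instrumented version of the GCD sample program. This is a hypothetical sketch, not the authors' implementation: node labels follow the listing (f = loop test, g = branch test, h,i = the x>y branch, j,k = the else branch, l = loop end, m = gcd=x), and the straight-line prefix a-e is an assumption.

```python
# Hypothetical sketch (not the authors' code): record the node labels an
# execution covers, then group test inputs into equivalence classes by path.
def gcd_path(x, y):
    path = ["a", "b", "c", "d", "e"]       # assumed straight-line prefix
    while x != y:                          # node f (loop test)
        path += ["f", "g"]                 # node g: the x>y branch predicate
        if x > y:
            path += ["h", "i"]; x = x - y
        else:
            path += ["j", "k"]; y = y - x
        path.append("l")                   # node l closes the loop body
    path += ["f", "m"]                     # final (false) test of f, then m
    return path

def equivalence_classes(inputs):
    """Group inputs by the tuple of nodes their execution covers."""
    classes = {}
    for x, y in inputs:
        classes.setdefault(tuple(gcd_path(x, y)), []).append((x, y))
    return classes
```

For example, equivalence_classes([(4, 4), (8, 4), (4, 8), (10, 5), (5, 10), (10, 10)]) groups the six inputs into three path classes, matching the three equivalence classes in the table above.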
H. Tahbildar received his B.E. degree in Computer Science and Engineering from Jorhat Engineering College, Dibrugarh University, in 1993, and his M.Tech degree in Computer and Information Technology from the Indian Institute of Technology, Kharagpur, in 2000. Presently he is pursuing his PhD; his current research interests are automated software test data generation and program analysis. He is
Abstract:
This paper shows how to design an efficient quantum multiplexer circuit borrowed from classical computer design. The design is composed of Toffoli (C2NOT) gates and two-input CNOT gates. Every C2NOT gate is synthesized and optimized by applying the genetic algorithm to get the best possible combination for the design of these gate circuits.
The Toffoli gate is a three-input gate: the first two inputs are the control inputs and the third is the target input. The gate has a 3-bit input and output; if the first two bits are set, it flips the third bit. The following is a table of the input and output bits.
Keywords:
Qubit, Toffoli gate, Quantum Multiplexer Circuit, Circuit Synthesis, Quantum Half Adder Circuit, Genetic Algorithm.
1. INTRODUCTION
As the ever-shrinking transistor approaches atomic proportions, Moore's law must confront the small-scale granularity of the world: we can't build wires thinner than atoms. Theoretically, quantum computers could outperform their classical counterparts when solving certain discrete problems [15, 8].

The truth table of the Toffoli gate:

Input 1 | Input 2 | Input 3 | Output 1 | Output 2 | Output 3
0       | 0       | 0       | 0        | 0        | 0
0       | 0       | 1       | 0        | 0        | 1
0       | 1       | 0       | 0        | 1        | 0
0       | 1       | 1       | 0        | 1        | 1
1       | 0       | 0       | 1        | 0        | 0
1       | 0       | 1       | 1        | 0        | 1
1       | 1       | 0       | 1        | 1        | 1
1       | 1       | 1       | 1        | 1        | 0
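The truth table above can be captured in a few lines. This is a minimal sketch of the Toffoli (C2NOT) behaviour: first as a reversible Boolean function, then as the 8x8 permutation matrix it induces on the three-qubit computational basis |abc> (a taken as the most significant bit).

```python
# Minimal sketch of the Toffoli (C2NOT) gate.
def toffoli(a, b, c):
    """Flip the target bit c exactly when both control bits a, b are 1."""
    return a, b, c ^ (a & b)

# 8x8 permutation matrix over basis states |abc>, a = most significant bit.
TOFFOLI = [[0] * 8 for _ in range(8)]
for basis in range(8):
    a, b, c = (basis >> 2) & 1, (basis >> 1) & 1, basis & 1
    a2, b2, c2 = toffoli(a, b, c)
    TOFFOLI[(a2 << 2) | (b2 << 1) | c2][basis] = 1
```

The matrix is the identity except that it swaps the last two basis states |110> and |111>, exactly as the truth table shows.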
Input A | Input B | Sum | Cout
0       | 0       | 0   | 0
0       | 1       | 1   | 0
1       | 0       | 1   | 0
1       | 1       | 0   | 1
The sum output is the XOR of the two inputs, and the
carry output is the AND of the same two inputs. The
quantum equivalent is a 3-input, 3-output device with
the following truth table and equations:
Input C | Input B | Input A | Output K | Output S | Output A
0       | 0       | 0       | 0        | 0        | 0
0       | 0       | 1       | 0        | 1        | 1
0       | 1       | 0       | 0        | 1        | 0
0       | 1       | 1       | 1        | 0        | 1
1       | 0       | 0       | 1        | 0        | 0
1       | 0       | 1       | 1        | 1        | 1
1       | 1       | 0       | 1        | 1        | 0
1       | 1       | 1       | 0        | 0        | 1

S = XOR(A, B)    (EQ 1)
K = XOR(C, AND(A, B))    (EQ 2)
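EQ 1 and EQ 2 define a reversible three-bit function, which can be sketched directly (the output ordering (A, S, K) is an assumption for illustration):

```python
# The quantum half adder of EQ 1-2 as a reversible function on three bits:
# output A is passed through, S = XOR(A, B), K = XOR(C, AND(A, B)).
def quantum_half_adder(a, b, c):
    return a, a ^ b, c ^ (a & b)
```

With c = 0, S and K reduce to the classical half adder's sum and carry.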
Fig 2:
4. Proposed Design of 4*1 Quantum Multiplexer Circuit:
Our design, shown in Fig. 3 below, engages quantum circuits such as the quantum half adder together with two controlled-NOT gates, C2NOT and CNOT. The half adder [16] gives three output lines. The output line A is taken directly from the input; the second output line, S, is given by the expression S = XOR(A, B), and the third output line, K, is given by the expression K = XOR(C, AND(A, B)).
In a classical multiplexer circuit, depending on the selection lines, only one input moves to the output. Based on this classical logic we have designed the quantum multiplexer circuit. The first two inputs of every half adder are treated as the two selection lines, and the third line is one of the four inputs of the 4*1 multiplexer.
When S00=0 and S01=0, ket I0 is selected: this input moves to the third output line of the first QHA and finally to the final QMUL output.
When S10=0 and S11=1, ket I1 is selected: this input moves to the third output line of the second QHA and finally to the final QMUL output.
When S20=1 and S21=0, ket I2 is selected: this input moves to the third output line of the third QHA and finally to the final QMUL output.
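The classical routing behaviour that the quantum design mirrors can be sketched in one line (a sketch for illustration only; the quantum circuit realizes this logic reversibly):

```python
# Classical 4-to-1 multiplexer logic: two selection bits route exactly one
# of the four inputs to the output.
def mux4(inputs, s1, s0):
    """Select inputs[2*s1 + s0]; inputs is a sequence of four values."""
    return inputs[(s1 << 1) | s0]
```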
(Figure: decomposition of the circuit into stages S0-S6, using the gates V and V+.)
In the above decomposition, each circuit stage (four types of stage are present) has a fundamental gate with two inputs, while the circuit itself has three inputs; the remaining single input is handled by a unitary identity gate.
(Unitary matrices of the circuit stages S0 & S2, S1 & S3, S4 & S6 and S5, and of the gates V and V+, given as 8x8, 4x4 and 2x2 matrices over the computational basis.)
Input state: 0 0 0 0 1 0 0 0    Output: 0 0 0 0 0 0 1 0

chromosome[0]: S2 S3 S5 S6 S1 S0 S4
chromosome[1]: S4 S6 S1 S5 S2 S0 S3
chromosome[2]: S3 S5 S1 S0 S4 S2 S6
chromosome[3]: S6 S0 S3 S5 S1 S2 S4
chromosome[4]: S4 S0 S2 S6 S1 S5 S3
chromosome[5]: S3 S4 S6 S1 S0 S5 S2
chromosome[6]: S2 S3 S4 S5 S0 S1 S6

7. DESCRIPTION OF GENETIC ALGORITHM WITH EXAMPLE

1 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0
0 0 1 0 0 0 0 0
0 0 0 1 0 0 0 0
0 0 0 0 0 0 1 0
0 0 0 0 0 0 0 1
0 0 0 0 1 0 0 0
0 0 0 0 0 1 0 0
Example: The fitness value of chromosome[0] can be calculated as follows.
1 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0
0 0 1 0 0 0 0 0
0 0 0 0 0 1 0 0
0 0 0 1 0 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 1
0 0 0 0 0 0 1 0

Resultant Matrix
1 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0
0 0 1 0 0 0 0 0
0 0 0 1 0 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 0 1 0 0
0 0 0 0 0 0 0 1
0 0 0 0 0 0 1 0
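The fitness evaluation described in this example can be sketched as follows. This is a hypothetical sketch, not the authors' code: a chromosome is taken to be an ordering of stage matrices, and its fitness is the number of entries where the product of those matrices matches the required target matrix.

```python
# Hypothetical sketch of the matrix-based fitness of a chromosome.
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def fitness(stage_matrices, required):
    """Count matching entries between the stage product and the target."""
    product = stage_matrices[0]
    for m in stage_matrices[1:]:
        product = mat_mul(product, m)
    n = len(required)
    return sum(product[i][j] == required[i][j]
               for i in range(n) for j in range(n))
```

A chromosome whose stage product equals the required matrix scores the maximum n*n; the GA then selects and mutates orderings toward that score.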
(3 5) (2 6) (2 4) (3 5) (2 4) (0 5) (1 5)

chromosome[0]: S6 S5 S0 S1 S4 S3 S2
chromosome[1]: S6 S0 S2 S5 S1 S3 S4
chromosome[2]: S2 S4 S0 S1 S6 S5 S3
chromosome[3]: S2 S1 S5 S4 S3 S0 S6
chromosome[4]: S1 S3 S5 S2 S6 S0 S4
chromosome[5]: S1 S6 S0 S5 S2 S4 S3
chromosome[6]: S1 S0 S3 S4 S5 S6 S2

Required Matrix

chromosome[0]: S3 S4 S0 S1 S6 S5
chromosome[1]: S4 S3 S0 S2 S5 S1 S6
chromosome[2]: S3 S5 S1 S6 S2 S4 S0
chromosome[3]: S6 S0 S3 S5 S4 S2 S1
chromosome[4]: S4 S0 S2 S6 S1 S3 S5
chromosome[5]: S3 S4 S2 S5 S0 S1 S6
chromosome[6]: S2 S3 S4 S5 S6 S1 S0
CONCLUSION
(Fig. 3: the proposed 4*1 quantum multiplexer circuit, with selection lines |S0>, |S1>, inputs |I0>-|I3>, half adder outputs |K0>, |K1>, an ancilla |0>, and the final output O/p.)
9. REFERENCES
Abstract
Face detection plays an important role in many applications such as face recognition, face tracking, human-computer interfaces and video surveillance. In this paper we propose a hybrid face detection algorithm that can detect faces in color images with different complex backgrounds and lighting. Our method first detects face regions using a HAAR classifier over the entire image, generating candidates for the next classifier. The HAAR classifier usually detects all the faces in an image, but it also misclassifies some non-face objects as faces. So we first use the HAAR classifier to detect all possible face-candidate regions, and then we increase accuracy by using a simple feature-based method, the HSV color model, to eliminate the misclassified non-faces.
1. Introduction
Image is one of the main parts of human life, and a single image is more understandable than thousands of words; human vision is thus the most important sense. As we know, the human brain can detect and recognize different objects from the least known characteristics. Face detection is a sub-branch of object detection. The human face is a dynamic object with a high degree of variability in its appearance, which makes face detection a difficult problem in computer vision [3]. We can divide face processing into face detection, face recognition and face tracking. Face recognition and face tracking have been used in many applications such as wrongdoer recognition, security systems and control systems [4]. The accuracy of these processes is directly related to the accuracy of face detection, because face detection is their first stage; this makes face detection an important part of image processing. Face detection still has a long way to go, owing to the differences, variety and complexity of faces. Although many different algorithms exist to perform face detection, each has its own strengths and weaknesses [3]. Some use flesh tones, some use contours, and others are even more complex, involving templates, neural networks or filters.
2. HAAR Classifier
The main part of HAAR classifiers is the HAAR-like features. These features use the change in contrast values between adjacent rectangular groups of pixels instead of the pixel intensity values. HAAR features can easily be scaled by increasing or decreasing the size of the pixel group being examined, which allows the features to detect faces of various sizes. Notice Figure 1.
Figure 1: part (a) shows edge features, part (b) shows line
features and part (c) shows center features
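The second stage of the hybrid method, the HSV color check, can be sketched as follows. This is a hypothetical sketch, not the authors' code: a candidate region from the first-stage detector survives only if enough of its pixels fall inside an assumed skin-tone range in HSV space, and the thresholds below are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of HSV-based filtering of face candidates.
import colorsys

def skin_ratio(pixels_rgb):
    """Fraction of (r, g, b) pixels (0-255) inside the assumed skin range."""
    skin = 0
    for r, g, b in pixels_rgb:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        if h <= 50 / 360 and 0.20 <= s <= 0.70 and v >= 0.35:  # assumed range
            skin += 1
    return skin / len(pixels_rgb)

def filter_candidates(regions, threshold=0.5):
    """Keep candidate regions whose skin-pixel ratio passes the threshold."""
    return [region for region in regions if skin_ratio(region) >= threshold]
```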
Experimental results
We have evaluated our method on several face image databases containing different photo collections. The CMU face database usually contains grayscale images and is not useful for color-based algorithms, and other databases such as grimace, FERET, face94 and face95 only have single-face images, which makes them more appropriate for face recognition and for training our classifier. So we searched for a database containing colorful and complex images on which to run our program. We found a database named BaoDatabase, which has 221 images with face and non-face objects, making it a proper database for testing our hybrid algorithm. Our algorithm can detect multiple faces of different sizes and is suited to complex images. In Figure 6 we see the results of converting an image from RGB space to HSV space. Figure 7 shows the faces detected using the HSV color model. As we can see, it has detected all the faces, but it also detects 3 non-faces as human faces.
References
An Evolvable Clustering Based Algorithm to Learn Distance Function for Supervised Environment
Zeinab Khorshidpour, Sattar Hashemi, Ali Hamzeh
Dept. of Computer Science, Shiraz University, Iran
Abstract
This paper introduces a novel weight-based approach to learning a distance function: finding the weights that induce a clustering meeting the best objective function. Our method combines clustering and evolutionary algorithms for learning the weights of the distance function. Evolutionary algorithms are proven to be good techniques for finding optimal solutions in a large solution space and to be stable in the presence of noise. Experiments with UCI datasets show that employing an EA to learn the distance function improves the accuracy of the popular nearest neighbor classifier.
Keywords: distance function learning; evolutionary algorithm; clustering algorithm; nearest neighbor.
1 Introduction
Almost all learning tasks, like case-based reasoning [1], cluster analysis and nearest-neighbor classification, mainly depend on assessing the similarity between objects. Unfortunately, however, defining object similarity measures is a difficult and non-trivial task: such measures are often sensitive to irrelevant, redundant or noisy features. Many proposed methods attempt to reduce this sensitivity by parameterizing KNN's similarity function using feature weighting.
The idea behind feature weighting is that real-world applications involve many features, while the objective function depends on only a few of them. The presence of noisy objects or irrelevant features in a dataset degrades the performance of machine learning algorithms, as in the case of the k-nearest neighbor (KNN) algorithm; feature weighting techniques may therefore improve the algorithm's performance. This paper introduces a novel weight-based distance function learning method that finds the weights inducing a clustering that meets the best objective function.
In recent years, different approaches have been proposed for learning distance functions from training objects. Stein and Niggemann use a neural network approach to learn the weights of distance functions based on training objects [2]. Eick et al. introduce an approach to learn distance functions that maximize the clustering of objects belonging to the same class [3]: objects in a dataset are clustered with respect to a given distance function, and the local class density information of each cluster is then used by a weight-adjustment heuristic to modify the distance function. Another approach, introduced by Kira and Rendell and by Salzberg, relies on an interactive system architecture in which users are asked to rate a given similarity prediction, and Reinforcement Learning (RL) based techniques are then used to enhance the distance function based on the user feedback [4], [5]. Kononenko proposes an extension to the work of Kira and Rendell that updates attribute weights based on intra-cluster weights [6]. Bagherjeiran et al. propose a reinforcement learning algorithm that can incorporate feedback and past experience to guide the search toward better clusters [7]; they use an adaptive clustering environment that modifies the weights of a distance function based on feedback. The adaptive clustering
2 Clustering Algorithm
A clustering algorithm finds groups of objects in a predefined attribute space. Since the objects have no known prior class membership, clustering is an unsupervised learning process that optimizes some explicit or implicit criterion inherent to the data, such as the squared summed error [8]. The main objective of a clustering algorithm is to divide the data into different groups, called clusters, in such a way that the data within a cluster are closer to each other and data from different clusters are farther from each other. Distance criteria are the principal component of every clustering algorithm in approaching its objective. Euclidean distance is a commonly used distance measure, defined as follows:
d(x_i, x_j) = sqrt( Σ_{a=1}^{m} (x_{ia} − x_{ja})² ).    (1)

where x_i and x_j are two objects described by m attributes. The weighted Euclidean distance is:

d(x_i, x_j) = sqrt( Σ_{a=1}^{m} w_a (x_{ia} − x_{ja})² ).    (2)
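The weighted distance of Eq. (2) can be written directly:

```python
# Weighted Euclidean distance of Eq. (2): each squared attribute
# difference is scaled by its weight w_a before summing.
import math

def weighted_euclidean(x_i, x_j, w):
    return math.sqrt(sum(w_a * (a - b) ** 2
                         for w_a, a, b in zip(w, x_i, x_j)))
```

With all weights equal to 1 this reduces to the plain Euclidean distance of Eq. (1); setting a weight to 0 removes that attribute from the comparison.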
3.1 Chromosome Representation and Population Initialization
For any GA and ES, a chromosome representation is needed to describe each chromosome in the population. The representation method determines how a solution is represented in the search space, and it also determines the type of variation operators, such as crossover and mutation. Each chromosome is made up of a sequence of genes from a certain alphabet, which can consist of binary digits (0 and 1), floating-point numbers, integers, symbols (i.e., A, B, C, D), etc. It has been shown that more natural representations give more efficient and better solutions [11]. In this paper we use the ES algorithm because a
3.2 Mutation
In typical EAs, mutation is carried out by randomly changing the value of a single bit (with small probability) in the bit string. The mutation operator in ES may be applied independently to each objective variable by adding a normally distributed random variable with expectation zero and standard deviation σ, as shown in Equation 4. The strategy parameters are mutated using a multiplicative, logarithmic normally distributed process, as shown in Equation 3.

σ'_i = σ_i · exp(τ' · N(0,1) + τ · N_i(0,1)).    (3)
x'_i = x_i + σ'_i · N_i(0,1).    (4)
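Equations (3) and (4) can be sketched as follows. This is a minimal sketch under stated assumptions: tau and tau_prime are illustrative learning rates, not values from the paper.

```python
# Sketch of self-adaptive ES mutation, Eqs. (3)-(4): strategy parameters
# sigma are mutated by a multiplicative log-normal process, then each
# objective variable is perturbed by a Gaussian scaled with its new sigma.
import math
import random

def es_mutate(x, sigma, tau=0.1, tau_prime=0.05, rng=random):
    g = rng.gauss(0, 1)                       # one shared N(0,1) sample
    new_sigma = [s * math.exp(tau_prime * g + tau * rng.gauss(0, 1))
                 for s in sigma]              # Eq. (3)
    new_x = [xi + s * rng.gauss(0, 1)         # Eq. (4)
             for xi, s in zip(x, new_sigma)]
    return new_x, new_sigma
```

Because the step sizes are multiplied by an exponential, they stay strictly positive while being able to grow or shrink over generations.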
4 Combining Clustering Algorithms and ES for Distance Learning
In this section we give an overview of the CLES-DL approach. The key idea of our approach is to use clustering as a tool to evaluate and enhance the distance function with respect to an underlying class structure.
w_i = w_i / Σ_{l=1}^{n} w_l
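The weight normalization step above amounts to dividing each weight by the sum of all weights:

```python
# Normalize the learned weights so that they sum to one.
def normalize(weights):
    total = sum(weights)
    return [w / total for w in weights]
```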
4.1 Fitness Function
The fitness function is used to define a fitness value for each candidate solution. Our goal is to generate a chromosome with the best (max or min, depending on the problem) fitness value. In this problem, the fitness measure is the clustering quality provided by an algorithm, which is often hard and subjective to measure. We use an accuracy criterion as the fitness function to evaluate the results. Assume that the instances in D have already been classified into k classes c_1, c_2, ..., c_k, and consider a clustering algorithm that partitions D into k clusters cl_1, cl_2, ..., cl_k. We refer to a one-to-one mapping, F, from classes to clusters, such that each class c_i is mapped to the cluster cl_j = F(c_i). The classification error of the mapping is defined as:
E = Σ_{i=1}^{k} | c_i − F(c_i) |.    (6)
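Reading Eq. (6) as a count of misplaced instances, it can be sketched as follows (a hypothetical sketch; the set-difference interpretation is an assumption based on the surrounding definitions):

```python
# Sketch of the classification error of Eq. (6): for each class c_i, count
# its instances that do not fall in the mapped cluster F(c_i).
def classification_error(classes, clusters, F):
    """classes, clusters: dicts label -> set of instance ids;
    F: one-to-one mapping from class labels to cluster labels."""
    return sum(len(classes[c] - clusters[F[c]]) for c in classes)
```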
5 Experimental Methodology
In order to measure the performance of the proposed algorithms, we employed the 10-fold cross-validation methodology. Cross-validation randomly divides the available dataset into k mutually exclusive subsets (folds) of approximately equal size, and uses the instances of the k−1 subsets as the training data and the instances of the remaining subset as the test data. The training dataset is used for determining the classifier's parameters, and the test dataset is used to compute the prediction error. The experiment is carried out k times to compute the prediction error over the whole dataset. This method is called k-fold cross-validation. In our experiments we set k=10, which is the most commonly used value.
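The protocol just described can be sketched in a few lines:

```python
# Minimal sketch of k-fold cross-validation: shuffle the indices, split
# them into k folds, and yield (train, test) index splits.
import random

def kfold_indices(n, k=10, seed=0):
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test
```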
5.1 Experimental Setting
Our experiments were run using a standard evolutionary strategy with tournament selection. The reported results are based on 10-fold cross-validation for each classification task with the following parameter setting:
Population size: 50
Number of generations: 50
Mutation rate: 0.01
The selection process is ( ).
5.2 Description of Datasets
The experimental procedure is to apply the algorithm to several real-world datasets from the UCI machine learning repository [14]. Most of these datasets have been the subject of empirical evaluation by other researchers, which gives the chance of comparing results among different studies. A summary of the characteristics of these datasets is shown in Table 1.
6 Experimental Results
To obtain the best distance function, we run the program 10 times independently. In each run, the ES algorithm evolves its population for 50 generations and saves the chromosome with the best fitness value. It is worth mentioning that the result of the K-means clustering algorithm based on the represented chromosome is used as the fitness value for the evolutionary process.
The whole process of achieving results is depicted in Figure 1. Table 2 shows the results achieved by the algorithm applied to the UCI datasets. The CLES-DL method achieves good performance on all datasets. For each dataset, the accuracy of the method with the highest mean accuracy is marked in boldface. The mean accuracy over all UCI datasets using the proposed algorithm is 73.87%, whereas the original K-means reaches
Data set      | K-means_CLES-DL | K-means
Glass         | 61.17 ± 0.15    | 55.61
Ionosphere    | 83.32 ± 0.34    | 72.15
Diabetes      | 74.96 ± 0.18    | 66.02
Sonar         | 70.48 ± 0.16    | 56.25
Vehicle       | 57.23 ± 0.16    | 52.23 ± 0.03
Wine          | 82.47 ± 0.61    | 70.23
Breast-cancer | 64.71           | 63.31 ± 0.21
Iris          | 96.67           | 89.33
Data set      | 1-NN_CLES-DL   | 1-NN
Glass         | 70.52 ± 1.096  | 72.84 ± 0.83
Ionosphere    | 88.90 ± 0.53   | 78.46 ± 0.91
Diabetes      | 70.58 ± 0.39   | 67.30 ± 0.47
Sonar         | 85.071 ± 0.53  | 82.67 ± 0.65
Vehicle       | 57.67 ± 1.22   | 63.67 ± 0.79
Wine          | 91.50 ± 0.62   | 75.10 ± 1.19
Breast-cancer | 61.39 ± 0.77   | 61.10 ± 0.60
Iris          | 95.33 ± 0.25   | 95.31 ± 0.32
Table 4: Classification accuracy (and standard deviation) on 5 datasets selected from the UCI data repository (Versions (1)-(4): Bagherjeiran's method)

Data set   | 1NN_CLES-DL   | Version_(1)  | Version_(2)  | Version_(3)  | Version_(4)  | LWINN
Glass      | 72.39 ± 1.096 | 68.89 ± 2.00 | 72.20 ± 1.89 | 72.01 ± 2.56 | 76.26 ± 2.18 | 69.95
Ionosphere | 88.91 ± 0.53  | 87.24 ± 0.88 | 88.24 ± 0.88 | 87.21 ± 0.91 | 88.52 ± 1.12 | 91.73
Diabetes   | 71.85 ± 0.39  | 69.29 ± 0.97 | 69.75 ± 1.18 | 70.12 ± 1.11 | 69.91 ± 1.28 | 68.89
Sonar      | 87.57 ± 0.53  | 85.79 ± 1.93 | 86.12 ± 1.85 | 86.07 ± 1.48 | 86.22 ± 1.05 | 86.05
Vehicle    | 70.88 ± 1.22  | 69.14 ± 0.89 | 68.83 ± 1.09 | 68.59 ± 1.25 | 70.55 ± 1.12 | 69.86
Averages   | 78.32         | 76.07        | 77.02        | 76.80        | 78.29        | 77.29

7 Conclusions
We present an evolutionary method for learning a distance function with the main objective of increasing the predictive accuracy of the KNN classifier. The method is a combination of a clustering algorithm and ES, in order to find the best weight of each attribute to be used in a KNN classifier. The proposed method overcomes a disadvantage of the KNN algorithm, i.e., its sensitivity to the presence of irrelevant features. As future work we plan to extend the current work so that it can deal with noise as well.

Acknowledgements
This work was supported by the Iran Tele Communication Research Center.

References
Abstract
The feature oriented method with business component semantics (FORM/BCS) is an extension of the feature oriented reuse method (FORM) developed at Pohang University of Science and Technology in South Korea. It consists of two engineering processes: a horizontal engineering process driven by the FORM domain engineering process and a vertical engineering process driven by the FORM application engineering process. This paper investigates the horizontal engineering process, which consists of analyzing a product line and developing reusable architectures, and shows that this process can be systematized through a set of maps that describe how one can systematically and rigorously derive the fundamental business architectures of a product line from the feature model of that domain. The main result of the paper is therefore that the formalization of the assets of FORM/BCS enables a clear definition of how an activity of the horizontal engineering process produces a target asset from an input one. This result opens the door for the development of a tool supporting the method.
Keywords: Product Line Engineering, Feature Orientation, Domain Analysis, Business Components, Reuse, Formal Method.
1. Introduction
FORM with Business Component Semantics [1], which is an extension of FORM [2, 3, 4, 5, and 14], is a feature-oriented product line engineering method. It has two processes: a horizontal engineering process driven by the FORM domain engineering process and a vertical engineering process driven by the FORM application engineering process. These two processes correspond respectively to the engineering-for-reuse and the engineering-by-reuse approaches presented in [6, 7, and 8].
2.
(Figure: The FORM/BCS engineering processes. The horizontal process comprises Domain Analysis, Subsystem Architecture Business Component Design, Process Architecture Business Component Design and Module Business Component Development, each feeding a reusable business component database: feature, subsystem architecture, process architecture and module architecture business components. The vertical process comprises specific requirements analysis and specific subsystem, process and module architecture design, reusing selected business components from those databases. Legend: database, activity, component storage, component reuse, domain refinement.)
ReusableBusinessComponent ==
[name: Text;
descriptor: Descriptor;
realization: Realization]
3.1.1 Descriptors
(analyze)ACTION(civil system)TARGET
(describe)ACTION(civil application)TARGET
servant management
servant recruitment

BusinessActivity == [common: BusinessActivity;
optional: BusinessActivity;
variabilities: BusinessActivity]
3.1.2 Realizations
The realization section of a reusable component provides a solution to the modeling problem expressed in the descriptor section of the component. It is a conceptual diagram or a fragment of an engineering method expressed in the form of a system decomposition, an activity organization or an object description. The goals, the activities and the objects figuring in the realization section concern the application field (product fragment) or the engineering process (process fragment).
The solution, which is the reusable part of the component, provides a product or a process fragment. The type of solution depends on the type of reusable business component, i.e., the solution of a feature business component (respectively a reference business component) is a feature (respectively a reference business architecture).
(fbc2) decomposition(solution(realization(fbc)))
activity(g) = action(domain(p))
objects(g) = target(domain(p))

SubSystemBusinessComponent ==
[name: Name;
descriptor: Descriptor;
realization: Realization]

ssbc: SubSystemBusinessComponent,
( solution(realization(ssbc)) SubsystemArchitecture
adaptation_points(realization(ssbc)) (SubSystem SubSystem) )

SubsystemArchitecture ==
[subsystems: SubSystem;
links: (Subsystem SubSystem) ]

SubSystem = Feature

objects(f) = target(domain(p))
(sbc2) Any link in the solution of the realization
ProcessArchitecture == [tasks: BusinessActivity;
  datas: Class;
  dataaccess: [name: Name; access: BusinessActivity × Class];
  messages: [name: Name; call: (BusinessActivity ∪ {null}) × (BusinessActivity ∪ {null})]]
decomposition(action(domain(p))) ∧ (decomposition(a) ∩ operations(d) ≠ ∅),
where decomposition(a) is written for common(a) ∪ optional(a) ∪ (∪ variabilities(a)).
(pbc2) Messages are sent only between tasks having common actions:
∀ p ∈ process(context(descriptor(pbc))), (((t1,
∀ f: Feature, ∀ g: Feature,
((f ∈ F ∧ g ∈ F) ⇒ (∃ h ∈ F, (objects(f) ∩ objects(h) ≠ ∅) ∧ (objects(g) ∩ objects(h) ≠ ∅))).
- SSA.links(fbc) = {(F, G) ∈ SSA.subsystems(fbc) × SSA.subsystems(fbc) |
  ∃ (f, g) ∈ F × G, decomposition(f) ∩ decomposition(g) ≠ ∅}
- SSA.adaptation_points(fbc) = {(ss, subsystemrealizations(ss)) | ss ∈ SSA.subsystems(fbc) ∧
  ss ∩ adaptation_points(realization(fbc)) ≠ ∅}
where subsystemrealizations(ss) = {ss′: Subsystem | ∀ f ∈ ss, ∃ g ∈ ss′, g ∈ featurerealizations(f)
  ∧ ∀ g ∈ ss′, ∃ f ∈ ss, g ∈ featurerealizations(f)}
and featurerealizations(f) = {g: Feature | common(f) ⊆ common(g)
  ∧ (∀ V ∈ variabilities(f), ∃ h ∈ common(g), h ∈ V) ∧ optional(g) ⊆ optional(f)}
Lemma 1: If fbc is a well-formed feature business component, then SSA(fbc) is a well-formed subsystem business component.
(ii) If f ∈ decomposition(solution(realization(fbc))) and f ∈ adaptation_points(fbc), then ∃ ss ∈ SSA.subsystems(fbc) such that f ∈ ss and ss ∈ SSA.adaptation_points(fbc).
Proof: The proof of (i) is immediate, since SSA(fbc) has the same context as fbc and SSA(fbc) is a well-formed subsystem business component whenever fbc is a well-formed feature business component (Lemma 1). The proof of (ii) follows directly from the definition of SSA.adaptation_points.
Example: Let us consider the following skeleton of a feature business component, hereafter referred to as FM/IMCS.
Name : Functional Model of the Integrated Management of Civil Servants and Salaries in Cameroon
Intention : (Define)ACTION((manage)ACTION(civil servants and salaries)TARGET)TARGET
Context :
Domain : f = (manage)ACTION(careers, payroll, training, network, mail, system)TARGET
Business processes :
f1 = (manage)ACTION(careers)TARGET
f2 = (manage)ACTION(payroll)TARGET
f3 = (manage)ACTION(training)TARGET
f4 = (manage)ACTION(inter-ministerial network)TARGET
f5 = (administer)ACTION(the system)TARGET
f6 = (manage)ACTION(mail)TARGET
/* sub-processes of f1 */
f11 = (manage)ACTION(recruitment)TARGET
f12 = (manage)ACTION(promotion)TARGET
f13 = (transfer)ACTION(file)TARGET
/* sub-processes of f2 */
f21 = (transfer)ACTION(file)TARGET
f22 = (calculate)ACTION(salaries)TARGET
f23 = (manage)ACTION(users)TARGET
f24 = (manage)ACTION(profiles, users)TARGET
/* sub-processes of f4 */
f41 = (use)ACTION(optic fiber network)TARGET
f42 = (use)ACTION(radio network)TARGET
f43 = (use)ACTION(twisted pair network)TARGET
/* sub-processes of f5 */
f51 = (manage)ACTION(users)TARGET
f52 = (manage)ACTION(profiles, users)TARGET
f53 = (manage)ACTION(connexions, users)TARGET
f54 = (manage)ACTION(the audit track, users)TARGET
/* sub-processes of f11 */
f111 = (integrate)ACTION(civil servant)TARGET (if he has passed a competitive examination or he has a diploma giving right to integration)PRECISION
f112 = (sign recruitment order)ACTION(civil servant)TARGET (if the prime minister's office has authorized)PRECISION
etc.
Solution :
[Figure: feature decomposition tree of f into f1-f6, with sub-features f11-f13 (and f111-f113) under f1, f41-f43 under f4, and f51-f54 under f5]
Solution:
Subsystems:
SS1 = {f1}
SS2 = {f2}
SS3 = {f3}
SS4 = {f4}
SS5 = {f5}
SS6 = {f6}
Links:
decomposition(action(domain(q))) ∧ (decomposition(a) ∩ operations(d) ≠ ∅),
where decomposition(a) is written for common(a) ∪ optional(a) ∪ (∪ variabilities(a)).
Based on (1), a ∈ decomposition(action(domain(q))) ⇒ a ∈ decomposition(action(domain(p))).
Solution :
tasks: decomposition(action(f1))
datas: target(f1)
dataaccess: {(t, c) ∈ decomposition(action(f1)) × target(f1) | decomposition(t) ∩ operations(c) ≠ ∅}
messages: {(t1, t2) ∈ decomposition(action(f1)) × decomposition(action(f1)) | decomposition(t1) ∩ decomposition(t2) ≠ ∅}
Adaptation points :
{(t1, A) | t1 ∈ pa.tasks(f1) ∧ A = {t2: BusinessActivity | common(t1) ⊆ common(t2)
  ∧ (∀ V ∈ variabilities(t1), ∃ g ∈ common(t2), g ∈ V)
  ∧ optional(t1) ⊆ optional(t2)} ∧ #A > 1}
End.
agreement)PRECISION
/* sub process of the process f11 */
(initiate)ACTION ({Deed, Servant })TARGET
(validate)ACTION ({Deed})TARGET
(visa)ACTION ({Deed})TARGET
(sign)ACTION ({Deed})TARGET
(modify)ACTION ({Deed})TARGET
(delete)ACTION ({Deed})TARGET
(removevalidation)ACTION ({Deed})TARGET
Solution :
pseudonym: recruit;
parameters: {candidates, requests, competitive examinations, civil servants, deeds};
task: <{initiate, validate, visa, sign}, {modify, delete, removevalidation}, {}>;
included: {m: Module | task(m) ∈ decomposition(action(domain(recruit))) ∧ specification(m) ≠ ∅};
external: {m: Module | task(m) ∉ decomposition(action(domain(recruit))) ∧ specification(m) = ∅};
specification: {specification(m) | m ∈ Ma.included(recruit) ∪ Ma.required(recruit)}
Adaptation Points :
{(m1, A) | m1 ∈ Ma.included(recruit) ∧ A = {m2: Module | common(task(m1)) ⊆ common(task(m2))
  ∧ (∀ V ∈ variabilities(task(m1)), ∃ g ∈ common(task(m2)), g ∈ V)
  ∧ optional(task(m1)) ⊆ optional(task(m2))} ∧ #A > 1}
End.
5. Related works
The scientific community has shown considerable interest in feature-oriented approaches to product line engineering. A recent and exhaustive overview of feature-oriented development by Sven Apel and Christian Kästner [15] points to connections between different lines of research and identifies open issues along the phases of the feature-oriented development process: domain analysis, domain design and specification, domain implementation, product configuration and generation, and feature-oriented development theory. The issues addressed in this paper mainly concern domain analysis and domain design and specification. Concerning domain analysis, according to Apel and Kästner, the main challenge is to reconcile the two field uses of feature models: they support communication between stakeholders on the one hand and the automation of the development process on the other. On this issue, the formalism used in this work has a simple and intuitive syntax which enables modeling domains in a natural way. At the same time, the Z notation, which gives formal semantics to our business feature models, provides a framework for a rigorous analysis of the method and opens the door, through the given mapping rules, to a possible automation of the development process.
Another major line of research aims at enriching feature models with additional information that can be used to guide the configuration and generation process. But the more information the model exposes, the more complex it becomes. To avoid this complexity, additional information is not added to the feature model; instead, a new model called the feature business component model is proposed. This new model has two parts:
6. Conclusion
References
Abstract
In this study, the focus is on the use of a ternary tree instead of a binary tree. A new two-pass algorithm for encoding Huffman ternary-tree codes is implemented, which determines the codeword length of each symbol, using the concept of Huffman encoding. Huffman encoding is a two-pass problem: the first pass collects the letter frequencies, which are then used to create the Huffman tree. Note that char values range from -128 to 127, so they must be cast; the data are stored as unsigned chars, giving a range of 0 to 255. The frequency table is written to the output file; characters are then read from the input file, their codes looked up, and the encoding written to the output file. Once a Huffman code has been generated, data may be encoded simply by replacing each symbol with its code. To reduce the memory size and speed up finding the codeword length of a symbol in a Huffman tree, we propose a memory-efficient data structure to represent the codeword lengths of the Huffman ternary tree.
Keywords:
1. Introduction
A ternary tree [12], or 3-ary tree, is a tree in which each node has either 0 or 3 children (labeled LEFT child, MID child, RIGHT child). Here, for constructing codes for the ternary Huffman tree, we use 00 for the left child, 01 for the mid child and 11 for the right child.
Generation of Huffman codes for a set of symbols is based on the probability of occurrence of the source symbols. Typically, the construction of a ternary tree describes the process this way:
Labeling the left edge by 00, the mid edge by 01 and the right edge by 11 satisfies the prefix property:
G: 00
I: 0100
C: 0101
F: 0111
D: 1101
A: 1111
E: 110000
B: 110001
H: 110011
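The combine-three-smallest construction can be sketched in Python using the frequencies from Table I (S1-S7). The left/mid/right edge labels 00/01/11 follow the text; the heap tie-breaking and the zero-frequency dummy padding are implementation choices of this sketch, and the exact bit patterns it produces may differ from those listed above while yielding the same codeword lengths.

```python
import heapq
from itertools import count

def ternary_huffman(freqs):
    # Pad with zero-frequency dummy symbols so the symbol count is odd,
    # which guarantees every internal node has exactly three children.
    items = list(freqs.items())
    while len(items) % 2 == 0:
        items.append((None, 0))
    tick = count()  # unique tie-breaker so heapq never compares subtrees
    heap = [(f, next(tick), s) for s, f in items]
    heapq.heapify(heap)
    while len(heap) > 1:
        a, b, c = (heapq.heappop(heap) for _ in range(3))
        merged = (a[2], b[2], c[2])  # combine the three smallest weights
        heapq.heappush(heap, (a[0] + b[0] + c[0], next(tick), merged))
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):  # internal node: label edges 00 / 01 / 11
            for child, label in zip(node, ("00", "01", "11")):
                walk(child, prefix + label)
        elif node is not None:       # skip dummy padding symbols
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

codes = ternary_huffman({"S1": 48, "S2": 31, "S3": 7, "S4": 6,
                         "S5": 5, "S6": 2, "S7": 1})
```

With these frequencies the first merge produces A1 = 1 + 2 + 5 = 8 and the second A2 = 6 + 7 + 8 = 21, matching Table I.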
This code has the "prefix property", i.e. the code of any item is not an initial substring of the code of any other item.
Ex: H = (S, P)
S = {S1, S2, ..., Sn}
P = {P1, P2, ..., Pn}
where Pi = number of occurrences of Si

Table I: symbol frequencies and the first two merge steps

Symbol  Freq  |  Step 1   |  Step 2
S1      48    |  S1  48   |  S1  48
S2      31    |  S2  31   |  S2  31
S3      7     |  A1  8    |  A2  21
S4      6     |  S3  7    |
S5      5     |  S4  6    |
S6      2     |           |
S7      1     |           |
For a given source listing H, the table of codeword lengths uniquely groups the symbols into blocks, where each block is specified by its codeword length (CL).
SN-2 + SN-1 + SN = A1, where A1 is inserted at the proper location.
4. Repeat steps 2 & 3 until the last three symbols remain.
Table I shows the symbols used in the Huffman tree.
5. Now construct the table of CL-Recording.
6. The first column of the CL-Recording table is represented by Si, where
Si = (entries of the last row of every column) 0
4. Conclusions
Acknowledgements
The author Madhu Goel would like to thank Kurukshetra University, Kurukshetra, for providing a University Research Scholarship, and acknowledges the support of Kurukshetra Institute of Technology & Management (KITM).
References
Dr. Pushpa Suri is a reader in the department of computer
science and applications at Kurukshetra University
Haryana India. She has supervised a number of Ph.D.
Research Scholar for Ph.D. Course, Civil and Environmental Engineering Department, VJTI, Mumbai, India
2
Professor, FCRIMS, University of Mumbai, India and Researcher, Carleton University, Canada
Abstract
The Ready Mix Concrete (RMC) industry in India is growing continuously. This industry is exposed to risks from internal as well as external sources. It is important to address these risks so that the industry gains credibility and the confidence of its customers, and achieves the expected profit margins. The proposed paper presents a risk quantification approach for risks in RMC plants in India using Expected Monetary Value (EMV) analysis. It is developed using guidelines available in the risk management literature. It is a simple but effective approach for the quantification of risks, and it helps to achieve the objectives of the RMC business in terms of the cost of production and supply of RMC. Once the risks are quantified, quantitative assessment can be carried out to decide upon the appropriate response strategies for treating the risk-related issues effectively. This approach has been checked for practicability in RMC organizations.
Keywords: RMC, Risk quantification, EMV analysis, Response strategies.
1. Introduction
IS 4926-2003 (Bureau of Indian Standards, 2003) defines Ready Mix Concrete (RMC) as "concrete delivered at site or into the purchaser's vehicle in the plastic condition and requiring no further treatment before being placed in a position in which it is to be set and hardened". Ready mix concrete is preferred over site-mixed concrete because it is environmentally friendly. It is a solution to the messy and time-consuming manufacturing of concrete at construction sites. It offers solutions to customers' specific problems, ensures customer satisfaction and provides uniform quality. It also eliminates the need to store materials used to manufacture concrete at project sites. Currently, RMC is a mature industry in both Europe and the USA. The data from the National Ready Mix Association (March 2007) indicate that RMC
Table 1: Identification, categorization and classification of risks in RMC plants in India

No. | Category | Risks | Classification (Internal / External)
1. | Political Risks | Change in Govt. and Govt. policies; War, riots etc.; Interference of local politicos
2. | Legal/Contractual Risks |
3. | Environmental Risks | Water pollution; Noise pollution; Soil pollution; Environmental litigation; Depletion of natural resources
4. | Financial Risks | Ineffective control over wastage; Inflation
5. | Physical Risks | Disease/Epidemic; Fire; Terrorism; Natural disaster; Extreme weather conditions (cold/hot); Damage to machinery due to flood
6. | Regulatory Risks |
Table 1 (contd.)
7. | Operational Risks | Confined spaces
8. | Quality Risks | Inaccuracy in statistical adjustments; Risks of drying and loss of workability of concrete
9. | Procurement and Storage Risks | Transport strike; Difficulties in importing equipment; Theft at site
Table 1 (contd.)
9. | Procurement and Storage Risks | Transport strike; Difficulties in importing equipment; Improper storage system (dampness, no ventilation); Theft at site
10. | Safety Risks |
11. | Occupational and Health Risks | Chemical burns; Over-exertion; Ergonomics; Occupational hazards faced by truck drivers; Injuries at site
12. | Market Risks | Competition in market; Wrong assessment of market potential and demand estimation
13. | Social Risks | Cultural differences; Problems created by nearby residents; Public outcry with regard to activities like quarrying near plant etc.
14. | Wastage Risks | Discharge of concrete on various works; Inappropriate disposal of sludge; Inappropriate sewage treatment and disposal; Wastage within organization
15. | Labor Related Risks | Non-productivity / performance of laborers; Non-availability of local labour; High labor turnover; Problems by labour union
16. | Organizational Risks | Performance risks
Table 1 (contd.)
17. | Risks related to repairs and maintenance of plant

Risk rating matrix (probability of occurrence vs. consequences):

Probability     | Insignificant | Minor       | Moderate    | Major       | Catastrophic
Rare            | Very Low      | Low         | Low         | Significant | Significant
Unlikely        | Very Low      | Low         | Low         | Significant | High
Possible        | Very Low      | Low         | Significant | Significant | High
Likely          | Low           | Significant | Significant | High        | High
Almost certain  | Low           | Significant | Significant | High        | High

Identified and classified risks are rated (R1-R4), e.g. B) Contractual risk: probability Likely, consequences Insignificant.
Cost consequences (in terms of average monthly production and supply cost of RMC; specify):
0-1% | 1%-2% | 3%-5% | 5%-10% | 10% and above
Probability
% Decrease in cost consequence
5. Conclusion
The proposed paper presents a risk quantification approach for internal as well as external risks in RMC plants. The information was gathered through interviews and discussions with teams of key personnel working in RMC plants in Mumbai, Navi Mumbai, Noida, Bangalore and Pune in India. A checklist of risks was generated as an outcome of this study. Subjective ratings for both probability of occurrence and consequences were also gathered from the same teams to screen the risks having a substantial influence on the objectives of a company running RMC plants.
The risk quantification approach proposed in this paper, using EMV, is a simple and effective tool to quantify risks in terms of cost. The system can be used to find:
- the range within which an individual risk (in terms of cost) may vary,
- the range within which the total risk in an RMC plant (in terms of cost) may vary,
- the expected monetary value of the risks in an RMC plant.
The approach can be used by RMC plant owners for deciding upon risk response strategies. It can be used for decision making at the start of every RMC manufacturing and supply contract. It helps in identifying the high-risk areas which need to be controlled and monitored to achieve the objectives of the RMC business in terms of cost. This approach can be adapted for incorporation into a computer-aided decision support system, provided precise data are made available.
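The EMV computation underlying the approach can be sketched in a few lines: the EMV of a risk is its probability of occurrence multiplied by its cost consequence, and the total exposure is the sum over all identified risks. The probabilities and impacts below are invented for illustration and are not data from the study.

```python
# Expected Monetary Value (EMV) of each risk = probability of occurrence
# multiplied by its cost consequence. Figures are illustrative assumptions,
# with impacts expressed as a percentage of the average monthly production
# and supply cost of RMC.
risks = {
    "transport strike": (0.20, 4.0),   # (probability, cost impact in %)
    "inflation":        (0.60, 1.5),
    "theft at site":    (0.10, 0.5),
}

per_risk = {name: p * c for name, (p, c) in risks.items()}
total_emv = sum(per_risk.values())              # expected overall exposure
worst_case = sum(c for _, c in risks.values())  # if every risk materialises
```

The gap between `total_emv` and `worst_case` gives the range within which the total risk may vary, which is the kind of output the conclusion describes.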
References
[1] D. B. Ashley, "Construction Project Risks: Mitigation and Management", Proc. PMI/Internet Joint Symp., Project Management Institute, Drexel Hill, Pa., 1981, pp. 331-340.
[2] A. Ahmed, K. Berman and A. Sataporn, "A review of techniques for risk management in projects", Benchmarking: An International Journal, Vol. 14, No. 1, 2007, pp. 22-36.
[3] A. Akintoye, "Project risk management practice: The case of a South African utility company", International Journal of Project Management, Vol. 26, 2008, pp. 149-163.
[4] D. M. Boodman, "Risk Management and Risk Management Science: An Overview", paper presented at the Session on Risk Management, TIMS 23rd Annual Meeting of the Institute of Management Sciences, Greece, 1977.
First Author
R. C. Walke holds a bachelor's degree in Civil Engineering and a master's in Engineering Construction and Management. He is pursuing a Ph.D. in the area of Risk Management. He worked in the civil engineering industry for seven years before moving to academics. At present he is working as an Assistant Professor at Fr. C. Rodrigues Institute of Management Studies, University of Mumbai, India. His areas of interest are Project Management, Risk Management and Supply Chain Management.
Second Author
Prof. Dr. Vinay Topkar is the Head of the Civil and Environmental Engineering Department at V.J.T.I., University of Mumbai, India. He has worked as Dean (Academics) at V.J.T.I., and his areas of research interest are Project Management and Decision Sciences.
Third Author
Dr. Sajal Kabiraj is a Professor of Operations and Supply
Chain Management in FCRIMS, University of Mumbai,
India. Sajal is a Post Doctoral Fellow, PhD, MS, MBA,
Bachelors in Chemical Engineering. He obtained his PhD
from Indian Institute of Information Technology &
Management, Gwalior, India (An Apex Govt of India
Institute) and Masters in International Logistics and
Supply Chain Management from Jonkoping International
Business School, Sweden. He is the recipient of Best
Teacher Award 2008 for teaching and research
excellence amongst all universities and business schools in
India. He has published numerous papers in refereed
international journals and has taught at the post graduate
and doctoral levels in universities in Sweden, Germany,
Austria, Malaysia, Canada, China, UAE and India. He is
also a registered PhD supervisor and examiner. His areas
of research interest are Business Analytics, CRM and
International Business.
Abstract
The paper presents automatic translation of noun phrases from Punjabi to English using the transfer approach. The system has analysis, translation and synthesis components. The steps involved are preprocessing, tagging, ambiguity resolution, translation and synthesis of words in the target language. The accuracy is calculated for each step, and the overall accuracy of the system is about 85% for a particular type of noun phrase.
Keywords: Tagger, Ambiguity resolver, Transliteration
1 Introduction
Machine Translation (MT), also known as automatic translation or mechanical translation, is the name for computerized methods that automate all or part of the process of translating from one human language to another [2]. Machine translation is the need of the hour: it helps in bridging the digital divide and is an important technology for globalization. The mechanization of translation has been one of humanity's oldest dreams. The present work converts noun phrases from Punjabi to English.
2 Approach followed
The transfer architecture translates not only at the lexical level, like the direct architecture, but also at the syntactic and sometimes the semantic level. The transfer method first parses the sentence of the source language. It then applies rules that map the grammatical segments of the source sentence to a representation in the target language. The rules used for the structural transformation of phrases and for solving the ambiguity problem are all stored in the database. The indirect approach first divides a phrase into words, tags each word using the morph database, resolves ambiguity, translates
sRI rmysL cwvlw (shri ramesh chawla), srdwr hrpRIq isMG (sardar harpreet singh). These named entities will then be sent to the transliteration module.
3.2 Tokenization
The output of the preprocessor is then sent to the tokenizer, which divides the given phrase, on the basis of the spaces between words, into constituents called tokens; these are then passed to the further phases.
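The tokenize-tag-translate-synthesize flow described above can be sketched as follows. The romanized lexicon entries and the single genitive reordering rule are illustrative assumptions, not the system's actual morph database or rule base.

```python
# Minimal sketch of the tokenize -> tag -> translate -> synthesize pipeline.
# The lexicon (romanized Punjabi) and the one reordering rule are invented
# stand-ins for the morph database, dictionary and English rule base.
LEXICON = {
    "saare": ("QUANT", "all"),
    "desh":  ("NOUN",  "country"),
    "de":    ("CASE",  "of"),
    "jawan": ("NOUN",  "soldiers"),
}

def tokenize(phrase):
    return phrase.split()  # space-based tokenization

def tag_and_translate(tokens):
    return [LEXICON[t] for t in tokens]  # (tag, English word) pairs

def synthesize(tagged):
    # Toy transfer rule: Punjabi "X de Y" (genitive) -> English "Y of X".
    words = [w for _, w in tagged]
    if "of" in words:
        i = words.index("of")
        words = words[:i - 1] + words[i + 1:] + ["of", words[i - 1]]
    return " ".join(words)

result = synthesize(tag_and_translate(tokenize("saare desh de jawan")))
```

On the sample phrase this reorders the genitive construction into "all soldiers of country", matching the example given later in the paper.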
3.7 Synthesis
After getting the English equivalent of each word in the Punjabi phrase, it must be synthesized into an English phrase. Since the order of occurrence of words in the target language differs from that in the source language, and the approach used for synthesis is the indirect approach, certain rules have been built to synthesize the phrases in the target language. These rules of the language are also stored in the rule base of English.
[Figure: system architecture. The Analysis Component (pre-processor, morph analyzer, tagger, with the morph database and the rule base of Punjabi) feeds the Translation Component (transliteration or translation of words, using the Punjabi-English dictionary), whose output the Generation Component (synthesizer, using the rule base of English) turns into the English noun phrase.]
6 Example
swry dysL dy jvwn — all soldiers of country
References
[1] R. M. K. Sinha and Ajay Jain, "AnglaHindi: An English to Hindi Machine Translation System", MT Summit IX, New Orleans, USA, Sept. 23-27, 2003.
[2] S. Dave, J. Parikh and P. Bhattacharyya, "Interlingua-based English-Hindi Machine Translation and Language Divergence", Machine Translation 16(4) (2001) 251-304.
[15] R. Jain, R. M. K. Sinha and A. Jain, "Translation between English and Indian Languages", Journal of Computer Science and Informatics, March 1997, pp. 19-25.
[16] R. M. K. Sinha and others, "ANGLABHARTI: A Multilingual Machine Aided Translation Project on Translation from English to Hindi", IEEE International Conference on Systems, Man and Cybernetics, Vancouver, Canada, 1995, pp. 1609-1614.
[17] R. M. K. Sinha and Anil Thakur, "Synthesizing Verb Form in English to Hindi Translation: Case of Mapping Infinitive and Gerund in English to Hindi", Proceedings of the International Symposium on Machine Translation, NLP and Translation Support Systems (iSTRANS-2004), November 17-19, 2004, Tata McGraw-Hill, New Delhi, pp. 52-55.
[18] Smriti Singh, Mrugank Dalal, Vishal Vachani, Pushpak Bhattacharya and Om Damani, "Hindi Generation from Interlingua", Machine Translation Summit (MTS 07), Copenhagen, September 2007.
[19] Debasri Chakrabarti, Gajanan Rane and Pushpak Bhattacharyya, "Creation of English and Hindi Verb Hierarchies and their Application to English-Hindi MT", International Conference on Global Wordnet (GWC 04), Brno, Czech Republic, January 2004.
Abstract
In computer vision, segmentation refers to the process of partitioning a digital image into multiple segments (sets of pixels, also known as superpixels). Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics. The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours extracted from the image (see edge detection). Each of the pixels in a region is similar with respect to some characteristic or computed property, such as color, intensity, or texture. Due to the importance of image segmentation, a number of algorithms have been proposed, but the algorithm should be chosen based on the input image to get the best results. In this paper the author gives a study of the various algorithms that are available for color images, text and gray-scale images.
1. Introduction
All image processing operations generally aim at a better recognition of objects of interest, i.e., at finding suitable local features that can be distinguished from other objects and from the background. The next step is to check each individual pixel to see whether it belongs to an object of interest or not. This operation is called segmentation and produces a binary image. A pixel has the value one if it belongs to the object;
thresholding method [4], Niblack's method [5] and improved Niblack method [6], et al. In general, they are simple and fast, but fail when foreground and background are similar. Alternatively, the similarity-based methods cluster pixels with similar intensities together. For example, Lienhart uses the split & merge algorithm [7], Wang et al. combine edge detection, watershed transform and clustering

[3] Bergmann, E. and J. Dzielski, computer vision and image understanding, Dynamics, 1990, 13(1): p. 99-103.
[4] Tanygin, S., image dense stereo matching by technique of region growing, Journal of Guidance, Control, and Dynamics, 1997, 20(4): p. 625-632.
[5] Lee, A.Y. and J.A. Wertz, Harris operator is used to improve the exact position of point feature, 2002, 39(1): p. 153-155.
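Niblack's method mentioned above thresholds each pixel against the local mean plus k times the local standard deviation of its neighborhood. The following pure-Python sketch uses an illustrative window size and k (practical uses typically take larger windows, e.g. 15x15, and the loops would be vectorized for speed).

```python
# Sketch of Niblack's local thresholding: T(x, y) = local mean + k * local
# standard deviation over a sliding window. Window size and k are
# illustrative choices; pure-Python loops are for clarity, not speed.
from statistics import mean, pstdev

def niblack_threshold(img, window=3, k=-0.2):
    """img: 2-D list of grey levels -> 2-D list of 0/1 (1 = object pixel)."""
    h, w = len(img), len(img[0])
    r = window // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            patch = [img[yy][xx]
                     for yy in range(max(0, y - r), min(h, y + r + 1))
                     for xx in range(max(0, x - r), min(w, x + r + 1))]
            t = mean(patch) + k * pstdev(patch)  # Niblack's local threshold
            out[y][x] = 1 if img[y][x] > t else 0
    return out
```

Because the threshold adapts to each neighborhood, a bright block on a dark background is separated even though no single global threshold is assumed; as the text notes, such methods still fail when foreground and background intensities are similar.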
Abstract
Software engineering projects [22, 23] reveal that a large number of usability-related change requests are made after deployment. Fixing usability problems during the later stages of development often proves costly, since many of the necessary changes require modifications to the system that cannot be easily accommodated by its software architectural design. This is expensive for practitioners and prevents developers from addressing all the usability requirements, resulting in systems with less than ideal usability. The successful development of a usable software system therefore must include creating a software architecture that supports the optimal level of usability. Unfortunately, no architectural design usability assessment techniques exist. To support software architects in creating a software architecture that supports usability, a scenario-based assessment technique that leads to the successful application of pattern specifications is practiced. Explicit evaluation of usability during architectural design may reduce the risk of building a system that fails to meet its usability requirements and may prevent the high costs of adaptive maintenance activities once the system has been implemented.
Keywords: use-case, patterns, usability, scenarios, pattern specifications
1. Introduction
Scenarios have been gaining increasing popularity in both Human Computer Interaction (HCI) and Software Engineering (SE) as engines of design. In HCI, scenarios are used to focus discussion on usability issues [3]. They support discussion to gain an understanding of the goals of the design and help to set overall design objectives. In contrast, scenarios play a more direct role in SE, particularly as a front end to object-oriented design. Use-case driven approaches have proved useful for requirements elicitation and validation. The aim of use cases in Requirements Engineering is to capture system requirements. This is done through the exploration and selection of system-user interactions to provide
3. Scenarios
Scenarios serve as abstractions of the most important
requirements on the system. Scenarios play two
critical roles, i.e. design driver and
validation/illustration. Scenarios are used to find key
abstractions and conceptual entities for the different
views, or to validate the architecture against the
predicted usage. The scenario view should be made
up of a small subset of important scenarios. The
scenarios should be selected based on criticality and
risk. Each scenario has an associated script, i.e. a
sequence of interactions between objects and
between processes [13]. Scripts are used for the
validation of the other views, and failure to define a
script for a scenario discloses an insufficient
architecture.
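The role of a script as a validation device can be sketched in code. The sketch below uses hypothetical component and scenario names (not from the paper): a scenario's script is an ordered list of interactions, and any script that references a component absent from an architectural view flags the "insufficient architecture" case described above.

```python
from dataclasses import dataclass, field
from typing import List, Set, Tuple

@dataclass
class Interaction:
    sender: str      # component issuing the message
    receiver: str    # component receiving it
    message: str

@dataclass
class Scenario:
    name: str
    criticality: int                       # basis for selecting the key subset
    script: List[Interaction] = field(default_factory=list)

def validate_view(view_components: Set[str],
                  scenarios: List[Scenario]) -> List[Tuple[str, Set[str]]]:
    """Return (scenario name, missing components) pairs: a script that
    references a component absent from the view discloses an
    insufficient architecture."""
    failures = []
    for s in scenarios:
        missing = {c for i in s.script for c in (i.sender, i.receiver)
                   if c not in view_components}
        if missing:
            failures.append((s.name, missing))
    return failures

# Hypothetical scenario and logical view, for illustration only.
login = Scenario("user login", criticality=5, script=[
    Interaction("UI", "AuthService", "submit credentials"),
    Interaction("AuthService", "UserStore", "look up account"),
])
print(validate_view({"UI", "AuthService"}, [login]))
```

Selecting scenarios by criticality keeps this check cheap: only the small subset of key scripts has to be replayed against each view.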
of systems and software, Elsevier, 1993, pp. 111-122.
[24] R. France, D. Kim, S. Ghosh and E. Song, "A UML-Based Pattern Specification Technique", IEEE Transactions on Software Engineering, Vol. 30(3), 2004.
[25] D. Kim, R. France, S. Ghosh and E. Song, "A UML-Based Metamodeling Language to Specify Design Patterns", Proceedings of the Workshop on Software Model Engineering (WiSME), at UML 2003, San Francisco, 2003.
[26] Unified Modeling Language Specification, version 2.0, January 2004, OMG, http://www.omg.org
[15] J. Warmer and A. Kleppe, The Object Constraint Language: Getting Your Models Ready for MDA, 2nd Edition, Addison-Wesley, 2003.
[27] B. W. Boehm, Characteristics of Software Quality, North-Holland Pub. Co., Amsterdam, New York, 1978.
[28] T. P. Bowen, G. B. Wigle and J. T. Tsai, Specification of Software Quality Attributes (Report RADC-TR-85-37), Rome Air Development Center, Griffiss Air Force Base, NY, 1985.
[29] Architecting for usability: a survey, http://segroup.cs.rug.nl.
Abstract
The purpose of this work is to propose an efficient microstrip
rectenna operating in the ISM band with high harmonic rejection.
The receiving antenna, with a proximity-coupled feeding line,
is implemented in a multilayer substrate. The rectenna with an
integrated circular-sector antenna can eliminate the need for a
low-pass filter (LPF) placed between the antenna and the diode,
as well as produce higher output power, with a maximum
conversion efficiency of 74% using a 1300 Ω load resistor at a
power density of 0.3 mW/cm².
1. Introduction
A rectifying antenna (rectenna), which can convert RF
energy to DC power, plays an important role in free-space
wireless power transmission (WPT). Over the last century,
the development of rectennas for space solar power
transmission (SSPT) [1] as well as WPT [2] achieved
great progress for specific functions and applications,
e.g., actuators [3] or wireless sensors [4].
The typical rectenna in the prior literature [5]-[7]
basically consists of four elements: antenna, low-pass filter
(LPF), diode, and DC-pass capacitor. The initial
development of the rectenna focused on its directivity and
efficiency for high power reception and conversion;
hence, large arrays [8] were usually adopted for microwave
power reception. Afterward, many functions were added
to enhance the performance of the rectenna array, such as
arbitrary polarization [9], dual polarization [10], CP [11],
and dual band [12]. Besides, for an antenna integrated
with nonlinear circuits, such as diodes and FETs, it is well
known that harmonics of the fundamental frequency
will be generated. The unwanted harmonics cause
harmonic re-radiation and reduce the efficiency of the
rectenna; the LPF is therefore required to suppress
harmonics to improve system performance and prevent
harmonic interference. For size and cost reduction,
[Figure: side view of the proposed rectenna multilayer structure: feeding line and rectifying circuit (diode, C, RL, Vout) connected by vias; antenna on Duroid 5880 and TMM4 substrates (thicknesses h1, h2); ground plane with DGS for 3rd-harmonic rejection]
2. Antenna design
The rectangular radiating patch is printed on the upper side of
the first substrate, while the microstrip feed line is on the
upper side of the second substrate; the ground plane and
the dumbbell-shaped slot are on the lower side of the second
substrate. The relative permittivities and thicknesses are
εr1=2.2, εr2=4.5, h1=1.575 mm and h2=1.524 mm. We
should emphasize that the value of εr1 has been chosen
small to enhance the patch radiation. Similarly, the
Fig. 1 S-parameters and side view of the DGS used to reduce the third
harmonic in the proposed design; e1=0.255 mm, e2=2.95 mm, e3=2.1 mm,
e4=2.84 mm.
Fig. 3 Field distribution (V/m) at 4.9 GHz (a) and 7.35 GHz (b).
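The paper does not list the patch dimensions, but the standard transmission-line model (a textbook approximation, not the authors' design procedure) gives a sense of the patch size implied by the stated stack-up (εr1 = 2.2, h1 = 1.575 mm) at 2.45 GHz:

```python
import math

c = 3e8          # speed of light, m/s
f = 2.45e9       # design frequency from the paper, Hz
er = 2.2         # relative permittivity of the patch substrate (er1)
h = 1.575e-3     # patch substrate thickness (h1), m

# Patch width for efficient radiation
W = c / (2 * f) * math.sqrt(2 / (er + 1))

# Effective permittivity and fringing-field length extension
e_eff = (er + 1) / 2 + (er - 1) / 2 / math.sqrt(1 + 12 * h / W)
dL = 0.412 * h * (e_eff + 0.3) * (W / h + 0.264) / \
     ((e_eff - 0.258) * (W / h + 0.8))

# Physical patch length, shortened by the fringing extension at both edges
L = c / (2 * f * math.sqrt(e_eff)) - 2 * dL
print(f"W = {W*1e3:.1f} mm, L = {L*1e3:.1f} mm")
```

The low εr1 also shows up here: a smaller permittivity raises the fields' fringing and the patch's radiation efficiency, consistent with the design choice stated in the text.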
3. Rectenna measurements
The receiving antenna and the rectifying circuit are connected by
an SMA connector, as shown in Fig. 4. The rectenna contains a linearly
polarised patch antenna designed at 2.45 GHz using
HFSS software [14] and one commercial HSMS2860
Schottky diode in a SOT23 package. The zero-bias junction
capacitance Cj0 is 0.18 pF and the series resistance Rs is 5 Ω.
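The two standard relations used to characterize the rectenna, the Friis link equation (1) and the conversion-efficiency definition (2), can be exercised numerically. In the sketch below only f = 2.45 GHz and the 1300 Ω load come from the paper; the transmit power, antenna gains, distance and output voltage are illustrative values.

```python
import math

LAMBDA = 3e8 / 2.45e9   # free-space wavelength at 2.45 GHz, m

def friis_received_power(pt, gt, gr, r):
    """Eq. (1): Pr = Pt * Gt * Gr * (lambda / (4*pi*r))**2
    (linear gains, power in W, distance in m)."""
    return pt * gt * gr * (LAMBDA / (4 * math.pi * r)) ** 2

def conversion_efficiency(vout, rl, pr):
    """Eq. (2): eta = Vout**2 / (RL * Pr)."""
    return vout ** 2 / (rl * pr)

# Illustrative link: 1 W transmitted, 10 (i.e. 10 dBi) gains, 1 m apart.
pr = friis_received_power(1.0, 10.0, 10.0, 1.0)

# Illustrative operating point: 1.7 V across the 1300-ohm load
# at 3 mW received power.
eta = conversion_efficiency(1.7, 1300.0, 3e-3)
print(f"Pr = {pr*1e3:.2f} mW, efficiency = {eta:.1%}")
```

With these (assumed) numbers the efficiency lands near the paper's reported 74% maximum, which is only meant to show the formulas are dimensionally sensible, not to reproduce the measurement.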
The received microwave power is given by the Friis transmission equation:

Pr = Pt Gt Gr (λ / (4πr))²  (1)

and the RF-to-DC conversion efficiency is defined as:

η = Vout² / (RL Pr)  (2)

[Figure: measured DC output voltage (0 to 3 V) and conversion efficiency (up to about 80%) versus power density from 0.01 to 0.3 mW/cm²]

4. Conclusion

5. Acknowledgment

References
[3] L. W. Epp, A. R. Khan, H. K. Smith and R. P. Smith, "A compact dual-polarized 8.51-GHz rectenna for high-voltage (50 V) actuator applications", IEEE Trans. Microw. Theory Tech., 2000, 48, (1), pp. 111-120.
[4] K. M. Farinholt, G. Park and C. R. Farrar, "RF energy transmission for a low-power wireless impedance sensor node", IEEE Sensors J., 2009, 9, (7), pp. 793-800.
[5] H. Takhedmit, L. Cirio, B. Merabet, B. Allard, F. Costa, C. Vollaire and O. Picon, "Efficient 2.45 GHz rectenna design including harmonic rejecting rectifier device", Electronics Letters, 10th June 2010, Vol. 46, No. 12.
Evolutionary computation
Industrial systems
Autonomic and autonomous systems
Bio-technologies
Knowledge data systems
Mobile and distance education
Intelligent techniques, logics, and systems
Knowledge processing
Information technologies
Internet and web technologies
Digital information processing
Cognitive science and knowledge agent-based systems
Mobility and multimedia systems
Systems performance
Networking and telecommunications
IJCSI
The International Journal of Computer Science Issues (IJCSI) is a well-established and notable venue
for publishing high-quality research papers, as recognized by various universities and international
professional bodies. IJCSI is a refereed open access international journal for publishing scientific
papers in all areas of computer science research. The purpose of establishing IJCSI is to assist
in the development of science, to provide fast and effective publication and storage of materials and
results of scientific research, and to represent the scientific conception of the society.
It also provides a venue for researchers, students and professionals to submit ongoing research and
developments in these areas. Authors are encouraged to contribute to the journal by submitting
articles that illustrate new research results, projects, surveying works and industrial experiences that
describe significant advances in the field of computer science.
Indexing of IJCSI
1. Google Scholar
2. Bielefeld Academic Search Engine (BASE)
3. CiteSeerX
4. SCIRUS
5. Docstoc
6. Scribd
7. Cornell University Library
8. SciRate
9. Scientific Commons
10. DBLP
11. EBSCO
12. ProQuest
IJCSI PUBLICATION
www.IJCSI.org