
IJCSI

International Journal of
Computer Science Issues







Volume 7, Issue 3, No 4, May 2010
ISSN (Online): 1694-0784
ISSN (Print): 1694-0814


IJCSI PUBLICATION
www.IJCSI.org

IJCSI proceedings are currently indexed by:
IJCSI Publicity Board 2010



Dr. Borislav D Dimitrov
Department of General Practice, Royal College of Surgeons in Ireland
Dublin, Ireland



Dr. Vishal Goyal
Department of Computer Science, Punjabi University
Patiala, India



Mr. Nehinbe Joshua
University of Essex
Colchester, Essex, UK



Mr. Vassilis Papataxiarhis
Department of Informatics and Telecommunications
National and Kapodistrian University of Athens, Athens, Greece


EDITORIAL


In this third edition of 2010, we bring forward issues from various dynamic computer science
areas, ranging from system performance, computer vision, artificial intelligence, software
engineering, multimedia, pattern recognition, information retrieval, databases, security and
networking, among others.

As always, we thank all our reviewers for providing constructive comments on the papers sent to
them for review. This helps enormously in improving the quality of the papers published in this
issue.

IJCSI will maintain its policy of sending print copies of the journal to all corresponding authors
worldwide free of charge. Apart from the availability of the full texts on the journal website, all
published papers are deposited in open-access repositories to make access easier and to ensure
the continuous availability of its proceedings.

The transition from the 2nd issue to the 3rd has been marked by an agreement signed
between IJCSI and ProQuest and EBSCOhost, two leading directories that will help in the
dissemination of our published papers. We believe further indexing and wider dissemination will
lead to more citations of our authors' articles.

We are pleased to present IJCSI Volume 7, Issue 3, May 2010, split into eleven numbers (this
being IJCSI Vol. 7, Issue 3, No. 4). The acceptance rate for this issue is 37.88%.

We wish you happy reading!


IJCSI Editorial Board
May 2010 Issue
IJCSI Editorial Board 2010



Dr Tristan Vanrullen
Chief Editor
LPL, Laboratoire Parole et Langage - CNRS - Aix en Provence, France
LABRI, Laboratoire Bordelais de Recherche en Informatique - INRIA - Bordeaux, France
LEEE, Laboratoire d'Esthétique et Expérimentations de l'Espace - Université d'Auvergne, France


Dr Constantino Malagón
Associate Professor
Nebrija University
Spain


Dr Lamia Fourati Chaari
Associate Professor
Multimedia and Informatics Higher Institute in SFAX
Tunisia


Dr Mokhtar Beldjehem
Professor
Sainte-Anne University
Halifax, NS, Canada


Dr Pascal Chatonnay
Assistant Professor
Maître de Conférences
Laboratoire d'Informatique de l'Université de Franche-Comté
Université de Franche-Comté
France


Dr Yee-Ming Chen
Professor
Department of Industrial Engineering and Management
Yuan Ze University
Taiwan






Dr Vishal Goyal
Assistant Professor
Department of Computer Science
Punjabi University
Patiala, India


Dr Natarajan Meghanathan
Assistant Professor
REU Program Director
Department of Computer Science
Jackson State University
Jackson, USA


Dr Deepak Laxmi Narasimha
Department of Software Engineering,
Faculty of Computer Science and Information Technology,
University of Malaya,
Kuala Lumpur, Malaysia


Dr Navneet Agrawal
Assistant Professor
Department of ECE,
College of Technology & Engineering,
MPUAT, Udaipur 313001 Rajasthan, India


Prof N. Jaisankar
Assistant Professor
School of Computing Sciences,
VIT University
Vellore, Tamilnadu, India
IJCSI Reviewers Committee 2010


Mr. Markus Schatten, University of Zagreb, Faculty of Organization and Informatics, Croatia
Mr. Vassilis Papataxiarhis, Department of Informatics and Telecommunications, National and
Kapodistrian University of Athens, Athens, Greece
Dr Modestos Stavrakis, University of the Aegean, Greece
Dr Fadi KHALIL, LAAS -- CNRS Laboratory, France
Dr Dimitar Trajanov, Faculty of Electrical Engineering and Information Technologies, Ss. Cyril and
Methodius University - Skopje, Macedonia
Dr Jinping Yuan, College of Information System and Management,National Univ. of Defense Tech.,
China
Dr Alexis Lazanas, Ministry of Education, Greece
Dr Stavroula Mougiakakou, University of Bern, ARTORG Center for Biomedical Engineering
Research, Switzerland
Dr Cyril de Runz, CReSTIC-SIC, IUT de Reims, University of Reims, France
Mr. Pramodkumar P. Gupta, Dept of Bioinformatics, Dr D Y Patil University, India
Dr Alireza Fereidunian, School of ECE, University of Tehran, Iran
Mr. Fred Viezens, Otto-Von-Guericke-University Magdeburg, Germany
Dr. Richard G. Bush, Lawrence Technological University, United States
Dr. Ola Osunkoya, Information Security Architect, USA
Mr. Kotsokostas N.Antonios, TEI Piraeus, Hellas
Prof Steven Totosy de Zepetnek, U of Halle-Wittenberg & Purdue U & National Sun Yat-sen U,
Germany, USA, Taiwan
Mr. M Arif Siddiqui, Najran University, Saudi Arabia
Ms. Ilknur Icke, The Graduate Center, City University of New York, USA
Prof Miroslav Baca, Faculty of Organization and Informatics, University of Zagreb, Croatia
Dr. Elvia Ruiz Beltrán, Instituto Tecnológico de Aguascalientes, Mexico
Mr. Moustafa Banbouk, Engineer du Telecom, UAE
Mr. Kevin P. Monaghan, Wayne State University, Detroit, Michigan, USA
Ms. Moira Stephens, University of Sydney, Australia
Ms. Maryam Feily, National Advanced IPv6 Centre of Excellence (NAV6) , Universiti Sains Malaysia
(USM), Malaysia
Dr. Constantine YIALOURIS, Informatics Laboratory Agricultural University of Athens, Greece
Mrs. Angeles Abella, U. de Montreal, Canada
Dr. Patrizio Arrigo, CNR ISMAC, Italy
Mr. Anirban Mukhopadhyay, B.P.Poddar Institute of Management & Technology, India
Mr. Dinesh Kumar, DAV Institute of Engineering & Technology, India
Mr. Jorge L. Hernandez-Ardieta, INDRA SISTEMAS / University Carlos III of Madrid, Spain
Mr. AliReza Shahrestani, University of Malaya (UM), National Advanced IPv6 Centre of Excellence
(NAv6), Malaysia
Mr. Blagoj Ristevski, Faculty of Administration and Information Systems Management - Bitola,
Republic of Macedonia
Mr. Mauricio Egidio Canto, Department of Computer Science / University of São Paulo, Brazil
Mr. Jules Ruis, Fractal Consultancy, The Netherlands
Mr. Mohammad Iftekhar Husain, University at Buffalo, USA
Dr. Deepak Laxmi Narasimha, Department of Software Engineering, Faculty of Computer Science and
Information Technology, University of Malaya, Malaysia
Dr. Paola Di Maio, DMEM University of Strathclyde, UK
Dr. Bhanu Pratap Singh, Institute of Instrumentation Engineering, Kurukshetra University
Kurukshetra, India
Mr. Sana Ullah, Inha University, South Korea
Mr. Cornelis Pieter Pieters, Condast, The Netherlands
Dr. Amogh Kavimandan, The MathWorks Inc., USA
Dr. Zhinan Zhou, Samsung Telecommunications America, USA
Mr. Alberto de Santos Sierra, Universidad Politécnica de Madrid, Spain
Dr. Md. Atiqur Rahman Ahad, Department of Applied Physics, Electronics & Communication
Engineering (APECE), University of Dhaka, Bangladesh
Dr. Charalampos Bratsas, Lab of Medical Informatics, Medical Faculty, Aristotle University,
Thessaloniki, Greece
Ms. Alexia Dini Kounoudes, Cyprus University of Technology, Cyprus
Mr. Anthony Gesase, University of Dar es salaam Computing Centre, Tanzania
Dr. Jorge A. Ruiz-Vanoye, Universidad Juárez Autónoma de Tabasco, Mexico
Dr. Alejandro Fuentes Penna, Universidad Popular Autónoma del Estado de Puebla, Mexico
Dr. Ocotlán Díaz-Parra, Universidad Juárez Autónoma de Tabasco, Mexico
Mrs. Nantia Iakovidou, Aristotle University of Thessaloniki, Greece
Mr. Vinay Chopra, DAV Institute of Engineering & Technology, Jalandhar
Ms. Carmen Lastres, Universidad Politécnica de Madrid - Centre for Smart Environments, Spain
Dr. Sanja Lazarova-Molnar, United Arab Emirates University, UAE
Mr. Srikrishna Nudurumati, Imaging & Printing Group R&D Hub, Hewlett-Packard, India
Dr. Olivier Nocent, CReSTIC/SIC, University of Reims, France
Mr. Burak Cizmeci, Isik University, Turkey
Dr. Carlos Jaime Barrios Hernandez, LIG (Laboratory Of Informatics of Grenoble), France
Mr. Md. Rabiul Islam, Rajshahi university of Engineering & Technology (RUET), Bangladesh
Dr. LAKHOUA Mohamed Najeh, ISSAT - Laboratory of Analysis and Control of Systems, Tunisia
Dr. Alessandro Lavacchi, Department of Chemistry - University of Firenze, Italy
Mr. Mungwe, University of Oldenburg, Germany
Mr. Somnath Tagore, Dr D Y Patil University, India
Ms. Xueqin Wang, ATCS, USA
Dr. Borislav D Dimitrov, Department of General Practice, Royal College of Surgeons in Ireland,
Dublin, Ireland
Dr. Fondjo Fotou Franklin, Langston University, USA
Dr. Vishal Goyal, Department of Computer Science, Punjabi University, Patiala, India
Mr. Thomas J. Clancy, ACM, United States
Dr. Ahmed Nabih Zaki Rashed, Electronics and Electrical Communication Engineering Department,
Faculty of Electronic Engineering, Menoufia University, Menouf 32951, Egypt
Dr. Rushed Kanawati, LIPN, France
Mr. Koteshwar Rao, K G Reddy College of Engineering & Technology, Chilkur, RR Dist., AP, India
Mr. M. Nagesh Kumar, Department of Electronics and Communication, J.S.S. research foundation,
Mysore University, Mysore-6, India
Dr. Ibrahim Noha, Grenoble Informatics Laboratory, France
Mr. Muhammad Yasir Qadri, University of Essex, UK
Mr. Annadurai P., KMCPGS, Lawspet, Pondicherry, India (affiliated to Pondicherry University, India)
Mr. E Munivel , CEDTI (Govt. of India), India
Dr. Chitra Ganesh Desai, University of Pune, India
Mr. Syed, Analytical Services & Materials, Inc., USA
Dr. Mashud Kabir, Department of Computer Science, University of Tuebingen, Germany
Mrs. Payal N. Raj, Veer South Gujarat University, India
Mrs. Priti Maheshwary, Maulana Azad National Institute of Technology, Bhopal, India
Mr. Mahesh Goyani, S.P. University, India, India
Mr. Vinay Verma, Defence Avionics Research Establishment, DRDO, India
Dr. George A. Papakostas, Democritus University of Thrace, Greece
Mr. Abhijit Sanjiv Kulkarni, DARE, DRDO, India
Mr. Kavi Kumar Khedo, University of Mauritius, Mauritius
Dr. B. Sivaselvan, Indian Institute of Information Technology, Design & Manufacturing,
Kancheepuram, IIT Madras Campus, India
Dr. Partha Pratim Bhattacharya, Greater Kolkata College of Engineering and Management, West
Bengal University of Technology, India
Mr. Manish Maheshwari, Makhanlal C University of Journalism & Communication, India
Dr. Siddhartha Kumar Khaitan, Iowa State University, USA
Dr. Mandhapati Raju, General Motors Inc, USA
Dr. M.Iqbal Saripan, Universiti Putra Malaysia, Malaysia
Mr. Ahmad Shukri Mohd Noor, University Malaysia Terengganu, Malaysia
Mr. Selvakuberan K, TATA Consultancy Services, India
Dr. Smita Rajpal, Institute of Technology and Management, Gurgaon, India
Mr. Rakesh Kachroo, Tata Consultancy Services, India
Mr. Raman Kumar, National Institute of Technology, Jalandhar, Punjab., India
Mr. Nitesh Sureja, S.P.University, India
Dr. M. Emre Celebi, Louisiana State University, Shreveport, USA
Dr. Aung Kyaw Oo, Defence Services Academy, Myanmar
Mr. Sanjay P. Patel, Sankalchand Patel College of Engineering, Visnagar, Gujarat, India
Dr. Pascal Fallavollita, Queens University, Canada
Mr. Jitendra Agrawal, Rajiv Gandhi Technological University, Bhopal, MP, India
Mr. Ismael Rafael Ponce Medellín, CENIDET (Centro Nacional de Investigación y Desarrollo
Tecnológico), Mexico
Mr. Supheakmungkol SARIN, Waseda University, Japan
Mr. Shoukat Ullah, Govt. Post Graduate College Bannu, Pakistan
Dr. Vivian Augustine, Telecom Zimbabwe, Zimbabwe
Mrs. Mutalli Vatila, Offshore Business Philippines, Philippines
Dr. Emanuele Goldoni, University of Pavia, Dept. of Electronics, TLC & Networking Lab, Italy
Mr. Pankaj Kumar, SAMA, India
Dr. Himanshu Aggarwal, Punjabi University,Patiala, India
Dr. Vauvert Guillaume, Europages, France
Prof Yee Ming Chen, Department of Industrial Engineering and Management, Yuan Ze University,
Taiwan
Dr. Constantino Malagón, Nebrija University, Spain
Prof Kanwalvir Singh Dhindsa, B.B.S.B.Engg.College, Fatehgarh Sahib (Punjab), India
Mr. Angkoon Phinyomark, Prince of Songkla University, Thailand
Ms. Nital H. Mistry, Veer Narmad South Gujarat University, Surat, India
Dr. M.R.Sumalatha, Anna University, India
Mr. Somesh Kumar Dewangan, Disha Institute of Management and Technology, India
Mr. Raman Maini, Punjabi University, Patiala(Punjab)-147002, India
Dr. Abdelkader Outtagarts, Alcatel-Lucent Bell-Labs, France
Prof Dr. Abdul Wahid, AKG Engg. College, Ghaziabad, India
Mr. Prabu Mohandas, Anna University/Adhiyamaan College of Engineering, India
Dr. Manish Kumar Jindal, Panjab University Regional Centre, Muktsar, India
Prof Mydhili K Nair, M S Ramaiah Institute of Technnology, Bangalore, India
Dr. C. Suresh Gnana Dhas, VelTech MultiTech Dr. Rangarajan Dr. Sagunthala Engineering
College, Chennai, Tamilnadu, India
Prof Akash Rajak, Krishna Institute of Engineering and Technology, Ghaziabad, India
Mr. Ajay Kumar Shrivastava, Krishna Institute of Engineering & Technology, Ghaziabad, India
Mr. Deo Prakash, SMVD University, Kakryal(J&K), India
Dr. Vu Thanh Nguyen, University of Information Technology HoChiMinh City, VietNam
Prof Deo Prakash, SMVD University (A Technical University open on I.I.T. Pattern) Kakryal (J&K),
India
Dr. Navneet Agrawal, Dept. of ECE, College of Technology & Engineering, MPUAT, Udaipur 313001
Rajasthan, India
Mr. Sufal Das, Sikkim Manipal Institute of Technology, India
Mr. Anil Kumar, Sikkim Manipal Institute of Technology, India
Dr. B. Prasanalakshmi, King Saud University, Saudi Arabia.
Dr. K D Verma, S.V. (P.G.) College, Aligarh, India
Mr. Mohd Nazri Ismail, System and Networking Department, University of Kuala Lumpur (UniKL),
Malaysia
Dr. Nguyen Tuan Dang, University of Information Technology, Vietnam National University Ho Chi
Minh city, Vietnam
Dr. Abdul Aziz, University of Central Punjab, Pakistan
Dr. P. Vasudeva Reddy, Andhra University, India
Mrs. Savvas A. Chatzichristofis, Democritus University of Thrace, Greece
Mr. Marcio Dorn, Federal University of Rio Grande do Sul - UFRGS Institute of Informatics, Brazil
Mr. Luca Mazzola, University of Lugano, Switzerland
Mr. Nadeem Mahmood, Department of Computer Science, University of Karachi, Pakistan
Mr. Hafeez Ullah Amin, Kohat University of Science & Technology, Pakistan
Prof. Vikram Singh, Ch. Devi Lal University, Sirsa (Haryana), India
Mr. M. Azath, Calicut/Mets School of Engineering, India
Dr. J. Hanumanthappa, DoS in CS, University of Mysore, India
Dr. Shahanawaj Ahamad, Department of Computer Science, King Saud University, Saudi Arabia
Dr. K. Duraiswamy, K. S. Rangasamy College of Technology, India
Prof. Dr Mazlina Esa, Universiti Teknologi Malaysia, Malaysia
Dr. P. Vasant, Power Control Optimization (Global), Malaysia
Dr. Taner Tuncer, Firat University, Turkey
Dr. Norrozila Sulaiman, University Malaysia Pahang, Malaysia
Prof. S K Gupta, BCET, Gurdaspur, India
Dr. Latha Parameswaran, Amrita Vishwa Vidyapeetham, India
Mr. M. Azath, Anna University, India
Dr. P. Suresh Varma, Adikavi Nannaya University, India
Prof. V. N. Kamalesh, JSS Academy of Technical Education, India
Dr. D Gunaseelan, Ibri College of Technology, Oman
Mr. Sanjay Kumar Anand, CDAC, India
Mr. Akshat Verma, CDAC, India
Mrs. Fazeela Tunnisa, Najran University, Kingdom of Saudi Arabia
Mr. Hasan Asil, Islamic Azad University Tabriz Branch (Azarshahr), Iran
Prof. Dr Sajal Kabiraj, Fr. C Rodrigues Institute of Management Studies (Affiliated to University of
Mumbai, India), India
Mr. Syed Fawad Mustafa, GAC Center, Shandong University, China
Dr. Natarajan Meghanathan, Jackson State University, Jackson, MS, USA
Prof. Selvakani Kandeeban, Francis Xavier Engineering College, India
Mr. Tohid Sedghi, Urmia University, Iran
Dr. S. Sasikumar, PSNA College of Engg and Tech, Dindigul, India
Dr. Anupam Shukla, Indian Institute of Information Technology and Management Gwalior, India
Mr. Rahul Kala, Indian Institute of Information Technology and Management Gwalior, India
Dr. A V Nikolov, National University of Lesotho, Lesotho
Mr. Kamal Sarkar, Department of Computer Science and Engineering, Jadavpur University, India
Dr. Mokhled S. AlTarawneh, Computer Engineering Dept., Faculty of Engineering, Mutah University,
Jordan, Jordan
Prof. Sattar J Aboud, Iraqi Council of Representatives, Iraq-Baghdad
Dr. Prasant Kumar Pattnaik, Department of CSE, KIST, India
Dr. Mohammed Amoon, King Saud University, Saudi Arabia
Dr. Tsvetanka Georgieva, Department of Information Technologies, St. Cyril and St. Methodius
University of Veliko Tarnovo, Bulgaria
Dr. Eva Volna, University of Ostrava, Czech Republic
Mr. Ujjal Marjit, University of Kalyani, West-Bengal, India
Dr. Prasant Kumar Pattnaik, KIST,Bhubaneswar,India, India
Dr. Guezouri Mustapha, Department of Electronics, Faculty of Electrical Engineering, University of
Science and Technology (USTO), Oran, Algeria
Mr. Maniyar Shiraz Ahmed, Najran University, Najran, Saudi Arabia
Dr. Sreedhar Reddy, JNTU, SSIETW, Hyderabad, India
Mr. Bala Dhandayuthapani Veerasamy, Mekelle University, Ethiopia
Mr. Arash Habibi Lashkari, University of Malaya (UM), Malaysia
Mr. Rajesh Prasad, LDC Institute of Technical Studies, Allahabad, India
Ms. Habib Izadkhah, Tabriz University, Iran
Dr. Lokesh Kumar Sharma, Chhattisgarh Swami Vivekanand Technical University Bhilai, India
Mr. Kuldeep Yadav, IIIT Delhi, India
Dr. Naoufel Kraiem, Institut Superieur d'Informatique, Tunisia
Prof. Frank Ortmeier, Otto-von-Guericke-Universitaet Magdeburg, Germany
Mr. Ashraf Aljammal, USM, Malaysia
Mrs. Amandeep Kaur, Department of Computer Science, Punjabi University, Patiala, Punjab, India
Mr. Babak Basharirad, University Technology of Malaysia, Malaysia
Mr. Avinash Singh, KIET Ghaziabad, India
Dr. Miguel Vargas-Lombardo, Technological University of Panama, Panama
Dr. Tuncay Sevindik, Firat University, Turkey
Ms. Pavai Kandavelu, Anna University Chennai, India
Mr. Ravish Khichar, Global Institute of Technology, India
Mr Aos Alaa Zaidan Ansaef, Multimedia University, Cyberjaya, Malaysia
Dr. Awadhesh Kumar Sharma, Dept. of CSE, MMM Engg College, Gorakhpur-273010, UP, India
Mr. Qasim Siddique, FUIEMS, Pakistan
Dr. Le Hoang Thai, University of Science, Vietnam National University - Ho Chi Minh City, Vietnam
Dr. Saravanan C, NIT, Durgapur, India
Dr. Vijay Kumar Mago, DAV College, Jalandhar, India
Dr. Do Van Nhon, University of Information Technology, Vietnam
Mr. Georgios Kioumourtzis, University of Patras, Greece
Mr. Amol D.Potgantwar, SITRC Nasik, India
Mr. Lesedi Melton Masisi, Council for Scientific and Industrial Research, South Africa
Dr. Karthik.S, Department of Computer Science & Engineering, SNS College of Technology, India
Mr. Nafiz Imtiaz Bin Hamid, Department of Electrical and Electronic Engineering, Islamic University
of Technology (IUT), Bangladesh
Mr. Muhammad Imran Khan, Universiti Teknologi PETRONAS, Malaysia
Dr. Abdul Kareem M. Radhi, Information Engineering - Nahrin University, Iraq
Dr. Mohd Nazri Ismail, University of Kuala Lumpur, Malaysia
Dr. Manuj Darbari, BBDNITM, Institute of Technology, A-649, Indira Nagar, Lucknow 226016, India
Ms. Izerrouken, INP-IRIT, France
Mr. Nitin Ashokrao Naik, Dept. of Computer Science, Yeshwant Mahavidyalaya, Nanded, India
Mr. Nikhil Raj, National Institute of Technology, Kurukshetra, India
Prof. Maher Ben Jemaa, National School of Engineers of Sfax, Tunisia
Prof. Rajeshwar Singh, BRCM College of Engineering and Technology, Bahal Bhiwani, Haryana,
India
Mr. Gaurav Kumar, Department of Computer Applications, Chitkara Institute of Engineering and
Technology, Rajpura, Punjab, India
Mr. Ajeet Kumar Pandey, Indian Institute of Technology, Kharagpur, India
Mr. Rajiv Phougat, IBM Corporation, USA
Mrs. Aysha V, College of Applied Science Pattuvam affiliated with Kannur University, India
Dr. Debotosh Bhattacharjee, Department of Computer Science and Engineering, Jadavpur University,
Kolkata-700032, India
Dr. Neelam Srivastava, Institute of engineering & Technology, Lucknow, India
Prof. Sweta Verma, Galgotia's College of Engineering & Technology, Greater Noida, India
Mr. Harminder Singh BIndra, MIMIT, INDIA
Dr. Lokesh Kumar Sharma, Chhattisgarh Swami Vivekanand Technical University, Bhilai, India
Mr. Tarun Kumar, U.P. Technical University/Radha Govinend Engg. College, India
Mr. Tirthraj Rai, Jawahar Lal Nehru University, New Delhi, India
Mr. Akhilesh Tiwari, Madhav Institute of Technology & Science, India
Mr. Dakshina Ranjan Kisku, Dr. B. C. Roy Engineering College, WBUT, India
Ms. Anu Suneja, Maharshi Markandeshwar University, Mullana, Haryana, India
Mr. Munish Kumar Jindal, Punjabi University Regional Centre, Jaito (Faridkot), India
Dr. Ashraf Bany Mohammed, Management Information Systems Department, Faculty of
Administrative and Financial Sciences, Petra University, Jordan
Mrs. Jyoti Jain, R.G.P.V. Bhopal, India
Dr. Lamia Chaari, SFAX University, Tunisia
Mr. Akhter Raza Syed, Department of Computer Science, University of Karachi, Pakistan
Prof. Khubaib Ahmed Qureshi, Information Technology Department, HIMS, Hamdard University,
Pakistan
Prof. Boubker Sbihi, Ecole des Sciences de L'Information, Morocco
Dr. S. M. Riazul Islam, Inha University, South Korea
Prof. Lokhande S.N., S.R.T.M.University, Nanded (MH), India
Dr. Vijay H Mankar, Dept. of Electronics, Govt. Polytechnic, Nagpur, India
Dr. M. Sreedhar Reddy, JNTU, Hyderabad, SSIETW, India
Mr. Ojesanmi Olusegun, Ajayi Crowther University, Oyo, Nigeria
Ms. Mamta Juneja, RBIEBT, PTU, India
Dr. Ekta Walia Bhullar, Maharishi Markandeshwar University, Mullana Ambala (Haryana), India
Prof. Chandra Mohan, John Bosco Engineering College, India
Mr. Nitin A. Naik, Yeshwant Mahavidyalaya, Nanded, India
Mr. Sunil Kashibarao Nayak, Bahirji Smarak Mahavidyalaya, Basmathnagar Dist-Hingoli., India
Prof. Rakesh.L, Vijetha Institute of Technology, Bangalore, India
Mr B. M. Patil, Indian Institute of Technology, Roorkee, Uttarakhand, India
Mr. Thipendra Pal Singh, Sharda University, K.P. III, Greater Noida, Uttar Pradesh, India
Prof. Chandra Mohan, John Bosco Engg College, India
Mr. Hadi Saboohi, University of Malaya - Faculty of Computer Science and Information Technology,
Malaysia
Dr. R. Baskaran, Anna University, India
Dr. Wichian Sittiprapaporn, Mahasarakham University College of Music, Thailand
Mr. Lai Khin Wee, Universiti Teknologi Malaysia, Malaysia
Dr. Kamaljit I. Lakhtaria, Atmiya Institute of Technology, India
Mrs. Inderpreet Kaur, PTU, Jalandhar, India
Mr. Iqbaldeep Kaur, PTU / RBIEBT, India
Mrs. Vasudha Bahl, Maharaja Agrasen Institute of Technology, Delhi, India
Prof. Vinay Uttamrao Kale, P.R.M. Institute of Technology & Research, Badnera, Amravati,
Maharashtra, India
Mr. Suhas J Manangi, Microsoft, India
Ms. Anna Kuzio, Adam Mickiewicz University, School of English, Poland
Dr. Debojyoti Mitra, Sir Padampat Singhania University, India
Prof. Rachit Garg, Department of Computer Science, L K College, India
Mrs. Manjula K A, Kannur University, India
Mr. Rakesh Kumar, Indian Institute of Technology Roorkee, India
TABLE OF CONTENTS




1. Dynamic path-loss estimation using a particle filter
Javier Rodas, Carlos J. Escudero Cascón
Pg 1-5

2. Grids Portals: Frameworks, Middleware or Toolkit
Xavier Medianero-Pasco, Belén Bonilla-Morales, Miguel Vargas-Lombardo
Pg 6-11

3. Evaluation of JFlex Scanner Generator Using Form Fields Validity Checking
Ezekiel Okike, Maduka Attamah
Pg 12-20

4. Kansei Colour Concepts to Improve Effective Colour Selection in Designing Human
Computer Interfaces
Tharangie K G D, Irfan C M A, Yamada K, Marasinghe A
Pg 21-25

5. An Efficient System Based On Closed Sequential Patterns for Web Recommendations
Utpala Niranjan, R.B.V. Subramanyam, V-Khana
Pg 26-34

6. Energy Efficient Real-Time Scheduling in Distributed Systems
Santhi Baskaran, P. Thambidurai
Pg 35-42

7. Exploration on Selection of Medical Images employing New Transformation Technique
J. Jaya, K. Thanushkodi
Pg 43-48
IJCSI International Journal of Computer Science Issues, Vol. 7, Issue 3, No 4, May 2010
ISSN (Online): 1694-0784
ISSN (Print): 1694-0814


Dynamic path-loss estimation using a particle filter
Javier Rodas¹ and Carlos J. Escudero²

¹ Department of Electronics and Systems, University of A Coruña
A Coruña, 15071, Campus de Elviña, Spain

² Department of Electronics and Systems, University of A Coruña
A Coruña, 15071, Campus de Elviña, Spain



Abstract
The estimation of the propagation model parameters is a main
issue in location systems. In these systems, distance estimations
are obtained from received signal strength information, which is
extracted from received packets. The precision of these systems
mainly depends on the proper propagation model selection. In
this paper we introduce an algorithm based on Bayesian filtering
techniques, which estimates the path-loss exponent of a
log-normal propagation model. This estimation is made dynamically
and in real time. Therefore, it can track propagation model
changes due to environmental changes.

Keywords: path-loss, RSS, WSN, Bluetooth, particle filter,
Bayesian filtering, propagation model.
1. Introduction
Modeling signal propagation is an important topic in some
applications such as location systems using Wireless
Sensor Networks (WSN) [1]. The most typical information
used to estimate mobile node locations in a WSN is the
Received Signal Strength (RSS). This parameter is
relatively easy to obtain in most WSN architectures like
Wifi, ZigBee or Bluetooth. Location systems usually
consider anchor nodes in fixed and known positions,
which obtain RSS information and then estimate mobile
node positions using the gathered information.

It is well known that the power of the transmitted
signal decays exponentially with distance, depending on
the obstacles that surround or stand between the
transmitter and the receiver, the environment characteristics
(indoor, outdoor), etc. [2]. Moreover, RSS varies
randomly depending on the environment characteristics.
This variability can be interpreted by means of small and
large-scale propagation models [2], which statistically
represent changes in signal levels.

In this paper, we use a classical path-loss propagation
model, which is defined as:

P(d) = P(d0) - 10 n log10(d/d0) + Xσ    (1)

where P(d) is the received signal power at distance d,
P(d0) is the power at the reference distance d0, n is the
path-loss exponent and Xσ represents the noise, modeled as a
normally distributed random variable with zero mean and
standard deviation σ. As shown, the model assumes log-
normal variations of the power with distance. Both the n and
σ parameters are usually estimated using off-line linear
regression analysis from real RSS data obtained at several
distances in the environment.
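The off-line regression step described above can be sketched as follows. This is a minimal illustration on synthetic data: the reference distance d0 = 1 m and the "true" channel values are assumptions for the example, not measurements from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed "true" channel for the synthetic data (illustrative values only):
# P(d0) = -40 dBm at d0 = 1 m, path-loss exponent n = 2, shadowing sigma = 2 dB.
P0_true, n_true, sigma_true = -40.0, 2.0, 2.0

d = np.repeat(np.arange(1.0, 10.0), 50)  # 50 RSS samples at each of 1..9 m
rss = P0_true - 10.0 * n_true * np.log10(d) + rng.normal(0.0, sigma_true, d.size)

# RSS is linear in log10(d): slope = -10 n, intercept = P(d0).
A = np.column_stack([np.log10(d), np.ones_like(d)])
(slope, intercept), res = np.linalg.lstsq(A, rss, rcond=None)[:2]
n_hat = -slope / 10.0
sigma_hat = np.sqrt(res[0] / d.size)  # residual std. deviation estimates sigma

print(f"n = {n_hat:.2f}, P(d0) = {intercept:.1f} dBm, sigma = {sigma_hat:.2f} dB")
```

With enough samples per distance, the fitted slope and intercept recover n and P(d0) closely, which is exactly the off-line calibration the paper argues becomes stale when the channel changes.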

The path-loss exponent n typically varies between 1 and
3 in indoor environments when there is a clear line-of-
sight (LOS) between a transmitter and a receiver, and it
changes suddenly when the line-of-sight is blocked, that is,
in the non line-of-sight (NLOS) case. Therefore, if the real
path-loss (n) value changes significantly with respect to
the value considered in the location algorithm, the system
accuracy will surely degrade, or the system may even fail.
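The size of this effect is easy to check numerically. The sketch below (all values chosen for illustration, not taken from the paper) generates an RSS reading with n = 3, as after a loss of LOS, and then inverts it for distance with an assumed n = 2:

```python
import math

P0 = -40.0                     # assumed reference power at d0 = 1 m (illustrative)
d_true, n_true, n_assumed = 5.0, 3.0, 2.0

rss = P0 - 10.0 * n_true * math.log10(d_true)      # RSS actually observed
d_est = 10.0 ** ((P0 - rss) / (10.0 * n_assumed))  # inversion with the wrong n

print(f"true distance: {d_true} m, estimated: {d_est:.1f} m")
```

Inverting with the wrong exponent turns a 5 m range into an estimate of roughly 11 m, which illustrates why a stale n value can ruin the location accuracy.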

In [3] we introduced a method to jointly estimate the
propagation model path-loss parameter and the position, and
we also showed the impact of the path-loss estimation
error on the system accuracy. In this paper, we
do not jointly estimate position and path-loss parameter
values. Instead, we estimate the path-loss value n in a
continuous manner when node positions are known. The main
advantage of this new approach is the possibility to track the
n value in a real-valued range between two fixed limits, that
is, without forcing the possible n values into a fixed set of
discrete values extracted from off-line measurements. Now,
we can detect channel conditions in real time between two
nodes with known positions. For instance, we could detect
the loss of the LOS among the WSN anchor nodes after
performing a cross-measuring process to obtain RSS
information among themselves. With this information, we could take
some decisions about prioritization or calibration of some
of the anchor nodes.

Therefore, in order to achieve a reliable location
system, it is mandatory to track propagation model
changes by estimating its parameters frequently.

Other approaches can be found in the literature. To
account for these changes in the propagation model, [4]
and [5] consider transitions among different situations
using a two-node Markov model, which takes into account
the probability of LOS between transmitter and receiver.
In [6], LOS and NLOS channel conditions are also
identified, but they are studied using a UWB network and a
time-of-arrival (TOA) approach applied to location
systems. In [7] the unknown propagation model
parameters are deduced from a mathematical formulation,
and in [8] parametric propagation models are proposed as
a feasible way to track the channel. Our approach
uses Bayesian filtering (a particle filter) in order to
estimate the path-loss parameter and, therefore, to detect
channel conditions.

This paper is organized as follows. Section 2
introduces the propagation model problem, emphasizing
possible changes that could take place in model
parameters. Section 3 introduces our proposed algorithm
based on particle filtering to estimate the path-loss
parameter of a log-normal propagation model. Section 4
shows the results obtained by simulation, which prove the
advantages of the proposed algorithm. Finally, Section 5
summarizes the conclusions and future lines of work.
2. Propagation model
In this paper we consider the simplified log-normal
propagation model defined in (1). Typically, this path-loss
model is assumed known a priori, either by assuming a
perfect free-space channel or by performing extensive
channel measurement and modeling prior to system
deployment. Such an assumption is an oversimplification
in many applications and scenarios where no extensive
channel measurement is possible (e.g. hostile or inaccessible
environments). In some other scenarios, such as indoor
scenarios with moving people or devices, the channel
characteristics tend to change considerably over a short
period of time, mainly because of the loss of the line-of-
sight (LOS). In some outdoor scenarios, instead, the
channel tends to change over a long period of time due to
seasonal and accidental reasons [9].

Moreover, it is not guaranteed that all anchor nodes
radiate in the same manner. Even with identical anchor
hardware from the same manufacturer, radiation can differ
depending on antenna orientation and tolerance, pigtail
lengths, etc., producing different P(d0) and n values. It is
well known that the noise deviation σ increases

Fig 1. Power loss versus distance in the line-of-sight (LOS) case.

Fig 2. Power loss versus distance in the non line-of-sight (NLOS) case.


with the distance in indoor environments, due to multipath
fading effects produced by obstacles.

The problem is that the n value is usually assumed to
be constant. However, this assumption is not valid in
real environments, where n can change suddenly due to the
loss of LOS. It should only be considered constant for a
certain period of time, and a reliable location algorithm
needs to optimally accommodate and adapt to changing or
unknown channel conditions.

At least, the path-loss value n should be estimated
frequently, while it is enough to set an upper bound for σ,
based on an estimated value obtained from off-line
measurements. Note that, in order to guarantee good
results and the convergence of our algorithm, this
assumed upper bound must be greater than any real value
the parameter could take in real life. For this reason, we
should always choose the worst observed σ value,
measured at the longest distance our location system can
reach, or an even greater value.
In Section 4 we perform some experiments based on RSS
values and channel conditions that usually appear in real
indoor scenarios when using a real WSN. Figures 1 and 2
show the fitting curve of the log-normal propagation
model in eq. (1) for the LOS and NLOS cases,
respectively. These measurements were obtained at
different distances between an anchor node and a mobile
device, from 1 to 9 meters, in a real scenario, using a
Bluetooth sensor network. They were taken during four
minutes at each position in order to obtain enough
samples. For the NLOS case, we put an obstacle at 40 cm
in front of the device to be located, blocking the line of
sight. As the figures show, the fitted path-loss exponent
and noise deviation take different values in the LOS and
NLOS cases.

In some other scenarios these path-loss exponent and
noise deviation values will surely be different even when
using the same WSN, and both can also vary depending
on new obstacles and moving people. However, these
path-loss values obtained from off-line measurements
give a good idea of the kind of values these parameters
can take in real life. Assuming them fixed within a
location algorithm is always a bad idea, as shown by the
CDF curves about the loss of accuracy in [3].

Therefore, it is desirable to have a system that can
blindly track the real path-loss value in real time, such as
the one introduced in this paper.
3. Particle filter
A particle filter is a Monte Carlo (MC) method for
implementing a recursive Bayesian filter [10]. It is based
on a set of random samples, named particles, associated
with different weights that represent a probability density
function (pdf). Basically, the objective is to construct the a
posteriori pdf p(x_t | z_t) recursively, where x_t^(i) is the
state of the i-th particle and z_t is an observation at a
given instant t.

In this paper, the state of the i-th particle is only
composed of the channel parameter α, which estimates the
real path-loss exponent value. The σ value is constant: an
upper bound based on the worst deviation value
observed from off-line measurements. After a random
initialization of the particle states and of all their weights
as 1/N, the algorithm performs several consecutive
iterations. Each iteration is divided into the following
steps: prediction, update, resampling and estimation.
3.1 Prediction
The prediction step computes the state of each particle
with respect to the previous one, based on the dynamic
model that indicates how the parameters must be updated.
In our case, α_t^(i) is the α value for the i-th particle, and the
dynamic model is as simple as shown:

    α_t^(i) = α_(t-1)^(i) + T · N(0, σ_v),   α_t^(i) ∈ [α_min, α_max]

where N(μ, σ) is a Gaussian distribution with mean μ and
standard deviation σ, T is the interval of time between
iterations (RSS samples), σ_v models the variations of the
dynamic model, and α_min, α_max are, respectively, the
minimum and maximum values allowed for α. Note that in
the first iteration, the α values are initialized with a Uniform
distribution U(α_min, α_max). Note that particles update
their state in a random way.
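The prediction step above can be sketched as a random walk clipped to the allowed range. This is a hedged illustration: the function name, particle count and parameter values are assumptions for demonstration, not the paper's configuration:

```python
import numpy as np

def predict(alpha, T, sigma_v, alpha_min, alpha_max, rng):
    """Random-walk dynamic model: alpha_t = alpha_{t-1} + T*N(0, sigma_v),
    constrained to [alpha_min, alpha_max]."""
    alpha = alpha + T * rng.normal(0.0, sigma_v, size=alpha.shape)
    return np.clip(alpha, alpha_min, alpha_max)

rng = np.random.default_rng(1)
# First iteration: particle states drawn from U(alpha_min, alpha_max).
alpha = rng.uniform(1.5, 4.0, size=500)
alpha = predict(alpha, T=1.0, sigma_v=0.05, alpha_min=1.5, alpha_max=4.0, rng=rng)
```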
3.2 Update
Each particle has an associated weight w_t^(i) directly
related to p(z_t | α_t^(i)) [11].

These weights are updated and normalized as follows:

    w_t^(i) = w_(t-1)^(i) · p(z_t | α_t^(i)),   w̃_t^(i) = w_t^(i) / Σ_j w_t^(j)
where w̃_t^(i) stands for the normalized weights. In the
update process, the conditioned probability of the
observations with respect to the state depends on the
propagation model in (1). Taking into account the
Gaussian noise, we obtain the following expression for the
k-th anchor node:

    p(z_t | α_t^(i)) = (1 / (σ √(2π))) · exp( -(z_t - ẑ_t^(i))² / (2σ²) )

where ẑ_t^(i) is defined by the propagation model shown
in (1), but applying the appropriate α parameter stored in
the i-th particle:

    ẑ_t^(i) = P_0 - 10 · α_t^(i) · log10(d / d_0)
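The update step can be sketched as follows. This is a hedged illustration, not the paper's implementation: P0, d0, sigma and the particle states are assumed values, and the unnormalized Gaussian likelihood suffices because the weights are normalized afterwards:

```python
import numpy as np

def update(weights, alpha, z, d, p0=-40.0, d0=1.0, sigma=6.0):
    """Weight update: Gaussian likelihood of the observed RSS z given each
    particle's predicted RSS under the log-normal model, then normalization."""
    z_hat = p0 - 10.0 * alpha * np.log10(d / d0)       # predicted RSS per particle
    weights = weights * np.exp(-0.5 * ((z - z_hat) / sigma) ** 2)
    return weights / weights.sum()

alpha = np.array([1.5, 2.0, 2.5, 3.0, 3.5])
weights = np.full(len(alpha), 1.0 / len(alpha))
# Observation generated with a true exponent of 2.0 at d = 5 m (noise omitted).
z = -40.0 - 10.0 * 2.0 * np.log10(5.0)
weights = update(weights, alpha, z, d=5.0)
```

After the update, the particle whose α matches the observation best carries the largest weight.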
3.3 Resampling
To avoid degeneration problems in the particle system,
new particles are generated when, after some iterations,
many of them have low weights and the majority of the
overall weight is accumulated in only a few particles [10],
[12].

We consider a bootstrap approach, where the
particles are replaced by replicating each i-th sample with
a probability based on its weight w̃_t^(i). Therefore, the
strongest particles, that is, the particles with the highest
weights, will tend to be replicated while the weakest ones
will tend to disappear.

This resampling step is only performed when the
effective number of samples, N_eff, is lower than a
threshold N_th:

    N_eff = 1 / Σ_i (w̃_t^(i))²
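A bootstrap resampling sketch under these definitions (the threshold and particle values are assumptions for demonstration; `rng.choice` draws each replacement with probability equal to the particle's weight):

```python
import numpy as np

def resample_if_needed(alpha, weights, n_th, rng):
    """Bootstrap resampling, triggered when N_eff = 1/sum(w_i^2) < n_th.
    Heavy particles are replicated; light ones tend to disappear."""
    n_eff = 1.0 / np.sum(weights ** 2)
    if n_eff < n_th:
        idx = rng.choice(len(alpha), size=len(alpha), p=weights)
        alpha = alpha[idx]
        weights = np.full(len(alpha), 1.0 / len(alpha))
    return alpha, weights

rng = np.random.default_rng(2)
alpha = np.array([2.0, 2.1, 3.4, 3.5])
weights = np.array([0.49, 0.49, 0.01, 0.01])   # degenerate: N_eff is about 2.1
alpha, weights = resample_if_needed(alpha, weights, n_th=3.0, rng=rng)
```

After resampling, the weights are reset to uniform and the surviving states are concentrated around the high-weight particles.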
3.4 Estimation
Finally, the parameter estimation is computed by means of
a weighted sum of the state information from all the
particles. It is computed as follows:

    α̂_t = Σ_i w̃_t^(i) · α_t^(i)
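This weighted sum is a one-liner; a small sketch with assumed particle values:

```python
import numpy as np

def estimate(alpha, weights):
    """Point estimate of the path-loss exponent: weighted sum over particles."""
    return float(np.sum(weights * alpha))

alpha = np.array([1.8, 2.0, 2.2])
weights = np.array([0.2, 0.6, 0.2])
alpha_hat = estimate(alpha, weights)   # 0.2*1.8 + 0.6*2.0 + 0.2*2.2 = 2.0
```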
4. Experimental results
We have run some experiments to show how our
algorithm estimates the path-loss parameter α in the
environment described in Section 2. We considered a
particle filter with N particles, bounds α_min and α_max, and a
sampling period T (the time between algorithm iterations)
measured in seconds. The noisy simulated measurements
were always generated using a fixed σ value, in dBm,
chosen to simulate a harsh environment like the real
scenario described in Section 2.
Figures 3 and 4 show the temporal evolution of the α
value and the estimation achieved by the particle filter. We
considered an α value typical of the LOS case, with a
change to a higher value in the middle of the experiment, to
simulate the loss of the line-of-sight due to an obstacle. As
shown in Section 2, this kind of values is typical in
some real indoor scenarios. The figures show how our
algorithm tracks α dynamically when different
assumptions on the σ value were considered. In Figure 3
we considered a fixed σ value equal to the
real value used in the simulation to generate the noisy
measurements. Note that the mean error of the
estimation is very low, which means that the
estimation is centered on the real value. In Figure 4 we
show the effects of considering a wrong σ parameter. The
worst situation happens when σ is lower than its real value.
Nevertheless, values greater than the real σ obtain a
correct path-loss estimation, even when much higher
values are considered. The only effect in these cases is a
lower speed of convergence in the estimation. Although it is
not realistic, we even considered an over-sized value for
σ, 27 dB greater than the real value, to show the effect on the
estimation.
Therefore, it is even better to choose a slightly higher
value for σ to guarantee that we never fall into the
problematic case.
As soon as we have a stable estimation of α, after
detecting that the variance of some consecutive
estimations is very low, we can easily assume that the
environment is not changing too much and that the path-
loss value is close to the mean of α̂ over a certain
interval of time. Conversely, if a high variance in the
prediction is detected over a certain period of time, we can
easily assume that the channel condition is changing and
we can take the appropriate decisions over the anchor
nodes.
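The stability test described above can be sketched as a simple variance check over a sliding window of recent estimates. The window size and threshold here are assumed, application-dependent values:

```python
import numpy as np

def channel_stable(recent_estimates, var_th=1e-3):
    """Declare the channel stable when the variance of the last few alpha
    estimates falls below a threshold; their mean is then a good summary
    of the current path-loss exponent."""
    est = np.asarray(recent_estimates, dtype=float)
    return bool(np.var(est) < var_th)

stable = channel_stable([2.01, 2.00, 2.02, 2.01])   # low variance: stable
changing = channel_stable([2.0, 2.6, 3.1, 3.5])     # high variance: changing
```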

Fig 3. Path-loss estimation for the real path-loss value and the right σ.


Fig 4. Path-loss estimation for the real path-loss value and a wrong σ.
It is important to note that the log-normal model (1) is
suitable when there are not many multipath components.
Therefore, in some real indoor scenarios this model cannot
be applied and we should study other, more complex
models like Nakagami [1]. The authors are working on a
solution, based on the results described here, for this kind
of models.
5. Conclusions
In this paper we have presented a particle filter algorithm
for the estimation of the path-loss parameter of a log-normal
propagation model. The importance of a good estimation of
the propagation model parameters to achieve good results in
applications such as location systems is well known.

The introduced algorithm can dynamically estimate
this parameter. In the experimental results section we have
shown the effect of possible deviations in the estimation of
this parameter in order to analyze the algorithm's accuracy.
We can conclude that the particle filter is very flexible and
suitable for solving the raised problem.

As a future line of work, we should study the
possibility of using other, more complex propagation
models, not as simple as the log-normal model used in this
paper, to take into account multipath fading effects, which
are very present in some real indoor scenarios.
Acknowledgments
This work has been supported by: 07TIC019105PR
(Xunta de Galicia, Spain), 2007/CENIT/6731 (Centro para
el Desarrollo Tecnológico Industrial, Spain) and TSI-
020301-2008-2 (Ministerio de Industria, Turismo y
Comercio, Spain).

References
[1] A. Boukerche, H. A. B. F. Oliveira, E. F. Nakamura, A. A. F.
Loureiro, ''Localization Systems for Wireless Sensor
Networks'', IEEE Wireless Communications, pp. 6-12, Dec.
2007.
[2] T. S. Rappaport, Wireless Communications: principles and
practice, 2nd edition, Prentice Hall, 2002.
[3] J. Rodas and C. J. Escudero, ''Joint Estimation of Position and
Channel Propagation Model Parameters in a Bluetooth
Network'', in Proc. Synergies in Communications and
Localization (SyCoLo), ICC, Germany, Jun. 2009.
[4] M. Klepal, R. Mathur, A. McGibney, D. Pesch, ''Influence of
People Shadowing on Optimal Deployment of WLAN
Access Points'', VTC, Los Angeles, USA, vol. 6, pp. 4516-
4520, Sep. 2004.
[5] E. Lutz, D. Cygan, M. Dippold, F. Dolainsky, W. Papke,
''The Land Mobile Satellite Communication Channel -
Recording, Statistics, and Channel Model'', IEEE
Transactions on Vehicular Technology, vol. 40, no. 2,
pp. 375-386, May 1991.
[6] C. Morelli, M. Nicoli, V. Rampa, U. Spagnolini, "Hidden
Markov Models for Radio Localization in Mixed LOS/NLOS
Conditions", IEEE Transactions on Signal Processing, vol.
55, no. 4, pp. 1525-1542, Apr. 2007.
[7] X. Li, ''RSS-based Location Estimation with Unknown
Pathloss Model'', IEEE Transactions on Wireless
Communications, vol. 5, issue 12, pp. 3626-3633, Dec. 2006.
[8] P. Tarrio, Ana M. Bernardos, J. R. Casar, ''An RSS
Localization Method Based on Parametric Channel Models'',
SensorComm, Valencia, Spain, Oct. 2007.
[9] K. Martinez, J. K. Hart, and R. Ong, ''Environmental sensor
networks'', Computer, vol. 37, pp. 50-56, Aug. 2004.
[10] A. Doucet, S. Godsill, C. Andrieu, ''On Sequential Monte
Carlo Sampling Methods for Bayesian Filtering'', Statistics
and Computing, vol. 10, pp. 197-208, 2000.
[11] S. Arulampalam, S. Maskell, N. J. Gordon, and T. Clapp, ''A
Tutorial on Particle Filters for On-line Non-linear/Non-
Gaussian Bayesian Tracking'', IEEE Transactions on Signal
Processing, vol. 50, no. 2, pp. 174-188, Feb. 2002.
[12] R. Douc, O. Cappé and E. Moulines, "Comparison of
resampling schemes for particle filtering", in Proc. of the 4th
International Symposium on Image and Signal Processing
and Analysis, pp. 64-69, Sep. 2005.


Javier Rodas González received an M.Sc. degree in Computer
Engineering from the University of A Coruña in 2007.
His areas of interest are indoor positioning combined with short-
range communications, wireless sensor networks (WSN), ad-hoc
networks, signal processing techniques for indoor localization,
hybrid positioning using both WSN and GNSS systems, and
embedded system design and prototyping.

Carlos J. Escudero Cascón received an M.Sc. degree in
Telecommunications Engineering from the University of Vigo in
1991 and a Ph.D. degree in Computer Engineering from the
University of A Coruña in 1998. He received two grants to stay at
Ohio State University as a research visitor, in 1996 and
1998. In 2000 he was appointed Associate Professor and, more
recently, in 2009, Government Vice-Dean of the Faculty of
Computer Engineering, University of A Coruña. His research
areas are signal processing, digital communications, wireless
sensor networks and location systems. He has published several
technical papers in journals and conferences and supervised one
Ph.D. thesis.
Grid Portals: Frameworks, Middleware or Toolkit
Xavier Medianero-Pasco, Belén Bonilla-Morales, Miguel Vargas-Lombardo

Technological University of Panama
Panama City, Panama

Abstract
Grid portals are interfaces for interconnection between Grid
resources and users. Grid portals are supported by components
called portlets and servlets, and by middleware. Frameworks,
middleware and/or development toolkits are used in the design
and development of grid portals; each presents distinctive
characteristics, as does any existing tool for grid portals
such as GridSphere, GridPort, PortalLab and others. The concept
and definition of the components of a Grid portal are not clear at
the time of their application, because different models and works
are named arbitrarily without taking into account the approach
and features of each concept. Their clarification can give an insight
into each grid portal and its existing components.

Keywords: Grid Portals, Middleware, Portlets, Servlets,
Framework.
1. Introduction
Grid computing is a technology that enables coordinated
exchange and use of hardware resources, software and
information in virtual organizations [1]. These resources
are not centralized, but distributed extensively across the
web. Grid computing environments are dynamic and open
and have autonomy to control services.

Generally, grid portals give access to the Grid structures
through portals or access points [2]. Grid portals are web-
based user interfaces which relate environments to users
within the Grid environment. The main function of a grid
portal is to interconnect different grids; in other words, to
unify and compact the information search and the utilization
of hardware and software resources, and to hide the level of
complexity of the Grid, enabling inexperienced users to
access and use the Grid.

Grid portals emerged as a solution to the need for a simple
and straightforward environment for grid exchange, so that
the middleware can handle the distributed resources provided
by the user through the portal.

The next generations of Grid portals are called semantic
Grid portals [3], whose objective is to integrate content,
data tracking and semantic information search. A semantic
portal is responsible for discovering the meaning of its
functions by itself: describing what it does, and giving details
of its uses and of the information generated, rather than
simply establishing which variables it needs and what it
returns in response.

There are several projects involved in the development of
Grid portals, some of them related to semantic portals [3],
such as the Finland Museums Portal, Cultural Relic Museum,
OntoWeb Portal, UDM Grid, among others. The
community-oriented generation of Grid portals includes
design projects such as PortalLab, Grid Portal Development
Kit, Astrolable, GridSphere, GridPort Toolkit, among others
[3,4,5]. All these projects have a common objective: to
interconnect different points (Clusters and Grids) into a larger
Grid.

Grid portals are built from portal frameworks, middleware
and sets of Java components, such as servlets and portlets,
with technical differences depending on architecture,
structure, functions and components.

This paper consists of several sections. Section II defines
portals and states their characteristics, definitions and
models. Section III defines the concepts, structure and
elements of frameworks, middleware and components.
Section IV provides information about portlets and
servlets. Section V provides the proper classification of
some Web portals whose definition does not match the
correct concept. Section VI presents the conclusions of the
research.
2. Portals
Portals are Web applications that enable easy user
interaction with the services and resources of the grid [6];
they utilize a support infrastructure called portlets, which
can be implemented in conjunction with servlets.

A Grid portal is an application server that offers a secure
online environment for collecting information on the
services and resources of the network [7]. In addition, the
Grid portal is the access point to a web system; it provides
an environment where the user can access the resources and
services of the Grid, implement and monitor network
applications and collaborate with other users [3].
Grid portals are responsible for managing the entry and
authentication of users (who play roles as authors,
searchers, administrators, researchers) through an interface
that provides a list of information resources, job scheduling
and file transfer [8].

The main disadvantage of such websites is that once they are
configured it is very difficult to adapt them to a new application [2].

Grid portals are made up of portlets that are responsible for
incorporating the same features, including validations and
search engines, across the grid. Each service is represented
by a portlet, so that the removal, modification or addition of
Web services has a small impact, since the coupling
between services is minimal.

Portals can be classified into two models depending on
their execution form and user control. OGCE and HotPage
are examples of portals that provide ubiquitous access
(present everywhere); the user handles certain aspects of
the Grid portal, including accounts with high-performance
computing resources, setup and configuration
responsibilities, and others. Portals that are controlled
directly by the administrator correspond to the second
model. An example of the second model is Punch In-Vigo,
where the administrator is responsible for almost all
aspects of the Grid portal [5].

According to [3], the portals can be classified into two
models depending on user control.

Some portals provide access to high-performance
computing resources at any time; users have access control
over any resource and can install what they need. Note that
in this type of portal users have knowledge of the Grid
middleware itself, and are therefore able to implement
changes.

Other portals give access and resource control to a
specific administrator user who is responsible for installing,
configuring and running applications and middleware. Users
(clients) only interact with the portal and are unaware of the
procedures running under it.

Of the two previous models, the second provides a more
stable and controlled setting, because control of the
portal is the responsibility of the administrators and users
only perform the operations they need. The level of
knowledge of the administrator, a specialized user, is much
higher than that of the users of the first model, even though
those users partially know how the Grid works.
These two models have advantages and disadvantages;
therefore, the model that best suits the needs of the intended
deployment is the one that should be used.

According to [6], portals can be classified into:
User-oriented portals: for a given area of a population.
Application-oriented portals: for specific applications and
uses of the Grid.
Development-oriented portals: portals that are tools for the
development of new portals.
2.1 Grid Portal based on GPIR [7]
The GPIR (Grid Port Information Repository) is a
framework that has three layers: user, service and
application. The grid architecture underlying GPIR has
three layers: a resource layer, a data layer and a client tier.
The resource layer controls and integrates the resources of
the grid; the data layer keeps track of the available
resources, while the client tier allows user interaction with
the Grid portal.

Concerning the framework, the user layer is responsible
for the interaction of the customer with the Grid portal,
specifically search and user tools. The service layer
controls authentication, security modules, and Grid
services that are not linked directly to the users.
Meanwhile, the resource layer is responsible for
monitoring, storing and handling Grid-related
information under a specific portal.
2.2 UDM Grid [3]
University Digital Museum (UDM) Grid is a Chinese
information Grid that aims to integrate the resources of some
thirty museums scattered across the network, in order to
share resources, filter and sort information according to the
requirements and knowledge level of the users, and
eliminate data islands by connecting all subnets.

The UDM Grid is an application of MIPGrid (Massive
Information Processing Grid), which was designed in 2004
to solve the problems of integration and heterogeneity of
information across the various virtual museums in China
[9].

Several projects have been carried out on the UDM Grid,
including studies on the progress of the portal and tools for
the design of ontologies and semantic annotations on the
UDM Grid [9], both in 2006. This project is currently run by
China Education Research groups.

The UDM Grid serves as a virtual digital museum that
facilitates access to information exchange and the utilization
of resources. It incorporates a Grid portal which consists of
several layers of services, as follows:

Grid Resource Layer: represents the scattered resources
that must be added and integrated into the Grid.

Grid Service Layer: stores unified Grid resource services,
allowing access to the portlet services. This layer holds the
ontology of the system services.

Portlet Service Layer: applies logic to applications and
services and adapts the Grid, hiding the complexity of the
Grid from the users through the portal.

Portlet Layer: contains the portlets used as pluggable
facilities in the Grid portal that provide the ontology
services.

Application Layer: takes the portlets stored in the portlet
layer and deploys them in the application to be run.
Application services are transparent to the user.

User Layer: comprises the users of the portal, including
standard users and resource suppliers.

These layers are integrated to form the UDM Grid
architecture; users interact with the application layer, where
elements of the portlet layer are deployed; these are run and
controlled by service portlets that are responsible for
linking resources to the Grid.

The UDM Grid architecture consists of twelve modules,
which are responsible for the following functions: semantic
search (the meaning of methods and functions, what they
do, and how they are applied and processed), converting a
query to a data schema, determining the ontology, providing
access to heterogeneous databases, detailing and classifying
the museum information resources, and providing database
and semantic annotation tools. The architecture integrates
heterogeneous resources into the information network,
including information, knowledge and software resources;
stores the results of user requests in the portal to increase
efficiency when the same resources are accessed again; and
integrates resources obtained from database access services
and Web service resources.
2.3 GridSphere [11]
GridSphere is a framework for Grid Web portals that
allows the integration of portlet and servlet containers. It
enables the design of portals in a fast and structured way.

GridSphere has a set of portlets that allow the following
functions to be performed:

authentication and login services, creation and
maintenance of accounts, user profile maintenance,
layout configuration, portlet subscription, managing local
files, notes and instant messaging.
3. Frameworks, middleware, toolkits?
There are differences among the concepts of frameworks,
middleware and toolkits:
3.1 Frameworks
Frameworks are designed structures that contain modules,
methods and software features whose function is to serve
as a template for applications derived from them. In other
words, derived applications use the same core and are
covered by a specific area.

Conceptually, frameworks are composed of two sections.
The core represents the classes, libraries, methods and
modules that are common and serve as the basis for all
applications; thus, all applications generated by the
framework will have the same basic features stored in this
layer. The second section is the slots, which represent the
elements that can be adapted, added, or simply ignored in
the application. You can imagine slots as checklists from
which you choose the additional methods that the
application must have; the elements in the slots are not
required for the framework to function.

Characteristics of the frameworks:

Frozen spots: represent those points of the framework that
are neither extensible nor adaptable. Frozen spots are the
basic components of the framework.

Hot Spots: are highly extensible, adaptable and
configurable.

Extension Points: segments that can link a hot spot and a
frozen spot, or two hot spots.
3.2 Middleware
Middleware is connectivity software that serves as an
intermediary between the various platforms on which
portals can run. Middleware interacts with physical
resources, users and built applications [10].

Middleware emerged in the 1980s, aiming to integrate
applications that ran on different platforms and/or
operating systems.

Characteristics of middleware:
Middleware is independent of the applications, uses
standard communication protocols, and runs
independently of the platform.
3.2.1 In-Vigo [5, 10]
In-Vigo is middleware that provides a distributed
environment where multiple applications coexist on virtual
and physical resources; it uses virtualization to create
virtual resources.

It was developed by the ACIS Laboratory using Java
Servlets as the core element of the system, and was first
implemented in 2004. It has different applications: it is used
in DineroIV (a cache memory simulator), and can be
configured to run a large number of trial and scientific
applications such as XSpim (a simulator) and Gamma (a
chemical testing package).

In-Vigo is based on a standard three-layer virtualization
architecture (abstraction of resources):

First layer: creates a pool of virtual resources, namely
virtual machines, data, applications and networks. Jobs are
assigned to virtual resources, and virtual resources are
controlled only through the domain.

Second layer: Grid applications are instantiated as
"services" that can be added if needed.

Third layer: the value-added services are presented to
users through portals and other channels.
In this model, three portlets are used for generating a
dynamic interface, for displaying and monitoring the
execution of activities, and for conveying the messages
generated by the GAP.

It defines three roles for users:

The first role is the plain user, who is responsible for
interacting with the Grid portal using any of the three
modules presented in the portlets. Users are clients
(humans or other services) who use the legacy scientific
applications. They have a working knowledge of how to
use the application to be invoked, but not of the
installation requirements of the application or of the
underlying middleware involved in the computation.

The second role is the application expert user. This
user does not use the same portlets as above but, using
Grid-enabling instructions, can access the application
descriptor. This role is well informed about Grid-enabling
the application but does not know the technical details of
the Web application middleware.

The third role is the administrator, who is responsible
for managing the Grid resources and portals. Administrators
have in-depth knowledge of Grid computing middleware
and portal systems, even though they do not understand
in depth the behavior of the applications on the Grid
computing portals.
3.3 Toolkits
Toolkits are development tools that incorporate features for
designing Grid portals.
3.3.1 Grid Portal Toolkit
The Grid Portal Toolkit is a development tool for creating
web portals and applications for Grid infrastructure [12]. It
was developed by the Texas Advanced Computing Center
(TACC) [7], and its security model is based on the Globus
GSI (Grid Security Infrastructure) for single login and
authentication to remote servers.

It automatically incorporates the use of MyProxy, making
it possible to authenticate users through identification
and credentials for a short period of time [12], using the
JSR 168 standard.
3.3.2 Globus Toolkit
The Globus Toolkit is a development toolkit used for
building Grids. It was developed by the Globus Alliance in
1998; the most popular versions incorporate GT 3.x with
OGSI (Open Grid Services Infrastructure), and GT 3.9 and
GT 4.0 supported by WSRF (Web Services Resource
Framework) [2].

The Globus Toolkit includes features that can be used
either independently or together with the application, such
as security software, information infrastructure, resource
management, and fault detection, among others.
4. Portlets and Servlets
Portlets are Web components based on Java or Java Server
Pages that provide a modular structure which allows and
facilitates the establishment of permissions, increases
granularity, and provides greater control over Grid portals
[2].

Java Servlets are components of a web server whose role
is to drive the implementation, in this case of a Grid portal.
According to [13], the difference between a servlet and a
portlet is that portlets are multi-step, user-facing
applications; portlets are used in Grid portals as pluggable
interface components that form the user presentation layer,
instead of the whole portal as servlets do [5].

Portlets contribute to system stability through the low
coupling of services; in other words, if the portlet that
executes the functionality of a service is modified or
deleted, or a new one is added, other services are not
strongly affected by this event.

The most widely used Java portlet standards are JSR 168,
called Java Portlet v1.0, and JSR 286, called Java Portlet
v2.0. One of the main objectives in the area of portlets is to
maintain the interoperability of the data in the Grid portal.
In [13], portlets are classified into data-based and
property-based APIs.

The data-based property allows portlets to share
information among themselves, in the way the concept of a
portlet application is introduced by the JSR 168 standard.
This standard introduces interfaces designed for portlets on
the Java platform, and the specific runtime behavior and
relationship that should exist between a portlet and its
container. The property-based API can provide a
programming interface for portlets, so that they can
communicate their status to parties directly related to them.
In addition, portlets can be used within the Grid as tools to
apply ontology to Grid portal services [13]. This is
accomplished by incorporating the standard languages
OWL-S or BPEL4WS into the design of portlets, so that
they can capture the data input, output and process that
must form the semantics.

Portals employ portlets as pluggable and related
components that provide a presentation layer and user
interface, allowing Grid portals to have a complete and
easy-to-maintain structure by reducing coupling.

The JSR 168 (Java Specification Request 168) standard is
a portlet specification in charge of maintaining
interoperability between Grid portals and the portlets they
use. Thus its composition does not affect any of these or
cause inconvenience in the implementation of a given service.

The JSR 168 standard provides an API for portlets to
generate URLs that exploit the context information
provided during the rendering of the portal. JSR 168
differentiates two types of portlet URLs: navigation URLs
and resource URLs [14].

Sometimes portlets cannot run all the features needed in
a Grid portal; in these cases, servlets are incorporated to
work in tandem with them [15].

The main advantages of portlets include the following:
they use the JSR 168 standard, can be added to or removed
easily from a portal environment, and can be chosen
according to the needs of users. Portlets in a group of
components can access multiple service groups [16].

Portlet and Servlet containers manage Portlets and Servlets,
respectively, and provide an environment for their execution.

Servlets communicate directly with their clients, while
portlets are invoked indirectly via the portal application.
Portlets and Servlets are similar in many respects; the main
difference is that Portlets run in a portal environment,
whereas servlets run in a standard servlet container [17].
5. Discussion
The classification of the components of Grid Portals varies
across the different papers presented; nonetheless, they
coincide in the concepts of Frameworks, Middleware
and Development Toolkits.

The GridSphere Project is always considered a framework in
[2, 11, 18, 19, 20], but it is also considered a development
toolkit in [3]. Based on the characteristics stated, GridSphere
is a framework for Grid portals because it is a prototype for
the generation of certain web sites. It allows fast
configuration of variant elements of the framework, such
as the core set of portlets that controls a wide range of
portal services. In addition, it provides features for the
development of other elements of Grid portals. Jetspeed 2
is a framework according to [2].

Other projects, such as GridPort, have been named differently
by the quoted authors: it is considered middleware in [2], a
portal framework in [7], and a development toolkit in [12].
GridPort is based on the Globus Grid Security Infrastructure
and provides tools for the creation, development and
configuration of portals; consequently, it can be considered
a development toolkit.

Additionally, PortalLab is considered a Grid Portal in [17]
and a toolkit in [3]. One of the features of the PortalLab
development toolkit is that it allows the creation of
semantic portals consisting of Web Service oriented
reusable portlets.

Globus is a toolkit [21] that has characteristics of both
middleware and a development toolkit. It is designed
specifically for the development of grid portals, including
their components.

The In-Vigo project is a middleware [5, 10] based on its
interconnection and independence characteristics.
Astrolabe is a Grid Portal [4] that can be considered a
Framework because it provides features similar to
GridSphere as detailed in [4].

GTLab is a set of libraries supporting workflows using a
Directed Acyclic Graph (DAG) framework supported by the
Globus Toolkit. Meanwhile, UDMGrid is a grid portal that
does not correspond to a framework, middleware or toolkit
structure, since it is a project designed for a specific
use and relies on the implementation of elements external
to the project itself.
6. Conclusion
Grid portals are intermediary applications between the user
and the resources and services of the Grid. Portals consist
of portlets, servlets and other Java components.

To design a Web portal, we use a portal framework,
middleware, Java components or development toolkits.
These components differ in structure, architecture and
features; hence, correct and consistent naming in the
different works provides the best understanding of them.
Portal frameworks are structures defined for a specific
portal model, while middleware facilitates communication
between structures, and toolkits allow the development of
a portal from pre-developed tools.
References
[1] I. Foster, "The Anatomy of the Grid: Enabling Scalable
Virtual Organizations," in Proceedings of the 1st
International Symposium on Cluster Computing and
the Grid (CCGRID 01), 2001.
[2] Y. Cai, et al., "Portlet-based Portal Design for Grid
Systems," in the Fifth International Conference on Grid
and Cooperative Computing Workshops, 2006, p. 5.
[3] X. Chen, et al., "A Multilayer Portal Model for
Information Grid," in The Third ChinaGrid
Conference, 2008, p. 8.
[4] H. Lin, et al., "Astrolabe: a Grid Operating
Environment with Full-fledged Usability," The Sixth
International Conference on Grid and Cooperative
Computing (GCC 2007), p. 6, 2007.
[5] V. Sanjeepan, et al., "A Service-Oriented, Scalable
Approach to Grid-Enabling of Legacy Scientific
Applications," in IEEE International Conference on
Web Services, 2005, p. 8.
[6] B. Bansal and S. Bawa, "Design and Development of
Grid Portals," in IEEE Region 10 Conference,
Melbourne, Australia, 2005, p. 5.
[7] J. Fang, et al., "Grid Portal System Based on GPIR,"
presented at the Second International Conference on
Semantics, Knowledge, and Grid, IEEE, 2006.
[8] X. Meng, et al., "Applying Service Composition in
Digital Museum Grid," Springer-Verlag Berlin
Heidelberg 2005, 2005.
[9] X. Chen, et al., "Toolkits for Ontology Building and
Semantic Annotation in UDMGrid," in the
International Workshop on Metropolis/Enterprise Grid
and Applications, 2005, pp. 486-495.
[10] S. Adabala, et al., "From virtualized resources to
virtual computing grids: the In-VIGO system,"
Advanced Computing and Information Systems (ACIS)
Laboratory, 2004.
[11] J. Novotny, et al., "GridSphere: An Advanced Portal
Framework," in the 30th EUROMICRO Conference,
IEEE, 2004.
[12] M. Dahan, et al., "Grid Portal Toolkit 3.0 (GridPort),"
in Proceedings of the 13th IEEE International
Symposium on High Performance Distributed
Computing, 2004, p. 2.
[13] O. Díaz, et al., "Improving Portlet Interoperability
Through Deep Annotation," p. 10, 2005.
[14] O. Díaz and I. Paz, "Turning Web Applications into
Portlets: Raising the Issues," in the 2005 Symposium
on Applications and the Internet, 2005, p. 7.
[15] B. Beeson and A. Wright, "Developing Reusable
Portals for Scripted Scientific Codes," in Proceedings
of the First International Conference on e-Science and
Grid Computing, 2005, p. 6.
[16] S. Trujillo, et al., "Feature Oriented Model Driven
Development: A Case Study for Portlets," in 29th
International Conference on Software Engineering,
2007, p. 10.
[17] M. Li, et al., "PortalLab: A Web Services Toolkit for
Building Semantic Grid Portals," in the 3rd IEEE/ACM
International Symposium on Cluster Computing and
the Grid, 2003, p. 8.
[18] X. D. Wang, et al., "Flexible Grid Portlets to Access
Multi Globus Toolkits," in The Fifth International
Conference on Grid and Cooperative Computing
Workshops, 2006, p. 6.
[19] R. Cao, et al., "USGPA: A User-centric and Secure
Grid Portal Architecture for High-performance
Computing," in International Symposium on Parallel
and Distributed Processing with Applications, IEEE,
2009, p. 7.
[20] X. Yang and R. Allan, "Bringing AJAX to Grid
Portals," IEEE, p. 8, 2007.
[21] F. Ionescu, et al., "Portlets-Based Portal for an E-
Learning Grid," CIMCA 2008, IAWTIC 2008, and ISE
2008, 2008.



Xavier Medianero-Pasco is a student of the Master of Science in
Information and Communication Technology at the Technological
University of Panama. He was awarded a Bachelor's degree in
Engineering and Computer Science from the Technological
University of Panama in 2009. His research interests include
Aspect Oriented Programming Framework, Grid Computing and
other topics.

Belen Bonilla-Morales is a student of the Master of Science in
Information and Communication Technology at the Technological
University of Panama. She was awarded a Bachelor's degree in
Engineering and Computer Science from the Technological
University of Panama in 2009. Her research interests include
Software Engineering, Semantic Web and Grid Computing.

Miguel Vargas-Lombardo is a professor at the Technological
University of Panama, with a PhD awarded by the Polytechnic
University of Madrid. His main research lines are grid
computing, health-grid and e-health.









Evaluation of JFlex Scanner Generator Using Form Fields Validity
Checking


Ezekiel Okike¹ and Maduka Attamah²

¹ School of Computer Studies, Kampala International University,
Kampala, 256, Uganda

² Department of Computer Science, University of Ibadan,
Ibadan, Oyo State 02 234, Nigeria

Abstract
Automatic lexical analyzer generators have been available
for decades now with implementations targeted for C/C++
languages. The advent of such a tool targeted to the Java
language opens up new horizons, not just for the language
users but for the Java platform and technology as a whole.
Work has also been done by the Java language developers
to incorporate much of the theory of lexical analysis into
the language. This paper focuses on the application of the
JFlex tool (a Java based lexical analyzer) towards the
generation of a standalone class that will work as one more
entity in a solution space. The study application is a lexer
that serves to check the validity of fields in a form.
Keywords: Lexical Analyzer, Compiler Generators, Java
Language, Validity Checking

1. Introduction

A frequently encountered problem in real-life
applications is that of checking the validity of field
entries in a form. For example, a form field may
require a user to enter a strong password, which
usually must contain at least one lower case letter,
one upper case letter and a digit. If the user fails to
enter a password matching this specification, the
program should respond by alerting the user with an
appropriate message such as "Your password is not strong".

The job of checking the validity of fields in our
application thus properly falls to the lexical analyzer.
In this case, the Graphical User Interface (GUI) form
collects the user inputs, constructs an input string
from the input fields and user supplied values, and
channels the input string to the scanner. The scanner
matches each segment (field) of the input string
against a regular expression and reports its
observation. The report is thus generated and fed
back to the GUI for the user to see. The user is
allowed to correct any erroneous field whenever it
appears. The model for this application is shown in
figure 1.

Specifically, the application in question (see figure 2)
is a Web Directory Form. Web administrators could
use it to organize the different administrative
information they may have on the web sites they
administer. The web directory form thus forms but
one module of a larger software application. This form could
check the validity of the fields before they are
forwarded to the database. The fields checked in this
case are as follows:

Web URL, IP Address, E-Mail Address, GSM Phone
Number (Global), Strong Password, Security
Question, Country, Time Zone.

Thus regular expressions for the above fields in a
single lexical specification file generate a single
scanner. Using JFlex (Java Flex), a scanner
implementation in Java following a given lexical
specification can be achieved.
It is also possible to achieve the implementation in
other languages such as C or C++ with Lex, or with
Flex, which is a faster version of Lex [3][10].
Figure 1: Model of GUI Lexical Analysis
for a Web Directory Application



Figure 2: Snapshot of the Web Directory Application


This paper is thus focused on the use of the JFlex tool
to generate a scanner for a language defined by
extended regular expressions and supported by the
Java programming language.

2. The Method

The development and evolution of Lex and Flex is
described in [7][10]. Originally these tools were
developed for the UNIX operating system to generate
scanners in the C programming language but
afterwards versions for other operating systems and
programming languages began to surface. Popular
Java versions of lex include JLex and JFlex. Flex can
be downloaded along with its documentation at [1]
whereas JFlex can be downloaded at [2].

JFlex was written in Java. One consequence of this is
that in order to use the tool you must have a Java
Runtime Environment (JRE) properly set up. Other setup
procedures for the JFlex tool are also required.

The setup of JFlex involves two major steps, namely
installation of the JRE [3][4] and installation of
JFlex [2][5]. Following appropriate installation and
configuration procedures, figure 3 shows that Java is
properly configured on the local machine, while
figure 4 shows that the JFlex tool is well configured.






Figure 3: Snapshot That Shows That Java
Is Well Configured On The Local Machine



Figure 4: Snapshot Which Shows That Java
JFlex Tool is Well Configured and Ready

2.1 The Lexical Specification for JFlex

A lexical specification file was created from a text
file (.txt) by changing the extension of the file from
.txt to .flex. This implies that a specification file
can be edited by simple text editors like notepad.exe
or wordpad.exe (both on Microsoft Windows
operating system).


2.2 The structure of the lexical specification file

The lexical specification file for JFlex consists of
three parts divided by a single line starting with %%
[8][10]:

User code
%%
Options and declarations
%%
Lexical rules

2.2.1 User Code

The first part contains user code that is copied
literally into the beginning of the source file of the
generated scanner, just before the scanner class is
declared in the java code. This is the place to insert
package declarations and import statements.

2.2.2 Options and Declarations

The second part of the lexical specification contains
options to customize the generated scanner (JFlex
directives and Java code to include in different parts
of the scanner), declarations of lexical states and
macro definitions for use in the third section
(Lexical Rules). Each JFlex directive must be placed
at the beginning of a line starting with the %
character.

2.2.3 Lexical Rules

This section is introduced by another set of %%
characters on a new line after all the intended macro
definitions. It contains a set of regular expressions
and actions (Java code) that are executed when the
scanner matches the associated regular expression
[5].

The lexical rules are also called transition rules. An
example is shown below:

{EXPR} {
    System.out.print("Valid > " + yytext());
}

The above example means that if an expression is
matched, it is displayed so that the string
"Valid > expression" appears on the output screen.
Thus the transition or lexical rule defines what
happens when a pattern, defined by a regular
expression, is matched.
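
A complete minimal specification combining the three sections
might look as follows (the EXPR macro and its action are
illustrative, not taken from the study's specification file):

```
/* user code section (empty here) */
%%
%standalone
EXPR = [a-z]+
%%
{EXPR}  { System.out.print("Valid > " + yytext()); }
```

Running JFlex on such a file produces a scanner that echoes
every run of lowercase letters it recognizes.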

2.3 Lexical Specification for the Web Directory
Scanner

The file containing the lexical specifications for this
study is WebDirectoryScanner.flex. The user code
section of this file is empty as there was no need for
the package statement and the import statement. The
classes were made standalone and simply copied to
an application directory. Thus the specification file
begins as follows:


%%
%public
%class WebDirectoryScanner
%standalone
%unicode


To make the generated class accessible by other
classes in separate applications, the %public option
was used. The %class option specifies
WebDirectoryScanner as the name of the generated
scanner class. The %standalone option was used
because the scanner would not be plugged into a
parser. The scanner is also required to print token
information on the console (in the background, since
the main view port for the user is the Graphical
User Interface) to be used as debugging information.

To ensure that the character set constituting the
terminals of the scanner is sufficient to cover any
special character, the %unicode option was activated.
This is especially important if the . (dot) character
class is used in some of the regular expressions; the
dot character class refers to all characters.

Class variables were not declared; hence the %{ ... %}
code-inclusion block was not necessary.


2.4 Macro Definitions

These are included in the second section, usually
after the options. A macro definition has the form

macroidentifier = regular expression

that is, a macro identifier (a letter followed by a
sequence of letters, digits or underscores) that can
later be used to reference the macro, followed by
optional white-space, an "=", optional white-space,
and a regular expression. The regular expression on
the right hand side must be well formed and must not
contain the ^, / or $ operators [5][8][9].




2.4.1 The Macros of WebDirectoryScanner.flex
File

Macros were defined for the Web URL, IP Address,
E-Mail Address, GSM Number, Strong Password,
Security Question, Country and Time Zone.

3. Analysis of the Lexical Specification for
the Web Directory Scanner

The breakdown of the associated macros is as
follows:

3.1 Web URL

The URL (Uniform Resource Locator) must point to
a web page or a definite resource to be valid. The
following is a listing of the web URL module from
the lex specification file.

/***** WebURL Module *****/
WEBURL = ((http:\/\/)?www|(https:\/\/)?www|
          (ftp:\/\/)?ftp|(tftp:\/\/)?tftp)
          (\.)(.)+(\.)[a-z]{2,6}

To the left of the assignment (=) lies the name of the
macro (WEBURL); to the right lies the regular
expression defining this macro.

This regular expression defines all strings beginning
with an optional http://www., https://www.,
ftp://ftp. or tftp://tftp., followed by any
string of length one or more, followed by another dot
and terminating with two to six characters. Some of
the strings in this language include:

www.yahoo.com
https://www.allmp3.net
ftp://ftp.glide.com/personal/readme.html
www.operators.org/review.html

The forward slash (/) is a meta symbol (meaning that
JFlex treats it as one of its own operator symbols).
The only way to use this symbol as a terminal in a
regular expression is to escape it as \/ that is,
a backslash (the escape character) followed by a
forward slash. The same applies to the dot ., a meta
symbol which stands for the universal set of all
characters available to the particular flex
specification (this universal set is defined with
%unicode, %ascii, etc., in the options section). To
use the dot character as a terminal, it must likewise
be escaped. All meta characters must receive the same
treatment if they are intended to be used as terminals
in the language in question.

The question mark is the optional closure, which
indicates that its operand may occur once or not at
all. The square brackets [] define character sets in
shorthand; for example, [a-z] means one of a, b, and
so on up to z. The {2,6} is a range specifier which
indicates that its operand (in this case the character
set [a-z]) can occur a minimum of two times and a
maximum of six times. The + meta symbol indicates
positive closure, meaning there must be at least one
iteration (the empty string is not possible). The *
meta symbol indicates the Kleene closure, meaning the
iteration can occur zero or more times (the empty
string is possible).

JFlex also uses the Java format for comments, and as
such comments can be contained between /* and */
(for multiline comments) or after // (for single
line comments).

3.2 IP Address

The listing is as shown below.

/***** IP Address Module *****/
OCTET = ([0-1][0-9][0-9])|(2[0-5][0-5])|
        ([0-9][0-9])|[0-9]

IPADDRESS = {OCTET}\.{OCTET}\.{OCTET}\.{OCTET}

An IP address has four octets, each containing a
number between 0 and 255 inclusive. The OCTET macro
defines this range and is explained as follows: when
the first digit is a zero or one, the second and
third digits can range from 0 through 9. When the
first digit is two, the second and third digits can
only range from zero through five. When there are
only two digits, they can both range from zero to
nine. When there is only one digit, it can range from
zero to nine. The semantics of the IPADDRESS macro
follows intuitively. Examples of strings in this
language include:

127.0.0.0
192.168.5.6
255.255.78.10
10.0.1.11

3.3 E-Mail Address

The listing is as shown below:

EMAILADDRESS = [a-z0-9_]+@[a-z0-9_]+(\.[a-z]{2,6})+

An e-mail address is made up of a string consisting of
a possible mix of lower case letters, digits and
underscores, separated by the @ symbol and ending
with a dot and a string of lowercase letters of length
two to six. A closure (the trailing +) is placed after
\.[a-z]{2,6} to show that the extension could repeat.
Examples of strings in this language include:

pfizer@yahoo.co.uk, james@gmail.com,
bluecollar@37.com, emeks_rex@tim.net
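
For testing outside JFlex, the macros above can be
hand-translated to java.util.regex syntax. The pattern
strings below are re-expressed by hand for illustration
and are not generated by JFlex:

```java
import java.util.regex.Pattern;

public class MacroCheck {
    // Hand-translated java.util.regex versions of the WEBURL,
    // IPADDRESS and EMAILADDRESS macros (illustrative only)
    static final String OCTET =
        "(?:[0-1][0-9][0-9]|2[0-5][0-5]|[0-9][0-9]|[0-9])";

    static final Pattern WEBURL = Pattern.compile(
        "((http://)?www|(https://)?www|(ftp://)?ftp|(tftp://)?tftp)"
        + "\\..+\\.[a-z]{2,6}");
    static final Pattern IPADDRESS = Pattern.compile(
        OCTET + "(?:\\." + OCTET + "){3}");
    static final Pattern EMAILADDRESS = Pattern.compile(
        "[a-z0-9_]+@[a-z0-9_]+(\\.[a-z]{2,6})+");

    public static void main(String[] args) {
        System.out.println(WEBURL.matcher("www.yahoo.com").matches());         // true
        System.out.println(IPADDRESS.matcher("192.168.5.6").matches());        // true
        System.out.println(EMAILADDRESS.matcher("james@gmail.com").matches()); // true
        System.out.println(EMAILADDRESS.matcher("not-an-email").matches());    // false
    }
}
```

Note that in Java regex syntax the forward slash needs no
escaping, while the dot still does.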

3.4 GSM Number

/***** Global GSM Number Module *****/
GSMNUMBER = (\+[0-9]{3}|[0-9])[0-9]{10}

A regular expression that can match any GSM (Global
System for Mobile Communications) number was
implemented, as listed above. GSM numbers usually
comprise ten to eleven digits (or twelve to thirteen
digits if the country code is included).

3.5 Strong Password

The definition of a strong password requires a string
which contains at least one lowercase letter, one
uppercase letter and one digit. The strong password
module is shown below, with the macros FLAGONE,
FLAGTWO and FLAGTHREE standing for the three required
character classes respectively. Possible strings in
this language would include:

234loRd, socK3vbn*, YmeL01d, shu^b5b1

/***** Strong Password Module *****/
FLAGONE = [a-z]
FLAGTWO = [A-Z]
FLAGTHREE = [0-9]

STRONGPWD = (.)*{FLAGONE}+(.)*{FLAGTWO}+(.)*{FLAGTHREE}+(.)*|
            (.)*{FLAGONE}+(.)*{FLAGTHREE}+(.)*{FLAGTWO}+(.)*|
            (.)*{FLAGTWO}+(.)*{FLAGONE}+(.)*{FLAGTHREE}+(.)*|
            (.)*{FLAGTWO}+(.)*{FLAGTHREE}+(.)*{FLAGONE}+(.)*|
            (.)*{FLAGTHREE}+(.)*{FLAGTWO}+(.)*{FLAGONE}+(.)*|
            (.)*{FLAGTHREE}+(.)*{FLAGONE}+(.)*{FLAGTWO}+(.)*

3.6 Security Question

A string is considered to be a question when it ends
with a question mark. A regular expression for this is
as follows:

SECURITYQUESTION = (.)+\?

3.7 Country

The only requirement for a country name in the
regular expression is that it contains only characters
from A through Z, whether uppercase or lowercase.
The listing is as follows.

/***** Country Module *****/
COUNTRY = [a-zA-Z ]+

Since the language is English, special characters
involving modifiers like the dieresis are not
accepted (at least in this version). The space
character is included in the character range.

3.8 Time Zone

/***** Time Zone Module *****/
TIMEZONE = ((g|G)(m|M)(t|T))[ ]*((\+|-)[ ]*
           ([1-9]|10|11|12):(00|30|45)|(\+13:00))?
           [ ]*([a-zA-Z]|[ ]|,|;)+

The time zone, whose regular expression listing is
shown above, expects a string beginning with gmt (the
mixture of cases does not matter), followed by
optional spaces, an optional plus or minus sign,
optional spaces, and a number ranging from one
through twelve. An additional +13 is added for
completeness (the time zone scale is not symmetrical).
Finally the string ends with a country or region name
with possible commas or semicolons, as defined by the
following part of the regular expression:
([a-zA-Z]|[ ]|,|;)+

The pair of square brackets with a space in between,
[ ], is the syntax for adding the space character to
the regular expression.

3.9 Analysis of the Lexical Rules for the Web
Directory Scanner

The lexical or transition rules section forms the
heart of the real usefulness of this scanner. For any
of the patterns matched, the lexical rule updates a
custom class (Report.java) which keeps a record of
whether each field is valid or not. The report class
has one class variable (a boolean) for each field in
the web directory form. The class variable
declarations in Report.java are shown below:

public static String report = "";
public static boolean webUrlOk = false;
public static boolean ipAddressOk = false;
public static boolean emailAddressOk = false;
public static boolean gsmPhoneNoOk = false;
public static boolean strongPwdOk = false;
public static boolean securityQuestionOk = false;
public static boolean countryOk = false;
public static boolean timeZoneOk = false;
public static String SATISFACTORY =
    "All Fields Are Satisfactory!";

From all the fields in the form, a single string is
constructed and passed to the scanner for analysis.
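
As a quick cross-check, the GSMNUMBER macro can also be
hand-translated to java.util.regex (an illustrative sketch,
not part of the study's code):

```java
import java.util.regex.Pattern;

public class GsmCheck {
    // Hand-translated version of the GSMNUMBER macro:
    // an optional '+' with a three-digit country code, or a single
    // digit, followed by ten more digits
    static final Pattern GSMNUMBER =
        Pattern.compile("(\\+[0-9]{3}|[0-9])[0-9]{10}");

    public static void main(String[] args) {
        System.out.println(GSMNUMBER.matcher("+2348012345678").matches()); // true
        System.out.println(GSMNUMBER.matcher("08012345678").matches());    // true
        System.out.println(GSMNUMBER.matcher("12345").matches());          // false
    }
}
```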

The string is constructed as follows: each value
supplied by the user is concatenated to a string which
stands for the title (or identification) of the field
in which the entry was made, and terminated by a \n to
mark the end of that segment or field. For example, if
the user enters mathi@yahoo.com in the E-Mail Address
field, this yields "Email Address:mathi@yahoo.com\n"
as the resulting string. For the form shown in figure
5, the string passed to the scanner would be:

"Web Url:http://www.google.com\nIP Address:10.0.11.98\n
Email Address:gmailadmin@gmail.com\n
GSM Number:+8938097865342\n
Strong Password:goGetM1cors0ft\n
Security Question:How are you today?\n
Country:Moscow\n
TimeZone:GMT + 3:00 St. Petersburg, Volgograd\n"

Figure 5: Snapshot of Web Directory Application under Testing

\n is the escape sequence for a new line. When the
scanner sees a new line it stops trying to recognize
the pattern and compares what it already has to see if
it is sufficient.

4. Discussion

Consider the transition rules below:

%%
"Web Url:"{WEBURL}"\n"
    { Report.webUrlOk = true; }
"IP Address:"{IPADDRESS}"\n"
    { Report.ipAddressOk = true; }
"Email Address:"{EMAILADDRESS}"\n"
    { Report.emailAddressOk = true; }
"GSM Number:"{GSMNUMBER}"\n"
    { Report.gsmPhoneNoOk = true; }
"Strong Password:"{STRONGPWD}"\n"
    { Report.strongPwdOk = true; }
"Security Question:"{SECURITYQUESTION}"\n"
    { Report.securityQuestionOk = true; }
"Country:"{COUNTRY}"\n"
    { Report.countryOk = true; }
"TimeZone:"{TIMEZONE}"\n"
    { Report.timeZoneOk = true; }

When the scanner sees the constant string "Web Url:"
followed by a string that matches the pattern {WEBURL}
and a newline character, there is a valid Web URL
field. The webUrlOk field of the Report class
(Report.webUrlOk) is set to true, indicating that the
field is valid. The same happens for the rest of the
string. At the end of the string (zzAtEOF) (see also
4.1), all the fields of the Report class are
inspected, and if any of them is still false (they are
false by default), the scanner assumes that there is
an error in the corresponding field. An error message
tied to the field or fields is displayed.

The following code snippet compiles the report string
to be displayed on the Form Check Report pane:

public static String getReport(){

    report = ""; // clear the string of initial values

    if(!webUrlOk)
        report = report + "Invalid Web URL field\n";
    if(!ipAddressOk)
        report = report + "Invalid IP Address field\n";
    if(!emailAddressOk)
        report = report + "Invalid E-Mail Address field\n";
    if(!gsmPhoneNoOk)
        report = report + "Invalid GSM Number field\n";
    if(!strongPwdOk)
        report = report + "Your Password Is Not Strong. "
            + "Have At Least One Digit, One Uppercase "
            + "And One Lowercase Letter\n";
    if(!securityQuestionOk)
        report = report + "Your Question Is Not Complete, Check It\n";
    if(!countryOk)
        report = report + "Invalid Country Name field\n";
    if(!timeZoneOk)
        report = report + "Invalid Time Zone field\n";
    if(report.isEmpty())  // content check; == would compare references
        report = report + SATISFACTORY;

    // reset all fields before exiting
    webUrlOk = false;
    ipAddressOk = false;
    emailAddressOk = false;
    gsmPhoneNoOk = false;
    strongPwdOk = false;
    securityQuestionOk = false;
    countryOk = false;
    timeZoneOk = false;

    return report;
}

If the value of webUrlOk is false, the report string
is filled in so as to reflect the situation:

if(!webUrlOk)
    report = report + "Invalid Web URL field\n";

and the process continues likewise for the remaining
fields.
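
The per-field dispatch performed by the transition rules can
be imitated outside JFlex with a plain Java loop over the
constructed input string. The sketch below is a simplified
illustration; the field names and patterns are reduced stand-ins
for the full macro set:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class FieldDispatch {
    // Simplified stand-ins for the JFlex transition rules:
    // each field title maps to a hand-written pattern for its value.
    static final Map<String, Pattern> RULES = new LinkedHashMap<>();
    static {
        RULES.put("Email Address",
            Pattern.compile("[a-z0-9_]+@[a-z0-9_]+(\\.[a-z]{2,6})+"));
        RULES.put("IP Address",
            Pattern.compile("([0-9]{1,3})(\\.[0-9]{1,3}){3}"));
    }

    // Splits the input at each \n, matches every field value
    // against its pattern, and builds a report line per invalid
    // field, mirroring Report.getReport()
    public static String check(String input) {
        StringBuilder report = new StringBuilder();
        for (String segment : input.split("\n")) {
            int colon = segment.indexOf(':');
            if (colon < 0) continue;
            String field = segment.substring(0, colon);
            String value = segment.substring(colon + 1);
            Pattern p = RULES.get(field);
            if (p != null && !p.matcher(value).matches())
                report.append("Invalid ").append(field).append(" field\n");
        }
        return report.length() == 0
            ? "All Fields Are Satisfactory!" : report.toString();
    }

    public static void main(String[] args) {
        System.out.println(
            check("Email Address:mathi@yahoo.com\nIP Address:10.0.11.98\n"));
        // prints "All Fields Are Satisfactory!"
    }
}
```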

4.1 The Scanning Process

The following code, embedded as the action code for
clicking the Submit button on the graphical user
interface, starts off, sustains and terminates the
scanning process:

WebDirectoryScanner scanner = new WebDirectoryScanner(
        new StringReader(this.userInputs));
try {
    while ( !scanner.zzAtEOF )
        scanner.yylex();
} catch (IOException ex) {
    ex.printStackTrace();
}
this.reportTextArea.setText(Report.getReport());

The string passed to the scanner is assigned to a
string object userInputs (a class variable of class
WebDirectory). The constructor of the scanner class
WebDirectoryScanner takes an object of the
java.io.Reader class. Since StringReader is a subclass
of Reader, it can substitute for the more general
Reader; this is convenient because the input to the
scanner is simply contained in a string. The
WebDirectoryScanner object created is scanner.

scanner.yylex() is the method that performs the actual
scanning. When the end of file is reached, the scanner
state scanner.zzAtEOF is set to true to denote that
the end of the string has been reached. At this point
scanning stops and the content of the Report class is
displayed on the lower part of the application window.
This is achieved by calling:

this.reportTextArea.setText(Report.getReport());

4.2 Generating the Scanner
(WebDirectoryScanner.java)

To generate the scanner class, the JFlex tool is
executed by pointing it to the file
WebDirectoryScanner.flex, as in figure 6.

Clicking the Generate button displays the messages
shown in figure 7, at the same time as it drops the
generated file in the folder indicated as the output
directory. If errors were encountered, the source of
the errors is instead indicated on the Messages pane,
and the user would go back and correct the ill-formed
lexical specification.

Figure 6: Snapshot of JFlex at Code Generation Time

Figure 7: Snapshot Showing JFlex Messages after Generation of
Code

JFlex generates exactly one file containing one class
from the specification (unless additional classes are
declared in the first section of the lexical
specification).
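
The Reader-based input contract can be seen with a plain
StringReader, independently of the generated scanner (a
stand-alone sketch):

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

public class ReaderDemo {
    // Consumes a Reader character by character, the same way a
    // generated scanner buffers its input stream.
    public static String consume(Reader in) {
        StringBuilder sb = new StringBuilder();
        int c;
        try {
            while ((c = in.read()) != -1)  // -1 signals end of input
                sb.append((char) c);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // A StringReader is-a Reader, so it can be passed wherever
        // a Reader is expected
        System.out.println(consume(new StringReader("Web Url:www.yahoo.com")));
    }
}
```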

The generated class contains (among other things) the
DFA tables, an input buffer, the lexical states of the
specification, a constructor, and the scanning method
with the user supplied actions [5].

The message pane shows that the DFA has a total of
455 states before minimization and 212 states after
minimization, a reduction of over 50%. The generated
class is named Yylex by default; this can be
customized with the %class directive. The input
buffer of the scanner is connected to an input
stream via the java.io.Reader object which is
passed to the scanner in the generated constructor.

The main interface of the scanner to the outside
world is the generated scanning method (its default
name yylex(), and its default return type yytoken().
Most of the generated class attributes are
customizable (name, return type, declared exceptions
etc.). If yylex() is called, it will consume input
until one of the expressions in the specification is
matched or an error occurs. If an expression is
matched, the corresponding action is executed
immediately. It may return a value of the specified
return type (in which case the scanning method
returns with this value), or if it does not return a
value, the scanner resumes consuming input until the
next expression is matched. When the end of file is
reached, the scanner executes the EOF action, and
(also upon each further call to the scanning method)
returns the specified EOF value. Details of the
scanner methods and fields accessible in the actions
API can be found in [5].
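
The caller-side contract described above can be sketched without JFlex itself. The following is an illustrative stand-in, not generated code: the ToyScanner class and its one-character "rules" are invented for this sketch, but the loop shape (call the scanning method until it returns the EOF value) mirrors the behaviour described in the paragraph.

```java
import java.util.ArrayList;
import java.util.List;

public class YylexContract {
    // A hand-rolled stand-in imitating the generated scanner's contract:
    // yylex() consumes input until a "rule" matches and returns a value,
    // then returns the EOF value (-1 here) once the input is spent.
    static class ToyScanner {
        private final String input;
        private int pos = 0;
        ToyScanner(String input) { this.input = input; }
        int yylex() {
            // Rules with no return value (spaces here) are consumed
            // silently and scanning resumes, as the text describes.
            while (pos < input.length() && input.charAt(pos) == ' ') pos++;
            if (pos >= input.length()) return -1;   // the EOF value
            return input.charAt(pos++);             // a one-character token
        }
    }

    public static List<Integer> scanAll(String text) {
        ToyScanner s = new ToyScanner(text);
        List<Integer> tokens = new ArrayList<>();
        int t;
        // Keep calling the scanning method until the EOF value appears.
        while ((t = s.yylex()) != -1) tokens.add(t);
        return tokens;
    }
}
```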

4.3 Compiling the Generated Scanner

The generated scanner class is compiled using the
Java compiler.



Figure 8: Screenshot Showing the Command
for Compiling the Generated Code
Pressing the Enter key at the command line produces
the file WebDirectoryScanner.class, which the
Java Virtual Machine can then execute. This assumes
there were no errors in the code; otherwise
debugging would be done first.

Other class files needed for this case study were
Report.class and WebDirectory.class. The entry
point (main class) to the entire application is in
WebDirectory.class. Hence, this class was passed
to the Java Virtual Machine at the command line
at the start of execution.

4.4 Evaluation of JFlex tool

4.4.1 Some Limitations of JFlex 1.4.2

Some of the observed limitations of JFlex from this
study are as follows.

There are currently no provisions for assertions.
Assertions form part of the theory of extended
regular expressions. Some assertions are:
Lookahead assertion (?=)
Negative lookahead (?!)
Lookbehind assertion (?<=)
Negative lookbehind (?<!)
Once-only subexpression (?>)
Condition [if then] ?()
Condition [if then else] ?()|

With facilities like these, some regular expressions
would have been possible, or easier to write.
An example is the regular expression for a strong
password, which could have been written as:

(?=.*[0-9])(?=.*[a-z])(?=.*[A-Z])

Using the lookahead assertion, it could have easily
tracked the occurrence of digits, lowercase and
uppercase letters and even special symbols. To add
another character flag to our own version of regular
expression using facilities provided by JFlex would
have been very tedious. Facilities for lookahead
assertions should be included in later versions of
JFlex.
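
For comparison, Java's own java.util.regex package does support these lookahead assertions, so the strong-password check discussed above can be written directly. This is a minimal sketch; the class and method names are illustrative, and the 8-to-20 length bound anticipates the range requirement discussed next in the paper.

```java
import java.util.regex.Pattern;

public class StrongPassword {
    // One lookahead per required character class, then the overall length
    // bound. Each (?=...) matches at position 0 without consuming input,
    // so all three conditions apply to the same candidate string.
    private static final Pattern STRONG = Pattern.compile(
            "(?=.*[0-9])(?=.*[a-z])(?=.*[A-Z]).{8,20}");

    public static boolean isStrong(String candidate) {
        return STRONG.matcher(candidate).matches();
    }

    public static void main(String[] args) {
        System.out.println(isStrong("Passw0rdOk")); // true
        System.out.println(isStrong("alllower1"));  // false: no uppercase
    }
}
```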

As a sequel to the above, this study found that when
trying to set the range for the length of the password
field using the range specifier {x,y}, the JFlex tool
did not terminate during code generation: the process
of constructing the DFAs appeared to take infinite
time. Compared to the usual total generation time of
about 500 ms, JFlex had not finished even after
25 minutes. After several trials it was discovered
that the range specifier {8, 20}
(which says that the password should be at least 8
characters in length and at most 20 characters) was
the cause of the non-termination. Hence, the range
specifier was not useful in this case. JFlex also lacks
the special range specifier {x,}, which indicates at
least x repetitions with no upper limit. JFlex should
therefore be optimized in subsequent versions so that
the range specifier can be used in practical situations.

The way the space character is handled in JFlex is
not ideal. Defining a space character using [ ] is
very obscure, especially when it lies side by side
with other characters, for example [a-zA-Z ] (used
in the COUNTRY macro): the space character following
the capital Z is easy to miss. The use of escape
characters like \s or the POSIX class [:space:] for
the space character might be a better approach.
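
As a point of reference, Java's regex syntax (which JFlex specifications otherwise resemble) offers the \s escape. The sketch below is illustrative only: the CountryName class is invented here, and the pattern shown is an alternative to the paper's [a-zA-Z ] macro, not the authors' actual COUNTRY definition.

```java
import java.util.regex.Pattern;

public class CountryName {
    // Using \\s makes the separator visible, unlike the bare space
    // buried inside "[a-zA-Z ]": words of letters joined by whitespace.
    private static final Pattern COUNTRY =
            Pattern.compile("[a-zA-Z]+(\\s[a-zA-Z]+)*");

    public static boolean isCountry(String s) {
        return COUNTRY.matcher(s).matches();
    }
}
```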

JFlex does not check the Java code included in the
user code section and in the lexical rules for
correctness. It would be relatively straightforward to
plug the Java compiler (javac) into the JFlex tool to
perform a static check on the Java code, routing the
error messages to the JFlex display so that the user
can correct the Java code early on, instead of
waiting for a separate session at run time to start
debugging. We recommend the integration of
a lightweight debugger into JFlex.

The only uses well foreseen and provided for by the
creators of the JFlex tool are either standalone
scenarios (where the scanner runs as a complete and
separate application) or scenarios where the scanner
is used by an overlying parser (in which case the
scanning method periodically returns a token to the
parser). In our present use, the scanner has to be
content with being just one of the utility classes in
the entire application. The best option would be for
it to run standalone, but the field of the scanner
that indicates that it has reached end-of-file
(zzAtEOF) is a private member of the scanner class,
and JFlex does not generate accessor methods for
this field. Creating a non-standalone scanner makes
matters worse, because the scanner will expect to
scan, get a token, return the token and then stop
(waiting for a call by a parser). The class acting
as the parser would have to look out for a
predetermined integer in the returned tokens which
denotes an end-of-file from the scanner. In our
application, we wanted the scanner to scan through
the entire input stream before returning, and the
calling class is not a parser!

The generated scanner had to be modified for this
study's application. An accessor method for the
zzAtEOF field was implemented so as to be able to
track the end of file (EOF), as shown in the following
code snippet.

WebDirectoryScanner scanner =
        new WebDirectoryScanner(new StringReader(this.userInputs));
try {
    while ( !scanner.getZzAtEOF() )
        scanner.yylex();
} catch (IOException ex) {
    ex.printStackTrace();
}

The while statement will continue to execute the
scanning method yylex() while the end of file is not
reached (EOF is detected when the
scanner.getZzAtEOF() method returns true). We
recommend that JFlex should generate public
accessors for all the scanner fields so that
applications that are neither standalone nor parser
compatible can more easily utilize the scanner.
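
The modification the authors describe amounts to adding a one-line public accessor to the generated class. A minimal sketch follows; the ScannerSketch class is a stand-in (in practice the method is added to the generated WebDirectoryScanner, whose zzAtEOF field is maintained by the scanning engine).

```java
public class ScannerSketch {
    // In the generated scanner this private flag is set by the scanning
    // engine once the end of the input stream has been consumed.
    private boolean zzAtEOF = false;

    // Hand-written accessor, added after generation as the paper does,
    // so a caller that is not a parser can poll for end-of-file.
    public boolean getZzAtEOF() {
        return zzAtEOF;
    }
}
```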

References
[1] http://www.gnu.org/software/flex/
[2] http://www.jflex.de/
[3] http://java.sun.com/javase/6/download.jsp
[4] http://java.sun.com/javase/6/webnotes/install/
[5] JFlex User's Manual, Version 1.4.2, May 27, 2008.
JFlex installation pack accompanying manual.
[6] The Regular Expression Cheat Sheet, from
www.ILoveJackDaniels.com
[7] A. V. Aho et al., Compilers: Principles, Techniques
and Tools, 2nd Ed., Pearson Education Inc., 2007.
[8] S. Chattopadhyay, Compiler Design. New Delhi:
Prentice-Hall, xvii + 225 pp., 2005.
[9] K. D. Cooper and L. Torczon, Engineering a
Compiler. New York: MK, xxx + 801 pp., 2008.
[10] J. Levine, Flex and Bison: Text Processing Tools.
New York: O'Reilly Media, 304 pp., 2009.

Ezekiel U. Okike received the B.Sc. degree in Computer
Science from the University of Ibadan, Nigeria in 1992, the
Master of Information Science (MInfSc) in 1995 and the PhD
in Computer Science in 2007, all from the same university. He
has been a lecturer in the Department of Computer Science,
University of Ibadan since 1999. Since September 2008 he
has been on leave as a Senior Lecturer and Dean of the
School of Computer Studies, Kampala International
University, Uganda. His current research interests are in the
areas of Software Engineering, Software Metrics, Compilers
and Programming Languages. He is a member of the IEEE
Computer and Communication societies.

Maduka Attamah is currently a Researcher and Associate
Lecturer in the Department of Computer Science, University
of Ibadan. He also serves in the ICT Unit of the University of
Ibadan as a Systems Analyst and Software Engineer. He
holds a B.Eng. (2006) in Electronic Engineering from the
University of Nigeria, Nsukka, and an M.Sc. (2009) in
Computer Science from the University of Ibadan, Nigeria.
Kansei Colour Concepts to Improve Effective Colour Selection in
Designing Human Computer Interfaces
Tharangie K G D, Irfan C M A, YAMAD K and MARASINGHE A.

Department of Management and Information Systems Engineering, Nagaoka University of Technology
Niigata, Japan

Abstract
Colours have a major impact on Human Computer Interaction.
Although there is a very thin line between appropriate and
inappropriate use of colours, if used properly, colours can be a
powerful tool to improve the usefulness of a computer interface
in a wide variety of areas. Many designers mostly consider the
physical aspect of colour and tend to forget that a psychological
aspect of colour exists. However, the findings of this study
confirm that the psychological aspect, or the affective dimension,
of colour also plays an important role in colour interface design
towards user satisfaction. Using Kansei Engineering principles,
the study explores the affective variability of colours and how it
can be manipulated to provide better design guidance and
solutions. A group of twenty adults from Sri Lanka, aged from
30 to 40, took part in the study. The survey was conducted using
a Kansei colour questionnaire in normal atmospheric conditions.
The results reveal that the affective variability of colours plays
an important role in human computer interaction as an influential
factor in drawing the user towards, or withdrawing the user from,
the interface, thereby improving or degrading user satisfaction.
Key words: Computer Interfaces, Kansei, Kansei Engineering,
colours, Visual design, Affective variability of colour.
1. Introduction
Colour is a major component of Graphical User
Interfaces (GUI). Due to the development of GUI
applications on computers and on the Web, the
examination of colour has become a pertinent factor in
computer and web application design. The effective use of
colour can improve the performance of an application,
while ineffective use of colour can degrade an
application's performance and lessen user satisfaction.
Therefore, using colour effectively requires careful
coordination of colours and their other associated
dimensions.

Although there is an abundance of design guidelines and
recommendations available for selecting colours for
interface design, designers still fail to cater for the
fullest satisfaction of individual user needs. In searching
for the reasons for this issue, the designers' ignorance of
the affective dimension of colour in interface design
(affect is a neurophysiologic state that is consciously
approachable as a simple non-reflective feeling [1]) was
detected as a dominant factor: what a user
psychologically feels for the interface, and how the user
is psychologically drawn towards or withdrawn from the
interface. Although there are existing emotion-colour
spaces, formulae and image scales which can be used to
understand the affective variability of colours in computer
interfaces [2][3][4][5][6][7], they commonly provide
algorithms and guidelines in an abstract or rigid
manner. The underlying colour-emotion factors are
therefore poorly understood, making them difficult to
grasp without the aid of a practical approach. In order to
address these issues, our study presents a new research
paradigm and solution methodology based on Kansei
Engineering techniques to appraise prospective users'
affective variability of colours and provide better
guidance to improve colour usage in human computer
interfaces.
2. What is Kansei?
Several scholars have given various definitions of Kansei.
What is highlighted by many of them is that Kansei is a
subjective internal process in the brain which is activated
by external stimuli. Nagamachi in his studies further
explains this concept by articulating that Kansei is an
individual's subjective impression from a certain artefact,
environment or situation, using all the senses of sight,
hearing, feeling, smell and taste, as well as recognition [11].
Lee et al., further elaborating on the word Kansei, say that
it is embedded with more semantics such as sensibility,
sense, sensitivity, aesthetics, emotion, affection and
intuition [12].

3. How Kansei is measured?
Even though it is very difficult to capture one's Kansei, it
can be approximately measured using the following four
different methods: people's behaviours and actions; words
(spoken); facial and body expressions; and physiological
responses such as heart rate, electromyography (EMG),
electroencephalography (EEG), etc. [11]. Some Kansei
researchers prefer physiological methods, which can
directly measure a user's Kansei while the user is actively
interacting with the setup. Some prefer psychological
methods such as self-reporting, Semantic Differential
Analysis methods, etc. Some Kansei researchers adopt both
kinds of measuring methods to find better solutions [15].
However, this study focuses fully on a psychological
method, semantic differential analysis (SDA), as the main
Kansei measurement method; it was designed to measure
the connotative meaning of objects, events, and concepts [13].

The Kansei knowledge-gaining process follows a hierarchical
model. If collected Kansei data were represented by a
pyramid, primary data would belong on the bottom level, the
base of the pyramid. As data are analysed further,
according to their refined level they move higher in the
pyramid until they reach the peak, where we find the
highest level of Kansei, or general Kansei. Lower degrees
of Kansei are more subjective or individual than higher
levels. The highest level of Kansei is more refined and
generalised, and from that data new design principles can
be derived. Utilising this Kansei knowledge to design new
applications, or in other words to translate consumers'
feelings towards a product into design elements, is called
Kansei Engineering (figure 1). Kansei is a word that
originates from Japanese culture. Although there is no
direct translation for Kansei in the English language,
engineers commonly represent it by the English word
affective, and Kansei Engineering as Affective Engineering.




Fig. 1: Working Principle of Kansei Engineering
4. Subjects & Material
20 male and female Sri Lankan adults (government
employees) took part in this research as subjects.
Test subjects' ages ranged between 40 and 60. No subjects
reported any colour deficiency. Eleven colours were
selected as the colour sample (Table 1). These colours are
generally known as the colours that are almost never
confused across cultures: red, pink, purple, blue, green,
yellow, orange, brown, white, grey and black [14]. The
RGB values of the colours are displayed in Table 1. In
addition to that, ten highly ranked Kansei words were
selected to design bi-polar scales: angry-not angry,
sad-not sad, fearful-not fearful, surprising-not surprising,
bored-not bored, happy-not happy, exciting-not exciting,
affectionate-not affectionate, disgusting-not disgusting,
pleasant-not pleasant. Finally, a colour questionnaire was
prepared consisting of these bi-polar scales as well as
other open-ended questions to gather general and other
related information.

Table 1: Selected colour palette.

Colour  RGB Value
Red     255, 0, 0
Orange  255, 165, 0
Brown   165, 42, 42
Yellow  255, 255, 0
Green   0, 128, 0
Blue    0, 0, 255
Purple  128, 0, 128
Pink    255, 192, 203
White   255, 255, 255
Grey    128, 128, 128
Black   0, 0, 0
5. Methodology
The experiment was conducted in a quiet room in normal
daylight conditions using a questionnaire prepared using
Kansei Engineering principles. The questionnaire
consisted of three consecutive sessions: preliminary data
acquisition, detailed data gathering, and final
questionnaire testing with bipolar scales. The medium of
the questionnaire was the subjects' native language
(Sinhala). The questionnaire was separated into three
sections. The first section tested the direct colour
affective associations, in terms of mapping colours onto
appropriate Kansei words and providing reasons for these
associations. The second part of the questionnaire was
aimed at the affective variability of colour and interface
design; finally, the third part measured the affective
variability of colours using semantic differential scales.

Kansei words are emotion-related adjectives in the subject
domain, and they can be used as representatives of the
affective variability of participants. Therefore, using these
Kansei words in semantic differential analysis scales, the
affective variability of participants towards colours can be
quantified. Kansei words were selected using various
collection methods, such as by participants themselves,
statistical methods, affinity diagrams and expert judgment.
Bi-polar scales were designed using these Kansei words,
and using these bipolar scales subjects were asked to
quantify their affect for individual colours (testing colour
samples against the bi-polar scales). Finally, the results
were statistically analysed to find the trends of affective
variability of colour in interface design.
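
The quantification step behind the figures that follow is a per-colour mean of the subjects' signed bi-polar ratings. A minimal sketch of that computation; the class name and the sample ratings below are invented for illustration, not data from the study.

```java
public class BipolarScale {
    // Mean of signed ratings on one bi-polar scale for one colour.
    // Negative means lean towards the negative semantic end of the scale.
    public static double meanScore(double[] ratings) {
        double sum = 0.0;
        for (double r : ratings) sum += r;
        return sum / ratings.length;
    }

    public static void main(String[] args) {
        // Hypothetical "pleasant - unpleasant" ratings for one colour
        double[] sample = { 2, 1, 1, 0, 2, 1 };
        System.out.println(meanScore(sample));
    }
}
```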
6. Results and Discussion
Results are categorized into two important subsections. In
section 6.1, three trait Kansei words were selected to
highlight the relationship between the affective variability
of colour and computer interfaces. In section 6.2, the
influence of the affective variability of warm and cool
colour usage in interfaces is compared and discussed with
previous research work.

6.1 Affective variability of colour and Interface
designing.
[Figure 2 bar chart: mean scores on the bored - not bored scale
(y-axis: mean score, -1.5 to 1) for red, orange, yellow, green,
blue, purple, pink, white, black, brown and grey]
Fig. 2: Illustrates the subjects' affective variability in the domain of the
Kansei word bored, which has a negative connotation. Therefore higher
values were achieved for colours which generate negative affect and
lower values for colours which generate positive affect.

To gain domain-specific affective knowledge,
consideration of trait Kansei words is more important
and useful. Figures 2, 3 and 4 illustrate the bi-polar
scale results for three trait Kansei words. The bi-polar
scale bored - not bored is based on a negative Kansei
word. The Kansei word bored is associated with grey and
brown, while the other colours deviate from the semantic
end bored and are associated mainly with not bored
(figure 2). Therefore, on the basis of the affective
variability of grey and brown, it can be argued that
dominant use of these colours in interface design may not
impress users.
[Figure 3 bar chart: mean scores on the pleasant - unpleasant scale
(y-axis: mean score, -1.5 to 1.5) for the same eleven colours]

Fig. 3: Illustrates the subjects' preferences in mean value for colours in
the domain of the Kansei word pleasant. Higher values were achieved,
in descending order, by pink, yellow, orange, blue, etc. The lowest
values were achieved by grey, brown and black.

The other two bipolar scales selected were
pleasant - unpleasant and exciting - unexciting. These
two scales are based on positive Kansei words. Colours
with positive values on these scales are drawn towards
the positive end of the bi-polar scale, and those with
negative values towards the negative end.
[Figure 4 bar chart: mean scores on the exciting - unexciting scale
(y-axis: mean score, -1.5 to 1.5) for the same eleven colours]

Fig. 4: Illustrates the subjects' preferences in mean value for colours in
the domain of the Kansei word exciting. Comparatively higher values
were achieved, in descending order, by yellow, pink, orange, etc. The
lowest values were achieved by grey, brown and black.
Colours such as brown, grey, black and red scored
negative values on the pleasant - unpleasant scale
(figure 3), which means the tendency of the affective
variability of these colours is towards unpleasant. On the
contrary, the remaining colours scored positive values,
which draws their tendency towards the semantic border
of pleasant. Therefore, according to the results of this
study for the Kansei word pleasant, most warm and cool
colours induce pleasant feelings, except red, brown and
purple. Furthermore, the neutral colours grey and black
induce unpleasant affect, while white does not.

According to the exciting - unexciting bi-polar scale
(figure 4), yellow scored a higher positive value for the
Kansei word exciting than it did for pleasant, while pink
gained a smaller positive value for exciting than it did
for pleasant. Although there are slight differences in the
ranking of the colours for the Kansei words pleasant and

exciting, the colours grey, brown, and black gained
negative scores in both cases and were drawn towards the
negative semantic end of these positive bi-polar scales
(figure 4). Since in both bi-polar scales these colours
gained negative values, they are in essence drawn towards
the negative border of the two Kansei words pleasant and
exciting. Consequently, it can be seen that black, brown
and grey do not induce positive affect in this study.
Therefore, according to the affective variability of these
colours, it can be argued that using them dominantly in
interface design will not impress users.

Examining the outcomes of the above three bi-polar
scales (bored, pleasant and exciting), clues for a
generalised view of the affective variability of colours
can be drawn. In essence, brown and grey are unpleasant,
unexciting and boring colours. On the contrary, pink,
yellow, orange, green and blue are pleasant and exciting
colours. How, then, is this knowledge of the affective
variability of colour useful for a designer? Knowing the
trend of user affective variability, designers gain an
added advantage in making better judgments when
selecting a colour palette for a given interface design.

Moreover, the results reveal that colours such as black,
grey and brown are not particularly exciting or pleasant,
while colours such as blue, yellow, pink and green have
comparatively higher preferences. With regard to these
results, when one plans to make use of the so-called
boring or unpleasant colours, one needs to be careful
about the extent to which these colours are used, and
about how they can be balanced in combinations with
highly preferred colours.

6.2 Manipulation of Warm and Cool Colours
utilising Kansei concepts.

Colours should be used in a way that not only adds
excitement and flavour to the interface but also improves
its functionality and usability. For instance, cool soft
colours do not show any dominant associations with any
of the extreme positive or negative Kansei words (except
pink), but these colours commonly gained a higher value
for the Kansei word pleasant (figure 6). Therefore,
looking at the slightly positive affective nature of these
colours, it can be argued that they are capable of keeping
one's mind at rest and providing an environment in which
to work for a longer period of time. On the contrary,
warm colours generate warm feelings and draw attention
more instantly than others do; they also put the mind in
a state of agitation, which is uncomfortable to be exposed
to for a longer period of time. For instance, the colour
red gained the highest value for the negative Kansei word
angry, which is a representation of an aggressive
atmosphere (figure 5). This argument can be further
strengthened by comparing it with the results of a
previous study which investigated the effect of colour on
human emotions. That study found that red and yellow
were stimulating and disagreeable colours; further, these
colours draw people's attention to the outward
environment, and are forceful, expressive and provocative
of behaviour. The same study found that green and blue
were restful colours, quieting and agreeable. These
colour associations draw people's attention inwards and
cause reserved and stable behaviours [17].

[Figure 5 bar chart: mean scores (-2 to 2) of the warm colours red,
orange, yellow and brown for the Kansei words sad, angry, bored,
fearful, disgusting, happy, exciting, affectionate, pleasant and surprising]

Fig. 5: Illustrates the variation of warm colours for the selected Kansei
words. These colours are considered warm colours, or colours which
have a high temperature effect. Red, considered the warmest of all
colours, has the highest value for the Kansei word angry.
[Bar chart: mean scores (-2 to 1.5) of the cool colours green, blue,
purple and pink for the same Kansei words]

Fig. 6: Illustrates the variation of cool colours for the selected Kansei words.
Therefore, it can be argued that warm colours such as red,
yellow, and orange are a better solution for highlighting
and drawing one's attention to particular areas of the
interface, and can also assist in enhancing visual search.
Nevertheless, given their arousing and dominating nature,
warm colours should not be selected to cover a larger
area of a computer interface, since they may degrade
the performance of the application. On the contrary, cool
colours (see figure 6) are soft and calming to the eye as
well as to the mind.

Another previous study reveals that while warm colours in
the red colour system arouse genial, positive, active
feelings, cool colours such as green promote moderate,
calm, ordinary feelings [16]. Therefore, the attributes that
cool colours comprise make them better candidates for
background and theme colours. Figure 6 illustrates the
representation of cool colours for the same Kansei words
selected for the warm colours. Although blue and green
are mostly associated with positive feelings, altogether
the colours in the cool colour graph do not show a
significant representation of positive or negative feelings.
This indicates that cool colours make better background
and theme colours because of their tendency towards a
balanced representation of feelings.
8. Conclusion
This study discusses how Kansei Engineering concepts can
be put into practice in selecting better colour solutions for
designing human computer interfaces. Making better
colour decisions and customizing them to the needs of the
user is never a simple task. Particularly when the designer
is a programmer himself, it would be helpful if there were
existing systems from which he could get assistance in
selecting better colour choices. Although systems to
recommend good colour themes already exist, it is
doubtful that these ready-made generic colour themes can
fulfil prospective user needs. Experienced designers can
overcome this problem by adopting an existing colour
scheme and customising it to prospective user needs. But
for inexperienced designers it would be greatly helpful to
have a simple mechanism for getting a proper idea of the
affective variability of the prospective users.

One of the advantages of conducting a Kansei survey is
revealing the users' feelings, or affectivity, towards colours
beforehand. While providing assistance in selecting better
colour schemes, Kansei concepts reveal another dimension
of colours, namely the user's affective interaction with
colour. By analysing and generalising these affective
interactions, designers can get a better idea of the users'
impressions of colours. Furthermore, focusing on the
affective variability of colour in designing a human
computer interface is an extra aid to improving simplicity,
consistency and clarity, and to handling cultural
differences, all of which improve the quality of the design.

References

[1] Russell, J. A., Core affect and the psychological construction of
emotion, Psychological Review, Vol. 110, pp. 145-172 (2003).
[2] Ou, L., Luo, M. R., Woodcock, A., and Wright, A., A study of
colour emotion and colour preference, Part II: colour emotions for
two-colour combinations, Color Research and Application, Vol.
29, 292-298 (2004b).
[3] Ou, L., Luo, M. R., Woodcock, A., and Wright, A., A study of
colour emotion and colour preference, Part III: colour preference
modelling, Color Research and Application, Vol. 29, 381-389
(2004c).
[4] Ou, L., Luo, M. R., A colour harmony model for two-colour
combinations, Color Research and Application, Vol. 31, 191-204
(2006).
[5] Kobayashi, S., The aim and method of the Color Image Scale,
Color Research and Application, Vol. 6, 93-107 (1981).
[6] S. Kobayashi, Color Image Scale, Kodansha International Ltd.,
Bunkyo-ku, Tokyo, Japan (1990).
[7] S. Kobayashi, Colorist: A Practical Handbook for Personal and
Professional Use, Kodansha International Ltd., Bunkyo-ku, Tokyo,
Japan (1998).
[8] Sato, T., Kajiwara, K., Hoshino, H., and Nakamura, T.,
Quantitative evaluation and categorising of human emotion
induced by colour, Advances in Colour Science and Technology,
Vol. 3, 53-59 (2000).
[9] Xin, J. H., Cheng, K. M., Taylor, G., Sato, T. and Hansuebsai, A.,
Cross-regional comparison of colour emotions Part I: Quantitative
analysis, Color Research and Application, Vol. 29, 451-457
(2004a).
[10] Xin, J. H., Cheng, K. M., Taylor, G., Sato, T. and Hansuebsai, A.,
Cross-regional comparison of colour emotions Part II: Quantitative
analysis, Color Research and Application, Vol. 29, 458-466
(2004b).
[11] Nagamachi, M., Workshop 2 on Kansei Engineering, Proceedings
of the International Conference on Affective Human Factors Design,
Singapore (2001).
[12] Lee, S., Harada, A. and Stappers, P. J., Pleasure with products:
Design based on Kansei, in Pleasure with Products: Beyond
Usability, Green, W. and Jordan, P. (eds.), Taylor & Francis,
London, pp. 219-2 (2002).
[13] Osgood, C. E., Suci, G. J., Tannenbaum, P. H., The Measurement
of Meaning, University of Illinois Press, Urbana (1957).
[14] Derefeldt, G., Swartling, T., Berggrund, U. F., Bodrogi, P.,
Cognitive colours, Color Research and Application, Vol. 29,
Issue 1, pp. 7-19 (2004).
[15] Nagasawa, S. Y., Kansei and business, Kansei Engineering
International: International Journal of Kansei Engineering, Vol. 3,
pp. 3-8 (2002).
[16] Yoto, A., Katsuura, T., Iwanaga, K., and Shimomura, Y., Effects
of object colour stimuli on human brain activities in perception
and attention referred to EEG alpha band response, Journal of
Physiological Anthropology, Vol. 26, No. 3, pp. 373-379 (2007).
[17] Stone, N. J., English, A. J., Task type, posters, and workspace
color on mood, satisfaction, and performance, Journal of
Environmental Psychology, Vol. 18, pp. 175-185 (1998).




An Efficient System Based On Closed Sequential Patterns for
Web Recommendations

Utpala Niranjan (1), Dr. R.B.V. Subramanyam (2), Dr. V. Khanaa (3)

(1) Department of Information Technology, Rao & Naidu Engineering College, Ongole, Andhra Pradesh, 523 001, India
(2) Department of Computer Science and Engineering, National Institute of Technology, Warangal, Andhra Pradesh 506 004, India
(3) Department of Information Technology, Bharath University, Chennai, Tamilnadu 600 020, India



Abstract

Sequential pattern mining has, since its introduction,
received considerable attention among researchers, with
broad applications. Sequential pattern algorithms
generally face problems when mining long sequential
patterns or when using a very low support threshold. One
possible solution to such problems is to mine the closed
sequential patterns, a condensed representation of
sequential patterns. Recently, several researchers have
utilized sequential pattern discovery for designing web
recommendation systems, which provide personalized
recommendations of web access sequences for users. This
paper describes the design of a web recommendation
system for providing recommendations for a user's web
access sequence. The proposed system is mainly based on
mining closed sequential web access patterns. Initially, the
PrefixSpan algorithm is employed on the preprocessed
web server log data to mine sequential web access
patterns. Subsequently, with the aid of a post-pruning
strategy, the closed sequential web access patterns are
discovered from the complete set of sequential web access
patterns. Then, a pattern tree, a compact representation of
closed sequential patterns, is constructed from the
discovered closed sequential web access patterns. A
Patricia trie based data structure is used in the construction
of the pattern tree. For a given user's web access
sequence, the proposed system provides recommendations
on the basis of the constructed pattern tree. The
experimentation of the proposed system is performed
using a synthetic dataset, and the performance of the
proposed recommendation system is evaluated with
precision, applicability and hit ratio.

Keywords: Closed Sequential pattern mining, Web
Personalization, Web server log data, Prefix span, Pattern
tree, Patricia trie.

1. Introduction

The development of data mining techniques has centered on discovering hidden knowledge in an efficient way that is beneficial for corporate decision-makers [1, 2]. Sequential pattern mining is an important subject of data mining which is extensively applied in several areas [3]. In general, sequential pattern mining is defined as determining the complete set of frequent subsequences in a set of sequences [4, 13]. A sequential pattern is a sequence of itemsets that frequently appear in a particular order, such that all items in the same itemset are expected to have the same transaction-time value or to fall within a time gap. Each sequence relates to a temporally ordered list of events, where each event is considered as a collection of items (an itemset) occurring at the same time [5]. Generally, all the transactions of a customer are collectively considered as a sequence, known as a customer-sequence, where each transaction is denoted as an itemset in that sequence and all the transactions are listed in a particular order according to transaction-time [6].

Normally, sequential pattern mining algorithms determine all frequent sequential patterns, defined by a minimum support, from a database of sequences. This mining approach suffers from the following two problems. (1) First, sequential pattern mining regularly generates an exponentially large number of candidate patterns, which is unavoidable when the database contains long frequent sequential patterns. For instance, if the database holds a frequent sequence <i1, i2, ..., ik> with k = 20, it will generate (2^20 - 1) frequent subsequences, which are basically redundant patterns. (2) Second, establishing the minimum support is also a challenging task for knowledge workers: a smaller value of minimum support may generate a large number of candidate sequential patterns, while a larger value may result in no answer [7]. Closed sequential pattern mining was developed to address the challenges in mining all frequent sequential patterns, such as the large number of mined patterns, low support thresholds and long running times [8, 9].

A closed sequential pattern is said to be a specialized
sequential pattern which has no super-sequence with the
same occurrence frequency [10]. There exist two
approaches for mining closed frequent patterns: (1) determine the final closed pattern set directly; or (2) determine a closed pattern candidate set and perform post-pruning on it (in the extreme case, perform on-the-fly checking, i.e., for each newly determined pattern, verify against the previous patterns to see whether it is closed with respect to the discovered patterns). It seems desirable to employ the first approach, since the second needs memory space to store the discovered patterns and perform post-pruning; but when the patterns become more complex, it is hard to guarantee that each generated pattern is closed without examining the formerly discovered patterns [11]. The mining of closed sequential patterns has attracted many researchers for its potential of employing compact results that maintain the same expressive power as traditional mining [12].

In recent years there has been overwhelming interest in employing sequential mining techniques to construct personalization systems. In order to obtain a compact result, closed sequential pattern mining is more useful, as it retains all the potential information. Web personalization is any action that adapts the data or services offered by a Web site to the requirements of a specific user, or a set of users, taking advantage of the knowledge obtained from the users' navigational behavior and individual interests, together with the content and the structure of the Web site [14]. The purpose of a Web personalization system is to supply users with the information they require, without expecting them to ask for it explicitly [15]. A Web recommender system is a kind of personalized web application which supplies considerable user value by personalizing a number of sites on the Web [16].

Web recommender systems have been drawing increasing attention as an appropriate approach to counter information overload and to help users of the Web information space find what they require faster [17]. Web recommendation systems [18] are useful in guiding users to the target pages. On the other hand, applying recommendation systems to existing Web sites requires considerable work and knowledge [19]. There exist many methods to build such systems. Among these methods, building a web recommendation system from web access logs is an accepted method [20], [21]. In this paper, we have designed a web recommendation system based on closed sequential patterns. The proposed system can be used for generating a personalized Web experience for the user. The input to the proposed system is web server log data, which describes the users' visiting behavior and generally consists of the following fields: IP address, access time, HTTP request method used, URL of the referring page and browser name. It is difficult to mine frequent sequential web access patterns from the raw web server log data directly, so the web server log data is first preprocessed and then sequential web access patterns are mined from it. Then, closed sequential web access patterns are discovered from the mined sequential web access patterns. Subsequently, a pattern tree is constructed using the mined closed sequential web access patterns. For a current user access sequence, web recommendations can be given to the user by matching against the constructed pattern tree.

The rest of the paper is organized as follows: Section 2
presents a brief review of the researches related to the
proposed system. Section 3 details the proposed web
recommendation system. The experimental results and
performance evaluation of the proposed system are given
in Section 4 and finally the conclusions are summed up in
Section 5.

2. Review of Related Research

Numerous researches are available in the literature on web recommendation systems using sequential pattern mining. It appears preferable to design a web recommendation system based on closed sequential patterns for their compact representation. Here, we present some of the researches related to closed sequential pattern mining, along with web recommendation systems based on sequential pattern mining.

Zhou. B et al. [26] have proposed an intelligent Web recommender system called SWARS (sequential Web access-based recommender system) that employs sequential access pattern mining. In the proposed system, CS-mine, an efficient sequential pattern mining algorithm, was used to identify frequent sequential Web access patterns. The access patterns were then stored in a
compact tree structure (Pattern-tree) which was then
employed for matching and generating Web links for
recommendations. The performance of the proposed
system was assessed on the basis of precision, satisfaction
and applicability. An efficient sequential access pattern
mining algorithm, called CSB-mine (Conditional
Sequence Base mining algorithm) was presented by
Baoyao Zhou et al. [27]. The presented CSB-mine
algorithm was on the basis of conditional sequence bases
of each frequent event which removes the need for
constructing WAP-trees. This enhanced the efficiency of
the mining process considerably in comparison to WAP-
tree based mining algorithms, particularly when the value
of support threshold becomes smaller and the database
size gets larger. The proposed CSB-mine algorithm and its
performance were discussed. They have also described a
sequential access-based web recommender system that has
included the CSB-mine algorithm for web
recommendations.



Cui Wei et al. [28] have presented a hybrid web personalization system based on clustering and contiguous sequential patterns. Their system clustered log files to discover the basic architecture of websites and, for each cluster, employed contiguous sequential pattern mining to further optimize the topologies of the websites. They presented two evaluation parameters to test the performance of their system. Zhenglu Yang et al. [13] have presented an efficient sequential mining algorithm (LAPIN_WEB: LAst Position INduction for WEB log), an extension of their previous LAPIN algorithm, to extract user access patterns from traversal paths in Web logs. Their Web log mining system comprises data preprocessing, sequential pattern mining and visualization. The experimental results and performance studies established that LAPIN_WEB was efficient and outperformed the well-known PrefixSpan by up to an order of magnitude on real Web log datasets. Furthermore, they also implemented a visualization tool to help interpret mining results and forecast users' future requests.

Kuo-Yu Huang et al. [29] have presented an approach which extends a frequent sequence with closed itemsets instead of single items. The closed sequential patterns were made up of only closed itemsets; therefore, needless item extensions which generate non-closed sequential patterns were prevented. Experimental results showed that the proposed approach was two orders of magnitude faster than existing related works, with a reasonable memory cost. An efficient algorithm, known as TSP (top-k closed sequential patterns), was developed by Petre Tzvetkov et al. [30] for mining closed patterns without min_support. Starting at (absolute) min_support = 1, the algorithm used the length constraint and the properties of top-k closed sequential patterns to perform dynamic support raising and projected database pruning. The performance study illustrated that TSP has high performance: it outperformed the efficient closed sequential pattern mining algorithm CloSpan, even when the latter was run with the best-tuned min_support threshold.

Takeaki Uno et al. [31] have presented an efficient algorithm, LCM (Linear time Closed pattern Miner), for mining frequent closed patterns from large transaction databases. The major theoretical contribution was the proposed prefix-preserving closure extension of closed patterns, which allowed them to search for all frequent closed patterns in a depth-first manner, in time linear in the number of frequent closed patterns. Their algorithm does not require any storage space for previously obtained patterns, while the existing algorithms require it. Performance analysis of LCM against straightforward algorithms illustrated the advantages of the prefix-preserving closure extension. Nancy P. Lin et al. [7] have presented an algorithm for mining closed frequent sequences, a scalable, condensed and lossless representation of the complete frequent sequences mined from a sequence database. The algorithm, FMCSP (Fast Mining of Closed Sequential Patterns), employed a number of optimization methods, notably equivalence classes, to reduce the search space and run time. Specifically, one of the main problems in this type of algorithm is the redundant generation of closed sequences; therefore, they presented an efficient and memory-saving method, different from existing works, that does not require the complete set of closed sequences to reside in memory.

3. Proposed Web Recommendation System Based on Closed Sequential Patterns

Web personalization is an application of data mining and machine learning techniques to build models of user behavior. It can be applied to the task of predicting user needs and adapting future interactions, with the main aim of improved user satisfaction [22]. Web recommendation systems represent a unique and important class of personalized Web applications, emphasizing user-dependent filtering and selection of relevant information. Several approaches, such as content-based filtering, clustering-based approaches, graph-theoretic approaches, and association and sequence rule based approaches, are available in the literature for designing web recommendation systems [22]. Here, the proposed web recommendation system is designed based on closed sequential patterns and is used for generating a personalized Web experience for a user. The block diagram of the proposed web recommendation system based on closed sequential patterns is given in Fig 1. The important steps for generating recommendations for the user are as follows:

Preprocessing
Mining of sequential web access patterns
Mining of closed sequential web access patterns
Construction of Pattern tree
Generation of recommendations for user web
access sequences



Fig 1: Block diagram of the proposed web recommendation system


3.1 Preprocessing

The purpose of data preprocessing is to extract useful data from the raw web log and then transform these data into the form necessary for pattern discovery. Due to the large amount of irrelevant information in the web log, the original log cannot be used directly in the web log mining procedure; hence, in the data preprocessing phase, raw Web logs need to be cleaned, analyzed and converted for the subsequent steps. A Web log is a file to which the Web server writes information each time a user requests a resource from that particular site. Most logs use the Common Log Format. Each entry in the log file consists of a sequence of fields relating to a single HTTP transaction.

The input for the proposed web recommendation system is web server log data comprising IP address, access time, HTTP request method used, URL of the referring page and browser name (for instance, a Web server log entry: 192.162.26.12 [12/Oct/2009:11:17:55] "GET / HTTP/1.1" "http://en.wikipedia.org/wiki/Association_rule_learning" Mozilla/5.0 Windows XP). This raw web server log data cannot be used directly by the sequential pattern mining process, so the following preprocessing techniques are applied to it.

(1) User Identification: User identification means identifying each user accessing a web page, with the goal of mining every user's access characteristics. Users may be tracked based on the IP address of the computer requesting the page and on user sessions. A new IP address is taken to identify a new user and, at the same time, the user session must fall within certain time limits.
(2) Mapping: For every identified user, the visited web pages are arranged row-wise in such a way that they form the sequential database D.
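As a concrete illustration of these two steps, the following is a minimal sketch (the field layout is hypothetical, inferred from the example log entry above; session splitting by time limits is omitted) that parses log lines and groups the visited pages by IP address into the rows of D:

```python
import re

# Hypothetical Common Log Format-like layout: IP, [timestamp], "GET <page> ...".
LOG_RE = re.compile(r'^(\S+) \[([^\]]+)\] "GET (\S+)')

def build_sequence_db(log_lines):
    """User identification + mapping: one row of D per identified user (IP)."""
    db = {}  # IP address -> ordered list of visited pages
    for line in log_lines:
        m = LOG_RE.match(line)
        if m:
            ip, _time, page = m.groups()
            db.setdefault(ip, []).append(page)
    return list(db.values())  # the sequential database D
```

In a full implementation, the session time limits mentioned above would further split each per-IP list into separate rows.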

3.2 Mining of Sequential web access Pattern

Sequential web access patterns are mined from the sequential database D, which is a set of 2-tuples (sid, α), where sid is a user-id and α is a sequence of web pages accessed by the user. The problem of mining sequential patterns is defined as follows. Let W = {z1, ..., zn} be a set of web access events. A sequence α = <Z1 ... Zl> is an ordered list of web access events. A web access event Zi (1 ≤ i ≤ l) in a sequence may have a special attribute, such as a timestamp, denoted Zi.t, which registers the time when the web access event Zi was executed. As a notational convention, for a sequence α = <Z1 ... Zl>, Zi.t < Zj.t for 1 ≤ i < j ≤ l. The number of web access events in a sequence is called the length of the sequence. A sequence α with length l is called an l-sequence, denoted len(α) = l. A sequence α = <Z1 ... Zn> is called a subsequence of another sequence β = <Y1 ... Ym> (n ≤ m), and β a super-sequence of α, denoted α ⊑ β, if there exist integers 1 ≤ i1 < ... < in ≤ m such that Z1 ⊆ Yi1, ..., Zn ⊆ Yin. A tuple (sid, α) in a sequence database S is said to contain a sequence γ if γ is a subsequence of α. The number of tuples in the sequence database S containing sequence γ is called the support of γ, denoted sup(γ). Given a positive integer min_sup as the support threshold, a sequence γ is a sequential pattern in sequence database S if sup(γ) ≥ min_sup [23].

3.2.1 PrefixSpan algorithm

The proposed recommendation system utilizes PrefixSpan, a well-known pattern-growth algorithm, for mining the complete set of sequential web access patterns from the sequential database D. The main advantage of PrefixSpan is its use of projected databases. An α-projected sequence database is the set of subsequences in the sequence database that are suffixes of the sequences having the prefix α. At each step, the algorithm checks for frequent sequences with prefix α in the corresponding projected database [24]. The algorithmic description of the PrefixSpan algorithm is shown in Fig 2.




Fig 2: Pseudo Code of the PrefixSpan algorithm
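Since the pseudocode of Fig 2 is not reproduced in this text, the following is a minimal, hedged PrefixSpan sketch for sequences of single web access events (the setting of this paper); it recursively projects the database on each locally frequent event:

```python
def prefixspan(db, min_sup):
    """Minimal PrefixSpan sketch for sequences of single web access events.

    db: list of sequences (lists of events); min_sup: absolute support count.
    Returns a dict mapping each frequent sequential pattern (tuple) to its support.
    """
    patterns = {}

    def mine(prefix, projected):
        # Count the support of each event in the projected database.
        counts = {}
        for seq in projected:
            for ev in set(seq):
                counts[ev] = counts.get(ev, 0) + 1
        for ev, sup in counts.items():
            if sup < min_sup:
                continue
            pat = prefix + (ev,)
            patterns[pat] = sup
            # Project: keep the suffix after the first occurrence of ev.
            mine(pat, [s[s.index(ev) + 1:] for s in projected if ev in s])

    mine((), db)
    return patterns
```

On the database of Table 1 with min_sup = 2, this sketch yields the 17-pattern set FS quoted in the example of Section 3.3.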
3.3 Mining of Closed Sequential Web Access Patterns

This sub-section details the mining of closed sequential
web access patterns from the complete set of sequential
web access patterns ( FS ) resulted from the previous sub-
section. The closed patterns are mined with the aid of the
post-pruning strategy. The definition of closed sequential
patterns is as follows:

Definition: The set of closed frequent sequential patterns is defined as:

CS = { α | α ∈ FS and there exists no β ∈ FS such that α ⊏ β and support(α) = support(β) }
A recommendation system using closed sequential pattern mining has the same expressive power as one using regular sequential pattern mining. Additionally, the closed sequential patterns have the capability to provide a compact result set, reducing the number of generated sequential patterns [25].

First, each sequential web access pattern FSi in the pattern set FS is compared with the other patterns in the set, say FSj. The pattern FSi is removed from the pattern set FS if and only if (1) the supports of the two web access patterns FSi and FSj are the same, and (2) FSi is a subsequence of FSj (FSi ⊑ FSj). At the end, we obtain the closed pattern set CS (CS ⊆ FS) from the pattern set FS.

Example: Consider the sequential database of web access sequences D in Table 1. The complete set of sequential patterns with min_sup = 2 is FS = {a:4, aa:2, ab:4,
abb:2, abc:4, ac:4, b:4, bb:2, bc:4, c:4, ca:3, cab:2, cabc:2,
cac:2, cb:3, cbc:2, cc:2}. The set FS consists of 17
sequences. From these 17 frequent sequential patterns, the
closed sequential patterns discovered are: CS = {aa:2,
abb:2, abc:4, ca:3, cabc:2, cb:3}. From the above example,
it is clear that closed sequential pattern set CS is a
compact list of frequent sequential pattern set FS .

User id Web access sequence
1 c a a b c
2 a b c b
3 c a b c
4 a b b c a
Table 1: Sample sequential database of web access sequences
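The post-pruning step and the example above can be sketched as follows: a pattern is dropped exactly when some other pattern of equal support contains it as a subsequence.

```python
def is_subseq(a, b):
    """True if sequence a is a (not necessarily contiguous) subsequence of b."""
    it = iter(b)
    return all(x in it for x in a)

def closed_patterns(fs):
    """Post-pruning: keep patterns with no proper super-sequence of equal support.

    fs: dict mapping a pattern (string of events) to its support.
    """
    return {p: sup for p, sup in fs.items()
            if not any(q != p and fs[q] == sup and is_subseq(p, q) for q in fs)}

# The complete pattern set FS of the example (min_sup = 2):
FS = {"a": 4, "aa": 2, "ab": 4, "abb": 2, "abc": 4, "ac": 4, "b": 4,
      "bb": 2, "bc": 4, "c": 4, "ca": 3, "cab": 2, "cabc": 2, "cac": 2,
      "cb": 3, "cbc": 2, "cc": 2}
CS = closed_patterns(FS)
# CS = {aa:2, abb:2, abc:4, ca:3, cabc:2, cb:3}, as in the example
```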


3.4 Construction of Pattern Tree

The construction of a pattern tree from the mined closed sequential web access patterns is described in this sub-section. The constructed pattern tree is used in making the recommendations for a user's web access sequence. The pattern tree is based on the Patricia trie (radix tree or crit-bit tree) [32] data structure. In general, a Patricia trie is used to store a set of strings; in a regular trie, a single character is stored in each node, whereas with a Patricia trie the tree can be compacted even further compared with the regular trie. The procedure for constructing the pattern tree in the proposed system is as follows:

1) Create an empty root node.
2) Insert the first pattern in the closed pattern set CS into a node next to the root node.
3) If the current pattern to be inserted is a super pattern of an already inserted pattern, insert its postfix into a child node of that pattern.
4) Otherwise, the current pattern is inserted into a node next to the root node.
5) Steps 3 and 4 are repeated for every pattern in the closed pattern set CS.
Example: The closed web access sequential pattern set is CS = {aa:2, abb:2, abc:4, ca:3, cabc:2, cb:3}. Initially, we create an empty root node. Then, the web access patterns <aa:2>, <abb:2>, <abc:4> and <ca:3> are inserted into nodes next to the root node following the procedure described above. Web access pattern <cabc:2> is a super pattern of <ca>, so the postfix of <ca>, i.e., <bc>, is inserted into a child node of <ca>. Finally, <cb> is also inserted next to the root node. The pattern tree for this example is shown in Fig 3.



Fig 3: The Pattern-tree constructed from the closed sequential web access
patterns
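A minimal sketch of this construction, under the assumption that a pattern whose string has an already inserted pattern as a prefix contributes only its remaining postfix as a child (Patricia-style edge compression):

```python
class Node:
    def __init__(self, label="", support=0):
        self.label = label      # edge label: one or more web access events
        self.support = support
        self.children = []

def build_pattern_tree(patterns):
    """Build the pattern tree from (pattern, support) pairs, in the given order."""
    root = Node()
    for pat, sup in patterns:
        parent, rest = root, pat
        # Descend while an inserted pattern is a prefix of the remainder.
        descended = True
        while descended and rest:
            descended = False
            for child in parent.children:
                if rest.startswith(child.label):
                    parent, rest = child, rest[len(child.label):]
                    descended = True
                    break
        if rest:
            parent.children.append(Node(rest, sup))
    return root

# The closed patterns of the running example:
CS = [("aa", 2), ("abb", 2), ("abc", 4), ("ca", 3), ("cabc", 2), ("cb", 3)]
tree = build_pattern_tree(CS)
```

For this input, the root gains children <aa>, <abb>, <abc>, <ca> and <cb>, and <ca> gains the child <bc:2>, matching Fig 3.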

3.5 Generation of Recommendations for user access
sequences

The constructed pattern tree is used for matching and
generating web links for recommendations for user access
sequences. Once the pattern tree is constructed, we can use it, instead of the sequential database of web access sequences D, for generating recommendations. Recommendations are retrieved for a given user's web access sequence S, whose length must satisfy the thresholds minlen and maxlen. For a user's web access sequence, the sequential patterns containing the user access sequence are first identified in the constructed pattern tree. Then, the postfixes of those patterns are provided as recommendations for the user's web access sequence. For a pattern without postfixes, its child nodes are provided as recommendations.

For instance, the proposed system provides recommendations R for the user web access sequences <ab> and <ca> as follows:

Case 1: The constructed pattern tree is searched for sequential patterns containing the user's web access sequence <ab>. In the pattern tree (shown in Fig 3), the nodes next to the root hold the sequences <abb:2> and <abc:4>, which extend the given user web access sequence <ab>. So the postfixes of these sequential patterns, i.e., <b> and <c>, are given as recommendations to the user (R = b:2 and c:4).

Case 2: For the user web access sequence <ca>, the pattern tree (shown in Fig 3) is searched for patterns containing <ca>. As the pattern <ca> does not have any postfix, its child node, i.e., <b>, is provided as the recommendation for the user (R = b:2).
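Both cases can be reproduced with a small self-contained sketch, representing the pattern tree of Fig 3 as nested dicts (label → (support, children)) and recommending the event that immediately follows the user's sequence along any matching path:

```python
def recommend(tree, seq):
    """Recommend the next web access event(s) for a user's sequence.

    tree: pattern tree as nested dicts, label -> (support, children).
    Walks every root-to-node path; wherever the accumulated pattern extends
    seq, the event immediately following seq is recommended with that node's
    support (keeping the highest support if an event occurs on several paths).
    """
    recs = {}

    def walk(label, support, children):
        if label.startswith(seq) and len(label) > len(seq):
            ev = label[len(seq)]
            recs[ev] = max(recs.get(ev, 0), support)
        for child_label, (sup, grand) in children.items():
            walk(label + child_label, sup, grand)

    for child_label, (sup, grand) in tree.items():
        walk(child_label, sup, grand)
    return recs

# The pattern tree of Fig 3: label -> (support, children).
tree = {"aa": (2, {}), "abb": (2, {}), "abc": (4, {}),
        "ca": (3, {"bc": (2, {})}), "cb": (3, {})}
```

Here recommend(tree, "ab") reproduces Case 1 (b:2 and c:4), and recommend(tree, "ca") reproduces Case 2 (b:2).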

4. Experimental Results and Analysis

The performance evaluation and the experimental results of the proposed recommendation system are presented in this section. The proposed recommendation system is implemented in Java (JDK 1.6). A synthetic dataset is used for evaluating the performance of the web recommendation system. The synthetic dataset is a collection of web access sequences and is split into two parts: (1) Training dataset: used for designing the web recommendation system based on the closed sequential patterns mined from it, and (2) Test dataset: used to test the designed web recommendation system. At first, the pattern tree is constructed using the training dataset; then, the proposed web recommendation system is evaluated on the test dataset using the evaluation measures given below.

4.1 Evaluation Measures

To evaluate the proposed system, we have used three measures: precision, applicability and hit ratio. The formal definitions of these three measures are given as:

Precision = R+ / (R+ + R-)

where R+ is the number of correct recommendations and R- is the number of incorrect recommendations.

Definition: Let S = <s1 s2 ... sj s(j+1) ... sn> be a web access sequence of the test dataset. The recommendation R = {r1, r2, ..., rk} is generated by using the constructed pattern tree for the subsequence Ssub = <s1 s2 ... sj> (minlen ≤ j ≤ maxlen). The recommendation is said to be correct if it contains s(j+1) (s(j+1) ∈ R); otherwise, it is said to be an incorrect recommendation.

Applicability = (R+ + R-) / |R|

where |R| is the total number of given requests.

Hit ratio = Precision × Applicability = R+ / |R|
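A small sketch computing the three measures from recommendation outcomes (the interface is hypothetical: one (recommended set, actual next event) pair per request that received a recommendation, plus the total request count):

```python
def evaluate(outcomes, total_requests):
    """Return (precision, applicability, hit_ratio).

    outcomes: list of (recommended_events, actual_next_event) pairs for the
    requests where a recommendation was produced (R+ + R- of them in total).
    total_requests: total number of given requests, |R|.
    """
    correct = sum(1 for recs, nxt in outcomes if nxt in recs)      # R+
    precision = correct / len(outcomes) if outcomes else 0.0
    applicability = len(outcomes) / total_requests
    hit_ratio = precision * applicability  # equals correct / total_requests
    return precision, applicability, hit_ratio
```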

4.2 Experimental Results

The experimental results of the proposed recommendation system are presented in this section. We measure the precision, applicability and hit ratio for different support thresholds, and the results are plotted in the graph shown in Fig 4. The comparison of sequential pattern mining (SPM) with closed sequential pattern mining (Closed SPM), in terms of the number of generated web access patterns, is given in Table 2 and the corresponding graph is shown in Fig 5.
[Figure: precision, applicability and hit ratio (%) plotted against support threshold (%) for supports of 10-50%.]
Fig 4: Evaluation measures vs. Support threshold

Support (%)  SPM   Closed SPM
10           6203  4760
20           1125  1027
30           559   470
40           218   201
50           92    85
Table 2: Number of patterns generated for SPM and closed SPM
[Figure: number of web access patterns plotted against support threshold (%) for SPM and Closed SPM.]
Fig 5: Comparison graph of SPM and Closed SPM.

5. Conclusion

In this paper, we have devised a web recommendation system based on closed sequential pattern mining. In the proposed system, we have employed PrefixSpan (a pattern-growth algorithm) for mining the sequential web access patterns from the preprocessed web server log data, and the closed sequential web access patterns are obtained from the mined sequential web access patterns. Then, a Patricia trie based data structure is used to build the pattern tree, in which the mined closed sequential web access patterns are stored. The target web link is given to a new user by matching the pattern tree against the new user's current web access sequence. The proposed web recommendation system is validated with a synthetic dataset, and the experimental results showed that the proposed system performs well in terms of precision, applicability and hit ratio.

References

[1] M. Kantardzic, Data Mining: Concepts, Models,
Methods, and Algorithms, John Wiley & Sons Inc.,
New York, 2002.
[2] M.S. Chen, J. Han and P.S. Yu, Data mining: an
overview from a database perspective, IEEE
Transactions on Knowledge and Data Engineering, vol.
8, pp. 866-883, 1996.
[3] Sizu Hou, Xianfei Zhang, "Alarms Association Rules
Based on Sequential Pattern Mining Algorithm," In
proceedings of the Fifth International Conference on
Fuzzy Systems and Knowledge Discovery, vol. 2,
pp.556-560, Shandong, 2008.
[4] R. Agrawal and R. Srikant, "Mining Sequential
Patterns", In Proceedings of the 11th International
Conference on Data Engineering, pp. 3-14, Taipei,
Taiwan, 1995.
[5] Salvatore Orlando, Raffaele Perego and Claudio
Silvestri, "A new algorithm for gap constrained
sequence mining", In Proceedings of the ACM
Symposium on Applied Computing, pp. 540-547,
Nicosia, Cyprus, 2004.
[6] Qiankun Zhao and Sourav S. Bhowmick, "Sequential
Pattern Mining: A Survey", Technical Report, CAIS,
Nanyang Technological University, Singapore, no.118,
2003.
[7] Nancy P. Lin, Wei-Hua Hao, Hung-Jen Chen, Hao-En
Chueh, Chung-I Chang, "Fast Mining of Closed
Sequential Patterns", WSEAS Transactions on
Computers, vol. 7, no. 4, pp. 133-139, 2008.
[8] Claudia Antunes and Arlindo L. Oliveira, "Sequential
Pattern Mining with Approximated Constraints", in
proceedings of the International Conference on
Applied Computing, pp. 131-138, 2004.
[9] Congnan Luo, Soon M. Chung, "Efficient Mining of
Maximal Sequential Patterns Using Multiple Samples",
in proceedings of International Conference on Signal
Processing Systems, pp.788-792, 2005.
[10] Shengnan Cong, Jiawei Han and David Padua,
"Parallel Mining Of Closed Sequential Patterns", in
Proceedings of the eleventh ACM SIGKDD
international conference on Knowledge discovery in
data mining, pp. 562-567, Chicago, Illinois, USA,
2005.
[11] Yan. X, Han. J and Afshar. R, "CloSpan: mining
closed sequential patterns in large datasets", In
Proceedings of the 3rd SIAM International Conference
on data mining, pp. 166-177, San Francisco, CA, May
2003.
[12] Ming-Yen Lin, Sue-Chen Hsueh and Chia-Wen Chang,
"Mining Closed Sequential Patterns with Time
Constraints", Journal of Information Science and
Engineering, vol. 24, pp. 33-46, 2008.
[13] Zhenglu Yang, Yitong Wang and Masaru Kitsuregawa,
"An Effective System for Mining Web Log", LNCS,
vol. 3841, pp.40-52, 2006.
[14] Magdalini Eirinaki and Michalis Vazirgiannis, "Web
Mining for Web Personalization", ACM Transactions
on Internet Technology (TOIT), vol. 3, no. 1, pp. 1-27,
2003.
[15] Mulvenna. M. D, Anand. S. S and Buchner. A. G,
"Personalization on the Net using Web Mining",
Communications of the ACM, vol. 43, no. 8, pp. 123-
125, August 2000.
[16] J. Ben Schafer, Joseph A. Konstan and John T. Riedl,
"Recommender Systems for the Web", In
Visualizing the Semantic Web, Springer,
pp.102-123, 2006.
[17] Olfa Nasraoui, Zhiyong Zhang, and Esin Saka, "Web
Recommender System Implementations in Multiple
Flavors: Fast and (Care) Free for All", in Proceedings
of SIGIR Open Source Information Retrieval
Worskhop, Seattle, WA, July 2006.
[18] J. Schafer, J. Konstan, and J. Riedl, Recommender
Systems in E-Commerce, in Proceedings of ACM
Conference on Electronic Commerce (EC-99), pp.158-
166, 1999.

UTPALA NIRANJAN received the M.Sc. and M.Tech. degrees in Computer Science and Information Technology from Sri Venkateswara University, Tirupati, and Bharath University, Chennai, in 1999 and 2005 respectively. He is working at Rao & Naidu Engineering College, Ongole. He is currently pursuing the Ph.D. degree at Bharath University, Chennai, working closely with Dr. R.B.V. Subramanyam, NIT Warangal, and Dr. V. Khanaa, Bharath University, Chennai.

Dr. R.B.V. SUBRAMANYAM received the M.Tech. and Ph.D. degrees from IIT Kharagpur. He is working at NIT Warangal, Andhra Pradesh, in the field of data mining. He is a reviewer for IEEE Transactions on Fuzzy Systems and for the Journal of Information & Knowledge Management. He is a Member of IEEE and an Associate Member of The Institution of Engineers (India).

Dr. V. KHANAA is working as Dean, Information, Bharath University, Chennai.







Energy Efficient Real-Time Scheduling in Distributed Systems

Santhi Baskaran 1 and P. Thambidurai 2

1 Department of Information Technology, Pondicherry Engineering College, Puducherry 605 008, India

2 Department of Computer Science and Engineering, Pondicherry Engineering College, Puducherry 605 008, India
Abstract
Battery powered real-time systems have been widely used in
many applications. As the quantity and the functional complexity
of battery powered devices continue to increase, energy efficient
design of such devices has become important. Also these real-
time systems have to concurrently perform a multitude of
complex tasks with strict time constraints. Thus, minimizing
power consumption and extending battery life while guaranteeing
the timing constraints has become a critical aspect in designing
such systems. Moreover, energy consumption is also a critical
issue in parallel and distributed systems. We present novel
algorithms for energy efficient scheduling of Directed Acyclic
Graph (DAG) based applications on Dynamic Voltage Scaling
(DVS) enabled systems. Experimental results show that our
proposed algorithms give better results than the existing
algorithms.
Keywords: Real-time, slack, energy reduction, scheduling
1. Introduction
Reducing energy consumption is nowadays an issue reflected in most aspects of our lives. The obvious driving force behind addressing energy consumption in digital systems is the growth of portable communication and computation. Consumers demand better performance and more functionality from hand-held devices, but this also means higher power and energy consumption. Hence energy efficiency is an important optimization goal in digital system design, and most such designs are in fact time-critical systems.

Timeliness and energy efficiency are often seen as
conflicting goals. When designing a real-time system, the
first concern is usually time. Yet, with the right methods
energy efficiency can also be achieved. Energy-efficient
architectures may be selected, while still meeting the
timing constraints.
The ready availability of inexpensive processors with large memory capacities and of communication networks with large bandwidth is making it more attractive to use distributed (computer) systems for many real-time applications. Distributed real-time systems have emerged as a popular platform for applications such as multimedia, mobile computing and information appliances. For hard real-time systems, application deadlines must be met at all times. However, early completion (before the deadline) of the applications may not bring the systems extra benefits. In this case, we can trade this extra performance for other valuable system resources, for example, energy consumption. Dynamic voltage scaling (DVS), which varies the processor speed and supply voltage according to the workload at run-time, can achieve the highest possible energy efficiency for time-varying computational loads while meeting the deadline constraints [1]. DVS takes advantage of the quadratic relationship between supply voltage and energy consumption [2], which can result in significant energy savings. There are many commercially available voltage-scalable processors, including Intel's XScale [3], Transmeta's Crusoe [4], and AMD's mobile processors with AMD PowerNow! technology support [5].
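The quadratic trade-off that DVS exploits can be illustrated with a minimal first-order sketch; the voltage, frequency and capacitance values below are illustrative assumptions, not figures from [2]:

```python
# Illustrative sketch of the DVS energy/delay trade-off.
# Dynamic energy per cycle is roughly proportional to C * V^2, and the
# achievable clock frequency scales roughly with V, so halving the
# voltage quarters the energy but doubles the execution time.

def energy_and_time(cycles, voltage, v_max=1.0, f_max=1e9, capacitance=1e-9):
    """Return (energy in joules, execution time in seconds) at a voltage.

    Assumes E = C * V^2 per cycle and f = f_max * (V / v_max),
    a simplified first-order model of a DVS processor.
    """
    energy = cycles * capacitance * voltage ** 2
    freq = f_max * (voltage / v_max)
    return energy, cycles / freq

e_full, t_full = energy_and_time(10**9, 1.0)
e_half, t_half = energy_and_time(10**9, 0.5)
print(e_half / e_full)  # 0.25: a quarter of the energy
print(t_half / t_full)  # 2.0: twice the execution time
```

This is why trading unused slack for a lower voltage, rather than idling after early completion, yields net energy savings.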

An important problem that arises from using distributed
systems is how to assign tasks and resources to the
processors so as to fully utilize the processors, while
ensuring that the timing constraints of the tasks are met
along with energy efficiency. Hence in this paper we
address energy efficiency in the context of distributed real-
time systems, targeting variable speed processor
architectures.

In energy efficient scheduling, the set of tasks has a deadline before which execution must finish, and hence there is always a time gap between the actual execution time and the deadline. This time gap is called slack. In this paper, we propose algorithms that allocate this slack efficiently, resulting in greater energy savings.

Following this introduction, Section 2 gives a brief
overview of basics and related work. Details of system
model are presented in Section 3. In Section 4, we present
an overview of the existing algorithms. In Section 5, we
describe our energy-efficient algorithm in detail.
Experimental results are presented in Section 6. Finally,
we conclude in Section 7.
2. Basics and related work
The operating system is responsible for the functioning of the entire system and is aware of task constraints and status, resource usage, etc. Therefore, reducing energy consumption through proper task scheduling algorithms is one of the most effective and efficient approaches.

Voltage scaling has been widely acknowledged as a powerful and feasible technique for trading off energy consumption against execution time. There is a rich literature addressing variable-voltage scheduling for a set of independent tasks with hard deadlines on a single processor [6]. Hardware-software co-synthesis with variable-voltage scaling for single-processor core-based systems has been considered for independent tasks [7]. An offline algorithm that generates a minimum-energy preemptive schedule [8] has also been proposed for a set of independent tasks. This schedule is based on an earliest deadline first (EDF) scheme and tries to achieve a uniform scaling of voltage levels across tasks. A heuristic has been provided for a problem similar to [8] under fixed-priority static scheduling [9]. An energy-priority heuristic has been provided for non-preemptive scheduling of independent real-time tasks [10]. An iterative slack-allocation algorithm [11], based on the Lagrange multiplier method, points out that variations in power consumption among tasks invalidate the conclusion in [8] that uniform scaling is optimal. Though most of the work on energy efficient real-time scheduling concentrates on independent tasks on a uniprocessor, research is now also being focused on dependent tasks on distributed systems.

Real-time scheduling for tasks with precedence relationships on distributed systems has been addressed in [12]-[15]. There is also work addressing variable-voltage scaling for such systems [16]-[19], [20]. Hybrid search strategies are used for dynamic voltage scaling in [16]: a genetic algorithm with simulated heating for global search, and hill climbing and Monte Carlo techniques for local search. The energy efficient real-time scheduling problem is formulated as a linear programming (LP) problem [17] for continuous voltage levels, which can be solved in polynomial time. An algorithm based on a list-scheduling heuristic with a special priority function to trade off energy reduction against delay has also been proposed for distributed systems [18]. The work in [19] uses a genetic algorithm to optimize task assignment, a genetic-list scheduling algorithm to optimize the task execution order, and an iterative slack allocation scheme that allocates a small time unit to the task that leads to the most energy reduction in each step. The performance and complexity of this approach depend on the size of the time unit, which, however, cannot be determined systematically. The use of a small time unit for task extension can lead to large computational complexity.

For distributed systems, allocation/assignment and scheduling have each been proven to be NP-complete. For variable-voltage scheduling, the problem is more challenging, since the supply voltages for executing tasks have to be optimized to maximize power savings. This requires intelligent slack allocation among tasks. The previous work using global search strategies [16] and integer linear programming (ILP) [17] can lead to large computational complexity. The problem is further complicated when slack allocation has to consider variations in power consumption among different tasks. Among previous works, those in [11], [17], and [19] consider such variations.

The Linear Programming (LP) based algorithms proposed in the literature for battery-powered multiprocessor or distributed DVS systems allocate slack greedily to tasks in decreasing or increasing order of their finish time [20], or allocate it evenly to all tasks [21]. These algorithms use integer linear programming based formulations that can minimize the energy [17].

The slack allocation algorithms assume that an assignment of tasks to processors has already been made. A variable amount of slack is allocated to each task so that the total energy is minimized while the deadlines can still be met. The continuous voltage case is formulated as a Linear Programming [17] problem whose objective is the minimization of total energy consumption. The constraints include deadline constraints, precedence constraints in the original DAG, and precedence constraints among tasks on the same processor after assignment.



Since the scheduling algorithm in [17] does not consider the communication time among tasks, the path based algorithm proposed in [22] extends it by accounting for communication time when representing precedence relationships among tasks. This approach to energy minimization is iterative: it allocates a small amount of slack (called unit slack) to a subset of tasks, selected so that the total energy consumption is minimized while the deadline constraint is still met.

The above process is applied iteratively till all the slack is used. These algorithms do not utilize the slack generated dynamically due to the difference between the worst case execution time and the actual execution time of real-time tasks. We therefore modify the existing algorithms to schedule, in an efficient way, the dynamic slack present between the total execution time and the deadline. New modified LP and modified path based algorithms are proposed to use this slack effectively, so that the energy efficiency of the processors increases.
3. System Model
A Directed Acyclic Graph (DAG) represents the flow among dependent tasks in a real-time application. In a DAG a node represents a task and a directed edge between nodes represents the precedence relationship among tasks. The assignment of tasks in a DAG to the available processors is done through scheduling algorithms, so that the finish time of the DAG is less than or equal to the application deadline. Here we use the tasks' execution times at the maximum supply voltage during assignment to guarantee deadline constraints. An assignment DAG represents the direct workflow among tasks after processor assignment. The direct precedence relationships of tasks in an assignment DAG may differ from those in the original DAG.

Figure 1. a) DAG; b) Assignment on two processors; c) Assignment DAG

Figure 1(b) depicts the assignment of tasks for the DAG of Figure 1(a). Figure 1(c) represents the assignment DAG, which is the direct flow among tasks generated after assignment.

The number of cycles, N_i, that a task needs to finish remains constant during voltage selection (VS), while the processor's cycle time (CT) changes with the supply voltage. For example, if processors can operate at a maximum voltage V_h and a minimum voltage V_l, we assume CT at V_h is 1 time unit and CT at V_l is 2 time units. This means slowing down the system increases the execution time of tasks while reducing energy.
4. Existing slack allocation algorithms
All slack allocation algorithms allocate a variable amount of slack to each task so that the total energy is minimized while the deadlines can still be met. Several slack allocation algorithms are discussed in the literature. Among these, the following two algorithms provide close to optimal solutions, and hence are considered for improvement in this paper.
4.1 LP algorithm
The continuous voltage case is formulated as a Linear Programming (LP) [17] problem whose objective is the minimization of total energy consumption. The constraints include deadline constraints and precedence constraints in the original DAG and the assignment DAG.

This is presented as a two-phase framework that integrates task scheduling and voltage selection to achieve the maximum energy saving when executing dependent tasks on one or more variable voltage processors. In the first phase, EDF scheduling is used for a single processor, which is optimal in providing slowdown opportunities, and priority based scheduling is used for multiple processors, which provides more slowdown opportunities than a baseline scheduling. The LP formulation in the second phase is the first exact algorithm for the voltage selection (VS) problem. The LP can be solved optimally in polynomial time for continuous voltages, or solved efficiently for general discrete voltages. Compared to baseline scheduling with a scaling VS technique, this algorithm provides 14% more slowdown opportunities. However, it does not consider the communication time among tasks; we therefore extend it as the modified LP algorithm for the discrete voltage case with resource reclaiming, by considering the communication time in the assignment DAG.


4.2 Path based algorithm
The path based algorithm [22], presented for continuous voltage parallel and distributed systems, is not only considerably better than simplistic schemes but also comparable to the LP based algorithm, which provides near optimal solutions.

The LP algorithm ignores the variable energy profiles of tasks on different processors during slack allocation, which leads to poor energy reduction. Using these variable energy profiles can lead to a greater reduction in energy requirements [19], [21]. Moreover, because of the dependency relationships among tasks in an assignment, the sum of the energy reductions of several tasks executed in parallel may be higher than the highest energy reduction of any single task. The path based algorithm effectively addresses this issue and determines a set of multiple independent tasks that together have the maximum energy reduction. In Figure 1, the slack of time 5 to 6 is considered for slack allocation from the start time to the total finish time; this slack can be allocated only to task 2. However, the slack of time 8 to 9 in Phase 2 can be allocated to a subset of tasks (e.g., 1, 2 & 3, or 4). Phase 1 is executed before Phase 2 so as to obtain more energy saving by reducing the possibility of redundant slack allocation to the same tasks.

Compared to the LP based algorithm, this algorithm provides near optimal solutions and also improves energy reduction by 9-29%.
5. Proposed Algorithms
Both proposed algorithms integrate task scheduling and voltage selection to achieve maximum energy saving on a distributed system composed of variable voltage processors, for the discrete voltage case. They improve on the existing algorithms by considering the new slack generated by the actual execution time (AET) of tasks at runtime. This is achieved through a resource reclaiming procedure added to the existing algorithms.
5.1. Modified LP Algorithm
This algorithm integrates task scheduling and voltage
selection together to minimize energy consumption of
dependent tasks on systems with a given number of
variable voltage processors. The algorithm should also
guarantee that after the VS technique is applied, tasks with
deadlines still finish before their deadlines.

If d_i is task i's delay, T_i its execution time at the highest voltage V_h, and S_i its start time, then the timing constraints for multiprocessors can be modeled as

S_i + d_i <= dl_i (where dl_i is the i-th task's deadline), d_i >= T_i, S_i >= 0,

with d_i and S_i integer. For a feasible schedule, the above constraints guarantee that tasks with deadlines will finish before their deadlines.

The objective of VS is to minimize the sum of each task's energy consumption by slowing down each task without violating the timing constraints. To trade the increase of delay for energy saving, a relationship must be established between d_i and E_i. In the continuous voltage case, E_i is a convex function of d_i [17].

In the discrete voltage case, only a certain number of voltages are available. This implies that a task's delay can only take discrete values. In this case d_i and E_i are computed as linear functions of N_i, where N_i is the number of cycles that the task executes at a discrete voltage V_i < V_h.
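A small sketch of this discrete-voltage accounting follows; the voltage levels, cycle times and unit capacitance are illustrative assumptions. Running n_low of a task's n cycles at the lower voltage makes both delay and energy linear in n_low:

```python
# Delay and energy of a task when n_low of its n cycles run at a lower
# discrete voltage (cycle time ct_low) and the remainder at V_h
# (cycle time ct_high). Both quantities are linear in n_low.

def delay_energy(n, n_low, v_high=1.0, v_low=0.5, ct_high=1.0, ct_low=2.0, cap=1.0):
    d = n_low * ct_low + (n - n_low) * ct_high              # total delay
    e = cap * (n_low * v_low**2 + (n - n_low) * v_high**2)  # total energy
    return d, e

print(delay_energy(100, 0))    # all cycles at V_h: (100.0, 100.0)
print(delay_energy(100, 100))  # all cycles at V_l: (200.0, 25.0)
```

The endpoints match the CT example of Section 3: doubling the cycle time quarters the (quadratic) per-cycle energy under these assumed values.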

The task scheduling policy used in this algorithm provides the maximum slowdown potential for voltage selection to exploit for energy saving. To provide more energy saving opportunities in the schedule for VS to utilize, we use a priority based on tasks' deadlines, dependencies and the usage of processors in the system. The latest finish time (LFT) of a task is assigned based on its own deadline, or on its successors' latest finish times so that they can meet their deadlines. A leaf task's LFT is its deadline or the overall DAG's deadline. Starting from the leaf tasks and traversing the DAG backward, we can assign each task a latest finish time. Task i's latest finish time LFT_i is defined as

LFT_i = min(dl_i, min_s (LFT_s - T_s)),

where LFT_s and T_s are the latest finish time and execution time of the direct successor tasks s of task i in the DAG. Let us denote task i's ready time, when all its predecessors have finished, as R_i; its earliest start time, when it is ready and a processor is available, as EST_i; and the j-th processor's available time as PAT_j. These values are updated step by step as we build the schedule. At each step, task i's priority PR_i is evaluated as

PR_i = LFT_i + EST_i,
EST_i = max(R_i, min_j (PAT_j)), j = 1 to N (N is the number of processors in the system).

The task with the smallest priority value is assigned to a processor P_i if P_i is available at the time the task becomes ready. If more than one processor becomes available before the task is ready, the one that became available latest is selected. If no processor is available when the task is ready, the processor that becomes available earliest is selected. This selection and assignment of tasks repeats until all tasks are scheduled.
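The backward LFT pass above can be sketched as follows; the DAG, execution times and deadline are made-up examples, not the graph of Figure 1:

```python
# Backward pass assigning each task a latest finish time (LFT):
# LFT_i = min(dl_i, min over direct successors s of (LFT_s - T_s)).
# Smaller PR_i = LFT_i + EST_i then means higher scheduling priority.

from functools import lru_cache

succ = {1: [2, 3], 2: [4], 3: [4], 4: []}   # hypothetical assignment DAG
T = {1: 2, 2: 3, 3: 1, 4: 2}                # execution times at V_h
deadline = 10                                # overall DAG deadline

@lru_cache(maxsize=None)
def lft(i):
    if not succ[i]:                          # leaf task: its own deadline
        return deadline
    return min(deadline, min(lft(s) - T[s] for s in succ[i]))

for i in sorted(succ):
    print(i, lft(i))   # 1 5 / 2 8 / 3 8 / 4 10
```

Task 1 gets the tightest LFT (5) because its successor chain through task 2 consumes the most time before the deadline, so it receives the highest priority.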

The static initial scheduling of tasks is based on the worst case execution times of the tasks. However, a task may execute for less than its worst case computation time, because of data-dependent loops and conditional statements in the task code, architectural features of the system such as cache hits and branch prediction, or both. Hence the modified LP algorithm improves on the basic LP algorithm by performing resource reclaiming at program runtime, letting the scheduler adjust the schedule online according to the actual execution time (AET). The newly generated slack can thus be better utilized for further energy savings. The algorithm is given below.

Modified LP Algorithm()
begin
  Order the tasks in non-decreasing order of priority in the task queue.
  for processors P = 1 to N do   {N is the total number of processors}
  begin
    Get the first task t_f from the task queue.
    if t_f can be executed then start execution.
    while the task queue is not empty do
    begin
      if a task is under execution on P then
      begin
        Wait for its completion
        Invoke resource reclaiming
        Get next task from queue
        Apply VS for energy saving
      end
    end
  end
end
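The resource reclaiming step hinges on the gap between worst-case and actual execution times; a minimal sketch (the WCET/AET numbers are invented for illustration):

```python
# Resource reclaiming: when a task finishes earlier than its worst-case
# execution time (WCET), the difference (WCET - AET) becomes fresh slack
# that voltage selection can hand to later tasks on the same processor.

def reclaim(schedule):
    """schedule: list of (wcet, aet) pairs per task on one processor.
    Returns the dynamic slack reclaimed after each task."""
    return [wcet - aet for wcet, aet in schedule]

slacks = reclaim([(10, 7), (8, 8), (6, 4)])
print(slacks)        # [3, 0, 2]
print(sum(slacks))   # 5 extra time units available for slowdown
```

This is the quantity the static LP schedule cannot see, since it is based on WCETs alone; reclaiming it online is what distinguishes the modified algorithm.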
5.2. Modified path based algorithm
This algorithm performs discrete voltage based slack allocation for DAGs on a distributed system. First, tasks are assigned to processors using the priority based scheduling discussed for the modified LP algorithm, resulting in an assignment DAG. The modified path based slack allocation algorithm is then applied to it for energy minimization. This is an iterative approach that allocates a small amount of slack (called unit slack) in each iteration to a subset of suitable tasks so that the total energy consumption is minimized while the deadline constraint is also met.

The above process is applied iteratively till all the slack is used. The purpose of each iteration is to find a weighted maximal independent set of tasks, where the weight is given by the amount of energy reduction obtained by allocating unit slack. The dependency relationships in an assignment DAG [23] determine the total slack which can be allocated to the different tasks. For instance, in Figure 1, consider an example in which one unit of slack can be allocated. The number of tasks that can share one unit of slack is one or two: if task 7 (or task 1) is allocated the slack, no other task can use it and still meet the deadline; tasks 2 and 3 (or 2 & 4, 2 & 6, 4 & 5, 5 & 6) can use this slack concurrently, as they are not dependent on each other and both can be slowed down. The appropriate choice between the two options depends on the energy reduction in task 7 versus the sum of the energy reductions for tasks 2 and 3. This slack allocation algorithm considers the overall assignment-based task dependency relationships [23], while most other algorithms ignore them. This algorithm has two phases.

In Phase 1, the slack available from the start time to the total finish time under a given assignment is considered for maximum energy reduction by delaying appropriate tasks. In this case the slack can be allocated only to a subset of tasks that are not on the critical path. In Phase 2, the slack available from the total finish time to the deadline is considered for further energy reduction. In this case the slack can potentially be allocated to all the tasks.

In each of the two phases, the algorithm iteratively allocates one unit of slack, called unitSlack. The size of the unitSlack can be reduced to a level where further reducing it does not significantly improve the energy requirements.
For Phase 1, at each iteration only tasks with the maximum available slack are considered, since the number of slack-allocable tasks is limited and the amount of available slack differs per task. The maximum available slack of task i is slack_i, defined as

slack_i = LST_i - EST_i, with LST_i = LFT_i - T_i,

where LST_i is the latest start time of task i.
The set of tasks which can share unitSlack simultaneously is found using a compatible task matrix. If task i and task j are on the same assignment-based path, elements m_ij and m_ji of the compatible matrix M are set to zero; otherwise they are set to one. This matrix can easily be generated by performing a transitive closure on the assignment DAG and then taking the complement of the resulting matrix. The set of tasks considered at each iteration may change. This process is executed iteratively until no task can use slack before the total finish time.
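The compatible-matrix construction can be sketched as follows; the small assignment DAG here is a made-up example:

```python
# Build the compatible task matrix: m[i][j] = 1 iff tasks i and j do not
# lie on the same assignment-based path, i.e. they may share unitSlack.
# Computed as the complement of the transitive closure of the DAG.

def compatible_matrix(n, edges):
    reach = [[False] * n for _ in range(n)]
    for u, v in edges:
        reach[u][v] = True
    for k in range(n):                      # Floyd-Warshall transitive closure
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    # complement: i and j are compatible if neither reaches the other
    return [[0 if (i == j or reach[i][j] or reach[j][i]) else 1
             for j in range(n)] for i in range(n)]

# diamond DAG 0->1, 0->2, 1->3, 2->3: only tasks 1 and 2 are independent
M = compatible_matrix(4, [(0, 1), (0, 2), (1, 3), (2, 3)])
print(M[1][2], M[0][3])  # 1 0
```

In the diamond example only the two parallel branch tasks may be slowed down with the same unit of slack; every other pair lies on a common path.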

Meanwhile, in Phase 2, all tasks are considered for slack allocation at each iteration. The number of iterations in Phase 2 equals totalSlack divided by unitSlack, where totalSlack is the slack available between the actual deadline and the total finish time. At each iteration, one unitSlack is allocated to the one or more tasks that lead to the maximum energy reduction.
The energy reduction of a task is defined as the difference between its original energy and the energy expected after allocating a unitSlack to it. A branch and bound algorithm is used to search all the compatible solutions and determine the one with the maximum energy reduction. The complete modified path based algorithm is stated informally below.

Modified Path based Algorithm()
begin
  Assign tasks to N processors using priority based scheduling and obtain the assignment DAG.
  while unitSlack exists in Phase 1 do
  begin
    Find tasks with maximum available slack
    Generate the compatible matrix to find slack-sharable tasks
    Find the task set with maximum energy reduction using branch and bound search
    Invoke resource reclaiming
  end
  while unitSlack exists in Phase 2 do
  begin
    Delay all last tasks on processors to extend up to the deadline
    Invoke resource reclaiming
  end
end
6. Results and Discussions
To evaluate the efficiency of our algorithms in energy saving, we conducted simulation experiments on randomly generated task graphs consisting of 2 to 20 tasks. On these task graphs we compare the energy consumption of the existing LP and modified LP algorithms, as well as of the existing path based and modified path based algorithms. For each set of tasks the number of processors is kept constant and the energy consumption for a minimum of ten DAGs is noted; each point in the graphs below is the average over those DAGs. This procedure was repeated with the number of processors varied between 2 and 10, and comparisons were made between the existing and proposed algorithms. A few sample results of those comparisons are shown in the graphs below.

From the graphs it can be inferred that the path based algorithms provide more energy savings than the LP based algorithms. The modified algorithms outperform the existing ones, reducing energy by a further 5% on average. This is due to the resource reclaiming technique added in the proposed algorithms.

a) Number of Processors = 3
b) Number of Processors = 10

Figure 2. Comparison of energy consumption (in %) between LP and modified LP algorithms

a) Number of Processors = 5
b) Number of Processors = 10

Figure 3. Comparison of energy consumption (in %) between path based and modified path based algorithms

[Each chart plots energy consumption (55-100%) against the number of tasks (1 to 19), comparing the existing algorithm with its modified counterpart.]

Table 1: Comparison of overall energy consumption for all four algorithms

Algorithm            Average Energy Consumption (%)
LP                   85
Modified LP          79
Path based           82
Modified Path based  77

From Table 1, it is seen that among the existing algorithms, the path based algorithm consumes less energy than the LP algorithm, owing to the unit slack concept it introduces. It can also be observed that our proposed algorithms consume 5% less energy than the existing ones, leading to greater energy efficiency.
7. Conclusions
In this paper, two modified DVS algorithms for the discrete voltage case on distributed systems are presented. These algorithms handle task graphs with precedence and timing constraints, taking communication time into account. A resource reclaiming component is used to improve energy reduction over the existing LP and path based optimal slack allocation algorithms. Experimental results show that the proposed algorithms are comparable to the existing ones in providing optimal solutions, while consuming 5% less energy. The task graphs considered in this paper are assumed to be free of resource constraints, which arise from the use of non-sharable resources by tasks. The algorithms can therefore be extended by considering resource constraints in addition to the precedence and timing constraints for dependent tasks.


References
[1] T. D. Burd, T. Pering, A. Stratakos, and R. Brodersen, "A
dynamic voltage scaled microprocessor system", IEEE J.
Solid-State Circuits, vol. 35, pp. 1571-1580, 2000.
[2] T. Burd and R. Brodersen, "Energy efficient CMOS
microprocessor design", in Proc. 28th Annual Hawaii
International Conference on System Sciences, pp. 288-297,
January 1995.
[3] Intel XScale. [Online]. Available:
http://www.intel.com/design/intelxscale/
[4] Transmeta Crusoe. [Online]. Available:
http://www.transmeta.com
[5] AMD PowerNow! [Online]. Available: http://www.amd.com
[6] N. K. Jha, "Low power system scheduling and synthesis", in
Proc. Int. Conf. Comput.-Aided Des., Nov. 2001, pp. 259-263.
[7] I. Hong, D. Kirovski, G. Qu, M. Potkonjak, and M. B.
Srivastava, "Power optimization of variable-voltage core-based
systems", IEEE Trans. Comput.-Aided Design Integr.
Circuits Syst., vol. 18, no. 12, pp. 1702-1714, Dec. 1999.
[8] F. Yao, A. Demers, and S. Shenker, "A scheduling model
for reduced CPU energy", in Proc. Symp. Foundations
Comput. Sci., Oct. 1995, pp. 374-382.
[9] P. B. Jorgensen and J. Madsen, "Critical path driven
cosynthesis for heterogeneous target architectures", in Proc.
Int. Workshop Hardware/Software Codesign, Mar. 1997, pp.
15-19.
[10] J. Pouwelse, K. Langendoen, and H. Sips, "Energy priority
scheduling for variable voltage processors", in Proc. Int.
Symp. Low-Power Electron. and Des., Aug. 2001, pp. 26-31.
[11] A. Manzak and C. Chakrabarti, "Variable voltage task
scheduling algorithms for minimizing energy", in Proc. Int.
Symp. Low Power Electron. and Des., Aug. 2001, pp. 279-282.
[12] Y. Kwok and I. Ahmad, "Dynamic critical-path scheduling:
An effective technique for allocating task graphs to
multiprocessors", IEEE Trans. Parallel Distrib. Syst., vol. 7,
no. 5, pp. 506-521, May 1996.
[13] P. Eles, A. Doboli, P. Pop, and Z. Peng, "Scheduling with
bus access optimization for distributed embedded systems",
IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 8,
no. 5, pp. 472-491, Oct. 2000.
[14] G. Sih and E. A. Lee, "A compile-time scheduling heuristic
for interconnection constrained heterogeneous processor
architectures", IEEE Trans. Parallel Distrib. Syst., vol. 4,
no. 2, pp. 175-187, Feb. 1993.
[15] J. Liu, P. H. Chou, and N. Bagherzadeh, "Communication
speed selection for embedded systems with networked
voltage-scalable processors", in Proc. Int. Workshop
Hardware/Software Codesign, May 2002, pp. 169-174.
[16] N. Bambha, S. S. Bhattacharyya, J. Teich, and E. Zitzler,
"Hybrid search strategies for dynamic voltage scaling in
embedded multiprocessors", in Proc. Int. Workshop
Hardware/Software Co-Design, Apr. 2001, pp. 243-248.
[17] Y. Zhang, X. Hu, and D. Chen, "Task scheduling and
voltage selection for energy minimization", in Proc. Des.
Autom. Conf., Jun. 2002, pp. 183-188.
[18] F. Gruian and K. Kuchcinski, "LEneS: Task-scheduling for
low-energy systems using variable voltage processors", in
[Figure 3 data: energy consumption (%) plotted against number of
tasks (1-19) for p = 10; series: path based algorithm and
modified path based algorithm.]
IJCSI International Journal of Computer Science Issues, Vol. 7, Issue 3, No 4, May 2010 42
ISSN (Online): 1694-0784
ISSN (Print): 1694-0814


Proc. Asian South Pacific Des. Autom. Conf., Jan. 2001, pp.
449-455.
[19] M. T. Schmitz and B. M. Al-Hashimi, "Considering power
variations of DVS processing elements for energy
minimization in distributed systems", in Proc. Int. Symp.
Syst. Synthesis, Oct. 2001, pp. 250-255.
[20] P. Chowdhury and C. Chakrabarti, "Static task scheduling
algorithms for battery-powered DVS systems", IEEE Trans.
Very Large Scale Integr. (VLSI) Syst., vol. 13, no. 2, pp.
226-237, Feb. 2005.
[21] J. Luo and N. K. Jha, "Power-conscious joint scheduling of
periodic task graphs and aperiodic tasks in distributed real-
time embedded systems", in Proc. Int. Conf. on Computer-
Aided Design, pp. 357-364, Nov. 2000.
[22] Jaeyeon Kang and Sanjay Ranka, "Dynamic algorithms
for energy minimization on parallel machines", in Proc. 16th
Euromicro Conference on Parallel, Distributed and
Network-Based Processing, pp. 399-406, 2008.
[23] Youlin Ruan, Gan Liu, Qinghua Li and Tingyao Jiang, "An
efficient scheduling algorithm for dependent tasks", in Proc.
Int. Conference on Computer and Information Technology,
pp. 456-459, May 2004.


Mrs. Santhi Baskaran received her B.E. degree in Computer
Science and Engineering from the University of Madras,
Chennai, India in 1989 and her M.Tech. degree in Computer
Science and Engineering from Pondicherry University,
Puducherry, India in 1998. She served as Senior Lecturer and
Head of the Computer Technology Department in the Polytechnic
Colleges, Puducherry, India, for eleven years from 1989. She
joined Pondicherry Engineering College, Puducherry, India in
2000 and is currently working as Associate Professor in the
Department of Information Technology. She is now pursuing her
PhD degree in Computer Science and Engineering. Her areas of
interest include real-time systems, embedded systems and
operating systems. She has published research papers in
international and national conferences. She is a life member of
the Indian Society for Technical Education and the Computer
Society of India.


Prof. Dr. P. Thambidurai is a Member of the IEEE Computer
Society. He received his PhD degree in Computer Science from
Alagappa University, Karaikudi, India in 1995. From 1999 until
August 2006 he served as Professor and Head of the Department
of Computer Science & Engineering and Information Technology,
Pondicherry Engineering College, Puducherry, India. He is now
the Principal of Perunthalaivar Kamarajar Institute of
Engineering and Technology (PKIET), a Government institute at
Karaikal, India. His areas of interest include Natural Language
Processing, Data Compression and Real-time Systems. He has
published over 50 research papers in international journals and
conferences. He is a Fellow of the Institution of Engineers
(India) and a life member of the Indian Society for Technical
Education and the Computer Society of India. He served as
Chairman of the Computer Society of India, Pondicherry Chapter,
for two years. Prof. P. Thambidurai serves as an expert member
to the All India Council for Technical Education (AICTE) and an
adviser to the Union Public Service Commission (UPSC), Govt. of
India. He is also an expert member of the IT Task Force and
Implementation of e-Governance in the UT of Puducherry.
Exploration on Selection of Medical Images Employing New
Transformation Technique

J. Jaya 1, K. Thanushkodi 2

1. Research Scholar, Anna University, Chennai
2. Director, Akshya College of Engg & Tech, Coimbatore

Abstract
Transformation models play a vital role in medical image
processing. This paper describes a New Transformation Model
(NTM), a hybrid of linear and non-linear Transformation
techniques, for the detection of tumors. In NTM, the patient
image is compared with reference images on a block-by-block
basis. An image similarity measure quantifies the degree of
similarity between intensity patterns in two images; the choice
of similarity measure depends on the modality of the images to
be registered. In this paper, contrast checking, the sum of
squared intensity differences (SSD), calculation of white cells
and point mapping are used.
Key Words: New Transformation Model (NTM), contrast checking,
SSD, white cells, point mapping.
1. Introduction
Image Transformation is a basic task in image processing, used
to match two or more images taken by different sensors or from
different viewpoints. Transformation algorithms can be
classified according to the Transformation model used to relate
the reference image space to the target image space. The first
broad category of Transformation models comprises linear
Transformations, referred to in this paper as rigid
Transformations; these combine translation, rotation, global
scaling and shear components. Linear Transformations are global
in nature and thus cannot model local deformations. The second
category comprises non-linear Transformations, also known as
non-rigid Transformations. These allow local warping of image
features, thus providing support for local deformations;
non-linear approaches include polynomial warping and
interpolation of smooth basis functions. Image Transformation
can also be categorised by whether it aligns images from a
single modality or from different modalities. In this NTM, the
single-modality case is described.


The incidence of brain tumors is increasing rapidly,
particularly in the older population compared with the younger
population. A brain tumor is a group of abnormal cells that
grows inside of or around the brain. Tumors can directly
destroy healthy brain cells. They can also indirectly damage
healthy cells by crowding other parts of the brain and causing
inflammation, brain swelling and pressure within the skull.
Over the last 20 years, the overall incidence of cancer,
including brain cancer, has increased by more than 10%, as
reported in the National Cancer Institute statistics (NCIS),
with an average annual percentage change of approximately 1%.
Between 1973 and 1985 there was a dramatic age-specific
increase in the incidence of brain tumors [1]. Death rate
extrapolations for brain cancer in the USA are 12,764 per year,
1,063 per month, 245 per week, 34 per day, 1 per hour, 0 per
minute and 0 per second [2]. The National Cancer Institute
statistics report that the average annual percentage increases
in primary brain tumor incidence for ages 75-79, 80-84, and 85
and older were 7%, 20.4%, and 23.4%, respectively. Since 1970,
the incidence of primary brain tumors in people over the age of
70 has increased sevenfold. According to the Canadian Cancer
Statistics of the National Cancer Institute of Canada (NCIC),
in 2004 there were 5 deaths per 100,000 men and 4 deaths per
100,000 women from brain tumor or cancer in Canada.

Nowadays, MRI is the noninvasive and most sensitive imaging
test of the brain in routine clinical practice. Magnetic
resonance imaging (MRI) is a noninvasive medical test that
helps physicians diagnose and treat medical conditions. MR
imaging uses a powerful magnetic field, radio-frequency pulses
and a computer to produce detailed pictures of organs, soft
tissues, bone and virtually all other internal body structures.
It does not use ionizing radiation (x-rays), and it provides
detailed pictures of brain and nerve tissues in multiple planes
without obstruction by overlying bones. Brain MRI is the
procedure of choice for most brain disorders. It provides clear
images of the brainstem and posterior brain, which are
difficult to view on a CT scan. It is also useful for the
diagnosis of demyelinating disorders (disorders such as
multiple sclerosis (MS) that cause destruction of the myelin
sheath of the nerve). This paper is organized as follows. In
the first phase, film artifacts and unwanted portions of the
MRI brain image are removed. Secondly, the noise and
high-frequency components are removed using a weighted median
(WM) filter. Finally, NTM is applied for tumor selection.
2. Related Works
In medical imaging, different approaches have been implemented
for the selection of tumor images. This section describes
different methods for MR brain image Transformation. Peter et
al. described a new high-dimensional non-rigid Transformation
with two properties, multi-modality and locality, and obtained
better performance than previous methods [3]. Wilbert described
an automated application for the Transformation of MRI for
Alzheimer patients, with rigid-body and non-rigid-body
Transformation using a classical match filter and a correlation
filter (CMR), and measured the root-mean-squared (RMS) error of
both filters [4]. Yeit et al. specified an automatic
Transformation method for MR images using a shape-matching
system with a Gaussian model; this method successfully found
the points necessary to register normal images against a
reference image [7]. Thomas et al. designed new automatic and
interactive methods for image Transformation of 3D MRI and
SPECT and compared their accuracy [5]. Wang et al. described a
free-form deformation based on optimization, used to speed up
the Transformation process and avoid local minima; its
performance was evaluated with simulated and real images [6].
Kovalev et al. introduced a free-form deformation technique for
non-rigid Transformation in which subdivision of NURBS is
extended to 3D and used in hierarchical optimization to speed
up the Transformation and avoid local minima [8]. Wilbert also
compared Transformation methods for MRI brain images using
non-linear Transformation and warping models, which proved 31%
more efficient than linear Transformation.


3. Materials and Methods Used

3.1 Image Acquisition

The 0.5T intra-operative magnetic resonance scanner of the
Kovai Medical Center and Hospital (KMCH; Signa SP, GE Medical
Systems) offers the possibility to acquire 256*256*58 (0.86 mm,
0.86 mm, 2.5 mm) T1-weighted images with the fast spin echo
protocol (TR = 400 ms, TE = 16 ms, FOV = 220*220 mm) in 3
minutes and 40 seconds. The quality of every 256*256 slice
acquired intra-operatively is fairly similar to images acquired
with a 1.5T conventional scanner. Images of a patient obtained
by MRI scan are displayed as an array of pixels (a
two-dimensional unit based on the matrix size and the field of
view) and stored in MATLAB 7.0.



Fig 1. MR brain image in MATLAB
















Fig 2. Block diagram of NTM (Image Acquisition -> Preprocessing
-> Enhancement -> Transformation -> Selected MRI)
3.2 Preprocessing

The MRI brain image contains film artifacts, or labels, such as
the patient name, age and marks. These film artifacts are
removed using a tracking algorithm. Starting from the first row
and first column, the intensity values of the pixels are
analysed and the threshold value of the film artifacts is
found. Pixels with intensity greater than this threshold, i.e.
the high-intensity film artifacts, are removed from the MRI
brain image. After the removal of film artifacts, the image
still contains salt-and-pepper noise, so it is passed to the
enhancement stage for removal of high-intensity components. The
following figures illustrate the preprocessing stage.

(a) Before preprocessing

(b) After preprocessing
Fig 3. Removal of artifacts from MRI

3.3 Enhancement

Image enhancement methods address how to improve the visual
appearance of Magnetic Resonance Images (MRI). The proposed
system uses a weighted median filter to remove high-frequency
components such as impulsive (salt-and-pepper) noise. The merit
of the weighted median filter is that it removes salt-and-pepper
noise from the MRI without disturbing the edges. In this
enhancement stage, weighted median filtering is applied to each
pixel: a 3x3, 5x5, 7x7, 9x9 or 11x11 window of neighbourhood
pixels is extracted, and the mean grey value of the foreground,
the mean grey value of the background and the contrast value
are analysed.
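The weighted median filter described above can be sketched as follows. The 3x3 centre-weighted mask is an illustrative assumption; the paper does not state the weights it actually uses.

```python
import numpy as np

def weighted_median(values, weights):
    """Median of `values` where each value is replicated `weights` times."""
    return float(np.median(np.repeat(values, weights)))

def weighted_median_filter(img, weights):
    """Weighted median filter with a square window.

    `weights` is a k x k integer array; giving the centre pixel the
    largest weight preserves edges while suppressing impulse noise.
    """
    k = weights.shape[0]
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")   # replicate border pixels
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + k, j:j + k]
            out[i, j] = weighted_median(window.ravel(), weights.ravel())
    return out

# Illustrative 3x3 centre-weighted mask.
w = np.array([[1, 1, 1],
              [1, 3, 1],
              [1, 1, 1]])
noisy = np.array([[10, 10, 10],
                  [10, 255, 10],   # isolated "salt" impulse
                  [10, 10, 10]])
print(weighted_median_filter(noisy, w))  # the impulse is removed
```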

Peak Signal-to-Noise Ratio (PSNR) and Average Signal-to-Noise
Ratio (ASNR) values of the weighted median filters are
calculated as follows:

C = (f - b) / (f + b)
f = mean grey-level value of the foreground
b = mean grey-level value of the background

sigma^2 = (1/N) * sum_i (b_i - b)^2

Noise level = standard deviation (sigma) of the background
b_i = grey level of a background region
N = total number of pixels in the surrounding background
region (NB)

PSNR = (p - b) / sigma, which is calculated as 0.924
ASNR = (f - b) / sigma, which is calculated as 0.929
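A minimal sketch of these enhancement metrics on toy grey-level samples. The peak grey level p is taken here as the foreground maximum, which is an assumption since the paper does not define p explicitly.

```python
import numpy as np

def enhancement_metrics(foreground, background, peak=None):
    """Contrast, background noise, PSNR and ASNR as defined above:
    C = (f - b) / (f + b),  sigma = background std. dev. (1/N form),
    PSNR = (p - b) / sigma, ASNR = (f - b) / sigma.
    """
    f = float(np.mean(foreground))
    b = float(np.mean(background))
    sigma = float(np.std(background))   # population std. dev., 1/N
    p = float(np.max(foreground)) if peak is None else peak
    contrast = (f - b) / (f + b)
    return contrast, sigma, (p - b) / sigma, (f - b) / sigma

fg = np.array([200., 210., 190., 205.])   # toy foreground grey levels
bg = np.array([9., 11., 10., 10.])        # toy background grey levels
C, sigma, psnr, asnr = enhancement_metrics(fg, bg)
print(round(C, 3), round(sigma, 3))       # -> 0.905 0.707
```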




Fig 4. Plot of contrast values derived from the weighted median
filter


Fig 5. Performance evaluation of the enhancement stage

4. Proposed New Transformation Model

4.1 Non-Rigid Transformation Model

In non-linear Transformation, a block-based technique is used.
Both the given reference MR brain image (256*256) and the
normal image (256*256) are divided into several blocks, each of
size 64*64. After blocking, subtraction is performed between
the two images block by block: the first block of each image is
subtracted and the average value of all the pixels in that
block is calculated. This value is then compared with our
threshold value of 80,000, and if any block is found to cross
this limit, the patient details are stored in the database as a
doubtful case.
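The block-based comparison can be sketched as below. The text is ambiguous about whether a per-block average or an aggregate is compared against 80,000 (an average of 8-bit grey levels could never reach 80,000), so this sketch compares the summed absolute block difference against the threshold; that interpretation is an assumption.

```python
import numpy as np

def flag_suspect_blocks(reference, patient, block=64, threshold=80_000):
    """Split two equally sized images into block x block tiles,
    subtract tile-wise, and flag tiles whose summed absolute
    difference crosses the threshold."""
    flagged = []
    h, w = reference.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            diff = np.abs(reference[i:i+block, j:j+block].astype(int)
                          - patient[i:i+block, j:j+block].astype(int))
            if diff.sum() > threshold:
                flagged.append((i // block, j // block))
    return flagged

ref = np.zeros((256, 256), dtype=np.uint8)
pat = ref.copy()
pat[64:128, 0:64] = 255          # simulated bright region in block (1, 0)
print(flag_suspect_blocks(ref, pat))  # -> [(1, 0)]
```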

4.2 Rigid Transformation Model

This is one of the simplest forms of Transformation model. The
shape of a human brain changes very little with head movement,
so rigid body Transformations can be used to model different
head positions of the same subject. Matching of two images is
performed by finding the rotations and translations that
optimize some mutual function of the images. Within-modality
Transformation generally involves matching the images by
minimizing the sum of squared differences between them; for
between-modality Transformation, the matching criterion needs
to be more complex. For rigid body Transformation, rotations
and translations are adjusted. The changes could arise for a
number of different reasons, but most are related to pathology.
Because the scans are of the same subject, the first step for
this kind of analysis involves registering the images together
by a rigid body Transformation, based on the similarity measure
parameters.
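A toy within-modality rigid registration in this spirit, restricted to integer translations found by exhaustive search; the paper's actual optimizer and its handling of rotation are not specified, so this is only an illustrative sketch.

```python
import numpy as np

def register_translation(fixed, moving, search=3):
    """Exhaustively search integer shifts in [-search, search]^2 and
    return the (dy, dx) minimising the sum of squared differences
    between the fixed image and the shifted moving image."""
    best, best_ssd = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            ssd = np.sum((fixed.astype(float) - shifted) ** 2)
            if ssd < best_ssd:
                best, best_ssd = (dy, dx), ssd
    return best

fixed = np.zeros((32, 32))
fixed[10:20, 10:20] = 1.0
moving = np.roll(np.roll(fixed, 2, axis=0), -1, axis=1)  # known offset
print(register_translation(fixed, moving))  # -> (-2, 1)
```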


5. Implementation and Results Discussion

5.1 Non-Rigid Transformation: Block-Based Technique

Table 1 illustrates the non-rigid technique. Here the normal
image is compared with the tumor image, block by block. If
changes occur, the image is processed by the rigid
Transformation technique.

5.2 Non-Rigid Transformation: Average Intensity Measure

Average intensity measures for the blocks of both the normal
and the target image were calculated and compared. If any
abnormality is found, the block is stored in the segmented
database; otherwise it is stored in the normal database. In the
following table, blocks 1 to 4 of both the source and target
images show no difference in average intensity, but blocks 5 to
8 have different values; those values are stored in the
segmented database. Similarly, blocks 9 and 10 have different
values, while blocks 11 to 16 show no difference in intensity
values.


Table 1. Block based Transformation for segmentation


Fig 6. Block based Transformation for segmentation


Fig 7. Segmentation by block based Transformation images



5.3 Rigid Transformation: Similarity Measures

This rigid method consists of the following similarity
measures: contrast difference, sum of squared differences, and
point-based correlation. An application of rigid Transformation
is the field of image morphometry, which involves identifying
shape changes within single subjects by subtracting
co-registered images acquired at different times.

Table 1 data (average intensity per 64*64 block):

Image    1   2   3   4   5   6    7   8
Normal   31  67  51  2   68  71   86  5
T1       31  67  51  2   77  91   86  5
T2       31  67  51  2   68  71   86  5
T3       31  67  51  2   68  101  91  5

Image    9   10   11  12  13  14  15  16
Normal   49  87   62  4   8   38  14  2
T1       57  97   62  4   8   38  14  2
T2       75  105  62  4   8   38  14  2
T3       49  90   62  4   8   38  14  2
5.4 Contrast checking

Contrast checking is one of the similarity measures. Contrast
for the reference images and the normal image is computed from
the average background and average foreground values of the
MRI. Contrast for a given image is calculated by the formula:
C = (f - b) / (f + b)
C = contrast of the given image
f = mean grey-level value of the foreground
b = mean grey-level value of the background

Table 2. Contrast analysis in rigid Transformation technique


Fig 8. Plot of contrast values obtained by rigid Transformation
technique

5.5 Sum of Squared Difference (SSD)

Sum of Squared Differences (SSD) is one of the simplest
similarity measures. It is calculated by subtracting pixels
within a square neighbourhood between the reference image I1
and the target image I2, followed by aggregation of the squared
differences within the square window, and optimization. In this
section, the SSD is applied for each pixel over a 3x3, 5x5,
7x7, 9x9 or 11x11 window of neighbourhood pixels in the target
image and the reference image. The average value of SSD is
found for both the images. The sum of squared differences is
given by:
SSD = sum of squared pixel differences / total number of pixels
in the given window.
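The windowed SSD measure can be sketched as follows, interpreting the formula as the squared pixel differences between the two windows, normalised by the window size:

```python
import numpy as np

def window_ssd(img1, img2, i, j, k=3):
    """Sum of squared differences between k x k windows centred at
    (i, j) in two images, divided by the number of window pixels."""
    r = k // 2
    w1 = img1[i-r:i+r+1, j-r:j+r+1].astype(float)
    w2 = img2[i-r:i+r+1, j-r:j+r+1].astype(float)
    return np.sum((w1 - w2) ** 2) / (k * k)

a = np.full((5, 5), 10)
b = a.copy()
b[2, 2] = 13                 # one pixel differs by 3
print(window_ssd(a, b, 2, 2))  # -> 1.0 (i.e. 9 / 9)
```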

Table 3. Calculation of SSD in Rigid Transformation technique




Fig 9. Plot of SSD obtained by rigid Transformation technique

5.6 White cells Measure

The intensity of the white cells is measured, which is useful
for the subsequent steps.
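A white-pixel count such as the one reported in Table 4 can be sketched as a simple threshold count; the grey-level threshold of 200 is an illustrative assumption, since the paper does not state one.

```python
import numpy as np

def count_white_pixels(img, threshold=200):
    """Count pixels whose grey level exceeds the threshold."""
    return int(np.count_nonzero(img > threshold))

img = np.zeros((16, 16), dtype=np.uint8)
img[0:2, 0:3] = 255            # six bright ("white") pixels
print(count_white_pixels(img))  # -> 6
```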

Table 4. White cells calculation in rigid Transformation
technique








Table 2 data:

Image name   Mean background (b)   Mean foreground (f)   Contrast (C)
Normal       0.8899                42.3447               0.9588
Target 1     0.9238                45.3103               0.9600
Target 2     0.9238                45.1490               0.9599
Target 3     0.9238                44.7062               0.9595

Table 3 data:

Image name   Sum of squared differences (SSD)   Average of SSD
Normal       195139692                          2977.595398
Target 1     232541734                          3548.305267
Target 2     230918327                          3523.534042
Target 3     224479823                          3425.290268

Table 4 data:

Image name   No. of white pixels
Normal       418
Target 1     594
Target 2     1113
Target 3     612


Fig10 .plot of white pixels in rigid Transformation technique


5.7 Point mapping

Point mapping is the basic statistical approach to
Transformation. It is a match-metric technique that gives a
measure of the degree of similarity between an image and a
template. Point similarity measures can be derived from global
measures and enable detailed relative comparison of local image
correspondence. To perform image Transformation, we have to
obtain the enhanced image at the same size as the original
image (the image without tumor). Here, we have taken two
reference points: the first in the front view and the second in
the top view of the image. The enhanced image is resized to the
original image size by fixing the same reference points as in
the original image. Since in our technique the size of the
original image is 256*256, the enhanced image is resized to
(256-x)*(256-y) by removing the extra portions of the image.
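The resizing step can be sketched as a crop anchored at a reference point. Treating the reference point as the crop origin is an interpretation of the description above, not a procedure the paper spells out.

```python
import numpy as np

def crop_to_reference(enhanced, ref_row, ref_col, size=256):
    """Crop the enhanced image to size x size so that the chosen
    reference point keeps the same coordinates as in the original
    image, discarding the extra x rows and y columns."""
    return enhanced[ref_row:ref_row + size, ref_col:ref_col + size]

enhanced = np.zeros((260, 258))          # slightly oversized result
aligned = crop_to_reference(enhanced, 2, 1)
print(aligned.shape)                      # -> (256, 256)
```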








Fig 11. Before point mapping










Fig 12. After point mapping


6. Conclusion

This paper proposed a New Transformation Model for MR brain
image selection. Initially, the MR brain image is acquired.
Secondly, the film artifacts and unwanted portions of the brain
are removed using tracking algorithms, and the result is
assigned as a new image. Weighted median filtering is then
applied to this new image to remove high-frequency components.
Finally, the images enter the Transformation stage. Here, the
reference (tumor) and normal (patient) images undergo both the
linear and the non-linear Transformation methods. In the
non-linear method, the block-based technique is implemented:
the reference and normal images are split into several blocks
of size 64*64 and the intensity pair of each block is compared.
If any changes occur in those blocks, the result is assigned as
a new image and passed to the next stage. In the linear method,
the intensity patterns of both images are analysed using
similarity measures such as contrast checking (CC), Sum of
Squared Differences (SSD) and measurement of the T1-weighted
image. The merit of the proposed technique is its simplicity,
because the MR images are registered using similarity measures
with the block-based technique. The above methods produce more
accurate results than previous methods, and they produce the
same output every time.

References
[1] Alexandra Flowers, "Brain Tumors in the Older Person",
Cancer Control, vol. 7, pp. 523-538, 2000.
[2] Alexis Roche, Gregoire Malandain, Nicholas Ayache, and
Sylvain Prima, "Towards a better comprehension of medical image
Transformation", Medical Image Computing and Computer-Assisted
Intervention (MICCAI), vol. 1679, pp. 555-566, 1999.
[3] Peter Rogeli, Stanislav Kovacic, and James C. Gee, "Point
similarity measures for non-rigid Transformation of multi-modal
data sources", Elsevier Science Inc., USA, vol. 92, 2003.
[4] Thomas Pfluger, Christian Vollmar, Axel Wismuller, Stefan
Dresel, Frank Berger, Patrick Suntheim, Gerda Leinsinger, and
Klaus Hahn, "Quantitative comparison of automatic and
interactive methods for MRI-SPECT image Transformation of the
brain based on 3-dimensional calculation of error", The Journal
of Nuclear Medicine, vol. 41, pp. 1823-1829, 2000.
[5] Wilbert McClary, "Comparison of methods of Transformation
for MRI brain images", Engineering Research and Technology
Report, 925: pp. 423-4153, 2000.
[6] Wang J. and Jiang T., "Nonrigid Transformation of brain MRI
using NURBS", Pattern Recognition Letters, vol. 28, no. 2, pp.
214-223, 2007.
[7] Yeit Han and Hyun Wook Park, "Automatic brain MR image
Transformation based on the Talairach reference system", in
Proc. International Conference on Image Processing (ICIP), vol.
1, 2003.

ACKNOWLEDGEMENT
The author wishes to thank Dr. M. Karnan for his guidance in
this area, and also wishes to thank Doctor Pankaj Metha for his
suggestions on tumor recognition, drawing on his knowledge and
experience in the imaging area. The MRI image data were
obtained from KMCH Hospital, Coimbatore, Tamil Nadu, India.
IJCSI CALL FOR PAPERS NOVEMBER 2010 ISSUE

Volume 7, Issue 6

The topics suggested by this issue can be discussed in terms of
concepts, surveys, state of the art, research, standards,
implementations, running experiments, applications, and
industrial case studies. Authors are invited to submit complete
unpublished papers, which are not under review in any other
conference or journal, in the following (but not limited to)
topic areas. See the authors' guide for manuscript preparation
and submission guidelines.

Accepted papers will be published online, authors will be
provided with printed copies, and papers will be indexed by
Google Scholar, Cornell University Library, ScientificCommons,
CiteSeerX, Bielefeld Academic Search Engine (BASE), SCIRUS and
more.

Deadline: 30th September 2010
Notification: 31st October 2010
Revision: 10th November 2010
Online Publication: 30th November 2010


Evolutionary computation
Industrial systems
Autonomic and autonomous systems
Bio-technologies
Knowledge data systems
Mobile and distance education
Intelligent techniques, logics, and
systems
Knowledge processing
Information technologies
Internet and web technologies
Digital information processing
Cognitive science and knowledge
agent-based systems
Mobility and multimedia systems
Systems performance
Networking and telecommunications
Software development and
deployment
Knowledge virtualization
Systems and networks on the chip
Context-aware systems
Networking technologies
Security in network, systems, and
applications
Knowledge for global defense
Information Systems [IS]
IPv6 Today - Technology and
deployment
Modeling
Optimization
Complexity
Natural Language Processing
Speech Synthesis
Data Mining

For more topics, please see http://www.ijcsi.org/call-for-papers.php

All submitted papers will be judged based on their quality by the technical committee and
reviewers. Papers that describe research and experimentation are encouraged.
All paper submissions will be handled electronically and detailed instructions on submission
procedure are available on IJCSI website (www.IJCSI.org).

For more information, please visit the journal website (www.IJCSI.org)

IJCSI PUBLICATION 2010

www.IJCSI.org
IJCSI PUBLICATION
www.IJCSI.org
The International Journal of Computer Science Issues (IJCSI) is
a refereed journal for scientific papers dealing with any area
of computer science research. The purpose of establishing the
journal is to assist the development of science through fast,
operative publication and storage of the materials and results
of scientific research, and representation of the scientific
conception of the society.
It also provides a venue for researchers, students and
professionals to submit ongoing research and developments in
these areas. Authors are encouraged to contribute to the
journal by submitting articles that illustrate new research
results, projects, surveying works and industrial experiences
that describe significant advances in the field of computer
science.
Indexing of IJCSI:
1. Google Scholar
2. Bielefeld Academic Search Engine (BASE)
3. CiteSeerX
4. SCIRUS
5. Docstoc
6. Scribd
7. Cornell University Library
8. SciRate
9. ScientificCommons
