
RECENT ADVANCES in ENVIRONMENT,

ENERGY, ECOSYSTEMS and


DEVELOPMENT












Proceedings of the 2013 International Conference on Environment,
Energy, Ecosystems and Development (EEEAD 2013)










Venice, Italy
September 28-30, 2013

































Copyright 2013, by the editors

All the copyright of the present book belongs to the editors. All rights reserved. No part of this publication
may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic,
mechanical, photocopying, recording, or otherwise, without the prior written permission of the editors.

All papers of the present volume were peer reviewed by no less than two independent reviewers.
Acceptance was granted when both reviewers' recommendations were positive.


ISBN: 978-1-61804-211-8



























Organizing Committee

General Chairs (EDITORS)
Professor Vincenzo Niola
Department of Mechanical Engineering for Energetics
University of Naples "Federico II"
Naples, Italy
Professor Myriam Lazard
Institut Superieur d' Ingenierie de la Conception
Saint Die, France

Senior Program Chair
Professor Philippe Dondon
ENSEIRB
Rue A Schweitzer 33400 Talence
France

Program Chairs
Professor Filippo Neri
Dipartimento di Informatica e Sistemistica
University of Naples "Federico II"
Naples, Italy
Prof. Constantin Udriste,
University Politehnica of Bucharest,
Bucharest,
Romania
Professor Sandra Sendra
Instituto de Inv. para la Gestión Integrada de Zonas Costeras (IGIC)
Universidad Politécnica de Valencia
Spain

Tutorials Chair
Professor Pradip Majumdar
Department of Mechanical Engineering
Northern Illinois University
Dekalb, Illinois, USA

Special Session Chair
Professor Pavel Varacha
Tomas Bata University in Zlin
Faculty of Applied Informatics
Department of Informatics and Artificial Intelligence
Zlin, Czech Republic











Workshops Chair
Professor Ryszard S. Choras
Institute of Telecommunications
University of Technology & Life Sciences
Bydgoszcz, Poland

Local Organizing Chair
Assistant Professor Klimis Ntalianis,
Tech. Educ. Inst. of Athens (TEI), Athens, Greece

Publication Chair
Professor Gen Qi Xu
Department of Mathematics
Tianjin University
Tianjin, China

Publicity Committee
Professor Reinhard Neck
Department of Economics
Klagenfurt University
Klagenfurt, Austria
Professor Myriam Lazard
Institut Superieur d' Ingenierie de la Conception
Saint Die, France
International Liaisons
Professor Ka-Lok Ng
Department of Bioinformatics
Asia University
Taichung, Taiwan
Professor Olga Martin
Applied Sciences Faculty
Politehnica University of Bucharest
Romania
Professor Vincenzo Niola
Department of Mechanical Engineering for Energetics
University of Naples "Federico II"
Naples, Italy
Professor Eduardo Mario Dias
Electrical Energy and Automation
Engineering Department
Escola Politecnica da Universidade de Sao Paulo
Brazil
Steering Committee
Professor Aida Bulucea, University of Craiova, Romania
Professor Zoran Bojkovic, Univ. of Belgrade, Serbia
Professor Metin Demiralp, Istanbul Technical University, Turkey
Professor Imre Rudas, Obuda University, Budapest, Hungary




Program Committee
Prof. Lotfi Zadeh (IEEE Fellow,University of Berkeley, USA)
Prof. Leon Chua (IEEE Fellow,University of Berkeley, USA)
Prof. Michio Sugeno (RIKEN Brain Science Institute (RIKEN BSI), Japan)
Prof. Dimitri Bertsekas (IEEE Fellow, MIT, USA)
Prof. Demetri Terzopoulos (IEEE Fellow, ACM Fellow, UCLA, USA)
Prof. Georgios B. Giannakis (IEEE Fellow, University of Minnesota, USA)
Prof. George Vachtsevanos (Georgia Institute of Technology, USA)
Prof. Abraham Bers (IEEE Fellow, MIT, USA)
Prof. David Staelin (IEEE Fellow, MIT, USA)
Prof. Brian Barsky (IEEE Fellow, University of Berkeley, USA)
Prof. Aggelos Katsaggelos (IEEE Fellow, Northwestern University, USA)
Prof. Josef Sifakis (Turing Award 2007, CNRS/Verimag, France)
Prof. Hisashi Kobayashi (Princeton University, USA)
Prof. Kinshuk (Fellow IEEE, Massey Univ., New Zealand),
Prof. Leonid Kazovsky (Stanford University, USA)
Prof. Narsingh Deo (IEEE Fellow, ACM Fellow, University of Central Florida, USA)
Prof. Kamisetty Rao (Fellow IEEE, Univ. of Texas at Arlington,USA)
Prof. Anastassios Venetsanopoulos (Fellow IEEE, University of Toronto, Canada)
Prof. Steven Collicott (Purdue University, West Lafayette, IN, USA)
Prof. Nikolaos Paragios (Ecole Centrale Paris, France)
Prof. Nikolaos G. Bourbakis (IEEE Fellow, Wright State University, USA)
Prof. Stamatios Kartalopoulos (IEEE Fellow, University of Oklahoma, USA)
Prof. Irwin Sandberg (IEEE Fellow, University of Texas at Austin, USA),
Prof. Michael Sebek (IEEE Fellow, Czech Technical University in Prague, Czech Republic)
Prof. Hashem Akbari (University of California, Berkeley, USA)
Prof. Yuriy S. Shmaliy, (IEEE Fellow, The University of Guanajuato, Mexico)
Prof. Lei Xu (IEEE Fellow, Chinese University of Hong Kong, Hong Kong)
Prof. Paul E. Dimotakis (California Institute of Technology Pasadena, USA)
Prof. M. Pelikan (UMSL, USA)
Prof. Patrick Wang (MIT, USA)
Prof. Wasfy B Mikhael (IEEE Fellow, University of Central Florida Orlando,USA)
Prof. Sunil Das (IEEE Fellow, University of Ottawa, Canada)
Prof. Panos Pardalos (University of Florida, USA)
Prof. Nikolaos D. Katopodes (University of Michigan, USA)
Prof. Bimal K. Bose (Life Fellow of IEEE, University of Tennessee, Knoxville, USA)
Prof. Janusz Kacprzyk (IEEE Fellow, Polish Academy of Sciences, Poland)
Prof. Sidney Burrus (IEEE Fellow, Rice University, USA)
Prof. Biswa N. Datta (IEEE Fellow, Northern Illinois University, USA)
Prof. Mihai Putinar (University of California at Santa Barbara, USA)
Prof. Wlodzislaw Duch (Nicolaus Copernicus University, Poland)
Prof. Tadeusz Kaczorek (IEEE Fellow, Warsaw University of Technology, Poland)
Prof. Michael N. Katehakis (Rutgers, The State University of New Jersey, USA)
Prof. Pan Agathoklis (Univ. of Victoria, Canada)
Prof. P. Demokritou (Harvard University, USA)
Prof. P. Razelos (Columbia University, USA)
Dr. Subhas C. Misra (Harvard University, USA)
Prof. Martin van den Toorn (Delft University of Technology, The Netherlands)
Prof. Malcolm J. Crocker (Distinguished University Prof., Auburn University,USA)
Prof. S. Dafermos (Brown University, USA)
Prof. Urszula Ledzewicz, Southern Illinois University , USA.
Prof. Dimitri Kazakos, Dean, (Texas Southern University, USA)
Prof. Ronald Yager (Iona College, USA)
Prof. Athanassios Manikas (Imperial College, London, UK)

Prof. Keith L. Clark (Imperial College, London, UK)
Prof. Argyris Varonides (Univ. of Scranton, USA)
Prof. S. Furfari (Direction Generale Energie et Transports, Brussels, EU)
Prof. Constantin Udriste, University Politehnica of Bucharest , ROMANIA
Dr. Michelle Luke (Univ. Berkeley, USA)
Prof. Patrice Brault (Univ. Paris-sud, France)
Dr. Christos E. Vasios (MIT, USA)
Prof. Jim Cunningham (Imperial College London, UK)
Prof. Philippe Ben-Abdallah (Ecole Polytechnique de l'Universite de Nantes, France)
Prof. Photios Anninos (Medical School of Thrace, Greece)
Prof. Ichiro Hagiwara, (Tokyo Institute of Technology, Japan)
Prof. Metin Demiralp ( Istanbul Technical University / Turkish Academy of Sciences, Istanbul, Turkey)
Prof. Andris Buikis (Latvian Academy of Science. Latvia)
Prof. Akshai Aggarwal (University of Windsor, Canada)
Prof. George Vachtsevanos (Georgia Institute of Technology, USA)
Prof. Ulrich Albrecht (Auburn University, USA)
Prof. Imre J. Rudas (Obuda University, Hungary)
Prof. Alexey L Sadovski (IEEE Fellow, Texas A&M University, USA)
Prof. Amedeo Andreotti (University of Naples, Italy)
Prof. Ryszard S. Choras (University of Technology and Life Sciences Bydgoszcz, Poland)
Prof. Remi Leandre (Universite de Bourgogne, Dijon, France)
Prof. Moustapha Diaby (University of Connecticut, USA)
Prof. Brian McCartin (New York University, USA)
Prof. Elias C. Aifantis (Aristotle Univ. of Thessaloniki, Greece)
Prof. Anastasios Lyrintzis (Purdue University, USA)
Prof. Charles Long (Prof. Emeritus University of Wisconsin, USA)
Prof. Marvin Goldstein (NASA Glenn Research Center, USA)
Prof. Costin Cepisca (University POLITEHNICA of Bucharest, Romania)
Prof. Kleanthis Psarris (University of Texas at San Antonio, USA)
Prof. Ron Goldman (Rice University, USA)
Prof. Ioannis A. Kakadiaris (University of Houston, USA)
Prof. Richard Tapia (Rice University, USA)
Prof. F.-K. Benra (University of Duisburg-Essen, Germany)
Prof. Milivoje M. Kostic (Northern Illinois University, USA)
Prof. Helmut Jaberg (University of Technology Graz, Austria)
Prof. Ardeshir Anjomani (The University of Texas at Arlington, USA)
Prof. Heinz Ulbrich (Technical University Munich, Germany)
Prof. Reinhard Leithner (Technical University Braunschweig, Germany)
Prof. Elbrous M. Jafarov (Istanbul Technical University, Turkey)
Prof. M. Ehsani (Texas A&M University, USA)
Prof. Sesh Commuri (University of Oklahoma, USA)
Prof. Nicolas Galanis (Universite de Sherbrooke, Canada)
Prof. S. H. Sohrab (Northwestern University, USA)
Prof. Rui J. P. de Figueiredo (University of California, USA)
Prof. Valeri Mladenov (Technical University of Sofia, Bulgaria)
Prof. Hiroshi Sakaki (Meisei University, Tokyo, Japan)
Prof. Zoran S. Bojkovic (Technical University of Belgrade, Serbia)
Prof. K. D. Klaes, (Head of the EPS Support Science Team in the MET Division at EUMETSAT, France)
Prof. Emira Maljevic (Technical University of Belgrade, Serbia)
Prof. Kazuhiko Tsuda (University of Tsukuba, Tokyo, Japan)
Prof. Milan Stork (University of West Bohemia , Czech Republic)
Prof. C. G. Helmis (University of Athens, Greece)
Prof. Lajos Barna (Budapest University of Technology and Economics, Hungary)
Prof. Nobuoki Mano (Meisei University, Tokyo, Japan)

Prof. Nobuo Nakajima (The University of Electro-Communications, Tokyo, Japan)
Prof. Victor-Emil Neagoe (Polytechnic University of Bucharest, Romania)
Prof. E. Protonotarios (National Technical University of Athens, Greece)
Prof. P. Vanderstraeten (Brussels Institute for Environmental Management, Belgium)
Prof. Annaliese Bischoff (University of Massachusetts, Amherst, USA)
Prof. Virgil Tiponut (Politehnica University of Timisoara, Romania)
Prof. Andrei Kolyshkin (Riga Technical University, Latvia)
Prof. Fumiaki Imado (Shinshu University, Japan)
Prof. Sotirios G. Ziavras (New Jersey Institute of Technology, USA)
Prof. Constantin Volosencu (Politehnica University of Timisoara, Romania)
Prof. Marc A. Rosen (University of Ontario Institute of Technology, Canada)
Prof. Alexander Zemliak (Puebla Autonomous University, Mexico)
Prof. Thomas M. Gatton (National University, San Diego, USA)
Prof. Leonardo Pagnotta (University of Calabria, Italy)
Prof. Yan Wu (Georgia Southern University, USA)
Prof. Daniel N. Riahi (University of Texas-Pan American, USA)
Prof. Alexander Grebennikov (Autonomous University of Puebla, Mexico)
Prof. Bennie F. L. Ward (Baylor University, TX, USA)
Prof. Guennadi A. Kouzaev (Norwegian University of Science and Technology, Norway)
Prof. Eugene Kindler (University of Ostrava, Czech Republic)
Prof. Geoff Skinner (The University of Newcastle, Australia)
Prof. Hamido Fujita (Iwate Prefectural University(IPU), Japan)
Prof. Francesco Muzi (University of L'Aquila, Italy)
Prof. Les M. Sztandera (Philadelphia University, USA)
Prof. Claudio Rossi (University of Siena, Italy)
Prof. Christopher J. Koroneos (Aristotle University of Thessaloniki, Greece)
Prof. Sergey B. Leonov (Joint Institute for High Temperature Russian Academy of Science, Russia)
Prof. Arpad A. Fay (University of Miskolc, Hungary)
Prof. Lili He (San Jose State University, USA)
Prof. M. Nasseh Tabrizi (East Carolina University, USA)
Prof. Alaa Eldin Fahmy (University Of Calgary, Canada)
Prof. Ion Carstea (University of Craiova, Romania)
Prof. Paul Dan Cristea (University "Politehnica" of Bucharest, Romania)
Prof. Gh. Pascovici (University of Koeln, Germany)
Prof. Pier Paolo Delsanto (Politecnico of Torino, Italy)
Prof. Radu Munteanu (Rector of the Technical University of Cluj-Napoca, Romania)
Prof. Ioan Dumitrache (Politehnica University of Bucharest, Romania)
Prof. Corneliu Lazar (Technical University Gh.Asachi Iasi, Romania)
Prof. Nicola Pitrone (Universita degli Studi Catania, Italia)
Prof. Miquel Salgot (University of Barcelona, Spain)
Prof. Amaury A. Caballero (Florida International University, USA)
Prof. Maria I. Garcia-Planas (Universitat Politecnica de Catalunya, Spain)
Prof. Petar Popivanov (Bulgarian Academy of Sciences, Bulgaria)
Prof. Alexander Gegov (University of Portsmouth, UK)
Prof. Lin Feng (Nanyang Technological University, Singapore)
Prof. Colin Fyfe (University of the West of Scotland, UK)
Prof. Zhaohui Luo (Univ of London, UK)
Prof. Mikhail Itskov (RWTH Aachen University, Germany)
Prof. George G. Tsypkin (Russian Academy of Sciences, Russia)
Prof. Wolfgang Wenzel (Institute for Nanotechnology, Germany)
Prof. Weilian Su (Naval Postgraduate School, USA)
Prof. Phillip G. Bradford (The University of Alabama, USA)
Prof. Ray Hefferlin (Southern Adventist University, TN, USA)
Prof. Gabriella Bognar (University of Miskolc, Hungary)

Prof. Hamid Abachi (Monash University, Australia)
Prof. Karlheinz Spindler (Fachhochschule Wiesbaden, Germany)
Prof. Josef Boercsoek (Universitat Kassel, Germany)
Prof. Eyad H. Abed (University of Maryland, Maryland, USA)
Prof. F. Castanie (TeSA, Toulouse, France)
Prof. Robert K. L. Gay (Nanyang Technological University, Singapore)
Prof. Andrzej Ordys (Kingston University, UK)
Prof. Harris Catrakis (Univ of California Irvine, USA)
Prof. T Bott (The University of Birmingham, UK)
Prof. Petr Filip (Institute of Hydrodynamics, Prague, Czech Republic)
Prof. T.-W. Lee (Arizona State University, AZ, USA)
Prof. Le Yi Wang (Wayne State University, Detroit, USA)
Prof. George Stavrakakis (Technical University of Crete, Greece)
Prof. John K. Galiotos (Houston Community College, USA)
Prof. M. Petrakis (National Observatory of Athens, Greece)
Prof. Philippe Dondon (ENSEIRB, Talence, France)
Prof. Dalibor Biolek (Brno University of Technology, Czech Republic)
Prof. Oleksander Markovskyy (National Technical University of Ukraine, Ukraine)
Prof. Suresh P. Sethi (University of Texas at Dallas, USA)
Prof. Hartmut Hillmer(University of Kassel, Germany)
Prof. Bram Van Putten (Wageningen University, The Netherlands)
Prof. Alexander Iomin (Technion - Israel Institute of Technology, Israel)
Prof. Roberto San Jose (Technical University of Madrid, Spain)
Prof. Minvydas Ragulskis (Kaunas University of Technology, Lithuania)
Prof. Arun Kulkarni (The University of Texas at Tyler, USA)
Prof. Joydeep Mitra (New Mexico State University, USA)
Prof. Vincenzo Niola (University of Naples Federico II, Italy)
Prof. Ion Chryssoverghi (National Technical University of Athens, Greece)
Prof. Dr. Aydin Akan (Istanbul University, Turkey)
Prof. Sarka Necasova (Academy of Sciences, Prague, Czech Republic)
Prof. C. D. Memos (National Technical University of Athens, Greece)
Prof. S. Y. Chen, (Zhejiang University of Technology, China and University of Hamburg, Germany)
Prof. Duc Nguyen (Old Dominion University, Norfolk, USA)
Prof. Tuan Pham (James Cook University, Townsville, Australia)
Prof. Jiri Klima (Technical Faculty of CZU in Prague, Czech Republic)
Prof. Rossella Cancelliere (University of Torino, Italy)
Prof. L.Kohout (Florida State University, Tallahassee, Florida, USA)
Prof. D'Attelis (Univ. of Buenos Aires, Argentina)
Prof. Dr-Eng. Christian Bouquegneau (Faculty Polytechnique de Mons, Belgium)
Prof. Wladyslaw Mielczarski (Technical University of Lodz, Poland)
Prof. Ibrahim Hassan (Concordia University, Montreal, Quebec, Canada)
Prof. Stavros J.Baloyannis (Medical School, Aristotle University of Thessaloniki, Greece)
Prof. James F. Frenzel (University of Idaho, USA)
Prof. Mirko Novak (Czech Technical University in Prague,Czech Republic)
Prof. Zdenek Votruba (Czech Technical University in Prague,Czech Republic)
Prof. Vilem Srovnal,(Technical University of Ostrava, Czech Republic)
Prof. J. M. Giron-Sierra (Universidad Complutense de Madrid, Spain)
Prof. Zeljko Panian (University of Zagreb, Croatia)
Prof. Walter Dosch (University of Luebeck, Germany)
Prof. Rudolf Freund (Vienna University of Technology, Austria)
Prof. Erich Schmidt (Vienna University of Technology, Austria)
Prof. Alessandro Genco (University of Palermo, Italy)
Prof. Martin Lopez Morales (Technical University of Monterey, Mexico)
Prof. Ralph W. Oberste-Vorth (Marshall University, USA)

Prof. Vladimir Damgov (Bulgarian Academy of Sciences, Bulgaria)
Prof. Menelaos Karanasos (Brunel University, UK)
Prof. P.Borne (Ecole Central de Lille, France)


Additional Reviewers
Lukas Zach
Valeriu Prepelita
Ioannis Gonos
Shahram Javadi
Metin Demiralp
Valeri Mladenov
Dimitris Iracleous
Nikos Doukas
Filippo Neri
Nikos Karadimas
Aida Bulucea
Keffala Mohamed Rochdi
Mihaiela Iliescu
George Tsekouras
Nikos Bardis
Milan Stork
Vassiliki T. Kontargyri

Table of Contents

Keynote Lecture 1: Ant Decision Systems for Combinatorial Optimization with Binary
Constraints
16
Nicolas Zufferey

Keynote Lecture 2: A New Framework for the Robust Design of Analog Blocks Using Conic
Uncertainty Budgeting
17
Claudio Talarico

Keynote Lecture 3: On Mutual Relations Between Bioinspired Algorithms, Deterministic
Chaos and Complexity
18
Ivan Zelinka

Knowledge Management Innovation For Sustainable Development in the Context of the
Economic Crisis
21
Adrian Ioana, Augustin Semenescu, Cezar Florin Preda

Anaerobic Degradation of Dairy Wastewater in Intermittent UASB Reactors: Influence of
Effluent Recirculation
27
A. Silva, C. Couras, I. Capela, L. Arroja, H. Nadais

Fuels: A Survey on Sources and Technologies 34
Farzaneh Kazemi Qale Joogh, Milad Asgarpour Khansary, Ashkan Hosseini, Ahmad Hallaji
Sani, Navid Shaban Zadeh


Optimal Design of Wind/PV/Diesel/Battery Power System for Telecommunication
Application in a Remote Algeria
43
H. Zeraia, C. Larbes, A. Malek

Solar Hydrogen from Glycerol-Water Mixture 48
Chong Fai Kait, Ela Nurlaela, Binay K. Dutta

Air Quality in East Asia during the Heavy Haze Event Period of 10 to 15 January 2013 53
Soon-Ung Park, Jeong Hoon Cho

Design of Model Reference Controller of Variable Speed Wind Generators for Frequency
Regulation Contribution
62
Elvisa Becirovic, Jakub Osmic, Mirza Kusljugic, Nedjeljko Peric

Diffusion Dynamics of Energy Service Companies in the Residential Sector 70
Andra Blumberga, Dagnija Blumberga, Gatis Žogla, Claudio Rochas, Marika Rošā, Aiga Barisa


Dynamical Characteristics in Time Series Between PM10 and Wind Speed 78
Deok Du Kang, Dong In Lee, Jae-Won Jung, Kyungsik Kim


Monitoring System of Environment Noise and Pattern Recognition 83
Luis Pastor Sánchez Fernández, Luis A. Sánchez Pérez, José J. Carbajal Hernández

Benchmarking, Standard Setting and Energy Conservation of Olefin Plants in Iran 91
S. Gowharifar, B. Sepehrian, G. Nasiri, A. Khoshgard, M. Momenifar

Performance Evaluation of an Anaerobic Hybrid Reactor Treating Petrochemical Effluent 99
M. T. Jafarzadeh, N. Jamshidi, L. Talebiazar, R. Aslaniavali

Association Rules in the Measurement of Air Pollution in the City of Santiago de Chile 107
Santiago Zapata Caceres, Juan Torres Lopez

Effects of the Advance Ratio on the Evolution of Propeller Wake 113
D. G. Baek, J. H. Jung, H. S. Yoon

Economic and Emission Dispatch Problems using a New Hybrid Algorithm 119
Mimoun Younes, Fouad Khodja, Riad Lakhdar Kherfene

Modeling of Thermophilic Anaerobic Digestion of Municipal Sludge Waste using
Anaerobic Digestion Model No. 1 (ADM1)
127
Taekjun Lee, Young Haeng Lee

Sustainable Pneumatic Transport Systems of Cereals 129
Mariana Panaitescu, Gabriela Simona Dumitrescu, Andrei Alexandru Scupi

A New Consideration about Floating Storage and Regasification Unit for Liquid Natural
Gas
135
Mihai Sagau, Mariana Panaitescu, Fanel-Viorel Panaitescu, Scupi Alexandru-Andrei

Pattern Recognition on Seismic Data for Earthquake Prediction Purpose 141
Adel Moatti, Mohammad Reza Amin-Nasseri, Hamid Zafarani

The Natural Gas Addiction and Wood Energy Role in Latvia Today and Future 147
Ginta Cimdina, Andra Blumberga, Ivars Veidenbergs, Dagnija Blumberga, Aiga Barisa

The Small Hydropower Plant Income Maximization Using Games Theory 152
Antans Sauhats, Renata Varfolomejeva, Inga Umbrasko, Hasan Coban

The Eastern Baltic LNG Terminal as a Prospect to Improve Security of Regional Gas Supply 158
Kati Krbe Kaare, Ott Koppel, Ando Leppiman

Use of Simulator for Decision to Reuse of Industrial Effluents 165
Ana Cecilia Correia dos Santos, Elias Andrade Braga, Igor L. S. Rodrigues, Ricardo de Araújo Kalid, Asher Kiperstok





Environmental/Economic Power Dispatch Problem / Renewable Energy Using Firefly
Algorithm
170
Mimoun Younes, Riad Lakhdar Kherfene, Fouad Khodja

Hydrogen Production From Steam Gasification of Palm Kernel Shell Using Sequential
Impregnation Bimetallic Catalysts
177
Anita Ramli, Siti Eda Eliana Misi, Mas Fatiha Mohamad, Suzana Yusup

The Impact of the Economic Crisis on the Environmental Responsibility of the Companies 182
M. M. Miras, B. Escobar, A. Carrasco

Hydroinformatic Tools for Flood Risk Map Achievement 187
Erika Beilicci, Robert Beilicci, Ioan David

Ecological Aspects in Control of Small-scale Biomass Fired Boilers 194
Bohumil Šulc, Cyril Oswald

The Streamers Dynamics Study by an Intelligent System based on Neural Networks 202
Fouad Khodja, Younes Mimoun, Riad Lakhdar Kherfane

Phenol Sulfonic Acid Oxidation in Aqueous Solution by UV, UV/H2O2 and Photo-Fenton Processes
207
N. Jamshidi, M. T. Jafarzadeh, A. Khoshgard, L. Talebiazar, R. Aslaniavali

The Power Options for Transmitting Systems using Thermal Energy Generator 212
Michal Oplustil, Martin Zalesak

The Impact of Climate Change on Farm Business Performance in Western Australia 216
L. Anderton, R. Kingwell, D. Feldman, J. Speijers, N. Islam, V. Xayavong, A. Wardell-Johnson

Evolution of Smart Buildings 223
Gabriel Iulian Fntn, Stefan Adrian Oae

Municipal Waste Water Toxicity Evaluation with Vibrio fischeri 226
Helena Raclavska, Jarmila Drozdova, Silvie Hartmann

Inhibition of Activated Sludge Respiration by Heavy Metals 231
Silvie Hartmann, Hana Skrobankova, Jarmila Drozdova

Energy Simulation of Marine Currents through Wind Tunnel with use the Haar Wavelet
for Electromagnetic Brake Systems
236
Aldo A. Belardi, Antônio H. Piccinini

Authors Index 243

Keynote Lecture 1

Ant Decision Systems for Combinatorial Optimization with Binary Constraints



Professor Nicolas Zufferey
HEC - University of Geneva, Switzerland
E-mail: nicolas.zufferey-hec@unige.ch

Abstract: This paper considers a problem (P) that consists in minimizing an objective
function f while satisfying a set of binary constraints, where f counts the number of
constraint violations. Problem (P) is NP-hard and has many applications in various fields
(e.g., graph coloring, frequency assignment, satellite range scheduling). In contrast to
exact methods, metaheuristics are appropriate algorithms for tackling medium and large sized
instances of (P). A specific type of ant metaheuristic is designed to tackle (P) in which,
in contrast with state-of-the-art ant algorithms, an ant is a decision helper rather than a
constructive procedure.
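
For illustration, a minimal sketch of the kind of objective described above is given below; the graph-coloring instance, variable names and values are hypothetical, and the sketch is not the ant decision system itself:

def count_violations(assignment, binary_constraints):
    # f: number of violated binary constraints. Here a constraint is a pair of variables
    # (graph-coloring style) that is violated when both variables take the same value.
    return sum(1 for u, v in binary_constraints if assignment[u] == assignment[v])

# Hypothetical example: a 4-cycle plus one chord, colored with two colors.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
colors = {0: 0, 1: 1, 2: 0, 3: 1}
print(count_violations(colors, edges))   # -> 1, only the chord (0, 2) is violated

A metaheuristic for (P) would then search over assignments so as to drive this count to zero.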

Brief Biography of the Speaker: Swiss citizen, Nicolas Zufferey is Professor of Operations
Management at the University of Geneva. He holds a PhD in Operations Research from EPFL.
His research and publications relate to heuristics, operations research, optimization,
logistics management and quantitative management methods.

The full paper of this lecture can be found on page 260 of the Proceedings of the 2013
International Conference on Applied Mathematics and Computational Methods, as well as in
the CD-ROM proceedings.

Keynote Lecture 2

A New Framework for the Robust Design of Analog Blocks Using Conic Uncertainty Budgeting



Professor Claudio Talarico
Department of Electrical and Computer Engineering
Gonzaga University
Spokane, WA, USA
E-mail: talarico@gonzaga.edu

Abstract: In nanoscale technologies, process variability makes it extremely difficult to predict
the behavior of manufactured integrated circuits (ICs). The problem is especially acute in
analog ICs, where long design cycles, multiple manufacturing iterations, and low performance
yields cause only a few designs to reach the volume required to be economically viable. This paper
presents a new framework that accounts for process variability by mapping the analog design
problem into a robust optimization problem using a conic uncertainty model that dynamically
adjusts the level of conservativeness of the solutions through the notion of a budget of
uncertainty. Given a yield requirement, the framework implements uncertainty budgeting by
linking the yield with the size of the uncertainty set associated with the process variations,
depending on the design point of interest. By dynamically adjusting the size of the uncertainty
set, the framework is able to find a larger number of feasible solutions than other robust
optimization frameworks based on the well-known ellipsoidal uncertainty (EU) model. To validate
the framework, we applied it to the design of a 90 nm CMOS differential pair amplifier and
compared the results with those obtained using the EU approach. Experimental results indicate
that the proposed Conic Uncertainty with Dynamic Budgeting (CUDB) approach attains up to 18%
more designs meeting the target yield.
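
For readers unfamiliar with the budget-of-uncertainty idea, a generic linear robust constraint in the Bertsimas-Sim style is sketched below; this is only a textbook illustration of a tunable budget $\Gamma_i$, not the conic model developed in the paper:

\[
\sum_{j} a_{ij} x_j \;+\; \max_{S \subseteq J_i,\; |S| \le \Gamma_i} \; \sum_{j \in S} \hat{a}_{ij}\,|x_j| \;\le\; b_i
\]

Here $a_{ij}$ are nominal coefficients, $\hat{a}_{ij}$ their maximum deviations and $J_i$ the set of uncertain coefficients in row $i$. Setting $\Gamma_i = 0$ recovers the nominal constraint, while $\Gamma_i = |J_i|$ gives the most conservative (worst-case) protection, so the budget interpolates between the two and directly trades conservativeness against feasibility.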

Brief Biography of the Speaker: Claudio Talarico is Associate Professor of Electrical and
Computer Engineering at Gonzaga University. He holds a PhD degree in electrical engineering
from University of Hawaii where he conducted research in the area of Embedded System-on-
Chip. Before joining Gonzaga University, he worked at Eastern Washington University,
University of Arizona, University of Hawaii, and in industry where he held both engineering and
management positions in the area of VLSI integrated circuits. The companies he worked for
include Infineon Technologies, in Sophia Antipolis, France, IKOS Systems in Cupertino, CA and
Marconi Communications, in Genova, Italy.

The full paper of this lecture can be found on page 49 of the Proceedings of the 2013
International Conference on Electronics, Signal Processing and Communication Systems, as
well as in the CD-ROM proceedings.

Keynote Lecture 3

On Mutual Relations Between Bioinspired Algorithms, Deterministic Chaos and Complexity



Professor Ivan Zelinka
Technical University of Ostrava
Czech Republic
E-mail: ivan.zelinka@vsb.cz

Abstract: This lecture is focused on the mutual intersection of three interesting fields of
research, i.e. bioinspired algorithms, deterministic chaos and complexity, introducing a novel
approach joining bioinspired dynamics, complex networks and CML (coupled map lattice) systems
exhibiting chaotic behavior. The first part will discuss a novel method for visualizing the
dynamics of bioinspired algorithms in the form of complex networks. An analogy between
individuals in the populations of an arbitrary bioinspired algorithm and the vertices of a
complex network will be discussed, as well as the relationship between the communication of
individuals in a population and the edges of a complex network. The second part will discuss
the possibility of visualizing the dynamics of a complex network by means of coupled map
lattices and of controlling it by means of chaos control techniques. The last part will discuss
some possibilities for controlling CML systems, especially by means of bioinspired algorithms.
The spirit of this keynote speech is to create a closed loop in the following schematic:
bioinspired dynamics --> complex network --> CML system --> control CML --> control bioinspired
dynamics. Real-time simulations as well as animations and pictures demonstrating the presented
ideas will be shown throughout the lecture.
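
To make the CML part of the schematic concrete, the following minimal sketch iterates a diffusively coupled lattice of logistic maps, a standard textbook CML; the parameter values are illustrative and not those used in the lecture:

import numpy as np

def cml_step(x, r=4.0, eps=0.3):
    # One update of a diffusively coupled logistic-map lattice with periodic boundaries:
    # x_{n+1}(i) = (1-eps)*f(x_n(i)) + (eps/2)*(f(x_n(i-1)) + f(x_n(i+1))), f(x) = r*x*(1-x).
    f = r * x * (1.0 - x)
    return (1.0 - eps) * f + 0.5 * eps * (np.roll(f, 1) + np.roll(f, -1))

rng = np.random.default_rng(0)
x = rng.random(64)                 # 64 lattice sites with random initial states
history = [x.copy()]
for _ in range(200):               # the space-time plot of `history` is what is typically
    x = cml_step(x)                # visualized as a complex pattern and targeted by control
    history.append(x.copy())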

Brief Biography of the Speaker: Ivan Zelinka is currently working at the Technical University of
Ostrava (VSB-TU), Faculty of Electrical Engineering and Computer Science. He graduated
consequently at Technical University in Brno (1995 - MSc.), UTB in Zlin (2001 - Ph.D.) and again
at Technical University in Brno (2004 - assoc. prof.) and VSB-TU (2010 - professor). Before his
academic career he was employed as a TELECOM technician, a computer specialist (HW+SW) and a
computer and LAN supervisor at a commercial bank.
During his career at UTB he proposed and opened 7 different lecture courses. He has also been
invited to lecture at 7 universities in different EU countries and has served as keynote speaker
at the Global Conference on Power, Control and Optimization in Bali, Indonesia (2009), the
Interdisciplinary Symposium on Complex Systems (2011) in Halkidiki, Greece, and IWCFTA 2012 in
Dalian, China. He is or has been the responsible supervisor of 3 fundamental research grants of
the Czech grant agency GA CR and co-supervisor of the FRVS grant "Laboratory of parallel
computing". He has also worked on numerous grants and on two EU projects, as a team member
(FP5 - RESTORM) and as supervisor of the Czech team (FP7 - PROMOEVO).
Currently he is a professor at the Department of Computer Science and in total has supervised
more than 30 MSc. and 20 Bc. diploma theses. Ivan Zelinka also supervises doctoral students,
including students from abroad. He received the Siemens Award for his Ph.D. thesis, as well as
an award from the journal Software News for his book on artificial intelligence. Ivan Zelinka
is a member of the British Computer Society, Editor-in-Chief of the Springer book series
Emergence, Complexity and Computation, and a member of the editorial board of Saint Petersburg
State University Studies in Mathematics, of Machine Intelligence Research Labs (MIR Labs -
http://www.mirlabs.org/czech.php), of IEEE (committee of the Czech section on Computational
Intelligence), and of a number of international program committees of various conferences and
international journals (Associate Editor of IJAC, Editorial Council of Security Revue,
http://www.securityrevue.com/editorial-council/). He is the author of journal articles as well
as books in Czech and English.

Knowledge Management Innovation For Sustainable
Development in the Context of the Economic Crisis



Adrian IOANA/University Politehnica of Bucharest
Materials Science and Engineering Faculty
UPB-SIM
Bucharest, Romania
adyioana@gmail.com
Augustin SEMENESCU/University Politehnica of
Bucharest
Materials Science and Engineering Faculty
UPB-SIM
Bucharest, Romania

Cezar Florin PREDA/University Politehnica of Bucharest
Materials Science and Engineering Faculty
UPB-SIM
Bucharest, Romania





Abstract: Trade (the qualitative and quantitative level of trade) can promote the concept of
sustainable development. The concept of Sustainable Development involves the implementation of
theoretical and practical components for making decisions in any situation that features a
human-environment relation, whether the environment concerned is the ambient, economic or
social one. The goals of sustainable development include the harmonization of the economic,
social and environmental targets. This paper presents the main types of correlations:
Trade - Sustainable Development - Economic Crisis. The Sustainable Development (SD) concept is
also analyzed in direct correlation with the Corporate Social Responsibility (CSR) concept.
Corporations (through the qualitative and quantitative level of trade) can likewise promote the
concept of sustainable development. The paper also presents the main research on the types of
correlations: Corporate Social Responsibility (including trade) - Sustainable Development -
Economic Crisis.
Keywords: Management Innovation, Sustainable Development, Economic Crisis
I. INTRODUCTION
The world must quickly design strategies that will allow
nations to move from their current, often destructive, processes
of growth and development to sustainable development paths.
This will require policy changes in all countries, with respect both to their own development
and to their impact on other nations' development possibilities (Ammann, 2002; Ioana, 1998).
The concept of sustainable development designates all forms and methods of socio-economic
development whose foundation is primarily to ensure a balance between socio-economic systems
and the elements of natural capital.
Development is sustainable when it addresses the problem
of the large number of people who live in absolute poverty -
that is, who are unable to satisfy even the most basic of their
needs.
Poverty reduces people's capacity to use resources in a
sustainable manner (it intensifies pressure on the environment).
Most such absolute poverty is present in developing
countries (it has been worsened by the economic stagnation of
the 1980s).
A necessary but not a sufficient condition for the
elimination of absolute poverty is the relatively rapid rise of
per capita incomes in the Third World. It is therefore essential
that the stagnant or declining growth trends of this decade are
reversed.
Growth must be revived in developing countries because
that is where the links between economic growth, the
alleviation of poverty, and environmental conditions operate
most directly. Yet developing countries are part of an
interdependent world economy; their prospects also depend on
the levels and patterns of growth in industrialized nations.
Such growth rates could be environmentally sustainable if
industrialized nations can continue the recent shifts in the
content of their growth towards less material- and energy-
intensive activities and the improvement of their efficiency in
using materials and energy.
Sustainable Development involves meeting current needs without compromising the ability of
future generations to meet their own needs.
The standard theory of economic development involves both quantitative change (an increase in
Gross Domestic Product) and qualitative change (the shift from a pre-capitalist economy based
on agriculture to an industrial capitalist economy) [1, 3].
The theory of sustainable development involves both a
critique of quantitative measure of GDP and a different vision
of qualitative transformation. The goals of sustainable
development include the harmonization of the economic, social
and environmental targets.
The concept of sustainable development was born about four decades ago, as a response to the
emergence of environmental and natural resource crises, especially those related to energy. The
Conference on the Environment held in Stockholm in 1972 marked the moment when it was
recognized for the first time that human activities contribute to environmental deterioration,
which threatens the future of the planet [7, 11, 12].
Sustainable development has been an objective of the European Union since 1997, when it was
included in the Treaty of Amsterdam; in 2001, the Gothenburg Summit adopted the Strategy for
Sustainable Development of the European Union, to which an external dimension was added at
Barcelona in 2002.
Risk management in banking designates the entire set of
risk management processes and models allowing banks to
implement risk based policies and practices. They cover all
techniques and management tools required for measuring,
monitoring and controlling risks. The spectrum of models and
processes extends to all risks: credit risk, market risk, interest
rate risk, liquidity risk, operational risk and country risk.
II. ABOUT THE TRADE FOR SUSTAINABLE DEVELOPMENT

The best-known definition of sustainable development is given by the World Commission on
Environment and Development (WCED) report Our Common Future, also known as the Brundtland
Report [9]: "Sustainable development is development which aims to meet the needs of the present
without compromising the ability of future generations to meet their own needs."
The concept of Sustainable Development involves the implementation of theoretical and practical
components for making decisions in any situation that features a human-environment relation,
whether the environment concerned is the ambient, economic or social one.
In the human-environment correlation (more precisely, the human-environments correlation),
trade is of particular importance [2, 5, 6]. Its importance lies in the fact that trade may
affect (positively or negatively) all three types of environment (the ambient, economic and
social environments). Figure 1 schematically presents the importance of trade for the
human-environments correlation.






















Fig. 1. The importance of trade for the human-environments correlation.
AM = Ambient Environment; EM = Economic Environment; SM = Social Environment.

The importance of trade in the human-environments relationship is revealed by the central
position of trade. The double correlations trade-human and trade-environments also highlight
the importance of trade in this relationship.
To detail the human-environment relationship schematically presented in figure 1, we may define
two specific types of trade:
Ecological Trade (Green Trade), in direct correlation with the ambient environment. The
Ecological Trade - Environment correlation means that trade which applies and extends the
requirements for the protection of the environment positively influences the latter.
Implicitly, under these circumstances, ecological trade is a generator of sustainable
development (figure 2).
Social Trade, in direct correlation with the social environment. This correlation suggests
that a trade that puts the continual optimization of the price/quality ratio from the
customer's standpoint at the forefront (that is, the improvement of this ratio without quality
being affected) positively affects the social environment.

The multiple correlation human (society) - trade - environment - sustainable development is
schematically presented in figure 3.




























Fig. 2. Ecological Trade - Environment - Sustainable Development correlation


































































Fig. 3. The multiple correlation Human (Society) - Trade - Environment - Sustainable Development










The following positive (favorable) influences are identified:
Positive influence of the ecological trade on the environment.
Positive influence of the social trade on the social environment.
Clear positive influence of the environment on sustainable development.


III. ABOUT THE CORRELATION: TRADE - SUSTAINABLE DEVELOPMENT - ECONOMIC CRISIS

The main types of the correlations Trade (C) - Sustainable Development (SD) - Economic Crisis
(EC) are presented below (figure 4).

Fig. 4. The main types of the correlations Trade (C) - Sustainable Development (SD) - Economic
Crisis (EC).

There are three types of correlations:
I. The negative influence of the economic crisis on trade (decrease of the sales volume).
II. The negative influence of the economic crisis on sustainable development.
III. The positive influence of trade (ecological trade) on sustainable development.


IV. THE COMMERCE, NONPERFORMING LOANS AND ELEMENTS OF RISK MANAGEMENT

The nonperforming loans (NPL) are those loans for which
principal or interest is due and left unpaid for 90 days or more
(this period may vary by jurisdiction).
The NPL portfolio, along with the bank's collection ratio and the level of provisions recorded,
illustrates the quality of the entire portfolio and the overall credit policy of the bank
[4, 8, 10].
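
A minimal sketch of the 90-day rule stated above follows; the field names, amounts and even the threshold are illustrative, since the cut-off varies by jurisdiction:

def npl_ratio(loans, threshold_days=90):
    # loans: list of dicts with 'outstanding' (currency units) and 'days_past_due'.
    total = sum(l["outstanding"] for l in loans)
    npl = sum(l["outstanding"] for l in loans if l["days_past_due"] >= threshold_days)
    return npl / total if total else 0.0

portfolio = [
    {"outstanding": 100_000, "days_past_due": 0},
    {"outstanding": 50_000, "days_past_due": 120},   # non-performing under the 90-day rule
    {"outstanding": 25_000, "days_past_due": 30},
]
print(f"NPL ratio: {npl_ratio(portfolio):.1%}")       # -> NPL ratio: 28.6%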
There are various reasons why the quality of bank loan portfolios deteriorates, and research
reveals that most reasons relate to the nature of the bank's credit culture. Below are listed
the most usual drivers of loan portfolio deterioration:
Self-dealing refers to an overextension of credit to directors and large shareholders,
compromising sound credit principles under pressure from related parties.
Compromise of credit principles refers to the granting, with full knowledge, of loans under
unsatisfactory risk terms.
Anxiety over income outweighs the soundness of
lending decisions, underscored by the hope that
the risk will not materialize.
Incomplete credit information concerns loans granted without a proper appraisal of borrower
creditworthiness.
Complacency is typically manifested in a lack of
adequate supervision of old, familiar borrowers,
based on an optimistic interpretation of known
credit weaknesses because of survival in distressed
situations in the past.
Technical incompetence and poor selection of
risks include a lack of ability among credit officers
to analyze financial statements and obtain and
evaluate pertinent credit information.
Measures to counteract credit risks normally comprise clearly defined policies that express the
bank's credit risk management philosophy and the parameters within which credit risk is to be
controlled.
Among the policies aimed at limiting credit risk are policies on concentration and large
exposures, adequate diversification, lending to connected parties and over-exposure.

C

SD

EC
(I)
(II)
(III)
Bank regulators have paid close attention to risk
concentration by banks, the objective being to prevent banks
from relying excessively on a large borrower or group of
borrowers.
Modern prudential regulations usually stipulate that a bank
should not make investments, grant large loans, or extend other
credit facilities to any individual entity or related group of
entities in excess of an amount that represents a prescribed
percentage of the bank's capital and reserves.
According to international practice, a single client is an
individual, a legal person or a connected group to which a bank
is exposed.
Single clients are mutually associated or control (directly or
indirectly) other clients, usually through a voting right of at
least 15-20 percent, a dominant shareholding or the capacity to
exercise a controlling influence on policy making and
management. These clients' cumulative exposure may
represent a singular risk to a bank if financial interdependence
exists and their expected source of repayment is the same.
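
The single-client (large exposure) rule described above can be expressed as a simple check; the 25% limit used here is only an illustrative figure, since the prescribed percentage is set by each regulator:

from collections import defaultdict

def breaching_groups(exposures, capital_and_reserves, limit_share=0.25):
    # exposures: iterable of (connected_group_id, amount); returns groups whose cumulative
    # exposure exceeds the prescribed share of the bank's capital and reserves.
    totals = defaultdict(float)
    for group, amount in exposures:
        totals[group] += amount
    limit = limit_share * capital_and_reserves
    return {g: t for g, t in totals.items() if t > limit}

exposures = [("GroupA", 40.0), ("GroupA", 35.0), ("GroupB", 20.0)]   # hypothetical, in M EUR
print(breaching_groups(exposures, capital_and_reserves=250.0))       # -> {'GroupA': 75.0}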
The second set of credit risk policies consists of the asset classification method, which
employs a periodic evaluation of the collectability of the loan portfolio.
The general rule is that all assets for which a bank is taking
a risk should be classified, including loans and advances,
accounts receivable, investments, equity participations and
contingent liabilities.
Asset classification, by means of which assets are classified
at the time of origination and then reviewed and reclassified as
necessary (according to the degree of credit risk) a few times
per year, is a key risk management tool.
The periodical review considers loan service performance and the borrower's financial
condition. Assets classified as standard or specially mentioned are typically reviewed twice
per year, while critical assets are reviewed at least every quarter.
Banks determine classifications by themselves, but follow
standards that are normally set by regulatory authorities.
Standard rules for asset classification that are currently used are
listed below:
Standard (pass) are loans for which the debt
service capacity is considered to be beyond any
doubt.
In general, loans fully secured by cash or cash substitutes (bank deposits, certificates,
treasury bills, etc.) are usually classified in this category.
Specially mentioned (watch) are assets with potential weaknesses that may, if not checked or
corrected, weaken the asset as a whole or jeopardize the borrower's repayment capacity in the
future.
In this category are included, for example, credits granted under inadequate loan agreements,
with a lack of control over the collateral or a lack of proper documentation.
Loans to borrowers operating under economic or market
conditions that may negatively affect the borrower in the future
are also included in this category.
Substandard are assets with well-defined credit weaknesses that jeopardize debt service capacity,
in particular when the primary sources for
repayment are insufficient and the bank must look
to secondary sources for repayment, such as
collateral, the sale of a fixed asset or refinancing.
In this category can be included term credits to
borrowers whose cash flow may not be enough to
meet currently maturing debts, as well as short
term loans and advances to borrowers for which
the inventory-to-cash cycle is insufficient to repay
the debt at maturity.
Doubtful are assets having the same weaknesses
as substandard assets, but their collection in full is
questionable on the basis of existing facts.
The possibility of loss is present, but certain factors that
may strengthen the asset exist as well.
Loss are assets that are considered uncollectible and of such little value that their continued
definition as bankable assets is not warranted.
The inclusion in this category does not mean that the asset
has absolutely no recovery or salvage value, but rather that it is
neither practical nor desirable to defer the process of writing it
off, even though partial recovery may be possible in the future.
The third set of credit risk management policies concerns loss provisioning, by means of which
allowances are set up at a level adequate to absorb anticipated losses.
Asset classification provides the basis for determining an adequate level of provisions for
possible loan losses. The aggregate level of provisions, together with general loss reserves,
indicates the capacity of a bank to effectively accommodate credit risk.
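
As a schematic illustration of how classification drives provisioning, the sketch below applies per-category provision rates to a classified portfolio; the rates are purely hypothetical, since actual rates are mandated by regulators or set by bank policy:

# Hypothetical provision rates per classification category (not regulatory values).
PROVISION_RATES = {
    "standard": 0.01,
    "specially mentioned": 0.05,
    "substandard": 0.20,
    "doubtful": 0.50,
    "loss": 1.00,
}

def required_provisions(classified_assets):
    # classified_assets: list of (category, outstanding_amount) pairs.
    return sum(PROVISION_RATES[cat] * amount for cat, amount in classified_assets)

portfolio = [("standard", 800.0), ("substandard", 120.0), ("doubtful", 50.0), ("loss", 10.0)]
print(required_provisions(portfolio))   # -> 67.0 (8 + 24 + 25 + 10)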
In determining an adequate reserve, all significant factors
that affect the collectability of the loan portfolio should be
considered.
These factors include the quality of credit policies and procedures, prior loss experience,
loan growth, the quality of management in the lending area, loan collection and recovery
practices, changes in national and local economic and business conditions, and general economic
trends.
Assessments of asset value should be performed
systematically, consistently over time and in conformity with
objective criteria.
Policies on loan-loss provisioning range from mandated to
discretionary, depending on the banking system. In many
countries, in particular those with fragile economies, regulators
have established mandatory levels of provisions which are
related to asset classification.

V. CONCLUSION
The concept of sustainable development designates all forms and methods of socio-economic
development whose foundation is primarily to ensure a balance between socio-economic systems
and the elements of natural capital.
Many correlations may be identified among trade,
sustainable development and economic crisis. Of these
correlations the following are most important:
Positive influence of the ecological trade on the environment.
The negative influence of the economic crisis on sustainable development.
Obvious positive influence of the environment on sustainable development.
The satisfaction of human needs and aspirations is the
major objective of development.
The essential needs of a large number of people in
developing countries (for food, clothing, shelter, jobs) are not
being met yet, and beyond their basic needs these people have
legitimate aspirations for an improved quality of life.
A world in which poverty and inequity are endemic will
always be prone to ecological and other crises.
Sustainable development requires meeting the basic needs
of all people and extending to everybody the opportunity to
satisfy their aspirations for a better life.
The level of commerce depends on the specific credit
resource. There are three sets of policies specific to credit risk
management: policies aimed at limiting or reducing the credit
risk, policies of asset classification and policies concerning loss
provisioning.
In determining an adequate reserve, all significant factors
that affect the collectability of the loan portfolio should be
considered.
These factors include the quality of credit policies and procedures, prior loss experience,
loan growth, the quality of management in the lending area, loan collection and recovery
practices, changes in national and local economic and business conditions, and general economic
trends.
Assessments of asset value should be performed
systematically, consistently over time and in conformity with
objective criteria.
The main types of the triple correlation Commerce - Sustainable Development - Risk Management
reflect the leading position of the Sustainable Development concept.
In this context, Commerce must strike a balance between the requirements of Sustainable
Development and those of Risk Management.
REFERENCES
[1] Ammann, M., Credit Risk Valuation: methods, models and
application, Springer Publishing House, Berlin, 2012, pp. 11-123.
[2] Cooper, G., The Origin of Financial Crises, Vintage Books, USA, 2008, pp. 25-73.
[3] Hodorogel, Roxana Gabriela, The Economic Crisis and its Effects on
SMEs, Theoretical and Applied Economics Review, ISSN 1841-8678,
No 5/2009 (534), Bucharest, 2009, pp. 31-38.
[4] Ioana, A., Managementul activității financiar-contabile și analize economice. Teorie și
Aplicații, Ed. Politehnica Press, București, 2009, pp. 17-83.
[5] Ioana, A., Marketing Elements Mix in the Materials Industry,
Proceedings of the International Conference European Integration -
New Challenges for the Romanian Economy, 4th Edition, May, 30-
31.2008, Oradea, 2008, pp. 51-54.
[6] Ioana, A. (2007) Managementul producției în industria materialelor metalice. Teorie și
aplicații, Editura PRINTECH, București, ISBN 978-973-758-1232, 232 pg.
[7] Ioana, A. (1998) The Electric Arc Furnaces (EAF) Functional and
Technological Performances with the Preheating of the Load and
Powder Blowing Optimization for the High Quality Steel Processing,
PhD Thesis, University Politehnica of Bucharest, 1998, pp. .
[8] Ioana, A., Mirea, V., Bălescu, C. (2009) Analysis of Service Quality Management in the
Materials Industry Using the BCG Matrix Method, Amphitheater Economic Review, Vol. XI, Nr. 26,
June, Bucharest.
[9] Ioana, A., Nicolae, A., Bălescu, C. (2009) Elements of Metallurgical Marketing Mix (MMM),
Metalurgia Review, ISSN 0461-9579, No 78/2009, Bucharest.
[10] Ioana, A., Semenescu, A., Preda, C.F. (2012) Management Strategic. Teorie și Aplicații,
Editura Matrix Rom, București, ISBN 978-973-755-8268, 204 pg.
[11] * * *, http://www.earthpolicy.com
[12] * * *, http://evado.ro



Anaerobic degradation of dairy wastewater in
intermittent UASB reactors: influence of effluent
recirculation


A. Silva, C. Couras, I. Capela, L. Arroja, H. Nadais*
CESAM & Environmental and Planning Department
University of Aveiro
3810-193-Aveiro, Portugal
nadais@ua.pt


Abstract: This work studied the influence of effluent
recirculation upon the kinetics of anaerobic degradation of dairy
wastewater in intermittent UASB (Upflow Anaerobic Sludge
Bed) reactors. Several laboratory-scale tests were performed with
different organic loads in a UASB reactor inoculated with
flocculent sludge from an industrial wastewater treatment plant.
The data obtained were used for determination of specific
substrate removal rates and specific methane production rates
and adjusted to kinetic models. A high initial substrate removal
was observed in all tests due to adsorption of organic matter onto
the anaerobic biomass which was not accompanied by biological
substrate degradation as measured by methane production.
Initial methane production was about 45% of initial soluble and
colloidal substrate removal rate. This discrepancy was observed
mainly in the first day of all experiments and was attenuated in
the second day. Effluent recirculation significantly raised the rate of removal of soluble and
colloidal substrate and the methane productivity, as compared to literature results for batch
assays without recirculation.
Keywords: UASB reactor; dairy wastewater; feedless period;
effluent recirculation; kinetics
I. INTRODUCTION
Presently Upflow Anaerobic Sludge Bed (UASB) reactors
face significant challenges in what concerns their applicability
for the treatment of complex lipid-rich wastewater of which
dairy wastewater is an example. As an option to overcome operating problems observed in
continuous systems, studies have been developed on the intermittent operation of UASB reactors
for treating dairy wastewater [1, 2], proteinaceous wastewater [3], slaughterhouse wastewater
[4], domestic wastewater [5] or olive mill wastewater [6].
feeding of fatty substrates on anaerobic systems have also
been confirmed by Palatsi et al. [7]. The intermittent operation
is composed of a succession of feed and feedless periods
where a feed period followed by a feedless period forms an
intermittent cycle. During the feed periods high substrate
removal rates are achieved which are not accompanied by the
expected methane production, leading to heavy non degraded
substrate accumulation onto the biological biomass that
constitutes the UASB sludge bed. The feedless period is
crucial for the degradation of the complex substrates (fats and
long chain fatty acids LCFA) that accumulate in the biomass
during the feed period, mainly by adsorption mechanisms [1,
2, 6]. During the feed periods the intermittent UASB reactor
works as a continuous reactor and during the feedless periods
it works as a batch reactor. It has been shown that effluent
recirculation during the feedless periods of intermittent
operation are very beneficial for reactor performance
especially in terms of methane production [8].
Insights on what happens during the feedless periods are
important to understand the functioning of the intermittent
UASB systems. Literature presents several results for the
degradation of dairy wastewater in batch reactors. Yet if
effluent recirculation is applied during the feedless periods the
hydrodynamic conditions may significantly alter the COD
(Chemical Oxygen Demand) removal mechanisms and
subsequent biological degradation observed in feedless periods
of intermittent systems. The importance of the hydrodynamic
conditions is related to mass transfer mechanisms [9] and to
adsorption phenomena responsible for the major percentage of
initial COD removal from complex wastewaters in anaerobic
systems [10, 11]. In this framework this investigation aimed at
evaluating the influence of effluent recirculation on the kinetics
of dairy wastewater degradation in the feedless periods of
intermittent UASB operation.
II. MATERIALS AND METHODS
In this work a lab-scale UASB reactor was used with a
working volume of 6 litres topped with a gas-solid-liquid
separator and operated at mesophilic temperature (35 ± 1 °C) by
means of a water jacket connected to a thermostatic bath. The
UASB reactor is shown in Fig. 1. At the beginning of each test
the reactor was seeded with approximately 4 litres of
flocculent biomass adapted to dairy wastewater from an
industrial wastewater treatment plant.

Fig. 1. Laboratory-scale UASB reactor used in this work.

The feed was prepared by diluting semi-skimmed milk and supplementing it with
nutrients and alkalinity [1]. Table I presents the composition of the milk
used for preparing the feed.

TABLE I. CHARACTERIZATION OF THE MILK USED FOR FEED
Parameter (g/L) Value
Proteins 32
Carbohydrates 48
Total lipids 16
Saturated lipids 10
Calcium 1.2
COD 147.47
COD = Chemical Oxygen Demand.

The reactor was operated in a discontinuous mode where
the feed was pumped into the reactor and then the produced
effluent was recirculated, without any extra feeding, at a
volumetric flow of 0.5 L/h. Table II presents the experimental
conditions for the five tests performed in this work.
TABLE II. EXPERIMENTAL SET-UP
Test | Organic load (g COD/L) | Biomass (g VSS/L)
1    | 0.333                  | 4.763
2    | 0.666                  | 4.763
3    | 4.460                  | 4.456
4    | 8.910                  | 5.127
5    | 17.270                 | 4.317

After recirculation started, the monitoring plan was implemented, consisting
of daily analysis of total COD, paper filtered COD (CODpf), membrane filtered
COD (CODmf), total and volatile suspended solids (TSS and VSS), pH and
volatile fatty acids (VFA). Paper filtered COD samples (CODpf) were prepared
using paper filters with a pore diameter of 1.2 μm (Whatman Inc. Reeve Angel,
grade 403, 4.7 cm). Membrane filtered COD samples (CODmf) were prepared with
membrane filters with a pore diameter of 0.45 μm (Schleicher & Schuell
Purabind, 4.7 cm). Membrane filtered COD represents the soluble COD fraction,
whilst the paper filtered COD represents the soluble and colloidal COD
fraction [4].
The produced biogas was measured by a water displacement system. The methane
content of the biogas was monitored using a Shimadzu GC 9A gas chromatograph
equipped with a Supelco Molecular Sieve 5 Å column and a Thermal Conductivity
Detector (T = 100 °C). The injection temperature was 45 °C and helium was used
as carrier gas (P = 4.4 kg/cm²). Volatile fatty acid determination was carried
out in a Chrompack CP 9001 gas chromatograph equipped with a Chrompack CP Sil
5 CB column and a Flame Ionization Detector (T = 300 °C). The injection
temperature was 270 °C and helium was used as carrier gas at a volumetric flow
of 8 ml/min.

III. RESULTS AND DISCUSSION
The profiles of CODpf, cumulative methane production, removal of CODpf and
methanization of removed CODpf were obtained for all the tests performed.
Figs. 2 to 5 present results for the two highest loads tested (8.91 and
17.27 g COD/L).

[Figure: CODsol and CODcoll+sol (g/L, left axis) and cumulative CH4 (g COD, right axis) versus time (days).]
Fig. 2. COD and CH4 profile for test 4 (8.91 g COD/L).
[Figure: CODsol and CODcoll+sol (g/L, left axis) and cumulative CH4 (g COD, right axis) versus time (days).]

Fig. 3. COD and CH4 profile for test 5 (17.27 g COD/L).
For all the organic loads tested, a significant decrease in CODpf was observed
during the first day of the tests, with 75%-90% CODpf removal for all the
tests except the highest load (only 43% CODpf removal in the first day). From
the second day onwards the CODpf values are approximately constant in time,
except for the test with the highest load (17.27 g COD/L), in which an
important decrease of CODpf was still observed in the second day (Fig. 3). The
values of volumetric methane production show a tendency towards stabilization
only from the third day onwards for all the tests except the lowest load
(0.33 g COD/L, data not shown). With this lowest load the tendency towards
diminishing methane production was observed only from the fifth day of the
test (data not shown).

Fig. 4. COD removal and methanization efficiencies for test 4 (8.91 g
COD/L).
Fig. 5. COD removal and methanization efficiencies for test 5 (17.27 g
COD/L).
The evolution of the CODpf removal as a function of the applied load (Fig. 6)
shows that the COD removals in the first and second days are very similar,
with the exception of the highest load. The additional COD removal in the
second day is very small in comparison with what was observed in the first
day. For the highest load, comparing the percentage removals attained in the
first and second days shows that not all the substrate is removed in the first
day.
[Figure: COD removal (%) in day 1 and in days 1+2 versus applied load (g COD/L).]
Fig. 6. Evolution of COD removal with applied load.
A linear relation was found between the applied load and the volumetric
methane production obtained in the first day (Fig. 7), except for the highest
load, where a decrease in the CH4/load ratio was observed. This discrepancy is
due to the fact that not all the organic matter is available for the
microorganisms to degrade, since part of it is adsorbed onto the biomass
particles, causing a lower methane production than would be expected from the
observed COD removal. These results confirm the rapid adsorption of organic
substrate onto the biological sludge reported by Hwu [10] and by Nadais et
al. [11].
Yet when the total volume of CH4 produced in the first and second days is
correlated with the applied loads, a linear correlation is observed. Fig. 8
presents the methanization percentages of the removed CODpf attained in the
first and in the second days of the tests as functions of the applied loads.
The differences observed in the methanization of the removed substrate between
the first and the second days indicate that a part of the COD removed during
the first day is methanized only in the second day. This result is in
accordance with the proposed duration of two days for the feedless period of
intermittent operation of UASB reactors treating dairy wastewater [1].
[Figure: cumulative CH4 (L) in day 1 and in days 1+2 versus applied load (g COD/L).]
Fig. 7. Correlation between applied load and methane production.
[Figure: methanization (%) of removed CODpf in day 1 and in days 1+2 versus applied load (g COD/L).]
Fig. 8. Evolution of methane production with applied load.
The methane content in the produced biogas varied from 50% to 90% for all the
tests, being higher towards the end of each test. The soluble COD fraction
(CODmf) is the fraction available for microbial metabolism and is around 20%
to 40% of the CODpf (colloidal + soluble COD) at the beginning of the tests
(see Figs. 2 and 3). In all the tests the average pH values varied between 7
and 8, the lowest value reached being 6.5. The VFA concentrations determined
in all the tests never surpassed 2 mg HAc/L, always remaining below the
toxicity threshold of 3 g HAc/L suggested by Malina and Pohland [12]. As an
example, Fig. 9 presents the VFA profile for test 4 (load of 8.91 g COD/L),
where it can be seen that a significant percentage of the produced VFA is
butyric acid, an intermediate substrate related to the degradation of fatty
matter and LCFA in anaerobic systems [13].

[Figure: individual VFA concentrations (mg COD/L) versus time (days); species: n-caproic, n-valeric, i-valeric, n-butyric, i-butyric, propionic and acetic acids.]
Fig. 9. VFA profile for test 4.
The specific CODcolloidal+soluble removal rates (qCODpf) and the specific
methane production rates (qCH4) were obtained by the initial velocity method
(t = 1 day) and were adjusted to the Monod model (1) and to the uncompetitive
inhibition (Haldane) model (2), both described in [14]. The least squares
method was applied using commercial software (Scientist version 2.0, 1994),
with an integration method based on the Powell algorithm and an initial value
search by the double simplex method. The quality of the fitting was assessed
by the coefficient of determination r²; see Fig. 10 and Table III.


q = q_max S / (K_s + S)                    (1)

q = q_max / (1 + K_s/S + S/K_i)            (2)
where q is the specific substrate removal rate (g COD/g VSS.d); q_max is the
maximum specific substrate removal rate (g COD/g VSS.d); K_s is the
half-velocity constant (g/L); K_i is the Haldane inhibition constant (g/L);
and S is the substrate concentration (g/L).
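As an illustration only, the least-squares fitting of Eqs. (1) and (2) can be
reproduced with standard open-source tools instead of the Scientist/Powell
set-up used in this work; the sketch below uses SciPy's curve_fit on
hypothetical (load, rate) pairs, the specific rates being assumed to have
already been obtained by the initial velocity method (t = 1 day).

```python
# Illustrative sketch, not the authors' procedure: least-squares fit of the
# Monod (Eq. 1) and Haldane (Eq. 2) models. The rate values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def monod(S, q_max, Ks):
    # Eq. (1): q = q_max * S / (Ks + S)
    return q_max * S / (Ks + S)

def haldane(S, q_max, Ks, Ki):
    # Eq. (2): q = q_max / (1 + Ks/S + S/Ki)
    return q_max / (1.0 + Ks / S + S / Ki)

S = np.array([0.333, 0.666, 4.46, 8.91, 17.27])  # loads of Table II (g COD/L)
q = np.array([0.10, 0.19, 0.90, 1.35, 1.60])     # assumed example rates (g COD/g VSS.d)

def r2(y, y_fit):
    # Coefficient of determination used to compare the two fits.
    return 1.0 - np.sum((y - y_fit) ** 2) / np.sum((y - np.mean(y)) ** 2)

pm, _ = curve_fit(monod, S, q, p0=[2.0, 5.0], bounds=(0.0, np.inf))
ph, _ = curve_fit(haldane, S, q, p0=[2.0, 5.0, 50.0], bounds=(0.0, np.inf))

print(f"Monod:   q_max={pm[0]:.3f}, Ks={pm[1]:.3f}, r2={r2(q, monod(S, *pm)):.4f}")
print(f"Haldane: q_max={ph[0]:.3f}, Ks={ph[1]:.3f}, Ki={ph[2]:.3f}, "
      f"r2={r2(q, haldane(S, *ph)):.4f}")
```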
According to the values of r², the model that provided a better fit of the
experimental data was the Monod model. The specific rate of methane production
(qCH4) is approximately 45% of the specific CODpf removal rate (see Fig. 11),
which is
justified by the fact that these rates were calculated using the initial
velocity method (t = 1 day) and there is a lag between initial COD removal and
methane production.

[Figure: experimental qCH4 and qCODpf (g COD/g VSS.d) versus load (g COD/L) with the fitted Monod curves.]
Fig. 10. Fitting of experimental data to the Monod model.
[Figure: qCH4 (g/g.d) versus qCODcoll+sol (g/g.d); linear fit y = 0.4518x + 0.0053, R² = 0.9899.]
Fig. 11. Relation between the specific COD removal rate and the specific methane production rate.
TABLE III. KINETIC PARAMETERS
Parameter | CODpf 1) | CH4 1) | CODpf 2)
q_max     | 3.044    | 1.1750 | 2.4
K_s       | 9.9323   | 6.7140 | 9.6
r²        | 0.9941   | 0.9866 | 0.9974
1) This work; 2) Reference [15].

Fig. 12 and Table III present a comparison of the specific
COD removal rates obtained in this work and those obtained
in batch reactors with no recirculation with a biomass content
of 5 g VSS/L [15]. It can be seen that for loads above 5 g COD/L,
recirculation improves the COD removal rate by about 30% for tests performed
with the same VSS content, compared to the results with no recirculation. This
means that the
recirculation of the treated effluent and the hydrodynamic
conditions have a significant beneficial influence upon the
kinetics of the degradation process in discontinuous anaerobic
systems.
[Figure: qCOD (g/g.d) versus load (g COD/L) for this work and for the batch tests with 5 g VSS/L of [15].]
Fig. 12. Comparison of data from this work and from literature [15].
Figs. 13 and 14 present the COD balances for test 4 (organic load of 8.91 g
COD/L) and for a test performed in similar conditions but with no effluent
recirculation, with an organic load of 9 g COD/L and 5 g VSS/L [1].
Surprisingly, it can be seen that methane production is more rapid in the test
with no recirculation. Yet initial adsorption (retained COD) is more
pronounced in the test with effluent recirculation, probably due to a more
complete contact between the substrate and the biomass. Although initial
adsorption is higher with effluent recirculation, substrate degradation is
also higher for this condition, leading to a higher methanization efficiency.
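The way such a COD balance is assembled is not spelled out in the paper; the
sketch below is a rough illustration under stated assumptions (hypothetical
day values, a 6 L working volume, and the theoretical equivalence of about
0.35 L CH4 at STP per g of COD converted); the function and variable names are
illustrative only.

```python
# Rough illustrative sketch of a COD balance of the kind shown in Figs. 13-14.
# All numbers below are hypothetical examples, not measured values.

def cod_balance(cod_fed_g, cod_pf_g, vfa_as_cod_g, ch4_l_stp):
    """Return each fraction as a percentage of the COD fed to the reactor."""
    methane = ch4_l_stp / 0.35                 # g COD methanized (0.35 L CH4 per g COD)
    coll_sna = cod_pf_g - vfa_as_cod_g         # colloidal + soluble non-acidified COD
    retained = cod_fed_g - cod_pf_g - methane  # COD retained (adsorbed) in the sludge
    pct = lambda x: 100.0 * x / cod_fed_g
    return {"Retained": pct(retained), "Coll+SNA": pct(coll_sna),
            "VFA": pct(vfa_as_cod_g), "Methane": pct(methane)}

# Hypothetical day-1 figures for a 6 L reactor loaded at 8.91 g COD/L:
print(cod_balance(cod_fed_g=8.91 * 6, cod_pf_g=7.0, vfa_as_cod_g=0.5, ch4_l_stp=6.0))
```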
[Figure: COD balance (% of fed COD) versus time (days), showing the retained, colloidal + SNA, VFA and methane fractions.]
Fig. 13. COD balance for test 4 (8.91 g COD/L); coll = colloidal, SNA =
soluble not acidified.
[Figure: COD balance (% of fed COD) versus time (days), showing the retained, colloidal + SNA, VFA and methane fractions.]
Fig. 14. COD balance for a batch test without recirculation with a load of 9 g
COD/L and 5 g VSS/L (adapted from [15]); coll = colloidal, SNA = soluble
not acidified.

These results are in agreement with what was reported by Nadais et al. [8],
who observed an improvement of intermittent UASB reactor performance when
effluent recirculation was applied during the feedless periods (methanization
raised to 95% as compared to the 80-88% attained with no effluent
recirculation).
The results obtained in this work also suggest that for organic loads above
10 g COD/L the feedless periods of intermittent operation should be longer
than the feed periods, as has been suggested by Coelho et al. [2]. On the
other hand, it was observed that the monitoring of high rate reactors treating
complex fat-containing wastewater based on the COD of the produced effluent
may be misleading in what concerns the real biological degradation [16, 17].

IV. CONCLUSIONS
In laboratory experiments with UASB reactors with total effluent recirculation
treating dairy wastewater there is a rapid COD removal in the first day of the
tests, evidenced by the decrease in CODcolloidal+soluble; this removal is not
followed by biological degradation as evidenced by CH4 production. This is due
to adsorption of the organic matter onto the surface of the biological sludge,
since adsorption is faster than biological degradation. The specific CH4
production rate, calculated by the initial velocity method (t = 1 day), was
about 45% of the specific CODcolloidal+soluble removal rate. This confirms
that the monitoring of high rate reactors based on the COD of the liquid phase
may be misleading [16, 17]. The discrepancy between the initial COD removal
and the CH4 production was observed mostly in the first day of the tests,
fading on the second day, which suggests that the feedless period of
intermittent UASB operation should be longer than one day and possibly two
days.
Concerning the influence of the hydrodynamic conditions upon the behaviour of
high rate reactors treating milk wastewaters, it can be said that effluent
recirculation during feedless periods significantly improved (by up to 30%)
the specific CODcolloidal+soluble removal rate in comparison to what was
observed in classical batch reactors with no recirculation. A more complete
substrate degradation was also observed with effluent recirculation.
ACKNOWLEDGMENT
This work was performed with funding from FCT - Fundação para a Ciência e
Tecnologia, Portugal (PTDC/AMB/65025/2006).

REFERENCES

[1] H. Nadais, I. Capela, L. Arroja, A. Duarte, "Optimum cycle time for intermittent UASB reactors treating dairy wastewater", Water Research, vol. 39, no. 8, pp. 1511-1518, 2005.
[2] N. Coelho, A. Rodrigues, L. Arroja, I. Capela, "Effect of non-feeding period length on the intermittent operation of UASB reactors treating dairy effluents", Biotechnology and Bioengineering, vol. 96, no. 2, pp. 244-249, 2007.
[3] M. Viñas, C. Galain, M. Lois, "Treatment of proteic wastewater in continuous and intermittent UASB reactors", VI International Symposium on Anaerobic Digestion, São Paulo, Brazil, vol. 1, pp. 321-327, 1991.
[4] S. Sayed, "Anaerobic treatment of slaughterhouse wastewater using the UASB process", Ph.D. Thesis, Agricultural University of Wageningen, Wageningen, The Netherlands, 1987.
[5] S. Sayed and M. Fergala, "Two-stage UASB concept for treatment of domestic sewage including sludge stabilisation process", Water Science and Technology, vol. 32, no. 11, pp. 55-63, 1995.
[6] M.R. Gonçalves, J.C. Costa, I.P. Marques, M.M. Alves, "Strategies for lipids and phenolics degradation in the anaerobic treatment of olive mill wastewater", Water Research, vol. 46, no. 6, pp. 1684-1692, 2012.
[7] J. Palatsi, M. Laureni, M.V. Andrés, X. Flotats, H.B. Nielsen, I. Angelidaki, "Recovery strategies from long-chain fatty acids inhibition in anaerobic thermophilic digestion of manure", Bioresource Technology, vol. 100, no. 20, pp. 4588-4596, 2009.
[8] H. Nadais, I. Capela, L. Arroja, "Intermittent vs continuous operation of upflow anaerobic sludge bed reactors for dairy wastewater and related microbial changes", Water Science and Technology, vol. 54, no. 2, pp. 103-109, 2006.
[9] M.A. Pereira, O.C. Pires, M. Mota, M.M. Alves, "Anaerobic biodegradation of oleic and palmitic acids: evidence of mass transfer limitations caused by long chain fatty acid accumulation onto the anaerobic sludge", Biotechnology and Bioengineering, vol. 92, no. 1, pp. 15-23, 2005.
[10] C.S. Hwu, "Enhancing anaerobic treatment of wastewaters containing oleic acid", Ph.D. Thesis, Agricultural University of Wageningen, Wageningen, The Netherlands, 1997.
[11] H. Nadais, I. Capela, L. Arroja, A. Duarte, "Biosorption of milk substrates onto anaerobic flocculent and granular sludge", Biotechnology Progress, vol. 19, pp. 1053-1055, 2003.
[12] J.E. Malina and F. Pohland, Design of Anaerobic Processes for the Treatment of Industrial and Municipal Wastes, Technomic Publishing Company, Inc., Lancaster, Pennsylvania, USA, 1992.
[13] G. Silvestre, A. Rodríguez-Abalde, B. Fernández, X. Flotats, A. Bonmatí, "Biomass adaptation over anaerobic co-digestion of sewage sludge and trapped grease waste", Bioresource Technology, vol. 102, pp. 6830-6836, 2011.
[14] B. Desjardins and P. Lessard, "Modélisation du procédé de digestion anaérobie", Sciences et Techniques de l'Eau, vol. 25, no. 2, pp. 119-136, 1992.
[15] H. Nadais, I. Capela, L. Arroja, A. Duarte, "Kinetic analysis of anaerobic degradation of dairy wastewater", Proc. 9th World Congress on Anaerobic Digestion 2001, Antwerp, Belgium, 2-5 September, pp. 203-208, 2001.
[16] H. Nadais, "Dairy wastewater treatment with intermittent UASB reactors", Ph.D. Thesis, University of Aveiro, Aveiro, Portugal, 2002 (in Portuguese).
[17] J. Jeganathan, G. Nakhla, A. Bassi, "Long-term performance of high-rate anaerobic reactors for the treatment of oily wastewater", Environmental Science and Technology, vol. 40, pp. 6466-6472, 2006.

Fuels: A Survey on Sources and Technologies


Farzaneh Kazemi Qale Joogh, Milad Asgarpour
Khansary
Department of Chemical Engineering,
University of Mohaghegh Ardebili,
Ardebil, Iran.
Milad Asgarpour Khansary, Ashkan Hosseini, Ahmad
Hallaji Sani
School of Chemical Engineering,
University of Tehran,
Tehran, Iran.
Navid ShabanZadeh
Petroleum Engineering,
Politecnico di Torino,
Torino, Italy.

Corresponding Author Email: miladasgarpour@ut.ac.ir


Abstract: Industry uses cheaply available fossil feed-stocks such as
petroleum, coal and natural gas in refineries to produce a wide variety of
products to meet the increasing demand of the population, such as fuels, fine
chemicals, pharmaceuticals, detergents, plastics, pesticides and fertilizers,
lubricants, solvents, waxes, asphalt, synthetic fibers, etc. Biomass, an
alternative renewable and carbon neutral source of fuel, has attracted
significant interest in current research. It ranks third as an energy resource
in the world, after coal and oil. About 14% of the world's annual energy
consumption is provided by biomass, which is equivalent to 1250 million tons
of oil. Environmentally, the use of biomass fuels has substantial benefits. As
biomass absorbs carbon dioxide during growth and emits it during combustion,
it helps the atmospheric carbon dioxide recycling and does not contribute to
the greenhouse effect. In this paper fuels derived from biomass, natural gas
and oil are introduced and discussed.
Keywords: biomass and fossil fuels; biomass conversion; energy sources; greenhouse gas
I. INTRODUCTION
The development of petroleum, coal and natural gas based refineries, using
cheaply available fossil feed-stocks, attracted most research emphasis in the
twentieth century. These feed-stocks are used in industry to produce various
products to meet the growing demand of the population, such as fuels, fine
chemicals, pharmaceuticals, detergents, synthetic fibers, plastics, pesticides
and fertilizers, lubricants, solvents, waxes, coke, asphalt, etc. [1, 2, 33].
Rising use of fossil fuels is not sustainable, as it increases greenhouse gas
(GHG) emissions, which consequently lead to an environmental impact on global
warming [3]. In addition, the industrialized countries have agreed to reduce
their emissions of greenhouse gases, based on emission levels in 1990, by 5%
by 2008-2012 according to the Kyoto protocol. In order to achieve this goal,
it is essential to increase the efficiency of energy use and to replace fossil
fuels by biomass and other renewable energy sources. The European Commission's
white paper for a community strategy and action plan sets out a strategy to
double the share of renewable energy in gross domestic energy consumption in
the European Union by 2010 [4]. In 2003, 15% of Sweden's energy use was
provided by biofuels and this figure is expected to rise [5, 33].
So it can be inferred that there is significant interest in finding
alternative renewable sources of fuel that are potentially carbon neutral,
namely biomass [6-8], [3]. Biomass, compared to fossil fuels, is a widely
available and renewable fuel that has advantages such as low sulphur and ash
contents [9]. Biomass is the third energy resource in the world, after coal
and oil [10]. It is the most important source of energy in developing
countries and the primary source of energy for more than half the world's
population, and provides about 14% of the world's annual energy consumption,
equivalent to 1250 million tons of oil [11-15].
The use of biomass fuels has substantial benefits in view of environmental
concerns [16]. Biomass helps the atmospheric carbon dioxide recycling and does
not contribute to the greenhouse effect, as it absorbs carbon dioxide during
growth and emits it during combustion. Therefore, biomass consumes the same
amount of CO2 from the atmosphere during growth as is released during
combustion [17], which means that combustion of biomass is carbon neutral.
Biomass can be converted into liquid, solid and gaseous fuels through
physical, chemical and biological conversion processes [11], [18, 19, 33]. In
this paper, fuels derived from biomass, natural gas and oil are introduced and
discussed.

II. BIOMASS
Biomass can be found all over the world. Almost any organic material can be
regarded as a potential energy source. Low value products such as sewage or
other residues can be transformed into useful fuels. As the oil price rises,
the interest in alternative energy sources increases. Correctly managed,
biomass is a renewable and sustainable fuel that can significantly reduce net
carbon emissions when compared with fossil fuels. Thus it can be considered an
attractive clean development mechanism option for reducing greenhouse gas
(GHG) emissions [20, 33].
Biomass can be converted into liquid, solid and gaseous
fuels with the help of some physical, chemical and biological
conversion processes [18, 19]. A wide variety of biomass
feedstocks can be used to produce fuels such as; wood, short-
rotation woody crops, agricultural wastes, short-rotation
herbaceous species, wood wastes, bagasse, industrial residues,
waste paper, municipal solid waste, sawdust, bio-solids, grass,
waste from food processing, aquatic plants and algae, animal
wastes, and a host of other materials. The main current biomass
technologies [21] are: destructive carbonization of woody
biomass to charcoal, gasification of biomass to gaseous
products, pyrolysis of biomass and solid wastes to liquid, solid
and gaseous products, supercritical fluid extractions of biomass
to liquid products, liquefaction of biomass to liquid products,
hydrolysis of biomass to sugars and ethanol, anaerobic
digestion of biomass to gaseous products, biomass power for
generating electricity by direct combustion, gasification or
pyrolysis, co-firing of biomass with coal, biological conversion
of biomass and waste (biogas production, wastewater
treatment), biomass densification (briquetting, pelleting),
domestic cook stoves and heating appliances of fuel wood,
biomass energy conservation in households and industry, solar
photovoltaic and biomass based rural electrification,
conversion of biomass to a pyrolytic oil (biofuel) for vehicle
fuel, and conversion of biomass to methanol and ethanol for
internal combustion engines.
A. FT-Diesel
FT-Diesel stands for Fischer-Tropsch Diesel, a fuel obtained by building long
hydrocarbon chains (-CH2-) from synthesis gas. The fuel is similar to
petroleum diesel, but has a much lower content of noxious substances in the
emissions. The ratio between hydrogen and carbon monoxide in the synthesis gas
is crucial for a high efficiency in the following FT-synthesis [22]. Table 1
shows a short summary of important properties of FT-Diesel.
In the FT-synthesis, one mole of CO reacts with two moles of H2 in the
presence of a catalyst, often iron or cobalt, creating mainly long chains of
-CH2- units (paraffins) and one mole of H2O per carbon unit, as shown in (1).
The water gas shift reaction (WGS), which transforms carbon monoxide and water
into carbon dioxide and hydrogen as shown in (2), can be used if the amount of
hydrogen is too small.
CO + 2 H2 → -CH2- + H2O          (1)
CO + H2O → CO2 + H2              (2)
Several products, such as light hydrocarbons (C1-C4), gasoline (C5-C11),
diesel (C12-C20), and waxes (>C20), are generated in the polymerization. Two
constraints have to be met before the synthesis gas is transported to the
FT-reactor: (i) the combined sulphur and particle amount has to be less than
1 ppm and (ii) the combined presence of nitrogen, carbon dioxide, and methane
needs to be below 10%. There are two FT-process technologies: low-temperature
Fischer-Tropsch, which is used to create longer polymers such as diesel, and
high-temperature Fischer-Tropsch, used to achieve a high amount of lighter
hydrocarbons.
The outcome of the process is governed by the Anderson-Schulz-Flory
distribution of hydrocarbons, presented in (3). The propagation and
termination rates depend on pressure, temperature, and how long the polymer
chain has been in the process. The output of the desired product can be
maximized by controlling these parameters. The highest yield is achieved for
methane, but the highest yield for a liquid fuel is obtained for diesel, which
motivates the production of diesel instead of gasoline.
W_n = n (1 - α)² α^(n-1)          (3)

α = K_p / (K_p + K_t)             (4)

where W_n, n, α, K_p and K_t stand for the weight fraction of C_n, the carbon
number, the probability of chain growth, the propagation rate and the
termination rate, respectively.
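As a small worked illustration (not taken from the survey itself), Eq. (3) can
be summed over carbon-number ranges to estimate how the product slate shifts
with the chain-growth probability α; the value of α used below is a
hypothetical example.

```python
# Illustrative sketch of the Anderson-Schulz-Flory distribution, Eq. (3).
def asf_weight_fraction(n, alpha):
    # Weight fraction of the C_n product for chain-growth probability alpha.
    return n * (1.0 - alpha) ** 2 * alpha ** (n - 1)

alpha = 0.85  # hypothetical chain-growth probability

ranges = {
    "C1-C4 (light gases)": range(1, 5),
    "C5-C11 (gasoline)": range(5, 12),
    "C12-C20 (diesel)": range(12, 21),
    ">C20 (waxes)": range(21, 500),  # truncated tail, adequate for illustration
}
for name, carbons in ranges.items():
    print(f"{name}: {sum(asf_weight_fraction(n, alpha) for n in carbons):.3f}")
```

A higher α shifts the distribution towards diesel and waxes, which is why
low-temperature Fischer-Tropsch is chosen when diesel is the desired product.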
The FT-process is exothermic, so in order to obtain the desired outcome it
requires efficient cooling and temperature control. The gas which leaves the
reactor is separated into methane, ethane, ethene, and unreacted synthesis
gas. The unreacted gas can be reinserted into the reactor, but commonly it is
burned.
The similarity to petro-diesel is a great advantage because the vehicle fleet
and infrastructure already exist. FT-diesel is fully compatible with ordinary
diesel engines and there is no need for any modifications. As the product is
sulphur free and contains only low amounts of other impurities, the emissions
are cleaner than those of petro-diesel, the industrial term for diesel
produced from oil. But the production cost is higher than that of petro-diesel,
so FT-diesel requires economic assistance from governments in order to make a
commercial breakthrough [33].
TABLE I. SHORT SUMMARY OF MOST IMPORTANT PROPERTIES OF SOME FUELS
Fuel        | Alias                                       | LHV (MJ/lit) a | Density (kg/m3) a | Octane
FT-diesel   | GTL, BTL, CTL                               | 43.9           | 0.77-0.88         | 70-80
Biodiesel   | FAME, RME, SME, B100                        | 37-38 b        | 0.88x10^3 b       | 51-58 b
DME         | Methoxymethane, Wood Ether, Dimethyl Ether  | 27.6           | 0.66x10^3         | >55
Methanol    | Hydroxymethane, Methyl Alcohol, Carbinol    | 19.9           | 0.79x10^3         | 110-112
Ethanol     | Hydroxyethane, Ethyl Alcohol, Grain Alcohol | 26.8           | 0.78x10^3         | 105-109
Hydrogen    | H2                                          | 120            | 0.09              | 106
Bio-methane | Biogas, CBM, CMG, SNG                       | 45.4           | 0.72              | >102
Gasoline    | Petrol, Gas                                 | 43.2           | 0.72-0.77         | 90-95
Diesel      |                                             | 43.4           | 0.82-0.84         | 45-53
a. All units are in SI. MJ/lit = megajoule per litre, kg/m3 = kilogram per cubic metre.
b. Values refer to RME and SME, respectively.
B. Biodiesel
Today biodiesel is also known as FAME (fatty acid methyl ester) and can be
divided into RME (rape seed oil methyl ester) or SME (sunflower oil methyl
ester), as illustrated in Table 1. Biodiesel can be produced from vegetable
oils or animal waste. The most common oils used to produce biodiesel are rape
seed oil and sunflower oil.
Fatty acid methyl esters are one of the two primary platform chemicals
produced by the oleochemical industry. Using cheap base catalysts (NaOH or
KOH) and methanol at low temperatures (60 °C to 80 °C) and pressures
(1.4 atm), methyl esters can be produced from triglycerides in both batch and
continuous systems. The other major platform chemical, fatty acids, can also
be used to produce methyl esters. Fats are hydrolyzed to free fatty acids and
glycerol in one of the two following ways: (i) continuous, high pressure,
counter-current systems at 20 to 60 bar and 250 °C, with or without catalysts,
which are typically zinc oxide, lime, or magnesium oxide added to water, and
(ii) counter-current systems at atmospheric pressure with small amounts of
sulfuric/sulfonic acids in steam.
Using sulfuric acid, strong mineral acids, or a sulfonated ion exchange resin,
together with methanol, methyl esters are produced from fatty acids in
counter-current systems at 80 °C to 85 °C under mild pressures. If a feedstock
contains both triglycerides and free fatty acids, acid esterification is
performed on the entire feedstock first, followed by trans-esterification to
convert the remaining triglycerides. To obtain high yields and few processing
problems, water should be managed correctly.
For all processes, yields of glycerides and fatty acids to esters generally
exceed 97% and, with careful management of the equilibrium conditions, can
reach 99%. As temperatures and pressures increase, the trans-esterification
reaction becomes auto-catalyzed. Henkel used this process with crude soy oil
in the 1970s. Conditions may not be supercritical for methanol but may employ
high enough temperatures and pressures to auto-catalyze the reaction [23]. In
1991 the first industrial plant for producing biodiesel opened in Austria and
by 1998, 21 countries had commercial projects.
With only small modifications, such as gasket and filter changes, biodiesel
can be used in conventional diesel engines. So a diesel vehicle can, for a
small amount of money, be converted to run on biodiesel, which brings the
environmental advantages associated with biomass based fuels. Biodiesel can
also be used as a blend with petro-diesel and is then referred to as, e.g.,
B80, where the number corresponds to the percentage of biodiesel present in
the fuel.
C. DME
DME is the organic compound with the formula CH3OCH3, a colorless gas that is
a useful precursor to other organic compounds and an aerosol propellant. When
combusted, DME produces minimal NOx and CO, though HC and soot formation is
significant. DME can act as a clean fuel when burned in engines properly
optimized for DME. Table 1 shows a short summary of important properties of
dimethyl ether (DME).
DME is produced out of synthesis gas in the following
reaction chain.
3 CO + 3 H2 → CH3OCH3 + CO2          (4)
2 CO + 4 H2 → CH3OCH3 + H2O          (5)
2 CO + 4 H2 → 2 CH3OH                (6)
2 CH3OH → CH3OCH3 + H2O              (7)
CO + H2O → CO2 + H2                  (8)
The DME synthesis (Eq. 4) can be separated into methanol
synthesis (Eq. 6) followed by the dehydration reaction (Eq. 7)
and the shift reaction (Eq. 8). If the shift reaction is slow also
the second DME synthesis (Eq. 5) is active. Reaction 5 can be
divided into Eq. 6 and 7. A more detailed description of the
production of DME is found in [24, 33].
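As a quick consistency check on the scheme as written here: adding the
methanol synthesis (6) to the dehydration step (7) gives
2 CO + 4 H2 → CH3OCH3 + H2O, i.e. the direct DME synthesis (5), and adding the
shift reaction (8) to (5) gives 3 CO + 3 H2 → CH3OCH3 + CO2, i.e. reaction (4).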
DME is suitable for DICI engines due to its high cetane number.
In order to make the engine compatible with DME, mainly modifications of the
fuel injection system are necessary. Asia has the highest growth in fuel
usage, and the highest interest in DME is found there. With very low levels of
NOx and soot, DME has exceptional emission properties. That is because there
are no carbon-carbon bonds in the molecule and the oxygen content is high
(35%), which gives a fairly low burning temperature; this is the main
contributor to the low NOx formation. By using conventional methods the
emissions can be reduced further, for example EGR, which is highly suitable
for DME due to the lack of soot formation. But the relatively low energy
density of DME and the fact that DME is a gas under normal conditions result
in a bigger, pressurized fuel tank, which increases the retail price of the
vehicle and causes trouble for vehicles where space is limited.
According to [25, 33], DME liquefies under 5 bar pressure at 20 °C, which is
low compared to other gaseous fuels, and thereby a tank designed for DME is
substantially cheaper and smaller than tanks designed for methane or hydrogen.
D. Methanol
Methanol, also known as methyl alcohol, wood alcohol, wood naphtha or wood
spirits, is a chemical with the formula CH3OH (often abbreviated MeOH). It is
the simplest alcohol, and is a light, volatile, colorless, flammable liquid with a
distinctive odor very similar to, but slightly sweeter than, ethanol (drinking
alcohol). At room temperature it is a polar liquid, and is used as an
antifreeze, solvent, fuel, and as a denaturant for ethanol. It is also used
for producing biodiesel via the trans-esterification reaction. The most
important properties of methanol are shown in Table 1.
Methanol is produced [24] from synthesis gas as presented in reactions 9 to 11.
CO + 2 H2 → CH3OH                    (9)
CO + H2O → CO2 + H2                  (10)
CO2 + 3 H2 → CH3OH + H2O             (11)
First, the synthesis gas is compressed to help the subsequent reactions occur.
In the reaction chamber, pellets of copper are used as a catalyst, causing the
reactions shown above. After the reaction chamber the gas contains methanol,
water and unreacted substances. The unreacted substances are separated by
methanol letdown, a process in which the unreacted gases rise to the top and
are guided back to the reaction chamber. The distillation is done in two
stages: first, all the substances that have a boiling point lower than that of
methanol are removed by heating the mixture to a temperature just below the
boiling point of methanol. In the second stage the remaining mixture is heated
to just above the boiling point of methanol. Methanol is drawn off at the top
and the water, which has the highest boiling point, at the bottom. Byproducts
are drained in the middle.
Methanol can be used directly in a fuel cell, referred to as a DMFC (direct
methanol fuel cell), which is still in a development phase. Methanol usage is
thus likely to increase drastically if the DMFC is successful. The fuel can
also be used in common combustion engines and is a strong candidate for
replacing gasoline.
E. Ethanol
Ethanol, best known as the type of alcohol found in
alcoholic beverages, also called ethyl alcohol, pure alcohol,
grain alcohol, or drinking alcohol, is a volatile, flammable,
colorless liquid. It is a psychoactive drug and one of the oldest
recreational drugs. It is also used in thermometers, as a solvent,
and as a fuel. In common usage, it is often referred to simply as
alcohol or spirits. A short summary of important properties of
ethanol is shown in Table 1.
Here we only consider ethanol from cellulose. A more detailed treatment of the
production of ethanol from cellulose, starch or sugar can be found in [26].
There are three basic types of processes for the production of ethanol from
cellulose: acid hydrolysis, enzymatic hydrolysis, and thermochemical
processes, with variations of each. The first is the most common. Virtually
any acid can be used; however, sulfuric acid is most commonly used since it is
usually the least expensive.
(i). Acid hydrolysis
Two basic types of acid processes are used: dilute acid and
concentrated acid, each with variations.
Dilute acid processes are carried out under high temperature and pressure,
with reaction times in the range of seconds or minutes, which facilitates
continuous processing. Most dilute acid processes are limited to a sugar
recovery efficiency of around 50%. The reason for this is that at least
two reactions are part of this process. Cellulosic materials are
converted to sugar in the first reaction and the second reaction
converts the sugars to other chemicals. Unfortunately the
conditions that cause these two reactions to occur are the same.
Thus, once the cellulosic molecules are broken apart, the
reaction proceeds rapidly to break down the sugars into other
products. Not only does sugar degradation reduce sugar yield,
but the furfural, a chemical used in the plastics industry, and
other degradation products can be poisonous to the
fermentation microorganisms.
The fast rate of reaction is the biggest advantage of dilute acid processes,
as it facilitates continuous processing. On the other hand, their low sugar
yield is the biggest disadvantage. For rapid continuous processes, feed-stocks
must also be reduced in size so that the maximum particle dimension is in the
range of a few millimeters, in order to allow adequate acid penetration.
The concentrated acid process is carried out at relatively mild temperatures
and the only pressures involved are usually those created by pumping materials
from vessel to vessel. The USDA first developed one concentrated acid process,
which was further refined by Purdue University and the Tennessee Valley
Authority. The TVA developed a concentrated acid process in which corn stover
is mixed with dilute sulfuric acid (10%) and heated to 100 °C for 2 to 6 hours
in the first (or hemicellulose) hydrolysis reactor. The low temperatures and
pressures minimize the degradation of sugars. The hydrolyzed material in the
first reactor is soaked in water and drained several times in order to recover
the sugars. The solid residue from the first stage is then dewatered and
soaked in a 30% to 40% concentration of sulfuric acid for 1 to 4 hr as a
pre-cellulose hydrolysis step. This material is then dewatered and dried, with
the effect that the acid concentration in the material is increased to about
70%. After reacting in another vessel for 1 to 4 hr at 100 °C, the reactor
contents are filtered to remove solids and recover the sugar and acid. To
provide the acid for the first stage hydrolysis, the sugar/acid solution from
the second stage is recycled to the first stage. The sugars from the second
stage hydrolysis are thus recovered in the liquid from the first stage
hydrolysis [33].
The primary advantage of the concentrated process is the
high sugar recovery efficiency (over 90% of both
hemicellulose and cellulose sugars). Relatively low cost
materials such as fiberglass tanks and piping can be used due to
low temperatures and pressures. Unfortunately, it is a relatively slow process
and requires cost-effective acid recovery systems to be developed, which is
difficult to achieve. Without acid
recovery, large quantities of lime must be used to neutralize the
acid in the sugar solution. This neutralization forms large
quantities of calcium sulfate, which requires disposal and
creates additional expense.
(ii). Enzymatic hydrolysis
Another basic method of hydrolysis is enzymatic
hydrolysis. Enzymes are naturally occurring plant proteins that
cause certain chemical reactions to occur. However, for
enzymes to work, they must obtain access to the molecules to
be hydrolyzed. For enzymatic processes to be effective, some
kind of pretreatment process is thus needed to break the
crystalline structure of the lignocellulose and remove the lignin
to expose the cellulose and hemicellulose molecules. Either
physical or chemical pretreatment methods may be used
depending on the biomass material. Physical methods may use
high temperature and pressure, milling, radiation, or freezing
which require high energy consumption. The chemical method
uses a solvent to break apart and dissolve the crystalline
structure.
The enzymes currently available require several days to
achieve good results due to the tough crystalline structure.
Long process times tie up reactor vessels for long periods, so
these vessels have to either be quite large or many of them
must be used. Currently the cost of enzymes is also too high
and research is continuing to bring down the cost of enzymes.
However, if less expensive enzymes can be developed,
enzymatic processes hold several advantages: their efficiency is
quite high and their byproduct production can be controlled,
their mild process conditions do not require expensive
materials of construction, and their process energy
requirements are relatively low.
(iii). Thermochemical processes
There are two ethanol production processes that currently
employ thermochemical reactions in their processes.
In the first system, biomass materials are first thermochemically gasified and
the synthesis gas (a mixture of hydrogen and carbon oxides) is bubbled through
specially designed fermenters. A microorganism capable of converting the
synthesis gas is introduced into the fermenters under specific process
conditions, causing fermentation to ethanol.
No microorganisms are used in the second thermochemical ethanol production
process. In this process, biomass materials
are first thermo-chemically gasified and the synthesis gas
passed through a reactor containing catalysts, which cause the
gas to be converted into ethanol. Using synthesis gas-to-
ethanol processes, ethanol yields up to 50% have been
obtained. Some processes that first produce methanol and then
use catalytic shifts to produce ethanol have obtained ethanol
yields in the range of 80%. Unfortunately, like the other
processes, finding a cost-effective all-thermochemical process
has been difficult.
The first industrial use of ethanol was in 1876, when it was
used in a combustion engine that worked in an Otto cycle. The
ethanol driven automobiles grew strong until after the Second
World War when fuels from petroleum and natural gas became
available in large quantities and to a low cost. The profit in
producing fuel out of agriculture crops sank and many of the
former ethanol producing plants converted to the beverage
alcohol industry. In the 1970s economic problems for Brazil
caused a new interest in ethanol. Brazil is a big sugar producer
which makes it suitable for the ethanol industry. Nowadays
40% of the gasoline demand in Brazil is replaced with ethanol.
The ethanol production process can use any biological feedstock that contains
sugar, or material that can be converted into sugar, such as starch or
cellulose. Starch- or sugar-containing feedstocks are part of the human food
chain, which causes a high market price. Thus, using material containing
cellulose, for example paper, cardboard, wood, and other fibrous plant
material, could reduce the price of ethanol.
Ethanol can be used as a blend in gasoline to reduce emissions and increase
the octane rating of the fuel. There are vehicles that can use gasoline,
ethanol or any blend of the two. The higher octane rating of ethanol compared
to gasoline allows a higher compression ratio of the engine, which leads to a
higher efficiency. On the other hand, in such flexible-fuel vehicles gasoline
limits the compression ratio, which leads to a higher fuel consumption when
using ethanol compared to a vehicle dedicated only to ethanol usage.
F. Hydrogen
Hydrogen gas (di-hydrogen or molecular hydrogen) is highly flammable and will
burn in air at a very wide range of concentrations, between 4% and 75% by
volume [27]. Table 1 summarizes the most important properties of hydrogen [33].
Although hydrogen is the most plentiful gas in the universe, it does not exist
naturally in free form on earth. Hydrogen is almost always combined with other
elements such as carbon or oxygen because of its tendency to react with other
molecules. It can, however, be produced in multiple ways using fossil fuels,
biomass, wind power, etc. There is a substantial number of papers presenting
production, usage and storage techniques, so we only mention them briefly here.
(i). Biological water splitting
Hydrogen is produced from water by some photosynthetic microbes that use light
energy in their activities.
(ii). Photo-electrochemical water splitting
Photovoltaic technology is used for photo-electrochemical (PEC) light
harvesting systems that generate sufficient voltage to split water and are
stable in a water/electrolyte environment.
(iii). Reforming of biomass and wastes
Hydrogen is separated from the synthesis gas produced through pyrolysis or
gasification of biomass.
(iv). Solar thermal methane splitting
Splitting methane into hydrogen and carbon requires high temperatures, which
are obtained from highly concentrated sunlight.
Hydrogen has the potential to be the fuel of the future due to its clean
burning, as its only emission is water. Hydrogen, due to its low volumetric
energy content at ambient pressure and temperature, is rarely used in
vehicles, which means that hydrogen must be liquefied or compressed during
transportation. This, together with the explosiveness of hydrogen,
sets high standards for the containers and tanks, both in vehicles and during
transportation. Besides its excellent emission properties, hydrogen is
interesting because it has the highest energy content per unit weight of any
known fuel.
G. Bio-methane
Bio-methane is most often purified biogas and is therefore commonly referred
to as biogas, but in order to distinguish the two, purified biogas is
consistently referred to here as bio-methane. Table 1 illustrates some
properties of bio-methane. The primary task of the purification process is to
raise the energy content of the gas, which is done by decreasing the amount of
carbon dioxide and thereby increasing the fraction of methane. The methane
content typically exceeds 97% in the finished product. Traces of other
substances, such as hydrogen sulphide, water vapor, nitrogen, oxygen,
particles, halogenated hydrocarbons, ammonia, and organic silicon compounds,
must be removed in order to make it compatible with engines. Bio-methane is
often odorized as a safety measure to detect leaks in the systems in which it
is used. There is also a way of producing bio-methane with synthesis gas as an
intermediate product; this process follows the same path as the production of
FT-diesel described before. A detailed description of bio-methane production
can be found in [28, 33].
The usage of bio-methane is still small. The insufficiency of filling stations
is one of the reasons why gas propelled vehicles have not made a commercial
breakthrough. However, this does not affect local vehicle fleets, as they most
often stay in certain regions and are always relatively close to a filling
station. The future potential of bio-methane, regarding waste material as
feedstock, is discussed in the Finnish study [29], which states that 20% of
all traffic energy consumption in Finland could be replaced by bio-methane.
III. NATURAL GAS
Natural gas is a clean and highly useful energy source. The gas is generated
in a similar way to oil. The composition of crude natural gas varies
considerably depending on the extraction location; however, Table 2 shows a
typical composition. The high amount of hydrocarbons makes natural gas an
excellent energy source. Some of the fuels that can be generated from natural
gas are described as follows.
TABLE II. COMPOSITION OF CRUDE NATURAL GAS AND BIOGAS
Component         | Alias          | Crude Natural Gas (%) | Biogas (%)
Methane           | CH4            | 70-90                 | 50-75
Ethane            | C2H6           | 0-20                  | -
Propane           | C3H8           | 0-20                  | -
Butane            | C4H10          | 0-20                  | -
Carbon dioxide    | CO2            | 0-8                   | 25-45
Oxygen            | O2             | 0-0.2                 | < 2
Nitrogen          | N2             | 0-5                   | < 2
Hydrogen sulphide | H2S            | 0-5                   | 0-5
Rare gases        | Ar, He, Ne, Xe | Trace                 | -
Water             | H2O            | -                     | 2-7
Hydrogen          | H2             | -                     | < 1

A. Refined Natural Gas
Liquefied natural gas (LNG) and compressed natural gas (CNG) are two types of
refined natural gas. Natural gas that comes up from the depths has its source
in oil wells, gas wells or condensate wells, each of them with its own
impurities. For domestic use, its methane content should be raised to about
97%. For a detailed description of natural gas extraction, production,
transportation, storage, etc., one can refer to [30].
(i). Oil and condensate removal
Natural gas is dissolved in oil as a result of high pressure; when the
pressure is reduced, it separates automatically. Conducting the gas into a
closed tank and letting gravity separate the different hydrocarbons is the
most common way of separating oil and natural gas. In some cases more complex
technologies are necessary, for example the low-temperature separator, which
is often used when light crude oil is mixed with high pressure gas. It uses
rapid changes in pressure in order to quickly change the temperature of the
gas, causing the oil and water to condense and leaving the desired components
in gaseous form.
(ii). Water removal
Water vapor is removed from natural gas using either an adsorption or an
absorption process. Absorption can be done using glycol, which has a tendency
to absorb water molecules; it also absorbs some of the methane, which can be
recovered using a flash tank. If a large amount of natural gas is to be
refined, there are advantages to using a process called solid-desiccant
dehydration, in which the wet natural gas is fed at the top of a high tower
filled with desiccants, e.g. alumina. As the gas goes down through the tower,
the water molecules adsorb onto the desiccant. Because the desiccant
eventually becomes saturated with water, facilities that use this dehydration
technology generally have two or more of these towers.
(iii). Separation of natural gas liquids
There are basically two ways of separating natural gas
liquids (NGL) from natural gas; the absorption method and the
cryogenic expander process.
The principle of the absorption method here is similar to that of the
absorption method in the water removal process. Absorption oil is used to
withdraw the NGL from the natural gas. To recover the NGL, the oil is heated
to a temperature that lies between the boiling point of the NGL and the
boiling point of the absorption oil, causing them to separate. This method can
recover about 75% of the butane and 85% to 90% of the heavier
molecules. It is able to target a particular hydrocarbon in order
to maximize the outcome.

The cryogenic expander process, which works like a low-temperature separator,
can recover the ethane. The gas is chilled and then a rapid reduction of
pressure allows the temperature to decrease quickly, causing the ethane and
other lighter hydrocarbons to condense. This process can recover about 90-95%
of the ethane.
(iv). Sulphur and carbon dioxide removal
The most important cleaning step for natural gas is the removal of sulphur, as
it is highly corrosive and can cause serious damage to machinery and also to
humans. An amine solution is used to remove sulphur and carbon dioxide.
The main usage of natural gas is heating and electricity generation, while
only a minor part is used for running vehicles. However, it is expected that
the usage of natural gas will increase in the future. The United States has a
well-developed natural gas infrastructure; in the year 2000 natural gas
accounted for 24% of the total energy usage there, and about 3% of the natural
gas was used for fuelling vehicles.
B. Liquefied Petroleum Gas (LPG)
Liquefied petroleum gas (also called LPG, GPL, LP gas,
auto-gas, or liquid propane gas) is a flammable mixture of
hydrocarbon gases used as a fuel in heating appliances and
vehicles. LPG is bought and sold in various types, including mixes that are
primarily propane (C3H8), primarily butane (C4H10) and, most commonly, mixes
containing both propane and butane, with the composition depending on the
season: more propane in winter, more butane in summer. Propylene and butylene
are usually also present in small concentrations.
LPG is synthesized by refining petroleum or "wet" natural gas, and is usually
derived from fossil fuel sources, being manufactured during the refining of
crude oil or extracted from oil or gas streams as they exit from the ground.
As it burns cleanly, with no soot and very few sulfur emissions, it causes no
ground or water pollution hazards. It currently provides about 3% of the
energy demand. The specific calorific value of LPG is 46.1 MJ/kg, which is
comparable with 42.5 MJ/kg for fuel-oil and 43.5 MJ/kg for premium grade
petrol (gasoline). However, it has an energy density per volume unit of
26 MJ/lit, which is lower than that of either petrol or fuel-oil.
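A rough worked check (our arithmetic, using the figures quoted above and the
gasoline density of Table 1) makes the comparison concrete: 26 MJ/lit divided
by 46.1 MJ/kg corresponds to a liquid LPG density of roughly 0.56 kg per
litre, whereas petrol, at about 0.72-0.77 kg per litre and 43.5 MJ/kg, stores
roughly 31-34 MJ/lit.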
When LPG is used to fuel internal combustion engines, it is often referred to
as auto-gas or auto propane. As LPG provides less upper cylinder lubrication
than petrol or diesel, LPG fuelled engines are more prone to valve wear if not
suitably modified. So not all automobile engines are suitable for use with LPG
as a fuel. Its advantages are that it is non-toxic, non-corrosive and free of
tetra-ethyl lead or any additives, and that it has a high octane rating. It
burns more cleanly than petrol or fuel-oil and is especially free of the
particulates of the latter. Other products of natural gas are dimethyl ether,
hydrogen and methanol, which were described in Sections II.C, II.F and II.D,
respectively.
IV. OIL
Oil has been the primary energy source in automobile propulsion systems for a
long time. The top five producers of oil in 2004 were, in decreasing order
[31]: Saudi Arabia, Russia, the United States, Iran, and Mexico. Fuels which
have their origin in oil are discussed as follows.
A. Oil Based Fuels
Gasoline and diesel are two types of oil based fuels, and Table 1 summarizes
some of their properties. Oil refining can be described in a simple and very
direct way. As the components of the raw oil have a wide range of boiling
points, it is easy to separate them. The raw oil is heated in a furnace before
it is transported to a chamber in which fractional distillation occurs. The
chamber is separated horizontally into five cells; the temperature decreases
from cell to cell, which leads to liquefaction of the desired fraction in each
cell, which can consequently be transported away easily. The cell placed at
the bottom drains paraffin waxes, followed by industrial oil, diesel,
kerosene, gasoline and petroleum gas, which is drained at the top of the
chamber. In order to optimize the outcome, several after-treatments are used,
but these are not described in this paper.
Gasoline and diesel have been the primary source of vehicle fuel for a long
time, and will probably remain so. They are inexpensive to produce compared to
alternative fuels, but they suffer from heavy taxes, which results in a market
price that alternative fuels can match. Vehicles using diesel generally
generate less greenhouse gas than vehicles fuelled with gasoline, but due to a
higher rate of particulates and nitrogen oxides the emissions are unhealthier
for humans.
B. Jet Fuel
Jet fuel consists of a large number of different
hydrocarbons. The variation of their sizes (molecular weights
or carbon numbers) is limited by the requirements for the
product, for example, the freezing point or smoke point. Piston-engine powered
aircraft use a fuel (usually a high-octane gasoline known as avgas) which has
a low flash point to improve its ignition characteristics. Turbine engines can
operate with a wide range of fuels, and typically fuels with
higher flash points are used in jet-aircraft engines, which are
less flammable and therefore safer to transport and handle.
Jet fuel is a type of aviation fuel designed for use in aircraft
powered by gas-turbine engines. It is clear to straw-colored in
appearance. Jet A and Jet A-1 are the most commonly used
fuels for commercial aviation, which are produced to a
standardized international specification. The only other jet fuel commonly
used in civilian turbine-engine powered aviation is Jet B, which is used for
its enhanced cold-weather performance.
C. Biogas
Biogas is generated by anaerobic digestion of organic matter, which occurs naturally in swamps, rubbish dumps, septic tanks, and the Arctic tundra. The overall reaction for cellulose (C6H10O5) as feedstock can be summarized as reaction (12) and divided into reactions (13) to (16), according to [32, 33]. The digestion process is highly complex
and is not described in this paper. A short summary of
important properties of biogas is shown in Table 2.
C6H10O5 + H2O → 3CO2 + 3CH4    (12)
C6H10O5 + H2O → C6H12O6    (13)
C6H12O6 → 2C2H5OH + 2CO2    (14)
2C2H5OH + CO2 → 2CH3COOH + CH4    (15)
2CH3COOH → 2CO2 + 2CH4    (16)
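As an illustration of the stoichiometry in reaction (12), the short Python sketch below estimates the theoretical methane yield per kilogram of dry cellulose; the molar masses and the ideal-gas molar volume are standard values assumed here, not figures taken from the text.

# Minimal sketch (assumed molar masses and ideal-gas molar volume): theoretical
# methane yield from cellulose according to the overall reaction (12),
# C6H10O5 + H2O -> 3 CO2 + 3 CH4.
M_CELLULOSE = 162.14   # g/mol, C6H10O5 monomer unit
M_CH4 = 16.04          # g/mol
V_MOLAR_STP = 22.414   # L/mol at 0 degC, 1 atm

mol_cellulose = 1000.0 / M_CELLULOSE          # mol in 1 kg of feedstock
mol_ch4 = 3.0 * mol_cellulose                 # 3 mol CH4 per mol cellulose, reaction (12)
mass_ch4 = mol_ch4 * M_CH4 / 1000.0           # kg CH4
volume_ch4 = mol_ch4 * V_MOLAR_STP / 1000.0   # m3 CH4 at STP
print(f"Theoretical yield: {mass_ch4:.2f} kg CH4 (~{volume_ch4:.2f} m3 at STP) per kg cellulose")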
Biogas can be used in dedicated facilities to produce heat and electricity for nearby communities and cities. With further treatment and refining it can be converted into useful fuels such as bio-methane, methanol, etc. Refining facilities are often located next to biogas production plants, because biogas has a low energy content and feedstock transportation is difficult.
Liquefied petroleum gas (LPG), mentioned in section 3.2, is another product of oil.
V. CONCLUSION
Humans have been consuming fossil fuels since the industrial revolution because they were readily available and cheap to produce. However, rising use of fossil fuels is unsustainable because of the associated increase in greenhouse gas (GHG) emissions and the environmental impact of these emissions on global warming. In addition, under the Kyoto protocol the industrialized countries have agreed to decrease their greenhouse gas emissions by 5% over the period 2008-2012, relative to 1990 emission levels. To achieve this it will be necessary to increase the efficiency of energy use and to replace fossil fuels with biomass and other renewable energy sources. There is therefore significant interest in identifying alternative renewable fuel sources that are potentially carbon neutral.
Currently the trend is towards renewable fuels such as biofuels. Biomass comes from renewable organic materials such as agricultural residues and crops, animal and even human waste, and by-products of the wood-processing and food industries. Biomass is available almost everywhere, making its supply plentiful. It helps to mitigate climate change, increase energy efficiency, and decrease greenhouse gas emissions. The use of biomass fuels therefore has substantial environmental benefits. Biomass absorbs carbon dioxide during growth and emits it during combustion; it thus recycles atmospheric carbon dioxide and does not add to the greenhouse effect. A wide variety of products can be produced from biomass feedstocks, and they can fuel current vehicles with some modification.
REFERENCES

[1] M. Bender, Potential conservation of biomass in the production of
synthetic organics, Res. Conserv. Recyc, vol. 30, pp. 49-58, 2000.
[2] M.F. Demirbas, Current Technologies for Biomass Conversion into
Chemicals and Fuels, Energy Sources, A: Recov. Utili. Environ. Eff,
vol. 28, pp. 11811188, 2007.
[3] J. Hill, E. Nelson, D. Tilman, S. Polasky, D. Tiffany, Environmental,
Economic and Energetic Costs and Benefits of Biodiesel and Ethanol
Biofuels, in Proc. Natl. Acad. Sci., New York, pp. 1120611210, 2006.
[4] European Commission, Energy for the future: Renewable sources of
energy, White Paper for a Community Strategy and Action Plan,
commission.com, (1997) 599.
[5] M. Perzon, Emissions of Organic Compounds from the Combustion of
Oats A Comparison with Softwood Pellets, Biomass and Bioenergy.
vol. 34, pp. 828-837, 2010.
[6] A. Demirbas, Biofuels Securing The Planets Future Energy Needs,
Energy Conv. Manag, vol. 50, pp. 22392249, 2009.
[7] J.K. Pittman, A.P. Dean, O. Osundeko, The Potential Of Sustainable
Algal Biofuel Production Using Wastewater Resources, Bioresource
Tech,, vol. 102, pp. 17-25, 2011.
[8] B.E. Rittmann, Opportunities For Renewable Bioenergy Using
Microorganisms, Biotech. Bioeng, vol. 100, pp. 203212, 2008.
[9] T. Klason, X.S. Bai, Computational Study of the Combustion Process
and NO Formation in A Small-Scale Wood Pellet Furnace, Fuel, vol.
86, pp. 1465-1474, 2007.
[10] D.W. Bapat, S.V. Kulkarni, V.P. Bhandarkar, Design and Operating
Experience on Fluidized Bed Boiler Burning Biomass Fuels with High
Alkali Ash, in Proc. 14th. Int. Conf. Fluidized Bed Combustion,
Vancouver, New York, NY: ASME, pp. 165174, 1997.
[11] Do. Hall, F. Rosillo-Calle, P. de Groot, Biomass Energy: Lessons From
Case Studies In Developing Countries, Energy Policy, pp. 6273, 1992.
[12] F. McGowan, Controlling The Greenhouse Effect The Role Of
Renewables, Energy Policy, pp. 110118, 1991.
[13] P. Purohit, Ak. Tripathi, TC. Kandpal, Energetics of Coal Substitution
by Briquettes of Agricultural Residues, Energy, vol. 31, pp. 1321
1331, 2006.
[14] J. Werther, M. Saenger, EU. Hartge, T. Ogada, Z. Siagi, Combustion of
Agricultural Residues, Prog. Energy. Comb. Sci., vol. 26, pp. 127,
2000.
[15] XY. Zeng, YT. Ma, LR. Ma, Utilization of Straw in Biomass Energy in
China, Renew. Sustain. Energy Rev, vol. 11, pp. 976987, 2007.
[16] D.O Hall, J.I. Scrase, Will biomass be the environmentally friendly fuel
of the future?, Biomass and Bioenergy, vol. 15, pp. 357367, 1998.
[17] A.Demirbas, Combustion characteristics of different biomass fuels,
Prog. Energy and Comb. Sci, vol. 30, pp. 219-230, 2004.
[18] I. Campbell, Biomass catalysts and liquid fuels, Technomic Publishing
Company, 1983.
[19] NH. Ravindranath, DO. Hall, Biomass, energy, and environment a
developing country perspective from India, USA, Oxford University
Press, 1995.
[20] L. JF, H. RQ, Sustainable biomass production for energy in China,
Biomass Bioenergy, vol. 25, pp. 483499, 2003.
[21] A. Demirbas, Recent advances in biomass conversion technologies,
Energy Edu. Sci. Tech, vol. 6, pp. 1941, 2000.
[22] Framtidsbränslen AB, Sundsvall Demonstration Plant - En förstudie av en pilotanläggning för tillverkning av Fischer-Tropsch diesel från biomassa, www.framtidsbranslen.se, 2005.
[23] R. Wallac, E. Petersen, L. Moens, K.Sh. Tyson, J. Bozell, Technical
report, National Renewable Energy Laboratory, 2004.
[24] Svenskt Gastekniskt Center, Naturgas som råvara för kemikalier och bränslen, www.sgc.se/dokument/sgc123.pdf, 2002.
[25] Bio-DME Consortium: The bio-DME project, phase 1, report to stem,
www.atrax.se/pdf/Final_report_DME.pdf, 2002.
[26] P.C. Badger, Trends in new crops and new uses, Alexandria, VA,
ASHS Press, 2002.
[27] M.N. Carcassi, F. Fineschi, Deflagrations of H2air and CH4air lean
mixtures in a vented multi-compartment environment, Energy, vol. 30,
pp. 14391451, 2005.
[28] WestStart-Calstart, California biogas industry assessment white paper,
2005.
[29] P. Pöyhönen, A. Lampinen, K. Hänninen, Traffic fuel potential of waste-based biogas in industrial countries - the case of Finland, Department of Biological and Environmental Sciences, University of Jyväskylä, 2004.
[30] Natural Gas Supply Association, 2006.
[31] Energy Information Administration, 2006.
[32] B. Sørensen, Renewable energy, 3rd ed. Elsevier Academic Press, 2004.
[33] A. Gunnarsson, Analysis of Alternative Fuels in Automotive Powertrains, Department of Electrical Engineering, Linköpings universitet, 2009.



Optimal Design of a Wind/PV/Diesel/Battery Power
System for a Telecommunication Application in
Remote Algeria
H. Zeraia (1), C. Larbes (2), A. Malek (3)
(1),(3) Centre de Développement des Energies Renouvelables, B.P. 62, 16340 Bouzareah, Algiers, Algeria
(1) h.benyahia@cder.dz, (3) a.malek@cder.dz
(2) Ecole Nationale Polytechnique, El Harrach, Algiers, Algeria
(2) larbes_cherif@yahoo.fr

Abstract: Algeria has embarked on an ambitious renewable energy program in order to increase its total energy production. It has a large number of remote small villages and islands that lack electricity, and the probability of connecting them to the high-voltage grid lines in the near future is very low due to financial and technical constraints.
This paper proposes the use of a PV, wind and diesel-generator hybrid system with a storage element in order to determine the optimal configuration of renewable energy in Algeria. The principal interests of this system are independent production and the supply of electric energy to isolated localities.
Having energy and economic models and simulation tools at hand, we carried out an optimization study based on mixed production. In this approach, the energy resources of the sites where the telecommunications systems are installed and their consumption are assumed to be known. The problem is then the optimization of the electric generators that use these resources, so as to obtain an optimal system for powering telecommunications equipment at a rural site in Algeria.
HOMER (Hybrid Optimization Model for Electric Renewables) simulation software was used to determine the technical feasibility of the system and to perform the economic analysis of the system.

Keywords: renewable energy, HOMER, photovoltaic energy, diesel, wind energy, optimization.

I. INTRODUCTION
Part of the Algerian population has no access to grid-based electricity services, and the majority of these people live in underdeveloped rural areas; providing them with reliable electricity is a prerequisite for sustainable human development.
Communication technology is one of the fastest growing
technologies during these days. The telecommunication
companies are continuously challenged to provide
uninterrupted services to rural and remote areas where there is
no reliable electrical power supply available. Therefore,
renewable energy systems are becoming increasingly popular
in those industries to provide uninterruptible power to remote
areas. Currently in most cases the telecommunication stations
use diesel generators connected with backup batteries to
provide power. Increasing energy demand and the negative impacts of fossil fuels on the environment have emphasized the need to harness energy from renewable sources.
In this paper, a stand-alone hybrid alternative energy system
is proposed for remote Algeria. In this case wind and PV are
considered as the main power sources for the system and
diesel generator and a battery bank are also integrated as a
backup power supply. The diesel generator is treated as a
mechanism to provide long-term power storage and the
battery is used as a backup for short-term power storage.
II. DATA INPUT
A. Electrical Load
The record indicates that the approximate power consumption of the telecommunication system is 78.6 kWh/day with a 7.8 kW peak, and the system runs on a 48 V DC bus. Telecommunication companies are committed to providing uninterruptible service, and therefore these sites require continuous power throughout the year [4], [9]. Hence the hourly load is almost constant, as the power consumption remains the same. The telecommunication load profile, produced by HOMER [8], is shown in Fig. 1.
Fig.1 Diurnal variation of load during different months of the year

B. Geographical location of implementation site
Algeria's geographic location has several advantages for extensive use of most renewable energy sources (solar and wind). Algeria is situated in the centre of North Africa, is divided into 48 provinces and lies, in the north, on the coast of the Mediterranean Sea, with a coastline of 2400 km. In the west Algeria shares its borders with Morocco, Mauritania and Western Sahara, in the south west with Mali, in the east with Tunisia and Libya, and in the south east with Niger. The climate is transitional between maritime (north) and semi-arid to arid (middle and south). The Sahara (south of Algeria) covers a total area of 2,048,297 km², approximately 86% of the total area of the whole country. The geographic location of Algeria signifies that it is in a key position to play an important strategic role in the implementation of telecommunications systems powered by renewable energy. In our study we had to select a station in the Algerian Sahara where both wind speed and solar irradiation are high; we selected Adrar [6], [7]. Geographical data for the selected site are shown in Table 1.
TABLE I
Geographical data for the selected station

Site     Longitude     Latitude      Altitude
Adrar    0°17'00"W     27°52'00"N    279 m
III. RENEWABLE ENERGY RESOURCES
On account of its geographical location, Algeria holds one of the highest solar potentials in the world, estimated at 13.9 TWh per year. The country receives annual sunshine exposure equivalent to 2,500 kWh/m². The daily solar energy potential varies from 4.66 kWh/m² in the north to 7.26 kWh/m² in the south. Algeria also has a promising wind energy potential of about 35 TWh/year. Our study suggests that the Adrar telecommunication site has sufficient wind and solar energy for generating the power required for this application. Collecting weather data is one of the main tasks of this pre-feasibility study for a renewable energy system.
A. Solar energy resource
The average solar irradiation is 5.88 kWh/m²/day, and a sensitivity analysis is done with three different values. The clearness index and the average daily radiation for each month of the year are shown in Table 2, while Fig. 2 shows the solar radiation over the year as produced by HOMER.

Fig.2 Monthly solar radiation
TABLE II
Clearness index and average daily radiation for a year

Month       Clearness Index    Daily Radiation (kWh/m²/d)
January 0.599 3.740
February 0.655 4.870
March 0.685 6.140
April 0.693 7.140
May 0.683 7.580
June 0.669 7.590
July 0.699 7.820
August 0.688 7.260
September 0.673 6.320
October 0.607 4.770
November 0.606 3.936
December 0.607 3.558
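As a rough consistency check (not part of the original study), the day-weighted annual mean of the daily radiation values in Table 2 can be computed with the Python sketch below; it comes out close to the 5.88 kWh/m²/day average quoted above, the small difference being attributable to rounding.

# Minimal sketch: day-weighted annual average of the daily radiation values in
# Table II, as a cross-check of the 5.88 kWh/m2/day average used in the simulation.
daily_radiation = {              # kWh/m2/day, copied from Table II
    "Jan": 3.740, "Feb": 4.870, "Mar": 6.140, "Apr": 7.140,
    "May": 7.580, "Jun": 7.590, "Jul": 7.820, "Aug": 7.260,
    "Sep": 6.320, "Oct": 4.770, "Nov": 3.936, "Dec": 3.558,
}
days = {"Jan": 31, "Feb": 28, "Mar": 31, "Apr": 30, "May": 31, "Jun": 30,
        "Jul": 31, "Aug": 31, "Sep": 30, "Oct": 31, "Nov": 30, "Dec": 31}

annual_avg = sum(daily_radiation[m] * days[m] for m in days) / sum(days.values())
print(f"Day-weighted annual average: {annual_avg:.2f} kWh/m2/day")  # ~5.90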

B. Wind energy resource
The second renewable source implemented in the telecommunication system at the Adrar site is wind energy. Wind data for this site are given in [10] and are used for our study. Figure 3 shows the average hourly wind speed for a year. The average wind speed is estimated at 6.3 m/s, and for the sensitivity analysis three values of wind speed are chosen. The monthly average wind speed is shown in Table 3.


Fig.3 The average hourly wind speed for a year
TABLE III
Monthly Average Wind Speed for a year
Month Wind Speed (m/s)
January 6.200
February 6.400
March 6.500
April 6.500
May 6.900
June 6.100
July 6.700
August 6.200
September 6.000
October 5.800
November 5.900
December 5.800


IV. RENEWABLE ENERGY SYSTEM
The proposed hybrid renewable energy system is shown in Fig. 4 and consists of the existing power system, wind turbines, and photovoltaics. The proposed system will reduce diesel fuel consumption and the associated operation and maintenance cost. In this system the wind turbines and PV are the primary power sources, while the diesel generator is used as a backup for long-term storage and the batteries for short-term storage.




Fig.4 Proposed hybrid power system for the Adrar site
A. Solar panels
The solar panels used in this system are STP280-24 modules; each module provides 280 W at 24 V. Two PV modules are therefore connected in series to meet the 48 V bus voltage. A total PV rated capacity of 5.6 kW is used in this system, with the modules connected in 10 strings of two modules each, i.e. twenty modules in total.
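The array configuration follows directly from the module ratings; the Python sketch below simply reproduces that arithmetic using the module and bus figures quoted above.

# Minimal sketch: PV array configuration arithmetic for the numbers quoted above
# (280 W / 24 V modules, 48 V DC bus, 5.6 kW target array).
MODULE_POWER_W = 280
MODULE_VOLTAGE_V = 24
BUS_VOLTAGE_V = 48
TARGET_ARRAY_KW = 5.6

modules_per_string = BUS_VOLTAGE_V // MODULE_VOLTAGE_V          # series modules per string
total_modules = int(TARGET_ARRAY_KW * 1000 / MODULE_POWER_W)    # modules needed for 5.6 kW
strings = total_modules // modules_per_string                   # parallel strings

print(f"{modules_per_string} modules/string, {strings} strings, "
      f"{total_modules} modules, {total_modules * MODULE_POWER_W / 1000:.1f} kW")
# -> 2 modules/string, 10 strings, 20 modules, 5.6 kW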
B. Wind turbine
Two BWC-Excel-R/48 wind turbines are used in this system. Each one has a rated capacity of 7.5 kW and provides 48 V DC.




V. RESULTS AND DISCUSSION
TABLE IV
Production of the hybrid power generator

Production              kWh/year    %
PV array                  10,260   32
Wind turbines             14,735   46
Diesel                     6,921   22
Total                     31,915  100
Consumption (DC load)     28,835  100
The power system supplying the radio telecommunication load has two renewable sources and a diesel generator, and it is the optimized system. The production of each source is shown in Table 4. Photovoltaic production is 32%, with 10,260 kWh/yr. Diesel generator production is 22%, with 6,921 kWh/yr. Finally, the wind turbines are expected to supply the rest of the load, 46%, with 14,735 kWh/yr.
Figure 5 shows the monthly average electric production of the
system.

Fig.5 Monthly average electric production for renewable energy system
Table 5 shows that the power system provides a considerable share of its energy from renewable sources, with a renewable fraction of 78.3%. This result confirms the feasibility of this energy system in remote Algeria.


TABLE V
Values of the optimized power system

Quantity               Value (kWh/yr)       %
Excess electricity     0.00000973       0.000
Unmet electric load    0.00000262       0.000
Capacity shortage      237              0.823
Renewable fraction     -                78.3
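The 78.3% renewable fraction in Table 5 is consistent with the production shares in Table 4, as the following Python check (not part of the HOMER output) shows.

# Minimal sketch: renewable fraction implied by the production figures in
# Table IV, as a cross-check of the 78.3% value reported in Table V.
pv_kwh_yr = 10_260
wind_kwh_yr = 14_735
diesel_kwh_yr = 6_921

total = pv_kwh_yr + wind_kwh_yr + diesel_kwh_yr
renewable_fraction = (pv_kwh_yr + wind_kwh_yr) / total
print(f"Renewable fraction: {renewable_fraction:.1%}")   # -> 78.3%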
Both systems were simulated in the HOMER software, and the optimal results were obtained for each case. Figure 6 shows the optimization result for the non-renewable (diesel-only) energy system. As shown in the figure, the total Net Present Cost (NPC) is $823,072. The diesel generator burns 12,672 L of fuel per year and the annual generator run time is 1,536 hours. In twenty years the diesel generator will burn 253,440 L of fuel. For this site the diesel fuel can be transported only by helicopter, so the total cost of diesel fuel, at $5 per liter, is very high. The probability of fuel price increases is also high, but the total cost is calculated with a constant fuel price of $5 per liter. The total fuel cost over these 20 years will be $1,267,200 and the total cost for the whole system will be $2,090,272. Figure 7 shows the monthly average electric production of this system, which is produced entirely by the diesel generator.

Fig.6 Optimized result for the non-renewable energy system

Fig.7 Monthly average electric production for the non-renewable energy system
The renewable energy based system was also
simulated in HOMER software with four sensitivity variables.
These variables are wind speed, solar irradiation, load, and
diesel price and each of these variables has three different
values. Therefore, 81 sensitivity cases have been tested for the
system. Figure 8 shows the optimized results for the proposed
system. The total Net Present Cost (NPC) is $1,011,514. The
system will consume only 335 liters of diesel fuel per year and
annual generator run time is expected to be 145 hours. The
lifetime of this system is 25 years, but a 20-year life is used to make the comparison between the two systems. In twenty years
the diesel generator will burn 6,700L of fuel and it will cost
$33,500. The total cost of the system will be around
$1,045,014. Figure 9 shows the monthly average electric
production of the system. Photovoltaic production is 14% with
6,403kWh/yr. Diesel generator production is 2% with
1,052kWh/yr. Finally, wind turbine is expected to supply the
rest of the load which is 84% with 38,325kWh/yr.
The cost difference between the two systems is $1,045,258, which is very significant for a small system. Diesel generator run time is reduced, and the diesel generator in the proposed system will produce only 2% of the total power production. Moreover, the reduction of yearly diesel fuel consumption from 12,672 L to 335 L has a large environmental impact and will reduce the number of helicopter trips to the site. The diesel generator will also require less maintenance and operation cost and a longer period of service before replacement.
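A minimal Python sketch of the cost arithmetic used in this comparison is given below; it simply adds the 20-year fuel purchase, at the assumed constant $5/L price, to the HOMER NPC figures quoted above.

# Minimal sketch: 20-year cost comparison between the diesel-only system and the
# proposed hybrid system, reproducing the arithmetic quoted above (constant fuel
# price of $5/L; NPC values taken from the HOMER results).
FUEL_PRICE = 5.0            # $/L, assumed constant
YEARS = 20

def total_cost(npc_usd, fuel_l_per_year):
    """NPC from HOMER plus fuel purchased over the comparison period."""
    return npc_usd + fuel_l_per_year * YEARS * FUEL_PRICE

diesel_only = total_cost(823_072, 12_672)     # -> 2,090,272
hybrid      = total_cost(1_011_514, 335)      # -> 1,045,014

print(f"Diesel-only system : ${diesel_only:,.0f}")
print(f"Hybrid system      : ${hybrid:,.0f}")
print(f"Difference         : ${diesel_only - hybrid:,.0f}")   # -> 1,045,258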

VI. CONCLUSION
Renewable energy resources were selected to supply a sample telecommunication system, and the optimization of the power generators using these resources allowed us to obtain an optimal system for supplying telecommunications equipment located in rural Algeria.
These systems can also be controlled by a control circuit. Depending on the availability of resources, one of the five combinations found in the optimal system type can then be used, so that the telecommunications system is powered permanently, without any shortage, in all possible cases.

REFERENCES
[1] Zeraia Hassiba, Larbes Cherif, Malek Ali. Optimal operational strategy of
hybrid renewable energy system for rural electrification of a remote Algeria.
Energy Procedia 2013.
[2] G. C. Seeling-Hochmuth. A combined optimisation concept for the design and operation strategy of hybrid-PV energy systems. Solar Energy 1997;61(2):77-87.
[3] Shafiqur Rehman, Md. Mahbub Alam, J.P. Meyer, Luai M. Al-Hadhrami.
Feasibility study of a wind-pv-diesel hybrid power system
[4] Mohamed El Badawe, Tariq Iqbal and George K. Mann. Optimization
AND a comparison between renewable and non-renewable energy systems
for a telecommunication site. 25th IEEE Canadian Conference on Electrical
and Computer Engineering (CCECE) 978-1-4673-1433-6/12/$31.00 2012
IEEE.
[5] Y. Himri, A. Boudghene Stambouli, B. Draoui, S. Himri. Techno-economical study of hybrid power system for a remote village in Algeria. Energy 2008; 33: 1128-1136.

[6] http://www.irena.org/GlobalAtlas/
[7] www.mem-algeria.org
[8] Homer simulation Tools www.nrel.gov/hom
[9] Spécification Technique ST/PAB/STC/102. Station d'énergie solaire photovoltaïque pour télécommunications. Centre National des Etudes des Télécommunications (CENT), janvier 1981.
[10] F. Chellali, A. Khellaf, A. Belouchrani, A. Recioui. A contribution in the actualization of wind map of Algeria. Renewable and Sustainable Energy Reviews 15 (2011) 993-1002.

Solar Hydrogen from Glycerol-Water Mixture

Chong Fai Kait and Ela Nurlaela
Universiti Teknologi PETRONAS
Bandar Seri Iskandar, 31750 Tronoh
Perak, Malaysia.
chongfaikait@petronas.com.my
Binay K. Dutta
West Bengal Pollution Control Board
Paribesh Bhawan, Salt Lake
Kolkata 700 098, India.


Abstract: The photocatalytic activity of titania-supported bimetallic Cu-Ni photocatalysts was assessed for hydrogen production from water and from a glycerol-water mixture under visible light illumination. Addition of 2.0 mL glycerol to 8.0 mL water enhanced the solar hydrogen production from 6.1 mL to 9.5 mL. If no metal was incorporated onto TiO2, the hydrogen production was minimal, 2.0 mL after 2 hr of reaction. The band gap for bimetallic Cu-Ni/TiO2 was 2.78 eV compared to 3.16 eV for TiO2. Photooxidation of glycerol produced glyceraldehyde, glycolic acid and oxalic acid.
Keywords: Cu-Ni bimetallic; solar hydrogen; visible light; glycerol
I. INTRODUCTION
Hydrogen can be produced from various processes, and the trend is moving from fossil fuel feedstocks to renewable resources [1]. Solar hydrogen production from water offers an attractive and sustainable option, since water is abundant and the energy source, the sun, is free. Titania (TiO2) is the most popular photocatalyst used for solar hydrogen production [2]. However, due to its large band gap, its efficiency under visible light is low. The low efficiency of photohydrogen production is mainly due to the inability of TiO2 to utilize visible light. The band gap of TiO2 is about 3.2 eV, meaning that it can only be activated by photons with wavelengths of about 400 nm or shorter (UV region). Since solar radiation contains only a 3-4% UV component, a visible-light-active photocatalyst is a crucial requirement for economically feasible hydrogen production.
The low efficiency of photohydrogen production is also due to the recombination of photogenerated electrons and holes. Research on reducing the band gap has been vigorously conducted on water systems by the incorporation of nonmetal [3] and metal components [4-5] and also by the addition of hole scavengers [6-7]. Some metals such as Cu, Ag, Au, Ni, Rh, Pt and Zn are capable of increasing the catalytic activity of the photocatalyst, enhancing hydrogen production [8-9]. Supported bimetallic catalysts have been widely used in industrial fields [10]. Bimetallic Cu-Ni has been reported to enhance carbon dioxide hydrogenation [11] and the photocatalytic reduction of nitrates [12]. Hole scavengers are chemical species added to a photocatalytic system in order to stabilize the photogenerated holes and prevent the electron-hole recombination process. The hole scavengers undergo oxidation. It would be beneficial if value-added products could be formed from the photoreaction.
Glycerol is a by-product of the transesterification of fats and oils to produce biodiesel and also of the hydrolysis of palm oil (or other oils and fats) in soap and fatty-acid manufacturing. This has caused an oversupply of glycerol. The photo-oxidation of glycerol has been conducted using titania, TiO2 [13-14], under UV irradiation to produce useful chemicals such as dihydroxyacetone, glycolaldehyde, glyceraldehyde, formic acid and CO2. Besides the production of glycerol derivatives, hydrogen has also been produced using TiO2 doped with Pt [15], Au and Pd [16], Cu [17-19] and Ni [19]. However, those reactions were conducted under UV irradiation.
In this study, bimetallic 10wt% Cu-Ni/TiO2 and monometallic 10wt% Cu/TiO2 and 10wt% Ni/TiO2 photocatalysts were prepared, characterized and investigated for solar hydrogen production under visible light. It is expected that the activity region of the photocatalyst will be shifted to the visible range by metal doping. The addition of glycerol, which acts as a hole scavenger, should enhance the hydrogen production efficiency. Products from the photooxidation process were also analyzed.
II. METHODOLOGY
A. Preparation of Cu-Ni/TiO2 Photocatalyst
Degussa P25 TiO2 was used as the support for all the photocatalysts. Bimetallic photocatalysts with 10wt% total metal loading were prepared via a co-precipitation method to investigate the effect of Cu:Ni mass composition on solar hydrogen production efficiency. The monometallic 10wt% Cu/TiO2 and 10wt% Ni/TiO2 were also prepared, in addition to TiO2, as references. The metal precursors were copper(II) nitrate trihydrate (Acros, >98% purity) and nickel(II) nitrate hexahydrate (Acros, >98% purity). Glycerol (Systerm, 95% purity) was used as a templating agent while sodium hydroxide, NaOH (Merck, 95%), was used as the precipitating agent. All the materials were used as received without further purification. Known amounts of Cu(NO3)2·3H2O and/or Ni(NO3)2·6H2O were weighed and dissolved in distilled water, followed by the addition of glycerol. The solution was stirred continuously prior to the addition of TiO2. The slurry was stirred
for another hour before adding 0.25 M NaOH dropwise until pH = 12 to form precipitates. The mixture was aged for 1 day, filtered and dried overnight at 75 °C in an oven. Calcination was conducted at 200 °C for 1 hr. The effect of total metal loading on the performance for solar hydrogen production was also investigated. The drying and calcination temperatures and durations were selected based on a previous study reported elsewhere [20].
B. Characterization of Cu-Ni/TiO2 Photocatalyst
The bimetallic, monometallic and TiO2 photocatalysts were characterized using powder X-ray diffraction (XRD), field emission scanning electron microscopy (FESEM) and diffuse reflectance UV-Vis spectroscopy (DRUV-Vis).
The photocatalysts were analysed for the TiO2 phases present using a Bruker D8 Advance with CuKα radiation (40 kV, 40 mA) at 2θ angles from 10° to 80°, with a scan speed of 4° min⁻¹. The morphology of the photocatalysts, such as crystallite shape, size, and size distribution, was analyzed using FESEM. The analyses were conducted using a Zeiss Supra 35VP at 80 kX magnification operating at 10 kV.
DRUV-Vis measurement was performed using a Shimadzu Spectrometer 3150 equipped with an integrating sphere. BaSO4 was employed as the reference material, with analysis ranging from 190 to 800 nm. This technique is used to determine any shift of the absorption edge to the visible region due to metal incorporation. The band gap energies of the photocatalysts could be determined from the Kubelka-Munk function, F(R), using the Tauc plot, a plot of (F(R)·hν)^(1/2) against hν.
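For illustration only, the Python sketch below outlines how a band gap could be extracted from diffuse-reflectance data via the Kubelka-Munk function and the Tauc plot described above; the reflectance arrays and the linear-fit window are hypothetical and would have to be taken from the measured spectra.

# Minimal sketch (hypothetical input data): band gap estimation from a
# diffuse-reflectance spectrum using the Kubelka-Munk function F(R) and a Tauc
# plot of (F(R)*h*nu)^(1/2) versus h*nu. The fit window (fit_lo, fit_hi) must be
# chosen from the actual plot and is an assumption here.
import numpy as np

def band_gap_from_drs(wavelength_nm, reflectance, fit_lo, fit_hi):
    R = np.clip(np.asarray(reflectance, float), 1e-6, 1.0)
    f_r = (1.0 - R) ** 2 / (2.0 * R)            # Kubelka-Munk function F(R)
    h_nu = 1239.84 / np.asarray(wavelength_nm)  # photon energy in eV
    tauc = np.sqrt(f_r * h_nu)                  # indirect-allowed transition form
    mask = (h_nu >= fit_lo) & (h_nu <= fit_hi)  # linear region near the edge
    slope, intercept = np.polyfit(h_nu[mask], tauc[mask], 1)
    return -intercept / slope                   # extrapolation to tauc = 0

# Example call with measured arrays (not shown):
# eg = band_gap_from_drs(wavelengths, reflectances, fit_lo=2.9, fit_hi=3.4)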
C. Solar Hydrogen Production
The photocatalysts were evaluated for solar hydrogen production using a multiport photocatalytic reactor integrated with water displacement units (Fig. 1) to monitor any gaseous product. A 500 W halogen lamp was used to simulate visible light, irradiating the photoreactor from the top with an intensity of 12.2 klux.

Fig. 1. Schematic of the multiport photocatalytic reactor
A 0.1 g portion of photocatalyst powder was suspended in distilled water (8.0 mL) and placed in the multiport reactor. The amount of gas evolved was monitored for 2 hr. The gaseous product was analyzed using a gas chromatograph (Agilent 6890 series GC system) with a 5A molecular-sieve column (capillary, 45.0 m × 530 µm × 25 µm) equipped with a thermal conductivity detector. Helium was used as the carrier gas.
For experiments where liquid glycerol was added as hole scavenger, the volume of glycerol was varied from 2.0 mL to 8.0 mL. Products from the photooxidation process were analyzed using high performance liquid chromatography (HPLC) (Agilent 1100 series) equipped with a Transgenomic column (ICE-ORH-801) with 0.01 N H2SO4 as eluent.
III. RESULTS AND DISCUSSION
A. Preparation of Cu-Ni/TiO2 Photocatalyst
Bimetallic photocatalysts were prepared to investigate the effects of Cu:Ni mass composition and total metal loading on the performance for solar hydrogen production. The 10 wt% monometallic photocatalysts were also prepared. Calcination was conducted at 200 °C for 1 h. These pretreatment conditions were the optimum conditions selected based on previous work [20]. The photocatalysts were denoted 10wt%_9Cu1Ni for the bimetallic photocatalyst with a 9:1 Cu:Ni mass composition, while 10wt%_10Cu and 10wt%_10Ni were used for the monometallic Cu and Ni photocatalysts, respectively.
B. Characterization of Cu-Ni/TiO2 Photocatalyst
Characterization procedures were conducted to determine
the bulk and surface properties of the photocatalysts.
1) XRD
Fig. 2 shows the XRD patterns of the photocatalysts and TiO2. The peaks were mainly characteristic of the anatase phase at 2θ = 25.3°, 33.8°, 47.8°, 53.8° and 55.0°, while the rutile phase was represented by peaks at 2θ = 27.4° and 41.5°. No characteristic peaks of Cu or Ni species could be detected, indicating that the metal particles were well dispersed on TiO2. The presence of glycerol during photocatalyst preparation contributed to the high metal dispersion [21-22]. The average crystallite size calculated using the Scherrer equation was 35 nm.
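As an illustration of the Scherrer estimate, the Python sketch below uses the usual assumptions of a shape factor K = 0.9 and the Cu Kα wavelength, together with a hypothetical peak width: a roughly 0.23° wide anatase (101) peak near 2θ = 25.3° corresponds to a crystallite size of about 35 nm.

# Minimal sketch (hypothetical peak width): crystallite size from the Scherrer
# equation, D = K * lambda / (beta * cos(theta)).
import math

def scherrer_size_nm(two_theta_deg, fwhm_deg, k=0.9, wavelength_nm=0.15406):
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)            # peak width (FWHM) in radians
    return k * wavelength_nm / (beta * math.cos(theta))

print(f"{scherrer_size_nm(25.3, 0.23):.0f} nm")   # -> ~35 nm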

Fig. 2. XRD patterns of (a) TiO2, (b) 10wt%_9Cu1Ni,
(c) 10wt%_10Cu and (d) 10wt%_10Ni
2) FESEM
The FESEM images in Fig. 3 show that the photocatalysts were present as uniform spherical particles. All the photocatalysts tend to display slight agglomeration. The particle size of the photocatalysts ranged from 20-40 nm, in good agreement with the crystallite size calculated from the XRD data. No indication of localized metal deposition was observed, which is consistent with the XRD results and the high metal dispersion [22].

Fig. 3. FESEM images of (a) 10wt%_9Cu1Ni,
(b) 10wt%_10Cu, (c) 10wt%_10Ni and (d) TiO2
3) DRUV-Vis
The DRUV-Vis spectrum of TiO2 in Fig. 4 shows its absorption edge at 400 nm. No absorption was observed in the visible region. However, the edge shifted to the visible region with the addition of Cu, Ni or Cu-Ni, indicating a reduction in band gap. Using the Tauc plot (not shown), the band gaps were calculated and are displayed in Table I. TiO2 has the highest band gap, 3.16 eV. The addition of monometallic Cu or Ni reduced the band gap to 2.98 eV. Surface modification of TiO2 with Cu, Ni or Cu-Ni could significantly reduce the band gap, thus shifting the absorption edge to the visible region [23-25]. The lowest band gap, 2.78 eV, was displayed by 10wt%_9Cu1Ni.
The presence of metal not only increased the visible light absorption, it also enhanced the absorbance in the UV region (wavelength < 400 nm), as indicated by the higher absorbance of the bimetallic and monometallic photocatalysts compared to TiO2 in Fig. 4. 10wt%_10Ni displayed the highest absorbance of approx. 0.19, followed by both 10wt%_10Cu and 10wt%_9Cu1Ni, and finally by TiO2, which displayed the lowest absorbance. The presence of Ni in 10wt%_9Cu1Ni seemed to enhance its visible light harvesting property. 10wt%_9Cu1Ni displayed the highest absorbance in the visible wavelengths, extending through to 800 nm. This was followed by 10wt%_10Cu and 10wt%_10Ni and finally by TiO2, which displayed zero absorbance in the visible region.

Fig. 4. DRUV-Vis spectra of the photocatalysts.
TABLE I. BAND GAP OF PHOTOCATALYSTS
Photocatalyst Band gap, eV
TiO2 3.16
10wt%_10Cu 2.98
10wt%_9Cu1Ni 2.78
10wt%_7Cu3Ni 2.98
10wt%_10Ni 2.98

C. Solar Hydrogen Production
Referring to Fig. 5, the effect of Cu:Ni mass composition on the solar hydrogen production efficiency was investigated. The bimetallic 10wt%_9Cu1Ni photocatalyst displayed the highest hydrogen production of 6.1 mL, compared to the monometallic 10wt%_10Cu and 10wt%_10Ni, which gave 5.0 mL and 4.3 mL, respectively. TiO2 was able to produce only 2.0 mL of gas. The reduction in band gap (Table I) for the photocatalysts compared to TiO2 led to an increase in hydrogen production from water.
The addition of a small amount of Ni onto Cu/TiO2 (10wt%_9Cu1Ni) enhanced the performance of the photocatalyst from 5.0 mL to 6.1 mL of hydrogen. This may be because Cu acts as both a hole and an electron trap [26] while Ni acts as a hole trap only [27]. However, as the Ni composition increased (10wt%_7Cu3Ni), a detrimental effect was observed, resulting in a lower amount of hydrogen produced. The increase in the amount of Ni led to it becoming a hole accumulation site that could attract the photogenerated electrons, causing electron-hole recombination. During the photoreaction, hydrogen should be produced together with oxygen. However, no oxygen was detected by gas chromatography. This may be attributed to the oxygen itself acting as an electron acceptor, leading to the formation of superoxide radical anions [28]. This may retard the continuous production of hydrogen, causing deactivation of the photocatalyst.

Fig. 5. The effect of Cu:Ni mass composition of photocatalyst with 10wt%
total metal loading on hydrogen production.
Fig. 6 reveals that as the total metal loading of the photocatalysts (Cu:Ni mass composition of 9:1) was increased from 5 wt% to 10 wt%, the amount of hydrogen gas evolved increased from 3.0 mL to 6.1 mL. The presence of metal was able to inhibit the electron-hole recombination process [29]. However, when the total metal loading was increased to 11 wt%, the photocatalytic activity was reduced to 5.0 mL of hydrogen evolved. Further increases in total metal loading showed lower performance. As the total metal loading increases, the metal sites act as electron-hole recombination centers due to electrostatic forces between the negatively charged metal sites and the positively charged holes [29-30]. The optimum loading for bimetallic Cu-Ni was 10 wt%.

Fig. 6. The effect of total metal loading of photocatalyst (Cu:Ni mass
composition 9:1) on hydrogen production.
From Fig. 7, the effect of the amount of glycerol on hydrogen production was investigated using 10wt%_9Cu1Ni. When 2.0 mL of glycerol was added, a tremendous increase in the amount of hydrogen produced (9.5 mL) was observed. At this stage glycerol acts as a hole scavenger [18], diminishing the effect of electron-hole recombination. Although the amount of hydrogen produced decreased when more glycerol was added, the overall effect was still an enhancement of performance compared to no glycerol addition. The decreased performance at higher glycerol amounts may be due to the competitive adsorption of glycerol, its intermediates and water onto the photocatalyst surface [31].
The intermediate products identified from glycerol photooxidation were glyceraldehyde, glycolic acid, and oxalic acid. The use of glycerol as a sacrificial agent for hydrogen production from water serves not only to enhance the photocatalytic performance but also to generate value-added products from the biodiesel by-product.


Fig. 7. The effect of glycerol amount on hydrogen production using
10wt%_9Cu1Ni.
IV. CONCLUSION
The photocatalysts displayed high metal dispersion. The particles were spherical with slight agglomeration and sizes ranging from 20-40 nm. The addition of Cu, Ni or bimetallic Cu-Ni onto TiO2 led to a reduction in band gap from 3.16 eV to 2.78 eV for the 10wt%_9Cu1Ni photocatalyst. This led to an enhancement in hydrogen production (6.1 mL) under visible light illumination compared to TiO2. The addition of 2.0 mL of glycerol as hole scavenger further enhanced the solar hydrogen production to 9.5 mL. The photooxidation of glycerol under visible light illumination produced glyceraldehyde, glycolic acid and oxalic acid.
ACKNOWLEDGMENT
The authors would like to acknowledge Universiti
Teknologi PETRONAS for providing research facilities and
financial support.

REFERENCES
[1] M. Momirlan, and T. Vezirog, Recent directions of world hydrogen
production, Renewable and sustainable energy reviews, vol. 3, pp. 219-
231, 1999.
[2] M. Ni, M.K.H. Leung, D.Y.C., and K. Sumathy, A review and recent
developments in photocatalytic water-splitting using TiO2 for hydrogen
production, J. Renewable and Sustainable Energy Reviews, vol. 11, pp
401-425, 2007.
[3] M. Kitano, M. Matsuoka, M. Ueshima, and M. Anpo, Recent
developments in titanium oxide-based photocatalysts, Applied
Catalysis A: General vol. 325, pp.1-14, 2007.
[4] A. Kudo, R. Niishiro, A. Iwase, H. Kato, Effects of doping of metal
cations on morphology, activity, and visible light response of
photocatalysts, Chemical Physics, vol 339, pp.104-110, 2007.
[5] J.P. Yasomanee, and J. Bandara, Multi-electron storage of photoenergy
using Cu2OTiO2 thin film photocatalyst, Solar Energy Materials and
Solar Cells, vol. 92, pp.348-352, 2008.
[6] J. Premkumar, Development of Super-Hydrophilicity on Nitrogen-
Doped TiO2 Thin Film Surface by Photoelectrochemical Method under
Visible Light, Chemistry Materials, vol. 16, pp.3980-3981, 2004.
[7] J. Bandara, C.P.K. Udawatta, and C.S.K. Rajapakse, Highly stable CuO
incorporated TiO2 catalyst for photocatalytic hydrogen production from
H2O, Journal of Photochemistry and Photobiological Science, vol. 4,
no. 11, pp. 857-861, 2005.
[8] G.L. Chiarello, E. Selli, L. Forni, Photocatalytic Hydrogen Production
Over Flame Spray Pyrolysis-synthesised TiO2 and Au/TiO2, Applied
Catalysis B: Environmental, vol. 84, pp. 332342, 2008.
[9] Z.M. El-Bahy, A.A. Ismail, R.M. Mohamed, Enhancement of Titania
by Doping Rare Earth for Photodegradation of Organic Dye (Direct
Blue), Journal of Hazardous Materials, vol. 166, no.1, pp. 138-143,
2008.
[10] V. Ponec, Alloy catalysts: the concepts, Applied Catalysis A: General,
vol. 222, pp. 3145, 2001.
[11] Y. Liu, D. Liu, Study of Bimetallic Cu-Ni/Al2O3 Catalysts for
Carbondioxide Hydrogenation, Int. Journal of Hydrogen Energy, vol.
24, pp. 351-354, 1999.
[12] W.Gao, R. Jin, J. Chen, X. Guang, H. Zheng, F. Zhang, N. Guan,
Titania-supported Bimetallic Catalysts for Photocatalytic Reduction of
Nitrate, Catalysis Today, vol. 90, pp. 331-336 , 2004.
[13] V. Maurino, A. Bedini, M. Minella, F. Rubertelli, E. Pelizzetti, and C.
Minero, Glycerol Transformation Through Photocatalysis: A Possible
Route to Value Added Chemicals, Journal of Advanced Oxidation
Technologies, vol. 11, no. 2, pp. 184-192, 2008.
[14] V. Augugliaro, H.A. Hamed El Nazer, V. Loddo, A. Mele, G.
Palmisano, L. Palmisano, and S. Yurdakal, Partial photocatalytic
oxidation of glycerol in TiO2 water suspensions, Catalysis Today, vol.
151, no. 1-2, pp. 21-28, 2010.
[15] M. Li, Y. Li, S. Peng, G. Lu, and S.B. Li, 2009, Photocatalytic
hydrogen generation using glycerol wastewater over Pt/TiO2, Frontiers
of Chemistry in China, vol. 4, no.1, pp. :32-38, 2009.
[16] M. Bowker, P.R. Davies, and L.S. Al-Mazroai, Photocatalytic
Reforming of Glycerol over Gold and Palladium as an Alternative Fuel
Source, Catal Lett, vol. 128, pp. 253-255, 2009.
[17] G. Li, N.M. Dimitrijevic, L. Chen, T. Rajh, and K.A. Gray, Role of Surface/Interfacial Cu2+ Sites in the Photocatalytic Activity of Coupled CuO-TiO2 Nanocomposites, J. Phys. Chem. C, vol. 112, pp. 19040-19044, 2008.
[18] V. Gombac, L. Sordelli, T. Montini, J.J. Delgado, A. Adamski, G.
Adami, M. Cargnello, S. Bernal, and P. Fornasiero, CuOxTiO2
Photocatalysts for H2 Production from Ethanol and Glycerol Solutions,
J. Phys. Chem, A, vol. 114, pp. 3916-3925, 2010.
[19] A.V. Korzhak, N.I. Ermokhina, A.L. Stroyuk, V.K. Bukhtiyarov, A.E.
Raevskaya, V.I. Litvin, S.Y. Kuchmiy, V.G. Ilyin, and P.A. Manorik,
Photocatalytic hydrogen evolution over mesoporous TiO2/metal
nanocomposites, Journal of Photochemistry and Photobiology A:
Chemistry, vol.198, no. 2-3, pp. 126-134, 2008.
[20] Ela Nurlaela, Development of Cu-Ni/TiO2 bimetallic catalyst for
photohydrogen production under visible light illumination, M.Sc.
Thesis, Universiti Teknologi PETRONAS, 2011.
[21] Y. Li, M. Cai, J. Rogers, Y. Xu, and W. Shen, "Glycerol-mediated
synthesis of Ni and Ni/NiO core-shell nanoparticles," Materials Letters,
vol. 60, pp. 750-753, 2006.
[22] L.S. Yoong, F.K. Chong, and B.K. Dutta, "Development of copper-
doped TiO2 photocatalyst for hydrogen production under visible light "
Energy, vol. 34, pp. 1652-1661, 2009.
[23] J. Escobar, J. A. D. L. Reyes, and T. Viveros, "Nickel on TiO2-modified
Al2O3 solgel oxides: Effect of synthesis parameters on the supported
phase properties," Applied Catalysis A, vol. 253, pp. 151-163, 2004.
[24] O. V. Komova, A. V. Simakov, V. A. Rogov, D. I. Kochubei, G. V.
Odegova, V. V. Kriventsov, E. A. Paukshtis, V. A. Ushakov, N. N.
Sazonova, and T. A. Nikoro, "Investigation of the state of copper in
supported coppertitanium oxide catalysts," Journal of Molecular
Catalysis A, vol. 161, pp. 191-204, 2000.
[25] T. Umebayashi, T. Yamaki, H. Itoh, K. Asai, Analysis of electronic
structures of 3d transition metal-dope TiO2 based on band calculation,
Journal of Physics and Chemistry Solids, vol. 63, pp. 1909-1920, 2002.
[26] T. Miwa, S. Kaneco, H. Katsumata, T. Suzuki, K. Ohta, S.C. Verma, and
K. Sugihara, "Photocatalytic hydrogen production from aqueous
methanol solution with CuO/Al2O3/TiO2 nanocomposite," International
Journal of Hydrogen Energy, vol. 35, pp. 6554-6560, 2010.
[27] D. Jing, L. Guo, L. Zhao, "Study on the synthesis of Ni doped
mesoporous TiO2 and its photocatalytic activity for hydrogen evolution
in aqueous methanol solution," Chemical Physics Letters, vol.415, pp.
74-78, 2005.
[28] D.H. Tseng, L.C. Juang and H.H. Huang, Effect of oxygen and
hydrogen peroxide on the photocatalytic degradation of
monochlorobenzene in TiO2 aqueous suspension, International Journal
of Photoenergy, vol. 2012, pp. 1-9, 2012.
[29] O. Carp, C. L. Huisman, and A. Reller, "Photoinduced Reactivity of
Titanium Dioxide", Progress in Solid State Chemistry, vol. 32, pp. 33-
117, 2004.
[30] T. Sreethawong, S. Laehsalee, and S. Chavadej, "Use of Pt/N-doped
mesoporous-assembled nanocrystalline TiO2 for photocatalytic H2
production under visible light irradiation", Catalysis Communications,
vol. 10, pp. 538543, 2009.
[31] S. Chavadej, P. Phuaphromyod, E. Gulari, P. Rangsunvigit, and T.
Sreethawong, "Photocatalytic degradation of 2-propanol by using
Pt/TiO2 prepared by microemulsion technique,"Chemical Engineering
Journal, vol. 137, pp. 489495, 2008.


Air Quality in East Asia during the heavy haze event
period of 10 to 15 January 2013

Soon-Ung Park and Jeong Hoon Cho
Center for Atmospheric and Environmental Modeling
Seoul, Korea
supark@snu.ac.kr


Abstract: A prolonged heavy haze event that caused the Environmental Protection Bureau (EPB) in Beijing to take emergency measures for the protection of public health and the reduction of air pollution damage in China has been analyzed with the Aerosol Modeling System (AMS) in order to identify its causes. It is found that the heavy haze event is associated with high concentrations of aerosols and water droplets. These high aerosol concentrations are mainly composed of anthropogenic aerosols, especially secondary inorganic aerosols formed by gas-to-particle conversion of gaseous pollutants, in the eastern part of China, whereas those in the northeastern parts of China are composed of a mixture of anthropogenic aerosols and Asian dust aerosol originating from the dust source regions of northern China and Mongolia. These high aerosol concentrations are subsequently transported to the downwind regions of the Korean Peninsula and Japan, causing a prolonged haze event there. It is also found that the Asian dust aerosol originating from northern China and Mongolia and the anthropogenic aerosols produced by chemical reactions of pollutants in the high-emission regions of eastern China can cause significantly adverse environmental impacts over the whole Asian region, through increased atmospheric aerosol loadings that may cause respiratory diseases and visibility reduction, and through excess deposition of aerosols with adverse impacts on terrestrial and marine ecosystems.
Keywords: Aerosol loading and deposition; Aerosol Modeling System (AMS); Anthropogenic aerosol; Asian Dust Aerosol Model 2 (ADAM2); CMAQ; Pollutant emissions in Asia
I. INTRODUCTION
Air quality is defined as a measure of the condition of air relative to the requirements of one or more biotic species and/or human needs or purposes [1]. Poor air quality in East Asia has become a major environmental problem in recent years due to rapid economic growth in most Asian countries. Air quality is largely determined by the concentrations of gaseous pollutants (SO2, NOX, CO, O3) and atmospheric aerosols [2][3][4][5].
Atmospheric aerosols can significantly affect the quality of our lives because of their potential impacts on human health and the environment. Submicrometer aerosols can be inhaled and thus may pose health hazards [6][7][8][9][10][11][12]. Because aerosols also scatter light, they strongly influence the radiative budget of the Earth-atmosphere system; they also reduce visibility and diminish the aesthetic scenery [13][14][15][16][17][18][19][20]. Visibility reduction is usually caused by weather phenomena such as precipitation, fog, mist, haze and dust that are associated with hydrometeors and lithometeors [21].
East Asia is a major source of both natural aerosol (Asian dust) and anthropogenic aerosols in the Northern Hemisphere. Asian dust, a typical example of mineral aerosol, occurs in northern China and Mongolia most frequently during the spring season [22][23][24][25], and its occurrence shows an increasing trend due to desertification in the source region. Anthropogenic aerosols, which mainly originate from emitted pollutants, also show an increasing trend due to the rapid economic expansion of many Asian countries [26][27]. Tropospheric aerosols in this region are a complex mixture of various aerosols such as Asian dust and anthropogenic aerosols. Consequently, the occurrence frequency of visibility-reducing weather events caused by aerosols has an increasing trend, especially over Asia [28][29][30].
Visibility-reducing weather phenomena caused by aerosols include precipitation, fog, mist, haze and dust storms [21]. Among these, mist and haze, which are composed of submicrometer particles, are mainly formed by gas-to-particle conversion processes in the atmosphere [31][32][33]. The recent increase in the occurrence frequency of dense haze and mist events in the eastern parts of China appears to be related to this size range of secondary inorganic aerosols, which are formed through the chemical reactions of gaseous pollutants such as SO2, NOX, NH3 and water vapor.
In fact, a wide swath of central and eastern China experienced several days of the worst air pollution the country has seen in recent memory, with dense haze covering several provinces including Beijing, Hebei, Tianjin, Shandong, Henan, Jiangsu, Anhui, Jiangxi and Hubei from 11 to 16 January 2013. These events caused the Environmental Protection Bureau (EPB) in Beijing to take emergency measures, including halting outdoor activities for primary and middle school students, suspending construction at 28 construction sites, reducing emissions by 30% at 58 factories, and taking up to 30% of government vehicles off the road (China Daily). Haze (smog) blanketing Shandong and Jiangxi provinces forced the closure of many highways and the cancellation or delay of many flights during this period. These dense haze events were transported to
downwind regions, causing high aerosol concentrations of more than 200 µg m⁻³ over Korea and 50 µg m⁻³ in parts of Japan.
Recently, [27] developed the Aerosol Modeling System (AMS), which is composed of the Asian Dust Aerosol Model 2 (ADAM2) for Asian dust aerosol modeling and the Community Multiscale Air Quality (CMAQ) model for anthropogenic aerosol modeling [34]. The AMS successfully simulated a prolonged dense haze event that occurred during the period 19-22 May 2010 [35] in East Asia.
The purpose of this study is to examine the air quality associated with the dense haze and mist events that occurred from 10 to 15 January 2013 in East Asia, using the Aerosol Modeling System (AMS), and to investigate the effects of various aerosol species on these hazardous environmental events.
II. MODEL DESCRIPTIONS
A. Meteorological model
The meteorological model used in this study is the fifth-generation mesoscale model, non-hydrostatic version (MM5; Pennsylvania State University / National Center for Atmospheric Research), defined in x, y and σ coordinates [36][37]. The model domain (Fig. 1) has a horizontal resolution of 27 × 27 km² with 30 vertical layers over the Asian region.
The NCEP FNL operational global analysis data on 1.0° × 1.0° grids are used for the initial and lateral boundary conditions of the model.
B. Aerosol Modeling System (AMS)
The Aerosol Modeling System (AMS) consists of the Asian Dust Aerosol Model 2 (ADAM2) [38] and the Community Multiscale Air Quality (CMAQ) modeling system (http://www.cmaq-model.org), with emission data for the pollutants SO2, NOX, VOC, CO, NH3, BC, OC and PM10 in the model domain.
1) ADAM2
The ADAM2 model is an Eulerian dust transport model that includes specifications of the dust source regions delineated by statistical analysis of the World Meteorological Organization (WMO) 3-hourly dust reports, and statistically derived dust emission conditions for Sand, Gobi, Loess and Mixed surface soils in the model domain (Fig. 1). The model uses suspended particle-size distributions parameterized by several log-normal distributions in the source regions, based on the parent soil particle-size distributions and using the concept of minimally and fully dispersed particle-size distributions [38][39]. It has 11 size bins with nearly equal logarithmic intervals for particles of 0.15-35 µm in radius [23][24]. The model has temporally varying emission reduction factors derived statistically from a normalized difference vegetation index (NDVI) for the different surface soil types in the Asian dust source region. A detailed description is given in [38].
2) CMAQ model
The Environmental Protection Agency (EPA) Community Multiscale Air Quality (CMAQ) modeling system (http://www.cmaq-model.org) is a three-dimensional Eulerian atmospheric chemistry and transport modeling system that simulates airborne pollutants, ozone concentrations, particulate matter, visibility, and acidic and nutrient pollutant species throughout the troposphere [40].
The aerosol component of the CMAQ model represents the particle size distribution as the superposition of three lognormal subdistributions, called modes. Fine particles with diameters less than 2.5 µm (PM2.5) are represented by two subdistributions called the Aitken and accumulation modes.


Fig. 1. Topography of the model domain with the indication of sites that will
be described in the text. The enhanced map of South Korea with the
indication of monitoring sites is shown in the right lower corner of the
domain


Fig. 2. Horizontal distributions of emission rate (kt yr⁻¹ grid⁻¹) of (a) SO2, (b) NOX, (c) NH3 and (d) PM10 in the year 2010 over the Asian region (grid: 0.5° × 0.5°)
The Aitken mode includes particles with diameters up to approximately 0.1 µm for the mass distribution, and the accumulation mode covers the mass distribution in the range from 0.1 to 2.5 µm. The coarse mode covers the mass distribution in the range from 2.5 to 10 µm. The model includes the processes of coagulation, particle growth by the addition of mass, and new particle formation [41].
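For illustration, the Python sketch below builds a mass size distribution as the superposition of three lognormal modes, in the spirit of the CMAQ representation described above; the mode masses, median diameters and geometric standard deviations are hypothetical values, not taken from the model.

# Minimal sketch (hypothetical mode parameters): a particle mass distribution as
# the superposition of three lognormal modes (Aitken, accumulation, coarse).
import numpy as np

def lognormal_mode(dp_um, total_mass, median_diameter_um, sigma_g):
    """dM/dln(dp) for one lognormal mode."""
    return (total_mass / (np.sqrt(2.0 * np.pi) * np.log(sigma_g))
            * np.exp(-0.5 * (np.log(dp_um / median_diameter_um)
                             / np.log(sigma_g)) ** 2))

dp = np.logspace(-2, 1.5, 300)                      # 0.01 to ~30 um
modes = [                                           # (mass, Dg_um, sigma_g) - assumed
    (5.0, 0.05, 1.7),    # Aitken
    (30.0, 0.3, 2.0),    # accumulation
    (20.0, 6.0, 2.2),    # coarse
]
dM_dlndp = sum(lognormal_mode(dp, *m) for m in modes)
print(f"Peak of the combined distribution near {dp[np.argmax(dM_dlndp)]:.2f} um")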
3) Emission data
Air pollutant emission data (Fig. 2) for the year 2010 are obtained from the International Institute for Applied Systems Analysis (IIASA; ftp://www.iiasa.ac.at/outgoing/may/KRF) on a grid of 0.5° long. × 0.5° lat. in the global domain. The emission data include SO2, NOX, NH3, CO, VOC, PM10, BC and OC. The estimated total Asian (China's) anthropogenic emissions in the year 2010 are 48.4 Tg (30.0 Tg) SO2, 36.7 Tg (14.0 Tg) NOX, 43.5 Tg (22.4 Tg) VOC, 279.0 Tg (144.5 Tg) CO, and 36.5 Tg (23.0 Tg) PM10, indicating that more than 50% of the total anthropogenic emissions in the Asian domain are contributed by China.
Air pollutant emissions in South Korea for the year 2007 are obtained from the Clean Air Policy Supporting System (CAPSS, Korean Ministry of Environment) on a 1 × 1 km² grid. The estimated total anthropogenic emissions for 2007 are 402,525 t SOX, 1,187,923 t NOX, 874,699 t VOC, 808,862 t CO, and 98,143 t PM10. These emission data are used for the model simulation over South Korea instead of the IIASA emission inventory data.
III. SYNOPTIC SITUATIONS OF THE CHOSEN HAZE EVENTS
Figure 3 shows the surface weather analysis maps obtained from the Korea Meteorological Administration (KMA, 2013) during the haze event period in East Asia.
Most of eastern China is under the influence of a surface high-pressure system centered over central northern China (110°E, 40°N) at 00 UTC 9 January 2013. As this high-pressure system moves slowly southeastward, haze is reported over a wide region of eastern China at 06 UTC 9 January (Fig. 3a). The surface high-pressure system then continues to move southeastward and is located in the coastal region of Shandong province at 06 UTC 10 January (Fig. 3b), when the dense haze covers almost all of eastern China, from northeast China to the South China Sea. At 06 UTC 12 January the surface pressure center located at Shandong province moves slightly northeastward over the Yellow Sea, accompanied by a further northeastward extension of the dense haze zone over South Korea (Fig. 3c).
Thereafter the dense haze zone prevails over South Korea and extends over Japan while the surface high-pressure system keeps moving northeastward over northeastern China (Fig. 3d). This trend continues until 18 UTC 15 January, when the surface low-pressure system that developed over the East China Sea moves northeastward to the East Sea of Korea. This prolonged dense haze event caused the Environmental Protection Bureau of China to take emergency measures in several provinces of eastern China.
IV. RESULTS OF THE MODEL SIMULATION
A. Comparison of observed and simulated aerosol (PM10) concentrations over South Korea
The Aerosol Modeling System (AMS) has been employed
to simulated concentrations of PM
10
and pollutants for the
period from 4 to 16 January 2013 that includes the dense haze
event period in East Asia in the domain given in Fig. 1.
The time series of observed PM10 concentrations at several monitoring sites over South Korea (Fig. 1) are compared with the model simulated ones to assess the performance of the model. Figure 4 shows time series of hourly mean surface PM10 concentrations observed and simulated by the model at several monitoring sites over South Korea (Fig. 1). The simulated PM10 concentrations are composed of all kinds of aerosols, including Water (water droplets formed through chemical reactions and hygroscopic processes), Other (sea salt and secondary organic aerosols), Asian dust (aerosols emitted by soil erosion), SIA (secondary inorganic aerosols: SO4²⁻, NO3⁻, NH4⁺), BC (black carbon) and OC (organic carbon), and unspecified PM10 (emitted anthropogenic aerosols not included in the above categories).
The model simulates quite well the observed high PM10 concentration events that caused the dense haze events over Korea. Two high aerosol-concentration events are simulated (Fig. 4). Some quantitative statistical performance measures for evaluating the model are given in Table 1. These values reflect averages over time from 00 LST 9 to 00 LST 16 January 2013 at each monitoring site over Korea. All the statistics (Table 1) indicate a good overall agreement between observations and model results, especially at the Heuksando site with an absolute normalized bias (ANB) of -4 %, a correlation coefficient (CORR) of 90 % and an index of agreement (IOA) of 92 %.
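For reference, such measures can be computed from paired hourly series as sketched below, assuming the commonly used definitions of the normalized bias, Pearson correlation, factor-of-two fraction and Willmott index of agreement (the paper's exact formulas are not given here, so this is only an illustration):

```python
import numpy as np

def performance_stats(obs, mod):
    """Common model-evaluation statistics for paired hourly PM10 series
    (assumed standard definitions; compare with the values in Table I)."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    anb  = 100.0 * (mod.mean() - obs.mean()) / obs.mean()           # normalized bias (%)
    corr = np.corrcoef(obs, mod)[0, 1]                               # Pearson correlation
    fac2 = np.mean((mod >= 0.5 * obs) & (mod <= 2.0 * obs))          # fraction within a factor of 2
    ioa  = 1.0 - np.sum((mod - obs) ** 2) / np.sum(
        (np.abs(mod - obs.mean()) + np.abs(obs - obs.mean())) ** 2)  # index of agreement
    return {"ANB(%)": anb, "CORR": corr, "FAC2": fac2, "IOA": ioa,
            "Mean Obs.": obs.mean(), "Mean Mod.": mod.mean(),
            "STD Obs.": obs.std(ddof=1), "STD Mod.": mod.std(ddof=1)}
```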
The first event occurs at 00:00 LST (= UTC+9) 12 January at Baengnyeongdo (Fig. 1) with a surface maximum PM10

Fig. 3. Surface weather analysis maps at (a) 06 UTC 9, (b) 06 UTC 10, (c) 06 UTC 12 and (d) 06 UTC 13 January 2013. The haze is indicated by the haze symbol.
concentration of more than 230 µg m⁻³ at 05:00 LST (Fig. 4a). This event moves southeastward to the Gosan site on Jeju Island of Korea (Fig. 1) at 16:00 LST 12 January 2013, with a maximum PM10 concentration of 150 µg m⁻³ at 05:00 LST 13 January (Fig. 4c). At the inland site of Gwangju, located about 500 km to the southeast of Baengnyeongdo (Fig. 1), the event occurs at 09:00 LST 12 January, with the simulated maximum PM10 concentration exceeding 200 µg m⁻³ at 00:00 LST 13 January (Fig. 4f). The observed and simulated high aerosol concentrations of the first event are mainly contributed by the anthropogenic aerosols (Fig. 4). Among these aerosols the secondary inorganic aerosols (SO4²⁻, NO3⁻, and NH4⁺) predominate, suggesting the importance of aerosols converted from emitted air pollutants.
The second high aerosol concentration event occurs at 16:00 LST 12 January at the Baengnyeongdo site, with a maximum surface PM10 concentration of more than 150 µg m⁻³ at 10:00 LST 13 January, and ends at around 15:00 LST 14 January 2013 (Fig. 4a). This event moves slowly southeastward to Gosan (starting at 00 LST 14 and ending at 00 LST 15 January), with a maximum surface PM10 concentration of 150 µg m⁻³ (Fig. 5c), and to Gwangju (starting at 06:00 LST 14 and ending at 16:00 LST 15 January), with a maximum surface PM10 concentration of about 150 µg m⁻³ (Fig. 4f). During the second event period haze is reported all over South Korea (Figs. 3c and d), suggesting that the haze event reported in South Korea is caused by the high aerosol concentration.
The second high aerosol-concentration event in Figure 4 is contributed by a mixture of Asian dust aerosol and anthropogenic aerosols, which differs from the first event that is mainly caused by anthropogenic aerosols. The difference between the first and second high aerosol-concentration events is seen more clearly in Figure 5. The column integrated PM10 concentration (Fig. 5) indicates that the first high aerosol-concentration event is mainly contributed by anthropogenic aerosols, especially by secondary inorganic aerosols (SIA) originating from air pollutants, whereas the second event is largely contributed by the Asian dust aerosol originating from the Asian dust source region at all monitoring sites in Korea (Figs. 5a, b, c, d, e, and f). Note that the upper level long-range transported Asian dust aerosol largely accounts for the haze phenomena of the second event; the column integrated concentration of the Asian dust aerosol is much greater than that of the anthropogenic aerosols for the second event (Fig. 5), whereas the anthropogenic aerosols predominate for the first event (Fig. 5).
The surface weather analysis map in Fig. 3c clearly indicates that the first high aerosol concentration is related to the haze reports over the western parts of the Korean peninsula. It is worthwhile to note that the observed PM10 concentration in Figure 4 does not include the water droplet aerosol, since the sampled air is desiccated before the PM10 concentration is measured. Therefore, the comparison between the measured and simulated PM10 concentrations should be made without water


Fig. 5. The same as in Fig. 4 except for the column integrated PM10 concentration (mg m⁻²).


Fig. 4. Time series of hourly mean observed (red line) and modeled (color shaded) PM10 concentration (µg m⁻³) at (a) Baengnyeongdo, (b) Heuksando, (c) Gosan, (d) Seoul, (e) Gunsan and (f) Gwangju in Korea for the period 09:00 LST 8 to 09:00 LST 16 January 2013. Each color represents a different aerosol species.
TABLE I. PERFORMANCE STATISTICS OF AMS FOR THE AEROSOL CONCENTRATION (PM10)

Site            Total #   Mean Obs. (µg m⁻³)   Mean Mod. (µg m⁻³)   STD Obs.   STD Mod.   ANB (%)   CORR   FAC2   IOA
Baengnyeongdo   165       75.1                 60.9                 63.0       54.9       -19       0.78   0.89   0.86
Heuksando       165       56.5                 54.5                 43.3       60.4       -4        0.90   0.62   0.92
Gosan           158       51.9                 48.2                 34.1       44.6       -7        0.72   0.58   0.83
Seoul           165       84.8                 87.6                 47.9       48.6       3         0.66   0.85   0.80
Gunsan          165       83.5                 73.6                 54.1       64.4       -12       0.81   0.93   0.89
Gwangju         165       79.0                 63.4                 45.1       50.8       -20       0.75   0.78   0.83
droplet aerosols in the simulated PM10 concentration. However, the water droplet aerosol concentration is very important for distinguishing hygroscopic from non-hygroscopic aerosols and mist from haze. It also affects the visibility significantly [21].
B. Temporal variations of model simulated PM10 concentration in China and Japan
Figure 6 shows time series of hourly mean surface PM10 concentration at three sites in the Asian dust source region (Fig. 1 and Figs. 6a-c) and three sites at the eastern border of the Asian dust source region in China (Fig. 1 and Figs. 6d-f). Asian dust occurs every day during the analysis period starting from 00 LST (UTC+8) 11 January, with varying intensity depending on the site and time (Figs. 6a-c). The maximum hourly mean PM10 concentrations of 1,800 µg m⁻³, 150 µg m⁻³ and 200 µg m⁻³ occur at 00 LST 13 January at the Dunhuang site (Fig. 6a), 00 LST 15 January at the Wulatezhongqi site (Fig. 6b) and 04 LST 11 January at the Yanan site (Fig. 6c), respectively. Most of the aerosols at these sites are contributed by the Asian dust aerosol (Figs. 6a-c).
However, toward the border of the Asian dust source region, the contribution of anthropogenic aerosols becomes more important, as seen in Figs. 6d-f. The high anthropogenic aerosol concentrations of more than 250 µg m⁻³ at Beijing (Fig. 6d) and more than 400 µg m⁻³ at Baoding (Fig. 6e) and Zhengzhou (Fig. 6f) throughout the analysis period, together with the high water droplet aerosol concentration, have caused the dense haze events reported in Fig. 3. The most predominant aerosols for this haze event are the secondary inorganic aerosols formed by gas-to-particle conversion processes in the atmosphere, suggesting the importance of SO2, NOX and NH3 emissions.
Figure 7 shows the time series of model simulated hourly mean PM10 concentration at three sites in the central eastern lowland of China (Figs. 7a-c), where pollutant emissions are high (Fig. 2), and at three eastern coastal sites in China (Figs. 7d-f). More than 300 µg m⁻³ of PM10 occurs during the period from 10 to late 14 January at the site located in the northern lowland region (Fig. 7a), while at the other lowland sites (Figs. 7b and c) the surface PM10 concentration increases throughout the analysis period, with a maximum PM10 concentration of more than 500 µg m⁻³. Aerosols are mainly composed of anthropogenic aerosols, including SIA, BC, OC and unspecified PM10, originating from the pollutant emissions. The high concentrations of anthropogenic aerosols (especially the secondary inorganic aerosols), together with that of the water droplet aerosol, have caused the prolonged haze event in this region of China (Fig. 3).
At the Dalian site (Fig. 7d), located on the northeastern coast of China (Fig. 1), two main heavy aerosol events occur, during the periods 06 LST 10 to 00 LST 12 January and 09 LST 12 to 09 LST 14 January. The first one is mainly contributed by


Fig. 8. Time series of model simulated surface PM10 concentration (µg m⁻³) at (a) Sapporo, (b) Osaka and (c) Nagasaki, and the column integrated PM10 concentration (mg m⁻²) at (d) Sapporo, (e) Osaka and (f) Nagasaki.


Fig. 7. The same as in Fig. 6 except at (a) Huimin, (b) Suzhou and (c) Wuhan, located in the eastern low flat region, and (d) Dalian, (e) Qingdao and (f) Shanghai, located in the eastern coastal region of China.



Fig. 6. The same as in Fig. 4 except for the model simulated PM10 concentration (µg m⁻³) at (a) Dunhuang, (b) Wulatezhongqi and (c) Yanan, located in the Asian dust source region, and at (d) Beijing, (e) Baoding and (f) Zhengzhou, located near the eastern border of the Asian dust source region.

anthropogenic aerosols, with a maximum surface PM10 concentration of more than 300 µg m⁻³ at 23 LST 11 January, while the second one is contributed by a mixture of anthropogenic aerosols and Asian dust aerosol originating from northern China.
At the coastal sites of Qingdao and Shanghai (Figs. 7e and f) a high aerosol concentration event had already occurred before 00 LST 10 January, as at the sites in the high pollutant emission region (Figs. 7a, b and c), but the main event starts from 14 LST 11 January with increasing intensity throughout the analysis period (Figs. 7e and f). The maximum surface PM10 concentration is more than 600 µg m⁻³ at 22 LST 15 January at Qingdao and 420 µg m⁻³ at 00 LST 16 January at Shanghai, causing dense haze events in these regions. Aerosols are mainly composed of anthropogenic aerosols for the first event, which occurred at 00 LST 12 January, and for the third event at 22 LST 15 January at the Qingdao site (Fig. 7e), whereas the second event, which occurred at 10 LST 14 January, is composed of a mixture of anthropogenic aerosols and Asian dust aerosol transported from northern China in the dust source region. Although the concentration of Asian dust aerosol for the second event is small compared with that of the anthropogenic aerosols at the surface level (Fig. 7), the column integrated PM10 concentration (not shown) shows that the Asian dust aerosol predominates for the second event, suggesting that the upper level long-range transported Asian dust aerosol is the main cause of the dense haze event at Qingdao.
In contrast, the contribution of Asian dust aerosol to the total PM10 concentration is rather small, while that of the locally emitted PM10 is rather large, at Shanghai throughout the whole analysis period (Fig. 7f).
The high aerosol concentration events that occurred in China (Figs. 6 and 7) have been transported to Japan with diminished intensity. At the Sapporo site in northern Japan the first anthropogenic aerosol event occurs from 00 LST (= UTC+9) 12 to 09 LST 13 January (Figs. 8a and d), and the second mixed aerosol event (anthropogenic and Asian dust aerosols) occurs from 09 LST 14 to 04 LST 15 January with a low PM10 concentration at the surface (Figs. 8a and d). A similar feature can be seen at the Osaka site (Figs. 8b and e) in central Japan and the Nagasaki site (Figs. 8c and f) in southern Japan, with the occurrence time of the first anthropogenic aerosol event delayed by about 12 hours. However, the second mixed aerosol event (more dominated by Asian dust aerosol in Figs. 8d, e and f) occurs at almost the same time, 00 LST 15 January, at all three sites, suggesting that the upper level long-range transported Asian dust aerosol arrived over Japan at almost the same time.
V. HORIZONTAL DISTRIBUTIONS OF MEAN
CONCENTRATIONS FOR THE PERIOD FROM 00 UTC 10 TO 00
UTC 16 JANUARY 2013
A. The horizontal distribution of mean aerosol (PM10) concentration
Figure 9a shows the surface mean total aerosol (PM10) concentration averaged for the period from 00 UTC 10 to 00 UTC 16 January 2013. The aerosols originating in Asia affect all Asian regions, including the Asian continent, the northwestern Pacific Ocean, the South China Sea and the Bay of Bengal. High aerosol concentrations exceeding 100 µg m⁻³ cover eastern China, the region north of the Tibetan Plateau and northern India (Fig. 9a). The high aerosol concentrations over eastern China and northeastern India are mainly contributed by anthropogenic aerosols (Fig. 9b) produced from high pollutant emissions (Fig. 2), and have caused the dense haze events in these regions (Fig. 3), whereas the high aerosol concentrations over northwestern China to the north of the Tibetan Plateau are mostly contributed by Asian dust aerosol emitted in the Asian dust source region (compare Figs. 9a and b).
Figure 10 shows the mean atmospheric aerosol loadings (column integrated PM10 concentration) averaged for the period from 00 UTC 10 to 00 UTC 16 January 2013. Aerosols produced on the Asian continent affect the whole analysis domain (Fig. 10a) except the southeastern corner of the model domain over the subtropical high pressure zone. The high atmospheric aerosol loading zone exceeding 100 mg m⁻² extends southeastward from the Taklamakan desert area of the Asian dust source region to eastern China, where it merges with another high aerosol loading zone extending northeastward from the South China Sea through the Korean peninsula to the East Sea of Korea. The high aerosol loading of more than 100 mg m⁻² over northwestern China to the north of the Tibetan Plateau is mainly attributed to Asian dust aerosol


Fig. 10. Horizontal distributions of column integrated mean (a) PM10 concentration (mg m⁻²) and (b) anthropogenic aerosol concentration (mg m⁻²) averaged for the period from 00 UTC 10 to 00 UTC 16 January 2013.



Fig. 9. Horizontal distributions of surface mean (a) PM10 concentration (µg m⁻³) and (b) anthropogenic aerosol concentration (µg m⁻³) averaged for the period from 00 UTC 10 to 00 UTC 16 January 2013.
while those over eastern China and northern India are composed of anthropogenic aerosols (Figs. 10a and b). It is worthwhile to note that the 100 mg m⁻² isoline of the column integrated mean anthropogenic aerosol concentration (Fig. 10b) extends up to the northern part of the Yellow Sea, but that of the total aerosol concentration (Fig. 10a) extends farther downwind to the East Sea through the Yellow Sea and the Korean peninsula, suggesting that a significant amount of Asian dust aerosol contributes to the total aerosol concentration over the Yellow Sea, the Korean peninsula and the East Sea.
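The column loading discussed here is simply the vertical integral of the layer concentrations; a minimal sketch of that unit conversion (assuming layer mean concentrations and thicknesses are available; this is not the AMS code itself):

```python
import numpy as np

def column_loading_mg_m2(conc_ug_m3, layer_thickness_m):
    """Column-integrated aerosol loading (mg m^-2) from layer mean
    concentrations (ug m^-3) and layer thicknesses (m).
    1 ug m^-3 * 1 m = 1 ug m^-2 = 1e-3 mg m^-2."""
    conc = np.asarray(conc_ug_m3, float)
    dz = np.asarray(layer_thickness_m, float)
    return np.sum(conc * dz, axis=0) * 1e-3

# Example: a single 100 ug m^-3 layer that is 1 km deep already gives 100 mg m^-2.
print(column_loading_mg_m2([100.0], [1000.0]))  # -> 100.0
```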
VI. HORIZONTAL DISTRIBUTION OF TOTAL AEROSOL
DEPOSITION FOR THE PERIOD FROM 00 UTC 10 TO 00 UTC 16
JANUARY 2013
The deposition of aerosols significantly affects marine and terrestrial ecosystems. Figure 11a shows the total deposition (wet + dry) of anthropogenic and Asian dust aerosols for the period of the haze event in eastern Asia from 00 UTC 10 to 00 UTC 16 January 2013. A total aerosol deposition of more than 300 kg km⁻² occurs over most of the Asian dust source region to the north of the Tibetan Plateau, Yunnan province, the Beibu Gulf, the western coast of Japan, and a wide region extending southwestward from the northwestern Pacific Ocean to the Philippines (Fig. 11a). Most of the high aerosol deposition to the north of the Tibetan Plateau in the Asian dust source region is largely contributed by dry deposition (Fig. 11b), whereas all other high aerosol deposition regions, including Yunnan province, the Beibu Gulf, the western coast of Japan and the oceans (the northwestern Pacific Ocean, the South China Sea), are mainly contributed by wet deposition (Fig. 11c). The large aerosol input of more than 1000 kg km⁻² to the seas from the northwestern Pacific Ocean, through the East China Sea to the South China Sea, could significantly affect the marine ecology in this region (Fig. 11a).
The present analysis indicates that the high Asian dust
emission in northern China and Mongolia and high pollutant
emissions in China could produce severe environmental
problems including dense haze and mist events that cause
severe visibility reduction and adverse impacts on human
health through the increased atmospheric aerosol loadings and
on terrestrial and marine ecologies through the excess
deposition of aerosols.
VII. CONCLUSIONS
The Aerosol Modeling System (AMS), based on the ADAM2 model for the Asian dust aerosol and the CMAQ model for the anthropogenic aerosols, has been employed to simulate air quality in Asia in association with the dense haze events observed in East Asia for the period from 10 to 15 January 2013. These events caused the Environmental Protection Bureau in Beijing, China, to take emergency measures. The simulated aerosol (PM10; Asian dust aerosols and anthropogenic aerosols) concentrations have been compared with the monitored PM10 concentrations at several sites scattered over South Korea to ensure the usefulness of the model for further analysis of air quality in Asia.
It is found that the AMS model can simulate quite reasonably the observed PM10 concentrations during the period of the dense haze events that caused severe air pollution problems in China and the downwind region of Korea, and that it can identify the contribution of each type of aerosol to these poor air quality events.
It is also found that the dense haze events associated with the high aerosol concentration events observed in Korea are caused by two different transport processes: one is mainly affected by the anthropogenic aerosols originating from pollutants emitted in eastern China, while the other is affected by a mixture of the Asian dust aerosol from northern China in the Asian dust source region and the anthropogenic aerosols from


Fig. 11. Horizontal distributions of (a) total deposition (wet + dry) of PM10 (t km⁻²), (b) total dry deposition of PM10 (t km⁻²) and (c) total wet deposition of PM10 (t km⁻²) for the period from 00 UTC 10 to 00 UTC 16 January 2013.
coastal eastern China in the high pollutant emission region. The anthropogenic aerosols during the event period are found to be primarily contributed by secondary inorganic aerosols, suggesting the importance of the emission of aerosol precursors including SO2, NOX and NH3.
The atmospheric aerosol loadings and the total deposition of aerosols associated with these events clearly indicate that not only the air quality over the whole East Asian region but also the marine and terrestrial ecosystems could be significantly affected by these events.
In view of the increasing occurrence of such events in East Asia, measures to reduce pollutant emissions, especially in China, are required to maintain a sustainable environment in East Asia.
This study mainly pertains to one dense haze event case, in order to understand the impact of each type of aerosol on this event, but further studies over a year-long period are required to assess the impact of pollutant emissions on the environment and ecosystems in the Asian region.

ACKNOWLEDGMENT
This work was funded by the Korea Meteorological
Administration Research and Development Program under
grant NIMR2013.

REFERENCES
[1] T.M. Johnson, F. Liu, and R. Newfarmer, "Clear water, blue skies."
Clear water, blue skies 1.2, 1997, pp. 1-129.
[2] S. Rodríguez, X. Querol, A. Alastuey, G. Kallos, and O. Kakaliagou,
Saharan dust contributions to PM10 and TSP levels in Southern and
Eastern Spain, Atmos. Environ., vol. 35(14), pp. 2433-2447, 2001.
[3] X. Querol, J. Pey, M. Pandolfi, A. Alastuey, M. Cusack, N. Pérez, ... and
S. Kleanthous, African dust contributions to mean ambient PM10 mass-
levels across the Mediterranean Basin, Atmos. Environ., vol. 43(28), pp.
4266-4277, 2009.
[4] J.M. Prospero, I. Olmez, and M. Ames, Al and Fe in PM2.5 and PM10
suspended in South-Central Florida: The impact of long range transport
of African mineral dust, Water, Air, and Soil Pollution, vol. 125, pp.
291-317, 2001.
[5] G. Cowie, W. Lawson, and N. Kim, Australian dust causing respiratory
disease admissions in some North Island, New Zealand Hospitals, New
Zeal. Med. J., vol. 123(1311), p. 87, 2010.
[6] D.V. Bates, B.R. Fish, T.F. Hatch, T.T. Mercer, and P.E. Morrow,
Deposition and retention models for internal dosimetry of the human
respiratory tract, Health Phys., vol. 12, pp. 173-207, 1966.
[7] D.W. Dockery, J. Schwartz, and J.D. Spengler, Air Pollution and Daily
Mortality: Associations with Particulates and Acid Aerosols, Environ.
Res., vol. 59, pp. 362-373, 1992.
[8] D.W. Dockery, C.A. Pope, X. Xu, J.D. Spengler, J.H. Ware, M.E. Fay,
B.G. Ferris, and F.E. Speizer, An association between air pollution and
mortality in six U.S. cities, New Engl. J. Med., vol. 329, pp. 1753-1759,
1993.
[9] F.S. Binkowski and U. Shankar, The regional particulate matter model,
1. Model description and preliminary results, J. Geophys. Res., vol.
100(D12), pp. 26191-26209, 1995.
[10] I. Balásházy, W. Hofmann, and T. Heistracher, Local particle
deposition pattern may play a key role in the development of lung
cancer, J. Appl. Physiol., vol. 94, pp. 1719-1725, 2003.
[11] M.E. Davis, F. Laden, J.E. Hart, E. Gashick, and T.J. Smith, Economic
activity and trends in ambient air pollution, Environ. Health Persp., vol.
118, pp. 614-619, 2010.
[12] P.T.B.S. Branco, M.C.M Alvim-Ferraz, F.G. Martins, and S.I.V Sousa,
A microenvironmental modelling methodology to assess children's
exposure to indoor air pollution in Porto, Portugal, Recent Advances in
Environmental Science, pp. 211-216, 2013 (WSEAS).
[13] Intergovernmental Panel on Climate Change, Climate Change 1995: The
science of Climate change, Cambridge University Press, 1996.
[14] M.Z. Jacobson, Strong radiative heating due to the mixing state of
black carbon in atmospheric aerosols, Nature, vol. 409(6821), pp. 695-
697, 2001.
[15] Y.J. Kaufman, D. Tanré, and O. Boucher, A satellite view of aerosols
in the climate system, Nature, vol. 419(6903), pp. 215-223, 2002.
[16] J.G. Watson, Visibility: Science and regulation, J. Air Waste Ma., vol.
52, pp. 628713, 2002.
[17] P. Crutzen, New directions: the growing urban heat and pollution island
effect - impact on chemistry and climate, Atmos. Environ., vol. 38, pp.
3539-3540, 2004.
[18] L.-S. Chang and S.-U. Park, Direct radiative forcing due to
anthropogenic aerosols in East Asia during April 2001, Atmos.
Environ., vol. 38, pp. 4467-4482, 2004.
[19] J.E.P. Penner, X. Dong, and Y. Chen, Observational evidence of
change in radiative forcing due to the indirect aerosol effect, Nature,
vol. 427(6971), pp. 231-234, 2004.
[20] S.-U. Park, L.-S. Chang, and E.-H. Lee, Direct radiative forcing due to
aerosols in East Asia during a Hwangsa (Asian dust) event observed in
18-23 March 2002 in Korea, Atmos. Environ., vol. 39, pp. 2593-2606,
2005.
[21] S.-U. Park, J.H. Cho, and M.-S. Park, Identification of visibility
reducing weather phenomena due to aerosols, Environmental
Management and Sustainable Development, vol. 2, pp. 126-142, 2013a.
[22] H.-J. In and S.-U. Park, The soil particle size dependent emission
parameterization for an Asian dust (Yellow Sand) observed in Korea on
April 2002, Atmos. Environ., vol. 37, pp. 4625-2636, 2002.
[23] S.-U. Park and H.-J. In, Parameterization of dust emission for the
simulation of the yellow sand (Asian dust) observed in March 2002 in
Korea, J. Geophys. Res.. vol. 108(D19), p. 4618, 2003.
[24] S.-U. Park and E.-H. Lee, Parameterization of Asian dust (Hwangsa)
particle-size distributions for use in dust emission model, Atmos.
Environ., vol. 38, pp. 2155-2162, 2004.
[25] X. Yu, B. Zhu, Y. Yin, J. Yang, Y. Li, and X. Bu, A comparative
analysis of aerosol properties in dust and haze-fog days in a Chinese
urban region, Atmos. Res., vol. 99, pp. 241-247, 2011.
[26] K.H. Lee, Y.J. Kim, and M.J. Kim, Characteristics of aerosol observed
during two severe haze events over Korea in June and October 2004,
Atmos. Environ., vol. 40, pp. 5146-5155, 2006.
[27] S.-U. Park, J.H. Cho, and M.-S. Park, A simulation of Aerosols in Asia
with the use of ADAM2 and CMAQ, Advances in Fluid Mechanics and
Heat & Mass Transfer, pp. 258-263, 2012.
[28] H.Z. Che and X.Y. Zhang, Horizontal Visibility Trends in China 1981-2005, Geophys. Res. Lett., vol. 34, doi: 10.1029/2007GL031450, 2007.
[29] R. Gautam, N.C. Hsu, M. Kafatos, and S.C. Tsay, Influences of winter
haze on fog/low cloud over the Indo-Gangetic plains, J. Geophys. Res.,
vol. 112(D5), doi:10.1029/2005JD007036, 2007.
[30] K. Wang, R.E. Dickinson, and S. Liang, Clear sky visibility has
decreased over land globally from 1973 to 2007, Science, vol.
323(5920), pp. 1468-1470, 2009.
[31] B.J. Finlayson-Pitts and N.J. Pitts Jr, Atmospheric chemistry.
Fundamentals and experimental techniques, Wiley, 1986.
[32] J.H. Seinfeld, Atmospheric chemistry and physics of air pollution, Wiley,
1986.
[33] P. Warneck, Chemistry of the natural atmosphere, Academic Press, 1988.
[34] S.-U. Park, The aerosol modeling system for the simulations of high
aerosol concentration events in East Asia, Recent Advances in
Environmental Science, P3 (plenary lecture in WSEAS), 2013.
[35] S.-U. Park, J.H. Cho, and M.-S. Park, A simulation of haze and mist
events observed in east Asia during 19-22 May 2010 using the Aerosol
Modeling System (AMS), Recent Advances in Environmental Science,
pp. 204-210, 2013b (WSEAS).
[36] G.A. Grell, J. Dudhia, and D.R. Stauffer, A description of 5th generation
Penn State/NCAR mesoscale model (MM5), NCAR TECH., Note
NCAR/TN-398, 1994.
[37] J. Dudhia, D. Grill, Y.-R. Guo, D. Hausen, K. Manning, and W. Wang,
PSU/NCAR mesoscale modelling system tutorial class notes (MM5
modelling system version 2), 1998.
[38] S.-U. Park, A. Choe, E.-H. Lee, M.-S. Park, and X. Song, The Asian
dust aerosol model 2 (ADAM2) with the use of normalized difference
vegetation data (NDVI) obtained from the spot4/vegetation data, Theor.
Appl. Climatol., vol. 101, pp. 191-208, 2010.
[39] Y. Shao, E. Jung, and L.M. Leslie, Numerical prediction of northeast
Asian dust storms using an integrated wind erosion modeling system, J.
Geophys. Res., vol. 107(D24), p. 4814, doi:10.1029/2001JD001493,
2002.
[40] University of North Carolina, Operational Guidance for the Community
Multiscale Air Quality (CMAQ) Modeling System, Community
Modeling and Analysis System Institute for the Environment, 2010.
[41] F.S. Binkowski and S.J. Roselle, Models-3 Community Multiscale Air
Quality (CMAQ) model aerosol component 1. Model description, J.
Geophys. Res., vol. 108(D6), p. 4183, 2003.




Elvisa Becirovic
Elektroprivreda BiH
Sarajevo, Bosnia and Herzegovina
Jakub Osmic, Mirza Kusljugic
Faculty of Electrical Engineering
Tuzla, Bosnia and Herzegovina
Nedjeljko Peric
Faculty of Electrical Engineering and Computing
Zagreb, Croatia


Abstract: This paper presents a novel control algorithm for
variable speed wind generators (VSWG), designed to provide
support to grid frequency regulation. The proposed control
algorithm ensures that VSWG truly emulates response of a
conventional generating unit with non-reheat steam turbine
(GUNRST) in the first several seconds after active power
unbalance. A systematic method of analysis and synthesis of the
new control algorithm is described in detail.

Keywords: variable speed wind generator; primary frequency
control; emulation of inertial response; droop control; model
reference controller; design methodology
I. INTRODUCTION
Due to the increasing penetration of variable speed wind generators (VSWG) in electrical power generation there is a need for their contribution to grid frequency regulation. Since a VSWG operates at maximum power point tracking (MPPT) and uses an AC/DC/AC converter, there is no direct coupling between the grid frequency deviation and its active power generation. A control algorithm that ensures VSWG participation in grid frequency regulation must therefore provide extraction of additional electric power from the VSWG during a frequency disturbance.
The steady state of an electrical power system (EPS) is characterized by a balance between active power generation and active power consumption. Following a sudden active power disturbance in the EPS, such as the loss of a generating unit or a sudden increase in active power load, the rest of the EPS cannot respond immediately by increasing the necessary (missing) turbine mechanical power. This is due to the nonzero time constants of governor and turbine dynamics. As a result the grid frequency starts to decrease. Active power balance is first established from the electromagnetic energy accumulated in the system immediately after the disturbance occurs. Then, provided EPS transient stability is maintained, in the so-called inertial phase of the rotating machines' frequency response, kinetic energy is converted into active power and delivered to the rest of the system to maintain the active power balance. The inertial phase lasts no more than several seconds. In this phase the turbine also starts to deliver additional power to the generator. Provided frequency stability is maintained, this action slows and eventually halts the frequency decline and consequently allows recovery of the rotating machines' speed and kinetic energy. Finally, after approximately 30 seconds, the frequency of the system stabilizes at a new steady state value. The permitted interval of grid frequency change is quite short, so the turbine mechanical power regulation must be fast enough to prevent the frequency deviation from exceeding the permitted level, in order to avoid frequency disturbance propagation, which can eventually lead to frequency instability.
The main indices describing the grid frequency behaviour following an active power disturbance are: the rate of change of frequency (ROCOF), the minimal value of frequency reached (frequency nadir) and the steady state frequency deviation (SSFD). The value of ROCOF mainly depends on the sum of the moments of inertia of the rotating machines; by increasing the EPS moment of inertia, ROCOF decreases. The frequency nadir is determined by the intensity of the power disturbance, the kinetic energy stored in the EPS, the number of generators contributing to primary frequency control and the dynamic characteristics of generators, loads and governors. The SSFD value is determined by the speed governor droop characteristics of the generating units participating in primary frequency control.
Characteristic responses of mechanical power and turbine speed of three conventional generating units, namely a unit with a reheat steam turbine (dashed line), a generating unit with a non-reheat steam turbine (GUNRST) (solid line) and a hydraulic unit (dash-dotted line), to a unit step change in active load are presented in Fig. 1 and Fig. 2 [1]. The figures indicate that the GUNRST has the fastest response. The hydraulic unit has the slowest response, since it has a non-minimum phase transfer function (an unstable zero in the transfer function).


Design of Model Reference Controller of
Variable Speed Wind Generators for Frequency
Regulation Contribution


Fig. 1. Deviation of generating units' mechanical power to a unit step increase in load demand: generating unit with a non-reheat steam turbine (solid line), generating unit with a reheat steam turbine (dashed line), hydraulic unit (dash-dotted line).

Nowadays, the most used types of large wind generators are VSWG, such as doubly fed induction generators (DFIG) and fully rated converter wind turbines (FRCWT) [2]. Since DFIG and FRCWT use fast electronic converters (AC/DC/AC) and operate at MPPT, there is no (or very little) direct coupling between the grid frequency deviation and their active power generation. Operation at MPPT means that VSWG have no spinning reserve which could be used to support frequency regulation after a disturbance. It is, therefore, necessary to modify VSWG control algorithms to support grid frequency regulation. DFIG and FRCWT frequency regulation capabilities have been in the focus of interest of the scientific community over the past years [2]-[11]. As a result, a number of papers have been published dealing with different control algorithms for VSWG designed to provide a contribution to primary frequency control [12]. These control algorithms can be classified into inertial control, droop control, deloading control or their combinations [3]-[5], [7] and [10]. All the above-mentioned approaches pertain to the VSWG control level. Control algorithms for control coordination at the wind farm level or for the EPS as a whole, such as in [8] and [11], have also been proposed.
The inertial control approach enables transformation of part of the VSWG kinetic energy into electrical power, which is instantaneously delivered to the EPS by the fast electronic converter. The time constant of the electronic converter is of the order of milliseconds. Transformation of kinetic energy into electric power causes the VSWG speed to decrease, which must be limited in order to prevent the turbine speed from reaching its minimum permitted level. In addition, since VSWG traditionally operate at MPPT, the wind generator speed must be recovered to the optimum value as soon as possible. Since VSWG have no spinning reserve, speed recovery is performed by the VSWG delivering less active power than the optimum. This ensures that the VSWG speed recovers to its optimum value (for constant wind speed, to the value before the frequency transient).
The main challenge in inertial (and droop) control approaches for VSWG is shaping the response of the active power delivered to the rest of the EPS and the response of the VSWG speed after a grid frequency disturbance. A number of different inertial controllers, droop controllers or their combinations have been proposed, e.g. in [3]-[5], [7], [10] and [11]. The main idea presented in these approaches is the following: add a new control signal C_ad to the existing torque control loop, before or after the PI controller, which forces the VSWG to emulate the inertial behaviour, or the inertial behaviour plus droop control, of conventional generators following a grid frequency disturbance. The combined signal then appears as the reference torque input T_ref to the electronic converter, such as in [4], or as the reference active power input P_ref, such as in [7]. These approaches are illustrated in Fig. 3. In the referenced research, for inertia emulation the so-called washout filter is used for zeroing the inertial signal at the network frequency steady state as well as for filtering the grid frequency (or frequency deviation) signal. A washout filter in fact performs an ideal or filtered derivation of the grid frequency (or frequency deviation) signal. In some papers an additional compensator block is used to provide phase compensation of the signal leaving the washout filter [3]. If the signal C_ad is added after the PI controller, then the torque control loop treats this signal as a disturbance signal. In this case it is not necessary to differentiate the grid frequency signal, since the integral part of the PI controller makes this signal of no effect at steady state, even if the steady state value of Δω is different from zero. The droop part of the additional signal C_ad in Fig. 3 is used to emulate the droop behaviour of a conventional regulator in primary frequency control.
A lack of analytical methods for the analysis and synthesis of VSWG inertial and/or droop control is evident, e.g. in [2], [3], [4], [7] and [11]. Accordingly, in these references the parameters of inertial and droop controllers were mainly determined by trial and error. In addition, the following question has not been answered yet: what should a desirable response of the active power injected by the VSWG into the EPS after an active power disturbance look like? The VSWG's rotational speed after supporting the grid frequency

Fig. 2. Speed deviation of generating units to a unit step increase in active load demand: generating unit with a non-reheat steam turbine (solid line), generating unit with a reheat steam turbine (dashed line), hydraulic unit (dash-dotted line).



must recover to the optimal, pre-disturbance value. Hence the same amount of VSWG kinetic energy that was delivered to the EPS after the active power disturbance must be supplied back to the VSWG in the process of recovering its speed. Consequently, if the VSWG delivered too much kinetic energy in the inertial phase, then the VSWG delivers much less power than before the disturbance during the VSWG speed recovery phase. This will be regarded by the EPS as a new active power disturbance before the new grid frequency steady state is reached. Thus the response of the VSWG delivered active power during the entire frequency transient is very important. Some authors suggest that the process of VSWG speed recovery should be performed smoothly by using a smooth control algorithm, whereas others propose switching off the VSWG frequency support at different times [4], [7] and [8]. A novel VSWG control algorithm designed to support frequency regulation is proposed in this paper. The block diagram of the proposed control algorithm is presented in Fig. 4. The main characteristics of the proposed control algorithm (controller) are the following:
- An analytical design method for a VSWG controller supporting grid frequency regulation following an active power disturbance is presented. By using the new control algorithm, the VSWG truly emulates the reference GUNRST model in the first several seconds after a frequency disturbance. The reference generating unit can be any existing GUNRST or a hypothetical GUNRST with values of nominal power and inertia constant equal (or close) to those of the controlled wind turbine.
- Since the MPPT control loop uses a PI controller which cancels a nonzero stationary value of the signal C_ad, the new control algorithm does not use derivation of the frequency (deviation). Instead, a lead compensator is used in the new control algorithm.






















II. MODELS OF GENERATING UNITS USED IN FREQUENCY
STABILITY STUDIES
A simplified generic block diagram of a conventional generating unit is presented in Fig. 5. By varying its parameters, this block diagram can be used to simulate the dynamic response of any conventional generating unit (including a GUNRST) participating in primary frequency control [1]. In Fig. 5, G_g is the governor transfer function, G_tdc is the transfer function of the transient droop compensation block (which exists only in the hydraulic generating unit), G_t is the turbine transfer function, H is the inertia constant of the rotating parts of the generating unit, R is the droop constant, and D models the variation of load with grid frequency variation. Inertia constants of conventional generators are in the 2-9 s range [13]. Typical inertia constants of wind generators are in the 2-6 s range [14]. A typical frequency response to a step of load increase ΔP_e at time t = 1 s is presented in Fig. 6. Neglecting the effect of load variation (D = 0), from Fig. 5 it can be seen that

ΔP_a = ΔP_m − ΔP_e,  (1)


Fig. 3. Control loops for wind unit active power with inertia and droop emulation. P_meas - measured active power of wind unit, ω - grid frequency, Δω - grid frequency deviation, C_pi - control signal from the PI controller, C_ad - additional control signal for inertia and droop emulation.
Fig. 5. Generic block diagram of a conventional power unit used in frequency stability studies.
Fig. 4. Proposed controller of VSWG active power with inertia and droop emulation.



where ΔP_e is the electrical power of the generating unit,

ΔP_m = −(1/R) G_g G_t Δω

is the turbine mechanical power, and

ΔP_a = 2Hs Δω

is the accelerating (inertial) power. Accordingly, ΔP_e can be expressed as

ΔP_e = −2Hs Δω − (1/R) G_g G_t Δω.  (2)
If the model of the generating unit is known, it is possible to determine (approximate) the active power disturbance by using (2) and by measuring Δω. In (2), however, an ideal derivation of the frequency deviation is performed. An ideal derivation of a signal cannot be achieved, so the term 2Hs Δω in (2) should be replaced by a filtered derivation of the frequency deviation. Accordingly, the expression for the approximation of the electrical power is

ΔP_e ≈ −(2Hs/(τ_B s + 1)) Δω − (1/R) G_g G_t Δω = G_i Δω + G_m Δω,  (3)
where τ_B is the time constant of the first order lag (low pass filter) 1/(τ_B s + 1), and ω_B = 1/τ_B is its bandwidth. The bandwidth ω_B should be large enough to preserve the useful information in the signal Δω. At the same time, the bandwidth must be limited from above to filter the noise present in Δω. Considering the frequency spectrum of the signal Δω from Fig. 6, an upper bound of the frequency spectrum can be chosen as ω_B = 15 ≈ 2.5 · 2π rad/s, or even lower.
This value can be changed depending on the characteristics of the considered EPS. By measuring the frequency and using (3) it is possible to determine (approximate) the active power disturbance, as presented in Fig. 7. It is clear from Fig. 7 that quite a good approximation of the active power disturbance (a unit step of active load demand at time t = 1 s) is achieved. Increasing ω_B increases the approximation accuracy. The same figure shows that the inertial power time constant is much lower than the mechanical power time constant. Accordingly, in the first moments after a frequency disturbance, the missing active power in the EPS is almost completely supplied by the decreasing kinetic energy of the rotating machines.
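To illustrate the disturbance estimate (3), the sketch below builds G_i and G_m with scipy and applies them to a synthetic frequency-deviation trace. The parameter values (H = 5 s, R = 0.04, τ_g = 0.2 s, τ_t = 0.3 s, ω_B = 15 rad/s) are taken from the text; the input signal is hypothetical and the code only illustrates the filtered-derivative idea, not the authors' Matlab implementation:

```python
import numpy as np
from scipy import signal

H, R, tau_g, tau_t, tau_B = 5.0, 0.04, 0.2, 0.3, 1.0 / 15.0   # assumed values from the text

# G_i(s) = -2Hs/(tau_B s + 1): filtered-derivative (inertial) part of (3)
G_i = signal.TransferFunction([-2.0 * H, 0.0], [tau_B, 1.0])
# G_m(s) = -(1/R) Gg Gt = -(1/R) / ((tau_g s + 1)(tau_t s + 1)): governor/turbine part
G_m = signal.TransferFunction([-1.0 / R],
                              np.polymul([tau_g, 1.0], [tau_t, 1.0]))

# Synthetic frequency-deviation trace standing in for a measured one (per unit).
t = np.linspace(0.0, 30.0, 3001)
d_omega = -0.01 * (1.0 - np.exp(-t / 3.0))      # illustrative dip, not the paper's Fig. 6

_, p_inertial, _ = signal.lsim(G_i, d_omega, t)
_, p_mech, _ = signal.lsim(G_m, d_omega, t)
p_e_estimate = p_inertial + p_mech              # right hand side of (3)
```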
III. A NOVEL APPROACH OF VSWG CONTROL FOR
FREQUENCY REGULATION
A novel approach to controlling the VSWG electrical power in order to support grid frequency regulation during the transient process after a sudden active power disturbance is presented in this paper. The basic idea is to force the VSWG to behave as closely as possible to a reference GUNRST within the first few seconds (0-5 seconds) after a frequency disturbance. This approach ensures that during the initial phase of frequency transients the EPS with the VSWG behaves exactly as an EPS with a GUNRST connected at the same point as the VSWG. For the purpose of simplifying analysis and synthesis, this paper assumes that the reference GUNRST has the same inertia constant and nominal active power as the VSWG.
power as VSWG.
Since VSWG that operates at MPPT have no spinning
reserve, it is not possible to exactly realize the control law
given by (3). The first term in (3) (inertial power) converge to
zero as time goes to infinity since it includes s in its
numerator. The second term in (3) converges to
( ) [ ]
ss g
G G dcgain R 1 as time goes to infinity, where
ss
is
steady state frequency deviation. If ( ) 1 = G G dcgain
g
then
steady state of the second term in (3) converges to R
ss
. To
solve this problem a term that has the same static gain as term

t g
G G
R
1
is added to the right hand side of (3). This term
should not significantly change behaviour of (3) in frequency
transient process but it must force right hand side of (3) to
converge to zero with a desirable time constant as time
increases to infinity. The following additional term is
proposed in this paper

G_ad = (1/(R(τs + 1))) G_g G_t.  (4)
Accordingly, the new control law becomes
Fig. 7. Inertial power (dashed line), mechanical power (dash dotted line)
and electrical power (solid line).
Fig. 6. Frequency response to a unit step of active power
disturbance.



ΔP_e = −(2Hs/(τ_B s + 1)) Δω − (1/R) G_g G_t Δω + (1/(R(τs + 1))) G_g G_t Δω
     = −(2Hs/(τ_B s + 1)) Δω − (1/R)(τs/(τs + 1)) G_g G_t Δω
     = G_i Δω + G_mm Δω.  (5)
Equation (5) indicates that the second term G_mm of the proposed control law also converges to zero, since it has s in its numerator. The second term G_mm in (5) differs from the second term G_m in (3) by the dipole

G_d = τs/(τs + 1).  (6)
This dipole has the pole s_p = −1/τ. If τ is chosen as a sufficiently large positive number, then the pole of dipole (6) is very close to its zero s_z = 0. Multiplying a transfer function by a dipole only slightly changes the response of the transfer function in the transient process. At the same time, the behaviour of the transfer function at steady state can be substantially changed. The responses of G_m and G_mm to a unit step signal are presented in Fig. 8.
The transfer functions of governor and turbine are chosen as

G_g = 1/(τ_g s + 1),   G_t = 1/(τ_t s + 1).  (7)
The simulation is performed with governor time constant τ_g = 0.2, turbine time constant τ_t = 0.3, and droop constant R = 0.04 [1]. As the expected duration of the overall transient process in the EPS is about 30 seconds, it follows that an appropriate value of the time constant τ is τ = 30 − τ_g − τ_t = 29.5 s. Fig. 8 shows that there is little difference between the responses of G_m and G_mm within the first several seconds. This was the ultimate goal of the modification of the second term in control law (3).
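The comparison of Fig. 8 can be reproduced in outline with scipy, using the values quoted above (τ_g = 0.2 s, τ_t = 0.3 s, R = 0.04, τ = 29.5 s); this is a sketch under those assumptions, not the original simulation:

```python
import numpy as np
from scipy import signal

tau_g, tau_t, R, tau = 0.2, 0.3, 0.04, 29.5          # values used in the text

gg_gt_den = np.polymul([tau_g, 1.0], [tau_t, 1.0])    # (tau_g s + 1)(tau_t s + 1)

# G_m(s)  = -(1/R) Gg Gt                      (second term of (3))
G_m = signal.TransferFunction([-1.0 / R], gg_gt_den)
# G_mm(s) = -(1/R) tau s/(tau s + 1) Gg Gt    (second term of (5))
G_mm = signal.TransferFunction([-tau / R, 0.0],
                               np.polymul([tau, 1.0], gg_gt_den))

t = np.linspace(0.0, 60.0, 6001)
_, y_m = signal.step(G_m, T=t)
_, y_mm = signal.step(G_mm, T=t)
# Within the first few seconds y_m and y_mm nearly coincide; y_mm then decays back to zero.
```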
A published block diagram of a DFIG that operates at MPPT is shown in Fig. 9 [6], [15], [16], and [17]. A similar control structure is used for FRCWT [18]. This model is today known as the generic model of VSWG active power control [17]. The block diagram in Fig. 9 is nonlinear, which complicates its analysis and synthesis. A method for linearization of the block diagram shown in Fig. 9 is presented below.
A typical value of the time constant T_con is 0.02 seconds (such as in [6]). In comparison to the duration of the frequency transient process this time constant can be neglected. In order to obtain a linear model, the corresponding limiter is neglected as well. A typical value of the time constant T_f is 60 seconds (such as in [17]). For this reason, for the purposes of controller synthesis (but not in the simulation), the reference rotational speed of the VSWG, ω_ref, can be kept constant during the entire frequency transient process. Considering all the above, a simplified block diagram of the VSWG control system of Fig. 9 is shown in Fig. 10. In Fig. 10, ω_ref0 represents the optimal VSWG rotational speed before the frequency disturbance. This block diagram is still nonlinear, due to the multiplication and division of signals. The VSWG operates at MPPT, thus at the pre-disturbance steady state the VSWG speed is optimal. In the vicinity of the optimal VSWG speed, the characteristic of tip speed ratio (λ) versus captured mechanical power (P_m) is rather flat in comparison to other regions of this characteristic. Since the frequency transient process is relatively short, it is expected that the wind speed will not change considerably during this process. Consequently, the captured mechanical power of the VSWG can be regarded as constant during the frequency transient process. If too large a variation of the VSWG speed is not expected, then, for analysis and synthesis purposes, the VSWG speed signal entering the multiplication and division blocks can be regarded as constant, that is ω_w = ω_ref0 = const. If, instead of the signals themselves, their variations around the equilibrium point are considered, then the linear model of the system presented in Fig. 11 can be developed. A new control signal is added after the multiplication of the signal leaving the PI controller by the actual generator speed. The block with transfer function G_1 in Fig. 11 represents the novel control algorithm. The input to this block is the grid frequency deviation Δω. In order to simplify further analysis it is assumed that H_w = H.
























Fig. 8. Step response of G_m (dashed line) and G_mm (solid line).
Fig. 9. Block diagram of the control system of a VSWG (adopted from [6], [15], [16], and [17]).
























Fig. 11 shows that the electrical power injected by the VSWG into the EPS after a frequency disturbance is given by

ΔP_e = −ΔP_a.  (8)

The transfer function G_1 in Fig. 11 should be chosen so that the injected electrical power ΔP_e conforms to control law (5). In Fig. 11 the symbol Δω represents the deviation of the grid frequency. The transfer function from Δω to ΔP_e is defined as

ΔP_e = (G_1/Φ) Δω,  (9)

where Φ represents the characteristic function of the system, given by

Φ = 1 + K(1 + 1/(T_i s)) ω_ref0 (1/ω_ref0)(1/(2Hs)) = (2Hs² + Ks + K/T_i)/(2Hs²).  (10)
Substituting (10) into (9) gives

ΔP_e = −ΔP_a = (2Hs² G_1)/(2Hs² + Ks + K/T_i) Δω = (s² G_1)/(s² + (K/(2H))s + K/(2HT_i)) Δω.  (11)
The poles of transfer function (11) are

s_{1,2} = −K/(4H) ± (1/2)√(K²/(4H²) − 2K/(HT_i)).  (12)
If the requirement on the VSWG control system is to achieve the fastest response to a step of ω_ref without overshoot, then the parameters K and T_i of the PI controller should be chosen so that the poles (12) are real, negative and equal (the transfer function (11) must have a double negative pole). The pole of the transfer function (11) is double and equals

s_{1,2} = −K/(4H),  (13)

if the following equation is valid:

T_i = 8H/K.  (14)
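A quick numerical check of (13)-(14), using the values H = 5 and K = 2 that are later used in the simulation (an illustrative sketch only):

```python
import numpy as np

H, K = 5.0, 2.0
T_i = 8.0 * H / K                        # condition (14) for a real double pole -> 20 s
# Closed-loop characteristic polynomial from (11): s^2 + (K/2H) s + K/(2 H T_i)
poles = np.roots([1.0, K / (2.0 * H), K / (2.0 * H * T_i)])
print(T_i, poles)                         # 20.0, both poles at -K/(4H) = -0.1
```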
Due to the integral part of the PI controller, the VSWG speed equals the reference speed in steady state. From (11) it follows that

ΔP_e = (s² G_1)/(s² + (K/(2H))s + K/(2HT_i)) Δω = (s² G_1)/((s + K/(4H))²) Δω.  (15)
In addition, from Fig. 11,

Δω_w = −(1/(2Hs ω_ref0))(G_1/Φ) Δω = −(1/(2H ω_ref0))(s G_1)/(s² + (K/(2H))s + K/(2HT_i)) Δω.  (16)
From (16) it follows that there is an inverse proportionality between the VSWG speed deviation Δω_w and the initial rotational speed ω_ref0. From (15) it follows that ΔP_e diminishes in steady state due to the term s² in the numerator of the transfer function. In addition, Δω_w also goes to zero as time passes if G_1 has no pole at the origin of the s plane (that is, there is no s in the denominator of G_1); this is due to the s in the numerator of the expression on the right hand side of (16). This way it is ensured that the VSWG speed and the electrical power of the VSWG restore, at steady state, to their values before the frequency disturbance. At the end of the transient process, the steady state frequency deviation becomes Δω_ss. From the above considerations we can also conclude that, in order to achieve the specified goals, the term s (ideal derivation) in the numerator of G_1 is unnecessary. This is a qualitatively new fact in comparison to the published papers, since they present controller transfer functions that include s in the numerator [4], [5], [7], [9] and [11].
In order for the VSWG to emulate the reference-GUNRST model, it is necessary that the expression on the right hand side of (15) equals the expression on the right hand side of (5). This could be achieved only if G_1 includes an integrator 1/s. In that case, however, according to the previous considerations, the VSWG speed would not be restored to its value before the frequency disturbance. Because of this it is necessary to modify the reference model (5). This can be done by multiplying the transfer function of the reference model by a dipole that includes s in its numerator, without affecting the reference model (5) during the transient process. Then the new reference model is

ΔP_e2 = (τ_1 s/(τ_1 s + 1))[−(2Hs/(τ_B s + 1)) − (1/R)(τs/(τs + 1)) G_g G_t] Δω = (τ_1 s/(τ_1 s + 1)) G_r Δω,  (17)

where G_r = −(2Hs/(τ_B s + 1)) − (τs/(R(τs + 1))) G_g G_t.
The time constant τ_1 should be selected large enough to prevent

Fig. 10. Simplified block diagram of the control system of the VSWG.
Fig. 11. Linearized control structure of the VSWG for frequency control support from Fig. 10.


















changes of the new model's transient behaviour in comparison to the old one. In this paper the chosen value is τ_1 = 30 s. Responses to a step input of the ideal reference model (3) and of model (17) are shown in Fig. 12. From Fig. 12 we can see that there is no substantial difference between the two responses in the first few seconds. To make the right hand side of (15) equal to the right hand side of (17), the expression for G_1 must be

G_1 = ((s + K/(4H))²/s²)(τ_1 s/(τ_1 s + 1)) G_r
    = −(2Hτ_1 (s + K/(4H))²)/((τ_1 s + 1)(τ_B s + 1)) − (τ_1 τ (s + K/(4H))²)/(R(τ_1 s + 1)(τs + 1)) G_g G_t.  (18)
For the values H = 5 and K = 1, the first term on the right hand side of (18) has a double zero z_{1,2} = −K/(4H) = −0.05 and poles p_1 = −1/τ_1 = −0.033 and p_2 = −1/τ_B = −ω_B = −15. The zero at −0.05 and the pole at −0.033 are quite slow (and very close to each other) in comparison to the second pole. For this reason they can be cancelled, so the first part of the transfer function can be simplified to
G_ii = −(2H(s + K/(4H)))/(τ_B s + 1) = −K_1 (4H/K)(s + K/(4H))(ω_B/(s + ω_B)),  (19)
where K_1 = K/2. Regarding the position of its zero and pole, the transfer function G_ii represents the transfer function of a lead compensator. The second term on the right hand side of (18) has a double zero z_{1,2} = −K/(4H) = −0.05 and poles p_1 = −1/τ_1 = −0.033 and p_2 ≈ p_1. Since the poles and zeros of the second term in (18) are very close, for the purposes of transient analysis they can be cancelled as well. This way the transfer function of the second term in (18) can be simplified to

G_mm = −(1/R) G_g G_t = −K_2 G_g G_t.  (20)
Accordingly, the transfer function G_1 becomes

G_1 = −K_1 (4H/K)(s + K/(4H))/(τ_B s + 1) − K_2 G_g G_t.  (21)
The block diagram of the final control law given by (21) is presented in Fig. 4.
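A sketch of how the final controller (21) can be assembled numerically (scipy, with the simulation values H = 5, K = 2, K_1 = K/2, K_2 = 32, ω_B = 15 rad/s, τ_g = 0.2 s, τ_t = 0.3 s); it only illustrates the structure of G_1, not the authors' Simulink model:

```python
import numpy as np
from scipy import signal

H, K, omega_B = 5.0, 2.0, 15.0           # values quoted in the simulation section
K1, K2 = K / 2.0, 32.0
tau_B, tau_g, tau_t = 1.0 / omega_B, 0.2, 0.3

# Lead-compensator term of (21): -K1 (4H/K)(s + K/(4H)) / (tau_B s + 1)
gain = -K1 * 4.0 * H / K
lead_num = [gain, gain * K / (4.0 * H)]
lead_den = [tau_B, 1.0]

# Droop term of (21): -K2 Gg Gt = -K2 / ((tau_g s + 1)(tau_t s + 1))
droop_num = [-K2]
droop_den = np.polymul([tau_g, 1.0], [tau_t, 1.0])

# G1 = lead + droop, combined over a common denominator.
g1_num = np.polyadd(np.polymul(lead_num, droop_den), np.polymul(droop_num, lead_den))
g1_den = np.polymul(lead_den, droop_den)
G1 = signal.TransferFunction(g1_num, g1_den)

# The response of G1 to a (negative) frequency deviation is the extra power command.
t = np.linspace(0.0, 10.0, 1001)
_, c_ad, _ = signal.lsim(G1, -0.005 * np.ones_like(t), t)
```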
IV. SIMULATION RESULTS
Simulation was performed using Matlab and Simulink. A simple electrical power system consisting of two areas, each represented by an equivalent generating unit with a reheat steam turbine of nominal active power P_n1 = P_n2 = 400 MW (see Fig. 13), is used for the simulation. The areas are connected by a tie line with synchronizing torque coefficient (transmission line constant) T = 4. In the first scenario, a reference-GUNRST of nominal active power P_nr = 200 MW is connected to the bus of the upper generating unit in Fig. 13. In the second scenario, a VSWG of nominal active power P_nw = 200 MW is connected instead of the reference-GUNRST. The transfer functions of the governor and turbine of the generating unit with a reheat steam turbine are as follows [1]:

G_g1 = G_g2 = 1/(T_G s + 1),  (22)

G_t1 = G_t2 = (F_HP T_RH s + 1)/((T_CH s + 1)(T_RH s + 1)).  (23)
The transfer function of the reference-GUNRST governor is the same as transfer function (22). The transfer function of the turbine of the reference-GUNRST is

G_tr = 1/(T_CH s + 1).  (24)
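For reference, the governor and reheat-turbine models (22)-(23) can be written down directly with the Appendix parameter values; the sketch below (scipy, assumed values T_G = 0.2 s, T_CH = 0.3 s, T_RH = 7 s, F_HP = 0.3) only reproduces the transfer functions, not the full two-area simulation:

```python
import numpy as np
from scipy import signal

# Governor (22) and reheat-turbine (23) transfer functions with the Appendix values.
T_G, T_CH, T_RH, F_HP = 0.2, 0.3, 7.0, 0.3

G_g = signal.TransferFunction([1.0], [T_G, 1.0])
G_t = signal.TransferFunction([F_HP * T_RH, 1.0],
                              np.polymul([T_CH, 1.0], [T_RH, 1.0]))

# Combined governor-turbine step response (mechanical power for a unit valve step).
t = np.linspace(0.0, 30.0, 3001)
series = signal.TransferFunction(np.polymul(G_g.num, G_t.num),
                                 np.polymul(G_g.den, G_t.den))
_, p_m = signal.step(series, T=t)   # slow rise dominated by the reheat time constant
```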

Fig. 12. Time responses of ΔP_e (solid line) and ΔP_e2 (dashed line) to a step of Δω.
Fig. 13. Model of the two-area interconnected system with frequency support.



Values of the parameters in (22), (23), and (24) are given in the Appendix. The VSWG mathematical model is identical to that presented in Fig. 9, with the new control signal added as shown in Fig. 11. Saturation of signals is not considered. Frequency deviations following a step disturbance of active power Pdist = 50 MW (5% of the EPS nominal active power), applied on the bus of the first generating unit, are presented in Fig. 14. Parameters of the VSWG controller used in the simulation are K = 2, Ti = 8H/K = 20 s, K1 = K/2, and K2 = 32. The value of parameter K2 is close to the value 1/R = 25. Fig. 14 indicates that the same ROCOF and frequency nadir are achieved by using the reference-GUNRST unit and the VSWG. Since VSWGs that operate at MPPT have no spinning reserve, the SSFD has a larger value in the case of using a VSWG than in the case of using the reference-GUNRST.
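To make the ROCOF and frequency-nadir quantities in Fig. 14 concrete, the sketch below integrates a single-area approximation of one generating unit (governor (22), reheat turbine (23) and the swing equation) with the parameter values given in the Appendix. It is only a simplified stand-in for the paper's two-area Simulink model; the 0.05 p.u. step mimics the 50 MW disturbance.

# Minimal single-area sketch (not the paper's two-area model): governor (22),
# reheat turbine (23) and the swing equation, integrated with explicit Euler,
# used to read off ROCOF and frequency nadir after a load step.
import numpy as np

T_G, T_CH, T_RH = 0.2, 0.3, 7.0      # governor and turbine time constants [s]
F_HP, R, H, D = 0.3, 0.04, 5.0, 1.0   # HP fraction, droop, inertia, load damping
f_base = 50.0                         # nominal grid frequency [Hz]
dP_dist = 0.05                        # load step, p.u. (50 MW on the 400 MW base)

dt, t_end = 0.001, 30.0
n = int(t_end / dt)
x_g = x_ch = x_rh = df = 0.0          # governor, turbine states, freq. deviation [p.u.]
freq = np.zeros(n)

for k in range(n):
    # Governor (22): first-order lag driven by the droop signal -df/R
    dx_g = (-df / R - x_g) / T_G
    # Reheat turbine (23) split as 1/(T_CH s+1) in series with F_HP + (1-F_HP)/(T_RH s+1)
    dx_ch = (x_g - x_ch) / T_CH
    dx_rh = (x_ch - x_rh) / T_RH
    dP_m = F_HP * x_ch + (1.0 - F_HP) * x_rh
    # Swing equation: 2H d(df)/dt = dP_m - dP_dist - D*df
    ddf = (dP_m - dP_dist - D * df) / (2.0 * H)
    x_g, x_ch, x_rh, df = x_g + dt * dx_g, x_ch + dt * dx_ch, x_rh + dt * dx_rh, df + dt * ddf
    freq[k] = df * f_base             # frequency deviation in Hz

rocof = (freq[1] - freq[0]) / dt      # initial rate of change of frequency [Hz/s]
print(f"initial ROCOF ~ {rocof:.3f} Hz/s, nadir ~ {freq.min():.3f} Hz")

With these values the initial ROCOF equals −ΔPdist·fbase/(2H) = −0.25 Hz/s; in the two-area study the proposed VSWG controller is tuned so that this initial slope and the resulting nadir match those obtained with the reference-GUNRST.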
V. CONCLUSIONS AND FUTURE WORK
Due to the increasing share of VSWGs in electrical energy generation, the frequency dynamic response of the EPS is worsening. In order to reverse this trend, VSWGs should replace conventional generating units of the EPS not only in the stationary state; VSWGs must contribute to primary frequency control as well. Using the proposed control algorithm, it is ensured that in the first seconds after an active power disturbance the VSWG behaves as the conventional generating unit with the fastest response (GUNRST). Further research in this area should be directed to the application of the proposed control algorithm to a realistic EPS and to the combination of the proposed control algorithm with deloading control. Coordination of primary frequency control systems at the wind farm and EPS levels should also be researched. Finally, nonlinear analysis and synthesis of the VSWG control system could be an interesting area for further research.
APPENDIX
Parameters used in simulations.
Main gate servomotor constant: TG = 0.2 s, reheat time constant: TRH = 7 s, charging time constant: TCH = 0.3 s, droop constant: R = 0.04, fraction of HP turbine power: FHP = 0.3, inertia constant: H = H1 = H2 = Hw = 5, load damping: D1 = D2 = 1, synchronizing torque coefficient (transmission line constant): T = 4, active power base: Pbase = Pn1 = Pn2 = 400 MW, nominal grid frequency: fbase = 50 Hz. The equivalent inertia constant of the generating unit in area 1 is calculated as Heq = H1 + c·H·(Pnr/Pbase), where c = 0 in the case that the VSWG is connected to the bus of area 1; in this case Heq = H1 = 5. In the case that the GUNRST is connected to the bus of area 1, c = 1 and Heq = 7.5.
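A few lines of Python reproduce both quoted values of the equivalent inertia.

# Equivalent inertia of area 1: H_eq = H1 + c*H*(P_nr/P_base), c = 0 (VSWG) or 1 (GUNRST)
H1, H, P_nr, P_base = 5.0, 5.0, 200.0, 400.0
for c, label in ((0, "VSWG connected"), (1, "reference-GUNRST connected")):
    print(label, "H_eq =", H1 + c * H * (P_nr / P_base))   # prints 5.0 and 7.5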
REFERENCES
[1] P. Kundur, Power System Stability and Control. New York: McGraw-Hill, 1993, ch. 11.
[2] O. Anaya-Lara, N. Jenkins, J. Ekanayake, P. Cartwright, and M. Hughes, Wind Energy Generation Modeling and Control. West Sussex, U.K.: Wiley, 2009.
[3] F. M. Hughes, O. Anaya-Lara, N. Jenkins, and G. Strbac, "Control of DFIG-based wind generation for power network support," IEEE Trans. Power Systems, vol. 20, no. 4, pp. 1958-1966, Nov. 2005.
[4] J. Morren, S. W. H. de Haan, W. L. Kling, and J. A. Ferreira, "Wind turbines emulating inertia and supporting primary frequency control," IEEE Trans. Power Systems, vol. 21, no. 1, pp. 433-434, Feb. 2006.
[5] G. Ramtharan, J. B. Ekanayake, and N. Jenkins, "Frequency support from doubly fed induction generator wind turbines," IET Renewable Power Generation, vol. 1, pp. 3-9, March 2007.
[6] N. R. Ullah, T. Thiringer, and D. Karlsson, "Temporary primary frequency control support by variable speed wind turbines - potential and applications," IEEE Trans. Power Systems, vol. 23, no. 2, pp. 601-612, May 2008.
[7] J. F. Conroy and R. Watson, "Frequency response capability of full converter wind turbine generators in comparison to conventional generation," IEEE Trans. Power Systems, vol. 23, no. 2, pp. 649-656, May 2008.
[8] P.-K. Keung, L. Li, H. Banakar, and B. T. Ooi, "Kinetic energy of wind-turbine generators for system frequency support," IEEE Trans. Power Systems, vol. 24, no. 1, pp. 279-287, Feb. 2009.
[9] J. Duval and B. Meyer, "Frequency behavior of grid with high penetration rate of wind generation," in PowerTech 2009 Conf., Bucharest, Romania, June 28 - July 2, 2009, pp. 1-6.
[10] M. Akbari and S. M. Madani, "Participation of DFIG based wind turbines in improving short term frequency regulation," in Proc. Electrical Engineering (ICEE) 18th Iranian Conf., Isfahan, Iran, May 11-13, 2010, pp. 874-879.
[11] J. M. Mauricio, A. Marano, A. Gomez-Exposito, and J. L. M. Ramos, "Frequency regulation contribution through variable-speed wind energy conversion systems," IEEE Trans. Power Systems, vol. 24, no. 1, pp. 173-180, Feb. 2009.
[12] Y.-Z. Sun, Z.-S. Zhang, G.-J. Li, and J. Lin, "Review on frequency control of power systems with wind power penetration," in Proc. Power System Technology (POWERCON), 2010 International Conf., Hangzhou, China, Oct. 24-28, 2010, pp. 1-8.
[13] J. J. Grainger and W. D. Stevenson, Power System Analysis. New York: McGraw-Hill, 1994.
[14] H. Knudsen and J. N. Nielsen, "Introduction to the modeling of wind turbines," in Wind Power in Power Systems, T. Ackermann, Ed. Chichester, U.K.: Wiley, 2005, pp. 525-585.
[15] N. W. Miller, W. W. Price, and J. C. Sanchez-Gasca, Dynamic Modeling of GE 1.5 and 3.6 Wind Turbine-Generators, GE Power Systems Energy Consulting, Tech. Rep. Version 3.0, Oct. 2003.
[16] N. W. Miller, J. J. Sanchez-Gasca, W. W. Price, and R. W. Delmerico, "Dynamic modeling of GE 1.5 and 3.6 MW wind turbine-generators for stability simulations," in Proc. IEEE Power Eng. Soc. General Meeting, July 2003, pp. 1977-1983.
[17] T. Ackermann et al., Wind Power in Power Systems, Second Edition. John Wiley & Sons, 2012.
[18] M. Chinchilla, S. Arnaltes, and J. C. Burgos, "Control of permanent magnet generators applied to variable-speed wind-energy systems connected to the grid," IEEE Trans. Energy Conversion, vol. 21, no. 1, pp. 130-135, March 2006.
Fig. 14. Frequency response: with support by VSWG (solid line), with support by the reference-GUNRST unit (dashed line), and without support (dash-dotted line).

Diffusion dynamics of energy service companies
in the residential sector

Andra Blumberga, Dagnija Blumberga, Gatis Žogla, Claudio Rochas, Marika Rošā, Aiga Barisa

Institute of Energy Systems and Environment,
Riga Technical University
Riga, Latvia
E-mail: andra.blumberga@rtu.lv, dagnija.blumberga@rtu.lv, gatis.zogla@rtu.lv, claudio.rochas@rtu.lv,
marika.rosa@rtu.lv, aiga.barisa@rtu.lv
Abstract Energy service contracting (EPC) is one of the
private sector instruments to improve end use energy
efficiency. The dominant part of EPC projects is
implemented in the industrial, governmental and municipal
sectors while the residential sector has been less attractive to
energy service companies (ESCO). This paper describes the
first EPC project implemented in a multi apartment building
in Latvia. A combination of this project experience, the
system dynamics modelling, and microeconomics theory has
been used to develop the system dynamics model to capture
the interactions and feedback from the ESCO market for
residential building energy efficiency. Three policy tools were
tested for use in improving the performance of the system.
The purpose of the tests was to facilitate the diffusion (or
distribution) of EPC in the residential sector.
Keywords Energy service company, Energy efficiency, System
dynamics, Residential buildings, Energy service contracting
I. INTRODUCTION
Energy demand is continuously growing worldwide.
It is at the core of climate change caused by greenhouse
gas (GHG) emissions. The European Commission has adopted the "Energy Roadmap 2050" to reduce CO2 by 80-95% by 2050 [1]. One of the legislative instruments
80-95% by 2050 [1]. One of the legislative instruments
used at the EU level to achieve end-use energy efficiency
is Directive 2006/32/EC on energy end-use efficiency and
energy services. Under this Directive, every EU member
must reduce end use energy by 9% by 2016 [2].
One of the energy efficiency policy tools available to
the end user is energy service contracting or EPC provided
by energy service companies or ESCOs. EPC has a
significant effect as a private sector instrument to improve
end use energy efficiency. ESCO projects are self-
financing because the investment is paid back from the
energy cost reduction. The ESCO business differs from
the energy consulting business. The latter assesses exactly
which energy efficiency measures should be implemented
but is not responsible for achieving the forecasted savings
by the measures that it recommended. Energy service
contracting may include many different types of activities,
e.g., Sorrell [3] suggests describing energy service
contracts using three variables: depth, scope and method
of finance.
The ESCO concept was first established in North America at the beginning of the 1970s, when energy end users faced an energy crisis and were looking for ways to achieve substantial energy cost reductions. Although
ESCO is a well-developed branch of business, the
available scientific literature on it is not extensive.
However, it provides some insight into the field. Goldman
et al. [4] has carried out an empirical analysis of American
ESCO market trends and performance. Vine [5] has
analysed ESCO activities in 38 countries. Bertoldi et al.
[6,7,8] discuss ESCO diffusion in the EU market. In their
latest report Marino et al. [9] have concluded that between
2007 and 2010, the European Union ESCO market has
grown slowly. The Marino paper also mentions that latest
findings reveal that the ESCO market is very complex and
turbulent.
The complexity of the energy service contracting
market reveals an important need to recognize, understand
and manage the dynamic and, in many cases, non-linear
interactions influenced by delays between and among the
many variables that create both short and long term
impacts. System dynamics as a white box modelling
tool is a very useful device to reveal the structure of the
system as well as determine the short and long term
behaviour of complex and dynamic systems. The literature
on the system dynamics modelling of the ESCO market is
very thin. Capelo [10] has approached the ESCO market
in Portugal using system dynamics.
In this paper, a case study of the application of
ESCO to the residential sector is used to illustrate how
system dynamics modelling can address energy efficiency
issues through effective national policy making. In Section
2 the background to Latvia's residential sector energy efficiency and a description of the ESCO case study are provided. It is followed by Section 3, where the
methodological framework is outlined. Sections 4, 5 and 6
provide information on the system dynamics model to be
used to support the national energy efficiency policy
planning process. Finally, the major findings from model
simulations under different policy scenarios are compared
and analysed. Discussions and conclusions then follow.
II. BACKGROUND INFORMATION
A. Residential sector in Latvia and energy efficiency
policy
The residential sector is currently the largest energy
consumer in Latvia, accounting for 38.8% of overall
energy end-use in the country [11]. In 2010, the total
housing floor area in Latvia reached 61.1 million m² [12].
The household sector in Latvia includes multi apartment
buildings (61.5% of the total residential building stock)
and single-family buildings [12]. Located in the northern part of Europe with a cold climate (heating degree days above 4000), most of the energy consumed in the residential sector is used for heating, with an average annual consumption of 180 kWh per m².
To ensure an implementation of requirements of
European Union Directive 2006/32/EC on energy end-use
efficiency and energy services, the First National Energy
Efficiency Action Plan of Latvia [13] established a goal to
reduce energy consumption by 3483 GWh or 9% by 2016.
The greatest energy savings, that is by 77% or 2701 GWh,
are planned for the residential sector.
Although Latvia has a huge residential building
energy efficiency potential, only 200 of the more than 30
000 multi apartment buildings are completely renovated.
The main causes are rooted in the lack of motivation on
the part of apartment owners. The barriers that obstruct the
implementation of energy efficiency measures have to be
reviewed in the socioeconomic context considering that
the residents motivation is affected not only by rational
reasons such as technical issues arising from energy
efficiency measures [14] but also by a combination of
complex socioeconomic factors such as a lack of
awareness and knowledge (that is, know-how), confidence
in neighbours and persons implementing energy efficiency
measures, various administrative requirements during the
implementation process, residents worries about their
solvency and uncertainty regarding their income and
ability to undertake more financial obligations as well as
other factors.
The most challenging task for the energy efficiency
policy is to overcome the lack of collective action, which
arises from the ownership structure of multi apartment
buildings. Apartments are owned by individual occupants.
Implementation of common energy efficiency measures in
the building may be performed with the agreement of at
least 51% of apartment owners in the building.
ESCO removes most of the barriers that impede
implementation of energy efficiency measures. If an
energy service contract is signed with an ESCO, the
company guarantees the quality of the construction work.
It also assumes all financial risks and absorbs any risk
related to administrative barriers.
B. ESCO to increase residential energy efficiency
Most ESCO projects are implemented in the
industrial and governmental/municipal sectors. The
residential sector has been less attractive to ESCOs. Multi
apartment buildings are among the major energy
consumers with high energy savings potential, yet ESCOs
have not been active in this market sector. The literature is
very thin on the explanation of the reasons. The multi
apartment building sector faces similar problems in all
East European countries. This results from the historical
heritage left from Soviet and East Block times. There are
several barriers for ESCOs that are considering entering
this market sector. These include collective action
problems arising from the ownership structure (every flat
in the building is owned by a different owner), general
distrust, high administrative barriers, and a perception that
construction work will be poor quality.
2009 represented a turning point for residential
energy efficiency projects in Latvia and Europe. The
energy service company RENESCO implemented energy
efficiency project in multi apartment building at Gaujas
street 13 in Valmiera. The nine storey building with a total
heated area 1914 m
2
and 36 apartments was built in 1980.
Its average annual heating and hot water energy
consumption is 406 MWh or 214 kWh/m
2
per year. The
building envelope, heating and hot water supply systems
are in poor technical condition and do not meet existing
building standards. Water was found to leak through the
roof, and much energy was lost through walls and
windows. Two thirds of the thermal energy in the hot
water loop was lost in the circulation system, and only one
third was actually used for water heating. The simulation
model showed that the potential energy savings amount to
47% or 192 MWh per year.
It took six months for RENESCO to persuade
apartment owners to sign an energy performance contract
for 20 years for the implementation of both energy
efficiency and non-energy efficiency measures. Energy
efficiency measures taken include insulation of the walls
with 10 cm of mineral wool, the attic with 20 cm of
mineral wool, the basement ceiling with 10 cm of
polystyrene, as well as the installation of new windows
and main entrance doors, the reconstruction of the hot
water system and the installation of an energy monitoring
system in the building. Non-energy efficiency measures
such as the renovation of the roof, staircases and
balconies, the installation of a new water pump station,
and the replacement of the cold water supply system have
been implemented to avert serious technical problems and
to improve the general appearance of the building. The
total investment for both energy efficiency and non-energy efficiency measures amounted to approximately 170,000 EUR.
The financing mechanism of the ESCOs is generally
classified as guaranteed savings and shared savings
(Bertoldi et al., 2006). As discussed in Bertoldi et al. [7,8]
in countries with developing ESCO markets, the shared
savings mechanism is more suitable since it does not
require clients to assume investment repayment risk. The
adjusted shared savings mechanism was used in the first
RENESCO energy performance contract and is illustrated
in Fig. 1. The company carries both the performance and
the credit risk while the client has no financial risk. The
client has no financial obligation other than to pay for the
energy consumption to the housing management company
as in the baseline, including the actual savings to the
ESCO over a specified period of time. This obligation is
not considered debt and does not appear on the customer's
balance sheet.
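As a rough illustration of how a shared-savings contract of this kind pays back, the short Python sketch below combines the figures reported for the Gaujas 13 project (about 170,000 EUR investment, a 406 MWh baseline, roughly 56.7% measured savings and a 20-year contract) with an assumed heat tariff and discount rate; the tariff and rate are illustrative assumptions, not values taken from the paper.

# Back-of-the-envelope shared-savings cash flow for an ESCO project.
# Figures from the case study: investment, baseline consumption, savings share,
# contract length. The heat tariff and discount rate below are assumptions.
investment_eur = 170_000.0
baseline_mwh = 406.0
savings_share = 0.567          # measured average savings over the first two years
contract_years = 20
tariff_eur_per_mwh = 60.0      # assumed heat tariff (illustrative)
discount_rate = 0.05           # assumed cost of capital (illustrative)

annual_savings_eur = baseline_mwh * savings_share * tariff_eur_per_mwh
npv = -investment_eur + sum(
    annual_savings_eur / (1.0 + discount_rate) ** year
    for year in range(1, contract_years + 1)
)
simple_payback = investment_eur / annual_savings_eur
print(f"annual savings paid to the ESCO: {annual_savings_eur:,.0f} EUR")
print(f"simple payback: {simple_payback:.1f} years, 20-year NPV: {npv:,.0f} EUR")

Under these assumptions the investment is recovered only over a horizon comparable to the contract length, which is consistent with the sensitivity of the ESCO business to the contract period discussed in Section V.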

Fig. 1. RENESCO adjusted shared savings mechanism
The monitoring system is installed to follow the
energy performance of the building. Hourly data from
every flat are collected and analysed to improve energy
efficiency even more. Fig. 2. illustrates measured energy
consumption before the implementation of energy
efficiency measures, the results from the simulation model
and real values measured after implementation of energy
efficiency measures. The monitoring data show that the
average energy savings during the first two years are 56.7%. The annual reduction of CO2 emissions is 64.9 t CO2. The total energy savings during the duration of the contract are planned to reach 4150 MWh, with a corresponding reduction of CO2 emissions of 1100 t CO2.


Fig. 2. Heat energy (heating and hot water) consumption in Gaujas 13, Valmiera [15].
The RENESCO project provided additional value
both to apartment owners and to society in general.
Firstly, RENESCO has proved that an ESCO can remove a number of the serious barriers facing energy efficiency projects by assuming either all or most of the technical, administrative, financing, construction quality, financial and credit risks. Secondly, apartment owners highly value the improved safety, indoor comfort and appearance of the building, safeguarded by a long-term energy performance agreement. The last added value is the extended lifetime of the building envelope and technical systems.
Up to March 2012, RENESCO has signed nine
more energy performance contracts with other
buildings.
III. METHODOLOGY
For this study, a combination of system dynamics
modelling, microeconomics theory and RENESCO
project experience in the residential building energy
efficiency sector has been used. The system dynamics
model is developed to capture the interactions and
feedback of the ESCO market for residential building
energy efficiency. This approach is referred to as the
white box or causal-descriptive modelling tool. For
white box models, the validity of the internal
structure of the model is essential, because the
behaviour of the system can be modified by adjusting
its structure. This model is developed using the
programme Powersim, which is designed to build non-
linear, dynamic systems with delays.
Microeconomics theory is used as analytical
framework to address research problems related to
human behaviour and energy efficiency and savings
[17]. Transaction costs as discussed by Sorrell [3] are used for the ESCO supply model.
The main purpose of the model is to disclose and reproduce the structure of the system, that is, how the ESCO market really works in the residential sector, and to use it to provide a tool for policy planning.
IV. A SYSTEM DYNAMICS MODEL
A. Overview of the conceptual model
RENESCO is a typical example of a seller's market where the company is the only market player. The demand for its services depends on the company's sales and marketing capacity. The main reasons behind RENESCO being the only player in the market are the heterogeneous profile of customers (each apartment owner has voting rights), risks arising from the complex combination of technical, financial, managerial, legal and other issues, and the business culture based on short-term agreements with low risk.
The conceptual model depicted in Fig. 3.
simplifies the ESCO market but includes important
relationships and interactions. The model is part of the
national residential energy efficiency model described
in Blumberga et al. [19]. It illustrates how the major
four sectors are related in the common structure to
show the diverse nature of the problem. The main
sector is the ESCO market where supply and demand
determines the ESCO diffusion rate in the market. It is
closely linked with the financial resources sector that
supports the growth of ESCO capacity. The building
stock sector includes all residential buildings. The
internal dynamics of this stock determines both the
ESCO diffusion in the market and the availability of
financial resources. The fourth sector indicates the national energy savings and GHG emissions as a result of changes in the building stock.


Fig. 3. Conceptual structure of building energy efficiency system dynamics model.
B. Model description
Fig. 4. depicts the stock and flow diagram of the
ESCO supply and demand sub-model. This is the main
sub-model. It shows how supply and demand
determines the diffusion rate of the ESCO services in
the market. The demand is determined by total ESCO
costs, the apartment owners willingness to pay and
information perceived by apartment owners about both
successful and unsuccessful projects implemented by
the ESCO. The ESCO costs comprise internal and
external transaction costs, investment costs, energy
costs and interest. The ESCO sales capacity depends on the ESCO profit: the higher the profit, the more money can be spent on sales and marketing activities.
Energy savings achieved by ESCO increase or decrease
depending on the project success rate, which is based
on the learning curve of ESCO. Decreased housing
maintenance costs as an additional benefit offered by a
housing maintenance company are included in the
model. The number of votes required (the number of
votes needed to decide whether to sign ESCO contract
- now 51% of apartment owners have to agree) is one
of the major barriers for ESCOs, and creates very high
transaction costs, in particular, sales and marketing
costs. This barrier has an impact on inconvenience
costs incurred by both apartment owners and the
ESCO. The higher the costs, the more financial
resources are required for sales and marketing
activities. A reduction of housing maintenance costs
increases the net benefit of every apartment owner.
Because ESCO invests in the general building
improvement measures, less housing maintenance is
required. This, in turn, increases the willingness of the
apartment owners to pay and makes the ESCO more
attractive to them. The information delay that is caused by the owners' capability to understand the information determines how net benefits affect their willingness to pay.

Fig. 4. Stock and flow diagram of ESCO supply and demand submodel.
Fig. 5. Stock and flow diagram of building stock sub-model
The building stock sub-model displayed in Fig. 5. is
disaggregated into several stocks based on the main
milestones of the energy efficiency projects. These
include potential projects or buildings without energy
efficiency measures, ESCO selected projects to be
financed, ESCO initiated projects or buildings where the
ESCO has signed contracts and both unsuccessfully and
successfully implemented projects. Projects are measured
in heated area (m²). The rate at which buildings move from one stock to another is determined by factors influencing that rate. For example, the project acceptance rate depends on demand and capacity; financing rates depend on the time needed to secure financing. The information available to and acquired by apartment owners about implemented ESCO projects forms an important reinforcing feedback loop that determines demand. The higher the number of successful projects, the higher the demand for ESCO services, and vice versa. The non-linear relationship of the learning effect determines how the ESCO improves its performance over time.
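The reinforcing word-of-mouth loop and the capacity-limited project flow described above can be imitated with a few lines of explicit Euler integration. The sketch below is a deliberately simplified stand-in for the Powersim model (stocks in m² of heated area, with assumed rate constants) and is meant only to show the stock-and-flow mechanics, not to reproduce the paper's results.

# Toy stock-and-flow model of ESCO diffusion in the residential building stock.
# Stocks: potential area, projects in progress, completed area (all in m2).
# All rate constants below are illustrative assumptions, not calibrated values.
dt, years = 0.1, 12                      # time step [yr], horizon roughly 2009-2020
steps = int(years / dt)

potential = 30_000 * 2_000.0             # untreated multi-apartment stock [m2], assumed
in_progress = 0.0
completed = 0.0

base_capacity = 5_000.0                  # sales/marketing capacity [m2 per yr], assumed
word_of_mouth = 0.15                     # extra demand per completed m2 per yr, assumed
construction_time = 2.0                  # average time from contract to completion [yr]

for _ in range(steps):
    # Demand reinforced by information about completed projects (reinforcing loop),
    # capped so the flow never exceeds the remaining potential stock
    acceptance_rate = min(base_capacity + word_of_mouth * completed, potential / dt)
    completion_rate = in_progress / construction_time   # first-order completion delay

    potential -= acceptance_rate * dt
    in_progress += (acceptance_rate - completion_rate) * dt
    completed += completion_rate * dt

print(f"after {years} yr: in progress + completed = {in_progress + completed:,.0f} m2, "
      f"completed = {completed:,.0f} m2")

In the full model the acceptance rate is additionally limited by ESCO profit, transaction costs and the available financing, which is what produces the lag between projects in progress and completed projects reported in Section VI.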

Fig. 6 illustrates the financial resources available to an ESCO dealing with the residential sector. Financing is provided by the subsidy program, the ESCO's own resources and the bank loan. The subsidy program has limited resources. The amount of the subsidy depends on the subsidy fraction and the project acceptance rate. The ESCO's equity is generated by energy savings from the implemented projects. This money is spent on the ESCO's operating costs. Any profit and any money remaining are reinvested in new projects. The bank loan is used if the project cash flow needs additional resources besides the subsidies and the ESCO's own financial sources.


Fig. 6. Stock and flow diagram of financial sources submodel.
The energy savings and GHG emissions sub-model is used to account for total energy savings at the national level as well as the reduction of GHG emissions.



V. MODEL TESTING AND VALIDATION
Data quality and availability are always key concerns
for all modelling exercises. Barlas [17] provides an
explanation that the validation of system dynamics models
has to be carried out rigorously, both for structural (where
an additional set of tests is needed relative to econometrics
and optimization) and behaviour validation. Structural
validation is performed before behaviour validation. In
this study, model structure verification tests were carried
out to assess both the structure and the components of the
model. These tests included a structure assessment test, a
parameter assessment test, a boundary adequacy test and a
unit consistency test. This was followed by two behaviour
tests: an extreme condition test and a behaviour
sensitivity test. The extreme condition test measured
extreme values of selected model parameters to compare
the behaviour generated by the model with the anticipated
behaviour of the real system under this value. The
behaviour sensitivity test was used to determine the parameters to which the model is highly sensitive. These
were compared with the sensitivity of the real system to
these parameters. The tests showed that the most sensitive
parameters are energy tariff, contract period, investment
costs, specific energy consumption and the reference
fraction of successful projects.
As an example to illustrate model validation, the
behaviour sensitivity test, which tests the sensitivity of the
total investment to the contract period, was performed.
The initial value is set at 20 years. In each run, the
contract period is increased or decreased by one year.
When the contract period is increased, it has no impact on
the total investment. The trend is very different if the
contract period is decreased. Fig. 7. illustrates that as the
contract period decreases below 15.4 years, the ESCO
stops its investments for a certain period of time. This gap
is needed for the ESCO to accumulate money from
savings in the implemented projects. The shorter the
contract period, the longer the time needed to accumulate
financial resources.


Fig. 7. Behaviour sensitivity test of total investments to contract period
Calibration and validation of system dynamics
models can be performed using data from field collection
and from the literature. Models which perfectly represent a
system do not exist and simulated data generate only
trends instead of accurate numbers. The historical
behaviour validation test is used to build confidence in the
model [18]. The simulated total number of projects in
progress is compared with the historical data. The results
are shown in Fig. 8. The trend of both the projects in
progress (projects in progress added to completed
projects) and the completed projects from model
simulation match the historical data.
The first ESCO project in this sector was
implemented and financed in 2009 by a subsidy program
called, Improvement of energy performance of multi
apartment buildings. There is a time delay caused by the
introduction of the ESCO concept to market. Data analysis
shows the ESCO growth fraction increasing.


Fig. 8. Historical and simulated data of accumulated ESCO projects in progress and total completed projects
VI. REFERENCE BEHAVIOUR OF THE MODEL
The reference behaviour of the model can serve as a
basis for comparison of alternative policy tools. In this
study, the main parameters characterizing the ESCO
market in the residential sector are the total area of
completed projects and the total area of projects in
progress. In addition to these, the distribution of financial
resources is reviewed. The time horizon used in the model
is from 2009 to 2020.
Fig. 9. shows the reference behaviour of both
projects in progress and total completed projects. The
number of implemented projects lags behind the number
of projects in progress. The main reason for this is the
delay caused by the time required to secure financing and
the time required for the construction work.

Fig. 9. Reference behaviour of projects in progress and total completed
projects.

VII. ALTERNATIVE SCENARIOS AND POLICY TOOLS
One of the goals of the system dynamics model is to
test different policy tools that can be used to improve the
performance of the system under study. Sorrell [3] lists policy measures that may encourage ESCO market development. Only the reinforcement of information through publicly funded information programmes and demonstrations is relevant if the market is not well balanced, as is the case with a seller's market such as Latvia's.
RENESCO proposes additional policy tools that
governments should use to reinforce the ESCO market
diffusion. These are: (1) reduce the number of votes
needed to decide whether to sign ESCO contract (now
51% of apartment owners have to agree); (2) real estate
tax reduction for buildings that have contracts with an
ESCO [15]; and (3) publicly financed information campaigns. In this study, all three policy tools proposed by RENESCO are modelled and simulated.
The proposed policy tools require either structural changes in the model or strengthening of the endogenous feedback loops. If the number of votes required to sign an ESCO contract is reduced, the inconvenience costs incurred by apartment owners are decreased, thus reducing ESCO transaction costs. Simulation is performed with the value of inconvenience costs set at 0.
A reduction of real estate taxes together with
reduction of housing maintenance costs increases the net
benefit of every apartment owner. This, in turn, increases
the willingness of the apartment owners to pay and makes
the ESCO more attractive to them. The structure of this
policy tool is illustrated in Fig. 10. The value of reduction
of real estate tax is set at 0.14 EUR/m² per year.

Fig. 10. Impact on the ESCO market of real estate tax reduction and housing maintenance cost reduction.
The last policy tool evaluated within this model is the
information campaign. It can be used to begin and
encourage the recruitment process (making people aware)
and is measured as annual floor area of recruited buildings
per year. It determines ESCO sales capacity building rate
(see Fig. 11.). The information campaign helps to boost
demand for ESCO services.

Fig. 11. Information campaign in the model.
VIII. RESULTS
The impacts of the three policy options referenced above on the ESCO market were simulated and compared with the reference behaviour.
If reduction of real estate tax is used as a policy tool, the behaviour of both the projects in progress and the total completed projects follows the same increasing trend as the reference behaviour, but reaches a higher number by 2020: 199,773 m² compared with 198,048 m². The increase is not significant. The number of votes needed for ESCO agreement (represented in the model as inconvenience costs) does not impact the number of implemented projects. The key to explaining the effect of both policy tools lies in the scale of both values: they form a very small part of total ESCO costs. These results do not confirm the assumption that the reduction of the number of votes and the increase of net benefits are an effective set of policy tools to speed up the diffusion of ESCO services in the residential sector.
Fig. 12 illustrates the behaviour of both the projects in progress and the total completed projects if an information campaign with three different levels of strength is used. In this simulation, the trend of behaviour in all three cases has not changed, while the scale has changed noticeably. For example, if the information campaign is not strong (500 m²/year/year), by the year 2020 the number of total completed projects increases to 230,673 m², compared with 198,048 m² in the reference behaviour. The financial resources show a similar trend of distribution compared with the reference behaviour. These results confirm the assumption that an information campaign is a highly effective policy tool to reinforce the diffusion of ESCO services in the residential sector.


Fig. 12. Simulation results of information campaign.
IX. DISCUSSIONS
The simulation results reveal that in the case of a supplier's market, supply by an ESCO depends on the company's capacity, the investment costs, the ESCO transaction costs, the energy tariff, the available funding, the available information, and the net benefits. The critical factors affecting the diffusion of ESCO services are the length of the contract period, and the initial values of and any increases in the energy tariff and the investment costs.
The model places emphasis on relationships rather
than on the exact numbers, especially in the case of the
proposed policy tools. They have to be interpreted in an
open-ended way since they may not represent what might
or might not happen in the real world. The model
simulation results show the strength of endogenous
feedback loops within the system, and reveal the need for
structural changes and integrated policy measures to
obtain the highest diffusion rate of ESCO services in the
residential market.
Given the present model structure, including
currently available policy tools, it is possible to simulate
the future development of the ESCO market. All
parameters are based on information supplied by
RENESCO, except for uncertainty costs. Although the
system is not sensitive to this parameter, it gives some
level of certainty to the simulation results.
X. CONCLUSION
In this study, system dynamics modelling was used
to develop a model to simulate ESCO services in the
residential market. The main feature of the system
dynamics modelling is that the structure of the system is
transparent, hence uncertainties with regards to future
changes are more easily addressed. The model structure
with its feedbacks, delays and non-linear relationships
helps to understand the supply and demand of ESCO
services as a system rather than an isolated set of
variables. Information provided by RENESCO and
microeconomics theory served as the basis for the
structure of the model. While the literature sources suggest
several policy tools to reinforce ESCO activities in the market, only three are proposed for use in Latvia's residential sector: (1) reduction of real estate tax; (2) changes in the voting structure of apartment owners; and (3) publicly financed information campaigns. Simulation results confirm that an increase of net benefits reinforced by an information campaign is a highly relevant set of policy tools; however, removal of the apartment owners' barriers does not reinforce the diffusion of ESCO services in the residential sector.
REFERENCES
[1] A Roadmap for Moving to a Competitive Low Carbon Economy in 2050, COM/2011/0112 final, Brussels, 2011.
[2] Directive 2006/32/EC of the European Parliament and of the Council of 5 April 2006 on energy end-use efficiency and energy services and repealing Council Directive 93/76/EEC, 2006.
[3] S. Sorrell, "The economics of energy service contracts," Energy Policy 35 (2007) 507-521.
[4] C.A. Goldman, N.C. Hopper, J.G. Osborn, "Review of US ESCO industry market trends: an empirical analysis of project data," Energy Policy 33 (2005) 387-405.
[5] E. Vine, "An international survey of the energy service company industry," Energy Policy 33 (2005) 691-704.
[6] P. Bertoldi, S. Rezessy, Energy Service Companies in Europe, Status Report 2005, European Commission DG Joint Research Centre, Ispra, Italy, 2005.
[7] P. Bertoldi, S. Rezessy, E. Vine, "Energy service companies in European countries: current status and a strategy to foster their development," Energy Policy 34 (2006) 1818-1832.
[8] P. Bertoldi, S. Rezessy, B. Boza-Kiss, Latest Development of Energy Service Companies across Europe, Institute for Environment and Sustainability, European Commission DG Joint Research Centre, Ispra, Italy, 2007.
[9] A. Marino, P. Bertoldi, S. Rezessy, Energy Service Companies Market in Europe, Status Report 2010, European Commission DG Joint Research Centre, Ispra, Italy, 2010.
[10] C. Capelo, "Modeling the Diffusion of Energy Performance Contracting," 29th International System Dynamics Conference, Washington, DC, 2011.
[11] Latvian Energy in Figures, Ministry of Economics of the Republic of Latvia, Riga, Latvia, 2011.
[12] Data from the Central Statistical Bureau of Latvia, database available at www.csb.gov.lv.
[13] Latvia's First National Energy Efficiency Action Plan 2008-2011, Cabinet of Ministers of the Republic of Latvia No. 266, Riga, Latvia, 2008.
[14] T. A. Kõiv, A. Hamburg, M. Thalfeldt, J. Fadejev, "Indoor Climate of an Unheated Apartment and its Impact on the Heat Consumption of Adjacent Apartments," 3rd International Conference on Urban Sustainability, Cultural Sustainability, Green Development, Green Structures and Clean Cars (USCUDAR '12), Barcelona, October 17-19, 2012, 52-58.
[15] Conference ESCO Europe 2012, January 25-26, London, 2012.
[16] V. Oikonomou, F. Becchis, L. Steg, D. Russolillo, "Energy saving and energy efficiency concepts for policy making," Energy Policy 37 (2009) 4787-4796.
[17] Y. Barlas, "Formal aspects of model validity and validation in system dynamics," System Dynamics Review 12 (1996) 183-210.
[18] J.A.M. Vennix, Group Model Building: Facilitating Team Learning Using System Dynamics, John Wiley and Sons, Chichester, 1996.
[19] A. Blumberga, G. Žogla, P. Davidsen, E. Moxnes, "Residential Energy Efficiency Policy in Latvia," Proceedings of the 29th International Conference of the System Dynamics Society, Washington, USA (2011) 35-36.

Dynamical Characteristics in Time Series Between PM10 and Wind Speed
Deok Du Kang, Dong In Lee
Dept. of Environmental Atmospheric Science
Pukyong National University, Busan 608-737
Busan, Republic of Korea

Jae-Won Jung, Kyungsik Kim*
Dept. of Physics
Pukyong National University, Busan 608-737
Busan, Republic of Korea


Abstract: We study the temporal variation characteristics of PM10 and wind velocity in eight South Korean cities. We employ the detrended cross-correlation analysis (DCCA) method to extract the overall tendency of the hourly variation. We ascertain from three-day and one-week intervals that Busan has the largest negative, while Donghae has the largest positive, DCCA cross-correlation coefficient between PM10 and wind velocity. As a result of Asian dust events, the cross-correlation is statistically significant for hourly time series data of less than two days. In particular, we discuss whether a cross-correlation is statistically significant or not using random number surrogation and shuffled time series surrogation.
Keywords: PM10; Wind speed; Asian dust; DCCA; Random number surrogation; Shuffled time series surrogation

I. INTRODUCTION
Particulate matter is composed of organic and inorganic
mixtures such as natural sea salt, soil particle, vehicle exhaust,
construction dust, and soot. Some of these particles with
aerodynamic diameters of less than 10 microns that can enter
the body's respiratory system are known as PM10 [1,2]. PM10 concentration has an effect on climate change by causing an imbalance of the global radiative equilibrium through direct effects that block the stomata of plants and cut off the solar radiation; these are distinct from the indirect effects that change the optical properties of clouds, cloudiness, and the lifespan of clouds [3,4]. Various factors contribute to the degree of PM10 concentration. Notable among these are the
meteorological factors [5].
The temporal PM10 concentration data observed in metropolitan areas are affected by the emission source, seasonal fuel usage, urban layout, commuting traffic environment, and micrometeorological change. In particular, the concentration distribution is greatly influenced by changes in temperature, wind speed, and humidity [4,6]. Jang et al. [7] analyzed the spatio-temporal occurrence period of the fluctuation of particulate matter by using a power spectrum analysis. Giri et al. [8] also examined the relationship between meteorological parameters and urban air pollutants via Pearson's correlation. In particular, Xue et al. [2] investigated the trend of PM10 concentration variations and the correlations between suspended particles and meteorological variables by using correlation analysis of time series data. However, the methods and techniques used in such studies are fundamentally based on the correlation method. They are not able to remove the specific trends present in time series data such as meteorological data, and they rest on the premise that the time series are normally distributed, when non-normality may actually be the case. Therefore, the reliability of the results for judging a correlation is lacking.
In this study, we analyze and simulate cross-correlations, along the time scale, between PM10 concentration and wind speed using the detrended cross-correlation analysis (DCCA) method [9-11] through the removal of specific trends in eight South Korean cities. We discuss the effect of meteorological factors on the fluctuation of PM10 concentration during Asian dust events and other time periods. In addition, in order to quantify whether cross-correlations are significant or not, we examine statistical cross-correlation tests and random permutations of the original data.
II. THEORETICAL METHODOLOGY
A. DCCA Method
In this section, for the purpose of simplicity, we are concerned with two time series of PM10 differences {x_i} and wind velocity differences {x'_i}, where i = 1, 2, ..., N. We then introduce the statistical quantities X_k = Σ_{i=1..k} x_i and X'_k = Σ_{i=1..k} x'_i, where k ≤ N. Next, we introduce the DCCA method, which is a generalization of the detrended fluctuation analysis method and has been implemented in two published papers [10,12]. For
two time series of equal length N, we compute the two integrated signals X_k and X'_k, where k = 1, 2, ..., N. We also divide the entire time series into N − n overlapping boxes, each containing n + 1 values. In each box that starts at i and ends at i + n, we define the local trends X_{k,i} and X'_{k,i} (i ≤ k ≤ i + n) to be the ordinates of a linear least-squares fit. The detrended walk is defined as the difference between the original walk and the local trend as well. The covariance of the residuals in each box is calculated as

f_DCCA²(n, i) = [1/(n + 1)] · Σ_{k=i..i+n} (X_k − X_{k,i})(X'_k − X'_{k,i}) .   (1)
From (1), we calculate the detrended covariance function by summing over all overlapping N − n boxes of size n as follows:
* Corresponding Author. Tel: +82-51-629-5562; Fax: +82-51-629-5549.
E-mail: kskim@pknu.ac.kr (K.Kim).


Applied Meteorology Research Lab, National Institute of
Meteorological Research, Seoul 156-720, Korea
F_DCCA²(n) = [1/(N − n)] · Σ_{i=1..N−n} f_DCCA²(n, i) ~ n^(2λ) .   (2)
Here, the exponent λ quantifies the long-range power-law cross-correlations and also identifies seasonality, but λ does not quantify the level of cross-correlations. Lastly, we find the DCCA cross-correlation coefficient and compare our result to other findings. The DCCA cross-correlation coefficient ρ_DCCA is defined as the ratio between the detrended covariance function F_DCCA²(n) and the detrended variance functions F_DFA(n) and F'_DFA(n), i.e.,

ρ_DCCA(n, T) = F_DCCA²(n) / [F_DFA(n) · F'_DFA(n)] .   (3)


From (3), the value of ρ_DCCA ranges between −1 ≤ ρ_DCCA ≤ 1, and F_DFA(n) ~ n^α and F'_DFA(n) ~ n^α' are characterized, respectively, by the detrended fluctuation analysis exponents α and α' and the box size n. Equation (3) is also dependent upon the two time series of length T. When the two variables are perfectly cross-correlated, the value of ρ_DCCA is 1. On the other hand, ρ_DCCA = −1 if the two variables are perfectly anti-cross-correlated, and ρ_DCCA = 0 corresponds to the case where there is no cross-correlation between the two variables. Furthermore, ρ_DCCA = 0 can strictly be obtained only for an infinitely long time series. For finite time series, even if cross-correlations are not present, ρ_DCCA presumably has some small nonzero value. Hence the DCCA cross-correlation coefficient can serve as an indicator of cross-correlations.
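For readers who wish to reproduce the procedure in Eqs. (1)-(3), the following sketch is a minimal NumPy implementation of ρ_DCCA with first-order polynomial detrending; the synthetic series at the end are only placeholders for the hourly PM10 and wind-speed data.

# Minimal NumPy sketch of the DCCA cross-correlation coefficient of Eqs. (1)-(3):
# integrate the two difference series, detrend both in every overlapping box of
# size n with a first-order polynomial fit, and divide the detrended covariance
# by the two DFA variances. Illustrative only.
import numpy as np

def rho_dcca(x, y, n):
    """DCCA cross-correlation coefficient for difference series x, y and box size n."""
    X, Y = np.cumsum(x), np.cumsum(y)          # integrated signals X_k, X'_k
    N = len(X)
    t = np.arange(n + 1)
    f_xy = f_xx = f_yy = 0.0
    for i in range(N - n):                     # overlapping boxes of n + 1 points
        xb, yb = X[i:i + n + 1], Y[i:i + n + 1]
        # local trends: ordinates of linear least-squares fits in the box
        xt = np.polyval(np.polyfit(t, xb, 1), t)
        yt = np.polyval(np.polyfit(t, yb, 1), t)
        dx, dy = xb - xt, yb - yt
        f_xy += np.mean(dx * dy)               # Eq. (1), averaged over the box
        f_xx += np.mean(dx * dx)               # DFA variances of each series
        f_yy += np.mean(dy * dy)
    f_xy, f_xx, f_yy = (v / (N - n) for v in (f_xy, f_xx, f_yy))   # Eq. (2)
    return f_xy / np.sqrt(f_xx * f_yy)         # Eq. (3)

# Example with two correlated synthetic series (stand-ins for PM10 and wind data)
rng = np.random.default_rng(0)
common = rng.normal(size=1134)
pm10_diff = common + rng.normal(size=1134)
wind_diff = -0.5 * common + rng.normal(size=1134)
for n in (3, 6, 12, 24, 48):
    print(n, round(rho_dcca(pm10_diff, wind_diff, n), 3))

The critical values ρ_rc(T, n) and ρ_sc(n) introduced in Section II.C can then be estimated by applying the same function to many pairs of i.i.d. Gaussian series, or to shuffled copies of the measured series, and taking the bounds that enclose 90% of the resulting coefficients.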
B. Data
In this study, we selected 4 coastal cities (Busan, Incheon, Mokpo, and Donghae) and 4 inland cities (Daegu, Daejeon, Wonju, and Andong) on the South Korean peninsula. Inland cities were designated to be those located more than 50 km from the coast, while those located closer to the coast were designated to be coastal cities.
We analyzed the PM10 concentration data of the Air Quality Monitoring network run by the Ministry of Environment, for a period of 5 years from 2006 to 2010. The meteorological factor used in this analysis is wind speed; to ensure the reliability of the data, we use the data of the manned regional meteorological offices of the Korea Meteorological Administration, again for the five years from 2006 to 2010, as is the case with the PM10 concentration data.
The number of Asian dust days was decided by the days on which Asian dust was observed by the KMA, while the non-Asian dust days were the remaining days of the analysis period. To perform the DCCA, we use the hourly data of the days on which Asian dust was observed.
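A small preprocessing step of the following kind turns the hourly records into the difference series used by the DCCA method; the file name and column names are assumptions for illustration and do not reflect the actual Air Quality Monitoring or KMA data formats.

# Hypothetical preprocessing: select hourly records flagged as Asian dust days
# and build the difference series fed to the DCCA analysis. The file name and
# column names are illustrative assumptions only.
import pandas as pd

df = pd.read_csv("busan_hourly.csv", parse_dates=["time"])
dust = df[df["asian_dust_day"]]            # keep only observed Asian dust days

pm10_diff = dust["pm10"].diff().dropna().to_numpy()
wind_diff = dust["wind_speed"].diff().dropna().to_numpy()
print(len(pm10_diff), "hourly increments available for DCCA")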
C. Random Number Surrogation and Shuffled Time Series
Surrogation
For finite time series, because of the finite-size effect, even if cross-correlations are not present, ρ_DCCA presumably takes some small nonzero value. Therefore the DCCA cross-correlation coefficient serves only as an indicator of the presence of cross-correlations. If ρ_DCCA is 0.2 or 0.3, it must be judged whether such a value indicates the presence or the absence of cross-correlations. In order to test whether the cross-correlations are significant or not, we examine them using the method that generates random number surrogates, suggested by Podobnik and Stanley [9]. In addition, we conduct random permutations of the original data to find out the effect of the distribution of the time series on ρ_DCCA, since time series data generally appear to have a non-normal distribution. First, we
determine the null hypothesis in random number surrogation. Because this is not a unique choice, we begin by assuming, as the null hypothesis, that the time series are independent and identically distributed random variables, and we calculate the range of ρ_DCCA that can be obtained under this assumption. We calculate critical points ρ_rc(T, n) for the 90% confidence level, defined such that the integral between −ρ_rc(T, n) and ρ_rc(T, n) is equal to 0.90. Thus, we determine the range of ρ_DCCA within which the cross-correlations can be considered statistically significant. For the Asian dust data, we determine this for each of two different choices of time series length, including T = 1134, and calculate the probability distribution function (PDF) P(ρ_DCCA) of the DCCA cross-correlation coefficient ρ_DCCA in (3) for different values of box size n. Each PDF is obtained by generating 100 independent and identically distributed time series pairs taken from a Gaussian distribution. We use a trend based on a first-order polynomial fit.
Second, let us introduce shuffled time series surrogation. A null hypothesis is assumed here as well. The surrogate time series are obtained through random permutations of the original data, and the range of ρ_DCCA that can be obtained under this assumption is calculated. This method guarantees that the surrogate data will be consistent with the null hypothesis of a δ-correlated random process, while exactly preserving the distribution of the original data. We calculate critical points ρ_sc(n) for the 90% confidence level. We thus determine the range of ρ_DCCA within which the cross-correlations can be considered statistically significant. We determine this for both Asian dust days and non-Asian dust days by city. In addition, we calculate the PDF P(ρ_DCCA) of the DCCA cross-correlation coefficient ρ_DCCA in (3) for four different values of box size n. Each PDF is obtained by generating 100 shuffled time series pairs. We also use a trend based on a first-order polynomial fit. To confirm the normality of the shuffled time series, we
introduce the skewness S_w and the kurtosis K_t as

S_w = <(x_i − <x_i>)^3> / <(x_i − <x_i>)^2>^(3/2)   (4)
and

K_t = <(x_i − <x_i>)^4> / <(x_i − <x_i>)^2>^2 ,   (5)
where the skewness measures the degree of symmetry of a distribution, and its value is zero for a normal distribution. The kurtosis measures the flatness of a distribution, and a normal distribution has a kurtosis value of three.
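Both properties noted above, namely that shuffling preserves the (generally non-normal) distribution of the original data and that Eqs. (4)-(5) quantify its departure from normality, can be checked with a few lines of NumPy; the lognormal sample is a synthetic placeholder for a PM10 series.

# Skewness (4) and kurtosis (5) of a series and of its shuffled surrogate.
# Shuffling destroys the temporal ordering but preserves the distribution, so
# both statistics are identical for the original and shuffled series.
import numpy as np

def skewness(x):
    d = x - x.mean()
    return (d**3).mean() / (d**2).mean() ** 1.5        # Eq. (4)

def kurtosis(x):
    d = x - x.mean()
    return (d**4).mean() / (d**2).mean() ** 2           # Eq. (5)

rng = np.random.default_rng(1)
pm10_like = rng.lognormal(mean=3.0, sigma=0.5, size=1134)   # skewed, PM10-like placeholder
shuffled = rng.permutation(pm10_like)

for name, s in (("original", pm10_like), ("shuffled", shuffled)):
    print(f"{name}: skewness = {skewness(s):.3f}, kurtosis = {kurtosis(s):.3f}")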
III. NUMERICAL CALCULATIONS AND RESULTS
A. DCCA Method in the Asian Dust Events
First of all, we examine the DCCA analysis between PM10 and wind speed during Asian dust events over the five years in the eight cities. We decide on box sizes n from 3 hours to 168 hours (a week). We report ρ_DCCA between PM10 and wind speed in Table I and Fig. 1.
TABLE I. CORRELATION COEFFICIENTS ρ_DCCA BETWEEN PM10 AND WIND SPEED.

As shown in Fig. 1, in the case of PM10 and wind speed, there exists a positive correlation for 15 ≤ n ≤ 51 in Andong. In Busan and Donghae of the coastal area, there exist positive correlations for n ≤ 12 and 27 ≤ n ≤ 45, respectively. The cross-correlation range 12 ≤ n ≤ 48 corresponds to the mean duration time of Asian dust. Furthermore, when Asian dust flows into the Korean peninsula, humidity is inversely proportional to the increase or decrease of PM10 concentration, according to Jang et al. [7]. On the other hand, the change of pressure and temperature is similar to the case in which Asian dust does not occur.
B. Critical Values for the Random Number Surrogation
The range of ρ_DCCA which can be considered statistically significant is shown in this section. Fig. 2 shows the PDF P(ρ_DCCA) of the DCCA cross-correlation coefficient ρ_DCCA for four different values of box size n. As it is a PDF based on independent and identically distributed random variables following a Gaussian distribution, P(ρ_DCCA) is symmetric. It is influenced by two parameters, the time series length T and the box size n. For each T the PDF converges to a Gaussian distribution as the value of n increases, because of the central limit theorem. Table II gives the critical value ρ_rc(1134, n) for the 90% confidence level. Because the form of the PDF is not known in closed form for these values of n, we calculate the critical values numerically. As most of the correlation coefficients between temperature and PM10 show positive values, while those involving humidity show largely negative values, we calculate the upper limit value and the lower limit value through a two-sided test.

Fig. 1. ρ_DCCA between PM10 and wind speed.

Fig. 2. PDFs of the critical value ρ_rc(1134, n) for the statistical test.

TABLE II. CRITICAL VALUE ρ_rc(1134, n) FOR THE DCCA CROSS-CORRELATION COEFFICIENT WHEN EACH SERIES IS GAUSSIAN, AT THE 90% CONFIDENCE LEVEL WITH ZERO MEAN AND UNIT VARIANCE (T=1134).


C. Critical Values for Shuffled Time Series Surrogation
Fig. 3 is the plot of the PDF P(ρ_DCCA) of the DCCA cross-correlation coefficient ρ_DCCA between PM10 and wind speed for four different values of box size n in the four inland cities. As it is a PDF based on random permutations of the original data, P(ρ_DCCA) is non-symmetric. Fig. 4 is a histogram of the shuffled original data with the values of skewness and kurtosis in Daejeon (representative of the inland area). The values of skewness and kurtosis indicate that the data of the meteorological factors and PM10 are non-normal.
In Tables III and IV, we can find the critical values ρ_sc(n) between PM10 and wind speed for the 90% confidence level in the eight cities. Because the form of the PDF is not known in closed form for greater values of n, we calculate the critical values numerically. When we judge the cross-correlation, we use the average of the critical values. Likewise, we can calculate the upper limit and lower limit values through a two-sided test. Because of the non-normality of the time series data, the moduli of the upper and lower limit values show small differences.
We compare ρ_DCCA with the critical points ρ_rc(1134, n) and ρ_sc(n) for each n. If ρ_DCCA > ρ_rc(1134, n) and ρ_DCCA > ρ_sc(n), the cross-correlations are considered statistically significant, and we reject the null hypothesis that ρ_DCCA comes from a Gaussian independent and identically distributed time series or from a random permutation of the original data with no cross-correlations. This means that the region between ρ_rc(1134, n) (or ρ_sc(n)) and 0 corresponds to insignificant correlations.
TABLE III. CRITICAL VALUES FOR CROSS-CORRELATION COEFFICIENTS ρ_DCCA BETWEEN PM10 AND WIND SPEED IN COASTAL AREAS.

Fig. 3. PDFs of the critical value between PM10 and wind speed.

Fig. 4. Shuffled original data with values of kurtosis and skewness in Daejeon during Asian dust.

TABLE IV. CRITICAL VALUES FOR CROSS-CORRELATION COEFFICIENTS ρ_DCCA BETWEEN PM10 AND WIND SPEED IN INLAND AREAS.

TABLE V. ESTIMATED RESULTS OF CROSS-CORRELATION COEFFICIENTS BETWEEN PM10 AND WIND SPEED IN COASTAL AREAS (T-TEST AND X-SQUARE STATISTICS, P-VALUES IN PARENTHESES).

Value   Incheon: T-test, X-square      Donghae: T-test, X-square      Mokpo: T-test, X-square       Busan: T-test, X-square
n=3     1.040 (0.302), 2.763 (0.251)   -3.890 (0.000), 6.813 (0.033)  3.940 (0.000), 30.165 (0.000)  -2.190 (0.031), 36.860 (0.000)
n=6     -0.599 (0.550), 5.353 (0.069)  -0.762 (0.448), 1.017 (0.601)  0.413 (0.680), 7.665 (0.022)   -3.020 (0.003), 10.444 (0.005)
n=12    -2.040 (0.044), 14.302 (0.001) -3.110 (0.002), 3.801 (0.149)  -3.480 (0.001), 7.732 (0.021)  -1.290 (0.201), 11.410 (0.003)
n=24    3.770 (0.000), 119.13 (0.000)  -4.320 (0.000), 8.544 (0.014)  -4.330 (0.000), 3.187 (0.203)  -3.230 (0.002), 6.302 (0.043)
n=48    3.480 (0.001), 90.785 (0.000)  -3.640 (0.000), 8.503 (0.014)  -0.938 (0.350), 5.409 (0.067)  -5.690 (0.000), 0.908 (0.635)
n=72    3.620 (0.000), 94.104 (0.000)  -3.110 (0.002), 4.895 (0.0865) -0.040 (0.968), 12.768 (0.002) -5.730 (0.000), 18.824 (0.000)
n=168   3.190 (0.002), 37.475 (0.000)  3.150 (0.002), 10.208 (0.006)  -1.190 (0.238), 10.204 (0.006) -7.310 (0.000), 30.110 (0.000)
TABLE VI. ESTIMATED RESULTS OF CROSS-CORRELATION COEFFICIENTS BETWEEN PM10 AND WIND SPEED IN INLAND AREAS (T-TEST AND X-SQUARE STATISTICS, P-VALUES IN PARENTHESES).

Value   Daejeon: T-test, X-square      Wonju: T-test, X-square        Daegu: T-test, X-square       Andong: T-test, X-square
n=3     2.970 (0.004), 1.805 (0.406)   3.860 (0.000), 4.548 (0.103)   -4.020 (0.000), 10.185 (0.006) -5.400 (0.000), 6.489 (0.039)
n=6     3.220 (0.002), 9.597 (0.008)   2.180 (0.032), 0.816 (0.665)   -1.720 (0.088), 12.310 (0.002) -4.050 (0.000), 8.667 (0.013)
n=12    2.500 (0.014), 3.859 (0.145)   1.710 (0.010), 2.770 (0.250)   -2.310 (0.023), 5.249 (0.073)  -3.690 (0.000), 10.450 (0.005)
n=24    0.780 (0.437), 7.785 (0.020)   4.300 (0.000), 9.217 (0.010)   -1.640 (0.101), 3.743 (0.154)  -4.720 (0.000), 0.397 (0.820)
n=48    0.920 (0.360), 7.447 (0.024)   3.510 (0.001), 10.133 (0.006)  -3.380 (0.001), 4.753 (0.093)  0.584 (0.561), 2.281 (0.319)
n=72    0.788 (0.423), 9.009 (0.011)   3.170 (0.002), 2.347 (0.309)   -5.480 (0.000), 2.607 (0.272)  4.120 (0.000), 8.695 (0.013)
n=168   1.220 (0.226), 16.275 (0.003)  3.110 (0.002), 46.887 (0.000)  -8.790 (0.000), 7.529 (0.023)  7.010 (0.000), 16.558 (0.000)


Tables V and VI present the estimation results of the cross-correlation coefficient between PM10 and wind speed in coastal and inland areas, with both the Student-t and X-squared tests. First, considering the T- and P-values of the cross-correlation coefficient at the 90% confidence level, the cross-correlation coefficients in Busan are closer to normal except for n=12, while those in Mokpo are not practically significant for n=6 and 12. In inland areas, the cross-correlation coefficients in Daejeon for n=3, 6, and 12 are statistically significant, while those in Wonju (Daegu) are not statistically significant for n=12 (24). Next, considering the normality test of the statistical data via the X- and P-values for the cross-correlation coefficient between PM10 and wind speed in coastal areas, the cross-correlation coefficients for all values of n in Busan are statistically significant, while the cross-correlation coefficient in Mokpo is not statistically significant for n=168. In inland areas, the cross-correlation coefficients in Daejeon for n=24, 48, 72, and 168 can be considered statistically significant, while those in Mokpo for n=24 and 48 are not.
IV. SUMMARY
We have analyzed correlations across time scales between PM10 concentration and wind speed by using the DCCA method, through the removal of specific trends, in eight South Korean cities. We introduced time series data for non-Asian dust events to analyze the change of wind speed due to fluctuations in PM10 concentration. We have also carried out statistical cross-correlation tests in order to quantify whether the cross-correlations are significant or not. The procedure to quantify whether cross-correlations are significant or not is as follows: first, we calculate the values of the correlation coefficients using the DCCA method. Secondly, we generate critical values to test whether cross-correlations are genuine or not, using a random number surrogate and a shuffled data surrogate. Thirdly, we determine the range of correlation coefficients within which the cross-correlations can be considered statistically significant.
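As a minimal illustration of this procedure (not the authors' code), the Python sketch below computes the DCCA cross-correlation coefficient in the sense of Podobnik and Stanley [9] for a given box size n and derives a two-sided 90% band from a shuffled-data surrogate only; the function names, the overlapping-box detrending details and the synthetic example data are assumptions made purely for illustration.

import numpy as np

def dcca_coefficient(x, y, n):
    """DCCA cross-correlation coefficient for box size n (detrended covariance over detrended variances)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    X = np.cumsum(x - x.mean())                     # integrated (profile) series
    Y = np.cumsum(y - y.mean())
    t = np.arange(n + 1)
    f2_xy, f2_xx, f2_yy = [], [], []
    for i in range(len(X) - n):                     # overlapping boxes of n+1 points
        xi, yi = X[i:i + n + 1], Y[i:i + n + 1]
        dx = xi - np.polyval(np.polyfit(t, xi, 1), t)   # local linear detrending
        dy = yi - np.polyval(np.polyfit(t, yi, 1), t)
        f2_xy.append(np.mean(dx * dy))
        f2_xx.append(np.mean(dx * dx))
        f2_yy.append(np.mean(dy * dy))
    return np.mean(f2_xy) / (np.sqrt(np.mean(f2_xx)) * np.sqrt(np.mean(f2_yy)))

def shuffled_critical_band(x, y, n, n_surr=100, level=0.90, seed=0):
    """Two-sided critical values from a shuffled-data surrogate (destroys any cross-correlation)."""
    rng = np.random.default_rng(seed)
    rhos = np.array([dcca_coefficient(rng.permutation(x), rng.permutation(y), n)
                     for _ in range(n_surr)])
    return np.quantile(rhos, [(1 - level) / 2, 1 - (1 - level) / 2])

if __name__ == "__main__":
    # Synthetic hourly-like data of length 1134 (the study used PM10 and wind speed series).
    rng = np.random.default_rng(1)
    pm10 = rng.normal(size=1134)
    wind = 0.3 * pm10 + rng.normal(size=1134)
    for n in (3, 6, 12, 24):
        rho = dcca_coefficient(pm10, wind, n)
        lo, hi = shuffled_critical_band(pm10, wind, n)
        print(f"n={n:3d}  rho_DCCA={rho:+.3f}  90% band=({lo:+.3f}, {hi:+.3f})  significant={rho < lo or rho > hi}")

The study additionally uses a second surrogate (Gaussian i.i.d. random numbers) to obtain $\rho_{rc}(1134, n)$; the same quantile step applies, with permuted data replaced by freshly generated noise.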
As a result, for Asian dust events a cross-correlation was considered significant when 12 ≤ n ≤ 48. This is considered to correspond to the mean and maximum duration of Asian dust events. However, there is no cross-correlation between PM10 and the meteorological factors outside this time interval. It is found that the fluctuation of PM10 concentration is greater than that of the meteorological factors.
In the case of the correlation between PM10 and wind speed, cross-correlations are more significant in inland areas than in coastal areas. In particular, there exist negative correlations in Daegu and Wonju. It was assumed that when the source of particles is domestic, the wind has a diluting effect and thus produces a negative correlation. But in the case of coastal areas, when a pollutant is injected from external sources, there can be a positive correlation. For this reason, there exists a positive correlation in Incheon when n is 24, 48, 72, and 168. The correlation between humidity and PM10 is mostly negative in cities other than Busan and Daegu [12]. We noticed that the values of cross-correlations between PM10 and meteorological factors can be quantified by way of the DCCA cross-correlation coefficient [9,11]. In the future, we hope that this study will be extended to treat other types of climatological data, owing to the general applicability of the DCCA cross-correlation coefficient.
This work was supported by Basic Science Research
Program through the National Research Foundation of Korea
(NRF) funded by the Ministry of Education, Science and
Technology (2013R1A1A2008558), by the Center for
Atmospheric Sciences and Earthquake Research (CATER
2012-6110), and by the National Research Foundation of
Korea (NRF) through a grant provided by the Korean Ministry
of Education, Science & Technology (MEST) in 2011 (No.
K1663000201107900).
REFERENCES
[1] P. A. Baron and K. Willeke, Aerosol measurement: Principles,
Techniques, and Applications, 2nd ed. John Wiley & Sons: New York,
2005: Y. S. Xue, L. Yu, and P. Xu, Analysis of PM10 concentration
time series and climate factor in Hangzhou 2004, Int. Workshop on
Geosci. and Remote Sens. 2008, pp. 575-578, 2008.
[2] E. Zervas, Application of Descriptive Statistical and Time Series
Analysis on Atmospheric Pollution, Proceedings of the 3rd
International Conference on Devel, Paris, France, December 2-4, 2012;
A. Yawotti, P. Intra, and N. Tippayawong, Characterization of a
Cylindrical Tri-Axial Charger as a Critical Component in a Fast
Response Particulate Air Pollution Sensor, Proceedings of the 5th
WSEAS International Conference on Environmental and Geological
Science and Engineering, Vienna, Austria, November 10-12, 2012.
[3] R. K. Patterson and J. Wagman, Mass and composition of an urban
aerosol as a function of particle size for several visibility levels, J. of
Aero. Sci., Vol. 8, pp. 269-279, 1977.
[4] H. Y. Son and C. H. Kim, Interpretating the spectral characteristics of
measured particle concentrations in Busan, J. of Korean Soc. for
Atmos. Env., Vol. 25, pp. 133-140, 2009.
[5] X. Querol, A. Alastuey, S. Rodriguez, F. Plana, C. R Ruiz, N. Cots, G.
Massague, and O. Puig, PM10 and PM2.5 source apportionment in the
Barcelona Metropolitan area, Catalonia, Spain, Atmo. Env., Vol. 35, pp.
6407-6419, 2001.
[6] Y. S. Jung and J. S. Jung, On Surface Ozone Observed in the Seoul
Metropolitan Area during 1989 and 1990, J. of Korea Air Pol. Res.
Assoc., Vol. 7, pp. 169-179, 1991.
[7] J. H. Jang, H. W. Lee, and S. H. Lee, Spatial and Temporal Features of
PM10 and Evolution Cycle in the Korean Peninsula, J. of the Env. Sci.,
Vol. 21, pp. 189-202, 2012.
[8] D. Giri, M. V. Krishna, and P. R. Adhikary, The Influence of
Meteorological Conditions on PM10 Concentrations in Kathmandu
Valley, Int. J. Env. Res., Vol. 2, pp. 49-60, 2008.
[9] B. Podobnik and H. E. Stanley, Detrended Cross-Correlation Analysis: A New
Method for Analyzing Two Nonstationary Time series, Phys. Rev. Lett. Vol.
100, pp. 084102, 2008.
[10] B. Podobnik, D. Horvatic, M. A. Petersen, and H. E. Stanley, Cross-
correlation between volume change and price change, Proc. Natl. Acad.
Sci., Vol. 106, pp. 22079-22084, 2009.
[11] G. Lim, K. Kim, J. K. Park, and K. H. Chang, Dynamical Analyses of
the Time Series for the Temperature and the Humidity, J. Korean Phys.
Soc. Vol. 62, pp. 193-196, 2013.
[12] D. D. Kang and K. Kim, unpublished.

Monitoring System of Environment Noise and Pattern
Recognition
Luis Pastor Sánchez Fernández, Luis A. Sánchez Pérez, José J. Carbajal Hernández
Instituto Politécnico Nacional, Centro de Investigación en Computación
Av. Juan de Dios Bátiz s/n, Col. Nueva Industrial Vallejo, C.P. 07738, México, D.F.
lsanchez@cic.ipn.mx, l.alejandro.2011@gmail.com, carbajalito@hotmail.com


Abstract—This paper presents an overview of the wireless monitoring system of environmental noise placed throughout the Historical Centre of México City, which represents an attractive technological innovation. It takes permanent measurements of noise levels and streams the data back to the main monitoring station every five minutes, and it also measures the noise produced during take-off at a location of the International Airport. The data acquisition is made at 25 kHz with 24-bit resolution. This work allows analyzing the urban noise level and its frequency range. Additionally, a computational model for aircraft recognition using take-off noise spectral features is analyzed based on other previous results. Eight aircraft categories, with all signals acquired in real environments, are used. The model has an identification level between 65 and 70% of success. These spectral features are used to allow comparison with other aircraft recognition methods using speech processing techniques in real environments. This type of system helps to foresee potential health effects of environmental noise.
Keywords—noise, aircraft, pattern, recognition, monitoring.
I. INTRODUCTION
The heavy traffic during the morning and evening rush
hours creates a noise problem that is difficult to address. The
noise emissions should be no more than 68 dB(A) during the
day and 65 dB(A) at night. However, the noise level in most
areas has been measured between 77 and 82 dB(A). The
aircraft classification is based on the principle that the airline
should pay a fair price that should be proportional to its noise
impact, independently of the weight of the aircraft or of the
transport service rendered. Committees of Aerial Transport and Environment propose an aircraft classification based on the level of noise emission [1], [2].

This aircraft recognition, based on preprocessed spectral features, allows comparison with other aircraft recognition methods that use feature extraction with speech processing techniques, a more complex neural model and measurement segmentation in time, all in real environments [3], [4]. Some discussions have commented on the potential usefulness and feasibility of these preprocessed spectral features of take-off noise for aircraft recognition. Their lower performance is demonstrated in this paper.

The monitoring system is presented in Fig. 1. Each node includes a half-inch prepolarized IEC 61672 Class 1 microphone [5], [6] with a windscreen, rain protection, and bird spike mounted 4 m above the road surface in a weatherproof case, a wide dynamic range data acquisition card, an industrial computer and a wireless connection to the Internet by means of 3G Mobile Broadband. Each node measures the noise levels every 30 seconds and streams the data back to the main monitoring station every five minutes. A portable node measures the noise produced during take-off at the International Airport. Fig. 2 shows an example of urban noise patterns of two weeks in the Historical Centre of México City. These patterns will be analyzed in a later stage.

Aircraft classification based on take-off noise becomes a complicated problem when it is done in real environments because the background noise, the weather, the speed of the take-off and even the aircraft's load can interfere with correct detection. Some devices with neural networks recognize the aircraft class, but they can only discriminate between jet aircraft, propeller aircraft, helicopters and background noise [7].


Fig. 1. Monitoring system of environment noise in México City
II. WIRELESS MONITORING SYSTEM



Fig. 2. Noise patterns of two weeks. Node placed at Corregidora and Pino Suárez in Historical Centre of México City


Wireless topology reduces costs and provides flexibility in
setting up the monitoring systems. Each monitoring node is
based on a headless industrial PC running Windows XP with a
Wi-Fi adapter and a NI USB-9234 dynamic signal analyzer
(DSA).

The system is designed so it can keep collecting data locally
for up to 14 days. The government plans to use the acquired
data to identify worst times and locations, create noise maps,
and to implement regulatory actions to control and prevent the
noise and promote a healthier "noise" environment and bring
the city up to par with other big cities worldwide. In addition
to traditional metrics used for road traffic noise such as Leq
(equivalent sound level) at different averaging periods and
times of day, the system is capable of recording fractional
octave analysis and measuring prominent tones.

If the system analyst makes a request, the nodes are capable of transferring audio files to the Central Server for the study of transient signals that may trigger alarms. This is helpful in the identification of isolated sound sources that cause annoyance.

The preliminary strategy was to use the public Wi-Fi in the area, which was installed in 2008, but due to lack of coverage at all nodes the communication migrated to a 3G (International Mobile Telecommunications-2000, IMT-2000) system provided by a wireless carrier in areas without public Wi-Fi signal. 3G allows simultaneous use of speech and data services, but it offers significantly slower data rates, around 5.8 Mbps on the uplink to the data center, compared to the 54 to 100 Mbps possible with 802.11x.
IEEE 802.11 divides the band from 2400 to 2483.5 MHz into channels, analogously to how radio and TV broadcast bands are carved up, but with greater channel width and overlap. For example, the 2.4000–2.4835 GHz band is divided into 13 channels, each of width 22 MHz but spaced only 5 MHz apart, with channel 1 centered on 2.412 GHz and channel 13 on 2.472 GHz. By reserving certain channels for the noise monitoring system, it may be possible to eliminate the slower 3G connection when the system is expanded.


III. COMMUNICATION PROCEDURE BASED ON TCP/IP

The nodes (measurement points) have a dynamic address assigned by a DHCP server (Dynamic Host Configuration Protocol). The Control Center has a static IP address.

The Control Center is comparable to a server; the nodes are similar to clients. The nodes attempt to initiate the connection (open a TCP connection). If the Control Center receives this connection request, which is validated with a key that each node must send, it admits the connection.

The TCP/IP connections stay open. Each node waits for the data request from the Control Center. The basic period of data requests is 5 minutes.
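A minimal sketch of this exchange is shown below, assuming a simple line-based message format, a placeholder server address and a hypothetical key string; none of these details are taken from the deployed system.

import socket

CONTROL_CENTER = ("203.0.113.10", 5000)   # static IP address of the Control Center (placeholder)
NODE_KEY = "node-07:secret"               # validation key each node must send (hypothetical format)

def read_noise_levels():
    # Placeholder: the real node would return its buffered 30-second noise measurements here.
    return "2013-09-28T10:05:00;Leq=71.2 dBA\n"

def run_node():
    """Measurement node: opens the TCP connection, sends its key, then waits for data requests."""
    with socket.create_connection(CONTROL_CENTER) as sock:
        sock.sendall((NODE_KEY + "\n").encode())      # the Control Center validates this key
        while True:
            request = sock.recv(1024)                 # node waits; requests arrive every ~5 minutes
            if not request:                           # empty read: connection closed by the server
                break
            sock.sendall(read_noise_levels().encode())

if __name__ == "__main__":
    run_node()

The design choice reflected here is that nodes behind DHCP/3G can always reach the static server address, whereas the server could not reliably open connections toward the nodes.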

Figures 3, 4 and 5 present examples of monitoring of environmental noise for the Historical Centre of México City. Weighting filters A and C may be used [8], [9], [10].


IV. NOISE CHARACTERISTICS OF AIRCRAFT TAKING OFF

Fig. 6 presents the system block diagram for the pattern generation and recognition. The take-off noise is considered a non-stationary transient signal because it starts and ends at a zero level and it has a finite duration [10].
Figure 7 presents the time-domain representation of a typical take-off noise signal. As Fig. 8 shows, most of the signal energy is below 2.5 kHz. In this case, apart from the fact that the signal starts and ends at a zero level, the background noise is more noticeable at the ends of the signal because, in the central part, the aircraft-generated noise masks it.
For all the aircraft noise signals used, the typical form of the amplitude spectrum is observed from 0 to 5000 Hz; for this reason, a sampling frequency of 25000 Hz (samples per second) was used in this work. The amplitude spectrum has 300000 harmonics with a resolution of 0.04167 Hz.

Fig. 3. Noise level, time, date, and amplitude (dBA). Node placed at Corregidora and Pino Suárez in Historical Centre of México City

Fig. 4. Noise map displaying noise level in dBA (NSCE), time: hours (Horas). Node placed at Corregidora and Pino Suárez in Historical Centre of México City

Fig. 5. Central server interface in the Control Centre


Table I presents some examples of noise pollution and its effects on health. The harmful effects are related to the exposure time, the sound pressure level and its frequency range.
TABLE I. NOISE POLLUTION AND ITS EFFECTS ON HEALTH (SOME EXAMPLES)
Effect Exposure time Sound pressure level Frequency range
Vibration on visual acuity [14]-[19] Seconds 110 dB 4 - 800 Hz
Human body vibration [19] Seconds 105 dB 4 - 100 Hz
Breathing frequency variation [14]-[17] Seconds 70 dB 0.5 - 100 Hz
Ear pain or discomfort [14]-[17] Seconds 110 dB 50 - 8000 Hz
Abdominal discomfort [16] Seconds 80 dB 800 Hz
Speech interference [14], [15] Minutes 50 dB 100 - 4000 Hz
Stress [20] Minutes 105 dB Whole range
Endocrine disturbance [14]-[17] Days 80 dB 3000 - 4000 Hz
Sleep disturbance [21], [22] 10 - 15 events per night 45 dB Associated to aircraft noise
Hearing threshold shift and loss [14]-[16] Months 80 dB 3000 - 6000 Hz
Vibro-acoustic disease [18] Years 90 dB < 500 Hz
Cardiovascular disease [15] Years 90 dB < 500 Hz
Hypertension [20]-[22] Years 50 dB Associated to aircraft noise

Fig. 6. System block diagram for the pattern generation and recognition
The spectral resolution (number of harmonics) must be reduced for the following reasons:
1. The amplitude spectrum has 300000 harmonics and its processing would be very complex.
2. Only the spectral form is of interest.
The following suppositions are made (a short sketch of the corresponding processing follows this list):
1. A reduction method of spectral resolution (with an average spectrum) improves the processing results at initial and final times within the measurement interval of aircraft noise. In other words, if a feedforward neural network was trained with a noise pattern acquired from zero seconds after the start of the aircraft take-off until 24 seconds later, and at run time the take-off noise is acquired from 5 seconds until 24 seconds, this time displacement of 5 seconds will affect the spectral form little if its spectral resolution has been reduced.
2. A moving average filter creates a typical form of the aircraft take-off noise spectra.
3. Decimated average spectra, with a rate X, conserve the typical spectral form.
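The sketch below illustrates suppositions 2 and 3 under assumed parameters that match the figures later in the paper (moving-average filter rank 50, 2500 retained harmonics up to 2.5 kHz, decimation down to 227 points); the function and variable names are illustrative, not the authors'.

import numpy as np

def reduce_spectral_resolution(avg_spectrum, filter_rank=50, n_outputs=227):
    """Moving-average filtering, normalization and decimation of an averaged amplitude spectrum.

    avg_spectrum : averaged spectrum limited to the band of interest (e.g. 2500 harmonics up to 2.5 kHz)
    filter_rank  : length of the moving-average window
    n_outputs    : number of points kept as neural network inputs (227 in the paper)
    """
    window = np.ones(filter_rank) / filter_rank
    smooth = np.convolve(avg_spectrum, window, mode="same")   # typical smooth spectral form
    smooth = smooth / np.max(smooth)                          # normalization
    rate = max(1, len(smooth) // n_outputs)                   # decimation rate
    return smooth[::rate][:n_outputs]

# Example: 2500 harmonics (0 - 2.5 kHz) reduced to 227 processed harmonics
spectrum = np.abs(np.random.randn(2500))
pattern = reduce_spectral_resolution(spectrum)
print(pattern.shape)   # (227,)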

Fig. 7. Time-domain representation of typical signal of take-off noise with
sampling frequency of 25 kHz (25 ks/s) for a Boeing 737-200

Fig. 8. Frequency-domain representation of the signal of Fig. 7

In this work, the Bartlett-Welch method [11], [12], [13] was used for spectral estimation. The Bartlett method consists of dividing the received data sequence into a number K of non-overlapping segments and averaging the calculated periodograms (obtained with the Fast Fourier Transform). It consists of three steps:

1. The sequence of N points is subdivided into K non-overlapping segments, where each segment has length M:

$x_i(n) = x(n + iM), \quad i = 0, 1, \ldots, K-1, \quad n = 0, 1, \ldots, M-1$   (1)
2. For each segment, the periodogram $P_{xx}^{(i)}(f)$ is calculated:

$P_{xx}^{(i)}(f) = \dfrac{1}{M}\left|\sum_{n=0}^{M-1} x_i(n)\, e^{-j 2\pi f n}\right|^2, \quad i = 0, 1, \ldots, K-1$   (2)

3. The periodograms are averaged over the K segments, and the Bartlett estimate of the spectral power is obtained with (3):

$P_{xx}^{B}(f) = \dfrac{1}{K}\sum_{i=0}^{K-1} P_{xx}^{(i)}(f)$   (3)

In the Welch method [13], unlike in the Bartlett method, the different data segments are allowed to overlap and each data segment is windowed:

$x_i(n) = x(n + iD), \quad n = 0, 1, \ldots, M-1, \quad i = 0, 1, \ldots, L-1$   (4)

where iD is the starting point of the i-th sequence. If D = M, the segments do not overlap. If D = M/2, successive segments have 50% overlap and L = 2K data segments are obtained.

Why is the Welch method introduced?
- Overlapping allows more periodograms to be averaged, in the hope of reducing the variance.
- Windowing allows a trade-off between resolution and leakage.
The Welch method is harder to analyze, but empirical results show that it can offer lower variance than the Bartlett method, although the difference is not dramatic. A 50% overlap is suggested.

The data segment of 600000 samples, acquired in 24 seconds, is divided into K=24 segments with 50% overlap; therefore, L=2K=48 overlapped data segments are obtained, the FFT (periodogram) is then applied to each segment and the results are averaged. In this paper, 8 aircraft types were tested. The ratio of training to validation patterns per plane was 8/3. The trained neural network was tested with 75 patterns in the database and with several real-time measurements performed over five days.
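A minimal Python sketch of this averaged-periodogram computation with the segment length, overlap and record size quoted above is given here; the Hann window and the exact scaling are our assumptions, since the paper does not specify them.

import numpy as np

def welch_average_spectrum(x, fs=25000, seg_len=25000, overlap=0.5, use_hann=True):
    """Welch-averaged periodogram: overlapping, windowed segments, FFT, magnitude-squared averaging.

    With a 600000-sample record (24 s at 25 kHz), seg_len=25000 and 50% overlap this yields
    47 full one-second segments (the paper quotes L = 2K = 48 as the textbook count).
    """
    step = int(seg_len * (1 - overlap))                       # D = M/2 for 50% overlap
    win = np.hanning(seg_len) if use_hann else np.ones(seg_len)
    segments = [x[i:i + seg_len] * win
                for i in range(0, len(x) - seg_len + 1, step)]
    periodograms = [np.abs(np.fft.rfft(seg)) ** 2 / seg_len for seg in segments]
    avg = np.mean(periodograms, axis=0)                       # average over the L segments
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
    return freqs, avg

# Example with a synthetic 24-second record at 25 kHz (the real input is the take-off noise signal)
x = np.random.randn(600000)
freqs, avg = welch_average_spectrum(x)
band = freqs <= 2500                                          # keep the band below 2.5 kHz
print(avg[band].shape)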

Figs. 9, 10, 11 and 12 present an example of aircraft noise signals processed with this method.

The reduction of the spectral resolution made it possible to maintain the spectral form with a smaller amount of information for the neural network training.


Fig. 9. Average spectral representation of 48 overlapped data segments, for a
Boeing 737-200. Each segment has 25000 samples (one second). Filter rank
= 50, 12500 harmonics. White curve is filtered with moving average filter

Fig. 10. Average spectral representation of Fig. 9, up to 2.5 kHz. 2500 harmonics. White curve is filtered with moving average filter

Fig. 11. The filtered curve is normalized




Fig.12. Normalized filtered curve with decimation. The 227 points (processed
harmonics) are the neural network inputs
V. NEURAL MODEL AND PERFORMANCE EVALUATION
The neural network has 227 inputs. Every input is a normalized and processed harmonic; an example was presented in Figs. 9 to 12. The output layer has 8 neurons, corresponding to the 8 recognized aircraft categories. After several tests, the neural network was successful with a hidden layer of 14 neurons. The activation functions are tan-sigmoid. Fig. 13 presents the topological diagram and training performance. The training performance was successful, with an error of 1.12851e-10 in 300 epochs.
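A minimal sketch of the forward pass of this topology (227 inputs, 14 tan-sigmoid hidden neurons, 8 outputs) is given below; the random weights and the data handling are placeholders, since the trained weights and the training routine are not published in the paper.

import numpy as np

# Topology from the paper: 227 inputs -> 14 hidden neurons -> 8 outputs, tan-sigmoid activations.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.1, size=(14, 227)), np.zeros(14)
W2, b2 = rng.normal(scale=0.1, size=(8, 14)), np.zeros(8)

def tansig(z):
    """Tan-sigmoid activation (equivalent to tanh)."""
    return np.tanh(z)

def forward(pattern):
    """One 227-point processed spectrum -> 8 aircraft-class scores."""
    h = tansig(W1 @ pattern + b1)
    return tansig(W2 @ h + b2)

# Example: classify one normalized, decimated spectrum (227 processed harmonics)
pattern = rng.random(227)
scores = forward(pattern)
predicted_class = int(np.argmax(scores))   # index of the recognized aircraft category (0..7)
print(scores.shape, predicted_class)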






Fig. 13. Neural network topology and training performance. Input: 227 neurons (processed harmonics of aircraft noise). Output layer: 8 neurons (aircraft types)


A test of aircraft recognition is presented in Table II. The
average identification level is 65%. Table III shows a
performance evaluation, in percent of success, compared with
the referenced method in [3]. The aircraft recognition methods using feature extraction with speech processing techniques [3], [4], [23], a more complex neural model and measurement segmentation in time [4], all in real environments, have higher performance than an aircraft recognition method whose signals were acquired in controlled environments and with preprocessed spectral features [24]. In this paper the methods are evaluated under similar conditions.
TABLE II. AIRCRAFT RECOGNITION RESULTS WITH PATTERNS IN DATA BASE

AIRCRAFT CLASS              PATTERNS FOR TEST   RECOGNIZED   PERCENT OF SUCCESS
Airbus 1                    8                   5            62
Airbus 2                    9                   6            67
Airbus 3                    9                   5            55
Airbus, Boeing 737-800      10                  7            70
Atr-42                      6                   4            67
Boeing 737-100, 737-200     14                  9            64
Boeing 737-600, 737-700     12                  8            67
Boeing 747-400              7                   5            71
TOTAL                       75                  49           65


TABLE III. PERFORMANCE EVALUATION COMPARED TO THE PUBLISHED METHOD [3], IN PERCENT OF SUCCESS

AIRCRAFT CLASS              PUBLISHED METHOD [3]          METHOD ANALYZED
                            (USING ONLY LINEAR            IN THIS PAPER
                            PREDICTION CODE)
Airbus 1                    80                            62
Airbus 2                    83                            67
Airbus 3                    66                            55
Airbus, Boeing 737-800      71                            70
Atr-42                      100                           67
Boeing 737-100, 737-200     72                            64
Boeing 737-600, 737-700     77                            67
Boeing 747-400              75                            71
TOTAL                       80                            65

VI. CONCLUSIONS AND FUTURE WORK.

The system performs various types of spectral analyses and allows obtaining the main statistical indicators used for environmental noise, which can be expressed in dB(A) or dB(C) depending on the weighting filter used. Possible potential effects on health can be determined for different exposure times, which gives an idea of what can happen if the sound levels persist for a certain time.

The system allows making measurements of the noise produced by airplanes at the airport during take-offs. The measurement of each event lasts 24 seconds, with a sampling frequency of 25 kHz. The most representative frequencies of the signal are below 2.5 kHz.

Different analyses are carried out, as well as aircraft identification both in real time and on previously stored sounds, which have been stored without any kind of processing. This allows that, during a measurement, simply the information of the take-offs is captured and a deeper analysis of the stored information is made later.

The aircraft recognition based on take-off acoustic impact improves an environmental noise monitoring system. The cited methods are robust to disturbances such as sounds of birds near the microphone, barking, sounds of trucks, sounds of other aircraft maneuvering within the airport, sounds generated in the houses near the measurement point such as music, building works and even the operators' voices, in addition to the variations generated by the weather, the speed and the load of the airplane at the moment of take-off.

The aircraft recognition methods using feature extraction with speech processing techniques, parallel or combined neural models and measurement segmentation in time [3], [4] or in space are the appropriate technical-scientific recommendations for the best performance.

For future work, we will use spatial information to extract patterns related to the aircraft position. This will improve the recognition performance and the acoustic impact estimation.

REFERENCES
[1] Kendall, M. EU proposal for a directive on the establishment of a
community framework for noise classification of on civil subsonic
aircraft for the purposes of calculating noise charges, European Union,
2003
[2] Holding, J. M.: Aircraft noise monitoring: principles and practice, IMC Measurement and Control, vol. 34, issue 3, pp. 72-76, 2001.
[3] Sánchez, L., Sánchez, L. A., Carbajal, J., and Rojo, A. Aircraft Classification and Acoustic Impact Estimation Based on Real-Time Take-off Noise Measurements. Neural Processing Letters, 1-21, 2012.
[4] Sánchez-Pérez, L. A., Sánchez-Fernández, L. P., Suárez-Guerra, S. and Carbajal-Hernández, J. J. Aircraft class identification based on take-off noise signal segmentation in time. Expert Systems with Applications, 40, 5148-5159, 2013.
[5] International Electrotech. Comm. (IEC): Standard IEC651: Sound Level
Meters, 1979.
[6] Chu, W.: Speech Coding Algorithm: Foundation and Evolution of
Standardized Coders, J. Wiley, 2003.
[7] Lochard - Airport Environment Management: EMU2100 Brochure,
2008.
[8] Perez-Meana, H. (ed.): Advances in Audio and Speech Signal
Processing: Technologies and Applications, Idea Group Pub. 2007.
[9] International Electrotech. Comm. (IEC): Standard IEC1260: Octave
Filters, 1995.
[10] American National Standards Institute (ANSI): Standard S1.11-2004:
Specification for Octave-Band and Fractional-Octave-Band Analog and
Digital Filters, 2004.
[11] Oppenheim, A.V., and R.W. Schafer. Discrete-Time Signal Processing.
Englewood Cliffs, NJ: Prentice Hall, 1989. Pgs. 311-312.
[12] Sanchez, L. et al.: Noise pattern recognition of airplanes taking off: task
for a monitoring system. Lecture Notes in Computer Science, vol. 4756,
pp. 831-840, 2007.
[13] Welch, P.D. "The Use of Fast Fourier Transform for the Estimation of
Power Spectra: A Method Based on Time Averaging Over Short,
Modified Periodograms." IEEE Trans. Audio Electroacoust. Vol. AU-15
(June 1967). Pgs. 70-73.
[14] Kryter KD. The effects of noise in man: Academic Press; 1985.
[15] Berglund B, Lindvall T, Schwela DH. Guidelines for community noise:
World Health Organization (WHO); 1999.
[16] Harris C. Handbook of acoustical measurements and noise control:
McGraw-Hill; 1995.
[17] Berglund B, Lindvall T, Schwela DH. Community noise: World Health
Organization (WHO); 1995.
[18] Alves-Pereira M, Castelo Branco N. Vibroacoustic disease, 2004.
[19] Recuero M. Ingeniería acústica: Paraninfo; 1994.
[20] Ostrosky-Solís F. ¿Toc toc, hay alguien ahí?: Infored; 2001.
[21] Michaud DS, Fidell S, Pearsons K, Campbell KC, Keith SE. Review of
field studies of aircraft noise-induced sleep disturbance. J Acoust Soc
Am 2007;121(1):32-41.
[22] Knipschild P. V. Medical effects of aircraft noise: Community
cardiovascular survey. Int Arch Occ Env Hea 1977;40(3):185-90.
[23] Sánchez, L., Sánchez, L. A., Ibarra, M. Aircraft Classification and Noise Map Estimation Based on Real-Time Measurements of Take-off Noise, Proceedings of NCTA 2011 International Conference on Neural Computation Theory and Applications, Paris, France, 24-26 October, 2011, pp. 153-161. ISBN: 978-989-8425-84-3.
[24] Sanchez, L., Pogrebnyak, O., Oropeza, J. and Surez, S. Noise pattern
recognition of airplanes taking off: task for a monitoring system. Lecture
Notes in Computer Science, vol. 4756, pp. 831-840, 2007.
Benchmarking, Standard Setting and Energy
Conservation of Olefin Plants in Iran
S. Gowharifar*, Head of Petrochemical Industry,
Iranian Fuel Conservation Company, Tehran, Iran.,
Gowharifar@Ifco.ir
B. Sepehrian, Caspian Energy Company. Tehran, Iran.,
Baharehsepehrian@yahoo.com
G. Nasiri, Head of HSE Department NIPC Company.
Tehran, Iran., Nasiri@nipc.ir
A. Khoshgard, Chemical Engineering Dep. Islamic
Azad University South Tehran Branch, Tehran, Iran.,
A_Khoshgard@Azad.ac.ir
M.Momenifar, Senior Expert of Planning,
IKCO Company, Tehran, Iran.,
m.momenifar@Ikco.com

Abstract—Olefin plants are among the most energy intensive petrochemical plants in the world [1, 2]. In Iran more than 15% of petrochemical products are olefins and this share will increase rapidly in the near future, so improvement of energy efficiency in olefin plants is a key element in the NPC (Iranian National Petrochemical Company) plan for cost reduction and sustainable development. In this paper the energy consumption of existing olefin plants is compared with the design condition and also with the world's best technology. This comparison indicates a meaningful gap between the operational and the best condition. In this study the opportunities for energy saving in olefin plants are investigated using process integration tools and benchmarking. The results of this study indicate that there is potential for an energy consumption reduction of up to 25% in Iranian olefin plants.

Keywords—Energy efficiency; Energy analysis; Best technology; Benchmarking; Olefin; Ethylene; Gap Analysis; Specific energy consumption.
I. INTRODUCTION
Energy efficiency significantly affects the profit margins of a plant, while the increased cost of fuel and power and more stringent environmental regulations make it even more important.
Experience shows that the ability to benchmark and monitor
energy efficiency is essential for successful implementation of
an energy efficiency improvement program [3,4]. Energy
benchmarking is the process of quantifying and comparing the
energy consumption of a process unit or whole
refinery/petrochemical plant to some pre-selected standard and
to the rest of the industry. A system is needed to enable
calculating and expressing each processing unit's energy
efficiency as a single number so that performances of different
units can be compared.
Process energy use is defined as the sum of fuel, steam and electricity in primary terms that are used for reactions (converting feedstock into olefins) and all the subsequent processes (e.g. compression and separation). SEC (specific energy consumption: total energy consumption per ton of product) is one of the measuring tools for energy efficiency in plants [5], but it is not an accurate parameter, because sites processing a complex feed are expected to consume more energy compared to ones with a simpler feedstock. The SEC also does not assess the unit operation (i.e. furnace severity), where the focus can be moved to one or more products. Therefore SEC is a poor parameter for allowing a true comparison between sites and even between different operating periods of the same plant. On the other hand, the BT (Best Technology) methodology provides a very reliable energy benchmarking tool, which has several advantages over other systems, as follows:

It sets energy targets in terms of best available
technology, and not just by comparison with the
industry.
It compares process units with what can really be
achieved not just by theoretical targets.
It provides basis for the Gap Analysis whereby
areas of inefficiency can be identified and their
contribution quantified.

Process BT Index = Actual Energy Consumption / Sum of Individual Process Allowances

BT has a best technology configuration behind it - this can be used to point out differences in process configurations between the actual and the efficient unit.
II. METHODOLOGY
The SEC of all olefin production plants was calculated and compared with the world's best technology, and a detailed study of some selected plants was carried out to calculate the BT Index and perform a gap analysis. The allowed energy use for an Ethylene Plant depends very much on the yield of ethylene, expressed as the weight percentage of ethylene in the feed to the furnace. If the ethylene yield on fresh feed increases from 25 wt% to 50 wt%, the total BT energy consumption increases by about 40%. This means that recovery of ethylene requires more energy than the recovery of other products [6, 7]. BT implies attainable
efficiency, without assuming any constraint on investment or
payout. Energy loss represents the difference between the total
energy input and total energy output. Thermodynamic
theoretical energy requirement is the minimum energy input
requirement for converting naphtha to end products. It is the
difference between the total calorific value of products and the
calorific value of naphtha at ambient temperatures. The former
is larger than the latter because the overall naphtha-based
steam cracking reactions are endothermic. In order to compare
energy efficiencies across different processes, we believe
process energy use for steam cracking (thermodynamic
theoretical energy requirements and energy loss together) can
be used as a basis for comparing energy efficiency in this
article. The correlation is expressed in terms of total energy
per ton of fresh feed.
The following steps were taken to achieve the goals of this
study:
Step 1: Data Collection
Collect, reconcile and validate data: The required data for this
step are feed & products specification and flow rates, all the
utility import/export and generation in plants and also the
required data for efficiency evaluation of main energy
consumers in plant. For detailed study and benchmarking of
selected plants the additional data collected to find the BT
index and allowance of energy consumption [8]. These
additional data were energy parameters on process to process
Exchangers- Utility Exchangers and Steam generation
systems. Collected data validated and reconciled to define a
representative energy balance for the processes. The data was
then analyzed and interpreted using ProSteam software for
rigorous steam, water, power and fuel balances. SuperTarget
software was used to calculate heat integration level of the
unit.
Step 2: Site BT Benchmarking
In this step, benchmarking of the selected plants was done using the BT methodology to compare them against worldwide industry performance [9]. The SEC parameter was used for the remaining plants to compare the operational condition with the world's best technology and the design situation [10].
The meaning of import is utilities taken from outside
the plant's battery limits
The power imported has been converted to a primary
energy source. The energy value of the power
imported from the site has been calculated assuming
that an external power station would be generating
power at an efficiency of 35% equivalent to a fuel
consumption of 10.3 GJ per MWh of power.
Steam imported has been converted to a primary
energy source. The energy value of the steam
imported has been calculated assuming an external
generation efficiency of 92%.
Fuel consumption includes fuel imported and any off
gas from the process that is routed to the furnaces for
fuel.
Steam internally generated and consumed has been
accounted for as fuel consumed.
Where fuel, steam or power is exported then an
energy credit is applied. There is also a credit if there
is a significant high temperature condensate return to
outside of battery limits.
The auxiliary utilities cooling water, nitrogen, plant air, instrument air, potable water and fire water have been included using their equivalent primary energy forms, as these tend to be insignificant energy consumers (in comparison to fuel, steam and power), are normally in constant use and are often already included in the power import.
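As a rough illustration of how these assumptions turn measured imports into an SEC and a BT index, the short Python sketch below applies the stated conversion factors; the function names are ours, and the sample numbers are back-calculated from the BIPC column of Table 3 and the BT figures of Tables 4 and 5.

# Conversion of imported utilities to primary energy, per the assumptions above
POWER_TO_PRIMARY_GJ_PER_MWH = 10.3      # external power station at ~35% efficiency
STEAM_GENERATION_EFFICIENCY = 0.92      # external steam generation efficiency

def specific_energy_consumption(fuel_gj_h, steam_import_gj_h, power_import_mwh_h, hvp_t_h):
    """SEC in GJ per ton of high-value products (HVP), with imports converted to primary energy."""
    steam_primary = steam_import_gj_h / STEAM_GENERATION_EFFICIENCY
    power_primary = power_import_mwh_h * POWER_TO_PRIMARY_GJ_PER_MWH
    return (fuel_gj_h + steam_primary + power_primary) / hvp_t_h

def bt_index(actual_energy_gj_h, sum_of_process_allowances_gj_h):
    """Process BT Index (%) = actual energy consumption / sum of individual process allowances."""
    return 100.0 * actual_energy_gj_h / sum_of_process_allowances_gj_h

# Illustration: imports back-calculated so the primary-energy terms match the BIPC column of
# Table 3 (fuel 1113 GJ/h, steam 608 GJ/h, power 66 GJ/h primary, HVP 69.4 t/h).
sec_bipc = specific_energy_consumption(1113, 608 * 0.92, 66 / 10.3, 69.4)
print(round(sec_bipc, 1))               # ~25.7 GJ/t HVP, the HVP-based SEC of Table 3
print(round(bt_index(1748, 945.9)))     # ~185 %, the BIPC BT score of Tables 4 and 5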
Step 3: Gap Analysis:
The BT index allows for direct comparison of the processes as
it assesses the performance of the plants against an achievable
design, with parameters accounting for variations in operation
such as yield.
Step 4: Approach to Achievable SEC
Achievable SEC for each plant was calculated using gap
analysis and defining realistic and feasible energy saving
projects. During project development numerous options for
energy improvement were reviewed and assessed in terms of
applicability and economic viability. The projects are listed in the following categories:
Non-investment projects, implementation
and optimization (operational)
Minor investment projects, design,
implementation and optimization
Investment projects, design, implementation
and optimization (Major Investment, pay
back <3 years)
Investment projects, design, implementation
and optimization (Major Investment, pay
back <5 years)
Step 5: Energy Improvement Program
The realistic and feasible projects in each plant are classified
using technical and economic criteria, which are ranked
according to their cost and duration to provide a Roadmap for
each process. This Roadmap includes an energy improvement
program that can form the basis of future energy
improvements on other petrochemical sites that have not been
studied in detail.
Step 6: Specific Energy criteria for new plants
The key design factors influencing energy consumption on
each of the olefin process were identified. Then the results of
detailed study and energy conservation programs were used to
define SEC criteria for new plants.


III. SELECTED PLANTS FOR STUDY
Major olefin production plants in Iran are distributed in two specific zones, named the Petrochemical Special Economic Zone (Mahshahr) and the Pars Special Energy Zone (Assaluyeh), which are located in the south of Iran. These specific zones are near hydrocarbon resources (oil and gas) and also offer a good situation for foreign investment and export of products. Two olefin plants are operating outside these zones, but their number will increase rapidly through the transfer of ethane from the Pars Special Energy Zone to other cities. The following table shows the existing and under-construction olefin plants. The following figure shows the share of the operating plants in olefin production.

Table 1 - Olefin plants in Iran

      Location                              Plant name    Feed             Production Capacity   Production/
                                                                           (kTon/yr)             Construction
  1   Petrochemical Special Economic Zone   Bandar Imam   Naphtha/Ethane   528                   Production
  2                                         Marun         Ethane           1300                  Production
  3                                         Amir Kabir    Ethane           678                   Production
  4   Pars Special Energy Zone              Arya Sasol    Ethane           1000                  Production
  5                                         Jam           Naphtha/Ethane   1626                  Production
  6                                         Morvarid      Ethane           500                   Production
  7                                         Kavyan        Ethane           2000                  Construction
  8   Other Zones                           Arak          Naphtha/Ethane   434                   Production
  9                                         Tabriz        Naphtha/Ethane   192                   Production
 10                                         Ilam          Ethane           582                   Construction
 11                                         Gachsaran     Ethane           1000                  Construction
 12                                         Firouzabad    Ethane           1000                  Construction
 13                                         Genaveh       Ethane           500                   Construction
 14                                         Dehloran      Ethane           698                   Construction
 15                                         Bushehr       Ethane           1000                  Construction

Total production capacity: 6258 kTon/yr      Under construction capacity: 6780 kTon/yr




Fig1-Production capacity distribution of Iranian olefin plants in 2011 ( % )




Fig2- Operational Production distribution of Iranian olefin plants in 2011 ( % )


Fig3- Load factor in 2011 (ratio of actual production to design production capacity)

Tabriz and Bandar Imam are the oldest production plants in Iran, so the detailed study was done on these sites. All the required data for the mass and energy balances were collected and validated with proper engineering software. The BT index of these sites was calculated using benchmarking and process integration tools in their different sections. The BT index shows the opportunities for energy efficiency improvement in the process.

SEC and BT index of operating olefin plants:
The SEC of the operating plants listed in Table 1 was calculated for design and operational conditions over a one-year period. The gap between the operational and design conditions was used to find no-cost and low-cost opportunities for improvement. The following table shows the results of the SEC calculation and the no-cost operational energy saving potential in each plant.


The detailed study was done on the BIPC and Tabriz olefin plants. The following tables show the calculation results for these plants. The Bandar Imam Plant (BIPC) has a larger inefficiency gap, indicating that there is a greater potential to save energy. Although it is larger than Tabriz, it is also older and was constructed at a time when many energy efficient ideas were not incorporated due to the abundance of fuel. The BT index allows for a direct comparison of the processes as it assesses the performance of the plants against an achievable design, with parameters accounting for variations in operation such as yield. Best technology olefin plants generate power and steam at high efficiencies and in sufficient quantity that they do not need to import either utility; however, both the Bandar Imam and Tabriz plants import power and steam, and this significantly contributes to their BT score.



Table 2 - Olefin plants in Iran: annual HVP production and gap between operational and design conditions

 Row   Plant name    Annual HVP Production (kTon/yr)   Gap between operational and design
                     Design        Operation           SEC (GJ/ton)   Equivalent saving (MMNm³/yr natural gas)
 1     Bandar Imam   528           404                 6.7            72.7
 2     Marun         1300          1001                5.6            149.6
 3     Amir Kabir    678           508                 6.1            82.9
 4     Arya Sasol    1000          774                 0.5            9.4
 5     Jam           1626          1089                10.7           313.4
 6     Morvarid      500           252                 7.4            49.9
 7     Arak          434           401                 4.6            49.9
 8     Tabriz        192           187                 6.4            32.0
 Total HVP production: 6258 (design), 4616 (operation)                Total saving: 757.9
Table 3 - SEC calculation in BIPC and Tabriz

                              Bandar Imam (BIPC) Olefins   Tabriz Olefins
 Plant Feed Rate              108.7 T/hr                   42.8 T/hr
 Product Rate
   Ethylene                   52.7 T/hr                    15.7 T/hr
   Propylene                  16.7 T/hr                    6.6 T/hr
   Total HVP                  69.4 T/hr                    22.3 T/hr
 Energy Consumption
   Fuel                       1113 GJ/hr                   416 GJ/hr
   Steam                      608 GJ/hr                    73 GJ/hr
   Power                      66 GJ/hr                     46 GJ/hr
   Total                      1786 GJ/hr                   535 GJ/hr
 Existing SEC
   Ethylene Based             33.9 GJ/Ton                  34 GJ/Ton
   HV Product Based           25.7 GJ/ton                  24 GJ/ton


The energy allowance for the Tabriz and BIPC olefin plants was calculated for the different sections of each plant. It was done using process integration software and also using BT efficiencies for the energy conversion systems such as power and steam generation systems, furnaces and so on. Then the BT index for the Tabriz and BIPC plants was calculated for each section and also for the overall plant. Table 4 shows the results of the BT index gap analysis in the BIPC olefin plant.

Table 4- BIPC olefin plant inefficiency gaps
Bandar Imam Plant(BIPC) Gap (GJ/h) Energy Use (GJ/h) BT Index (%) BT Reduction (%)
Current 1748 185
Fired Heater Efficiency 99 1649 174 10.5
Heat Integration Gap 72 1577 167 7.7
Process Gap 237 1340 142 25
Power and Shaft work Efficiency 394 946 100 41.6
100% BT 946 100

The Tabriz current operating BT index is actually higher than Bandar Imam's. The main reason for this is that the imported power is significantly more expensive than the fuel; therefore, ideally the plant should generate as much power from fuel as possible. The key area affecting their position is the operation of the process furnaces. Table 5 shows the results of the olefin BT score in Tabriz and BIPC. The Bandar Imam plant has the highest energy allowance and gap. The results of the detailed gap analysis for this plant are illustrated in Fig. 4, which shows that the largest inefficiency is in power and shaft work.





Table 5 - Olefin BT comparison

 Olefins                     Units   Tabriz   BIPC
 BT Score                    %       218      185
 Actual Energy consumption   GJ/h    788.4    1748
 Energy Allowance            GJ/h    361.9    945.9
 GAP                         GJ/h    426.5    802.1
 HV Products                 t/h     22.3     69.4

Fig. 4 - BIPC olefin plant energy gap pie chart (Fired Heater Efficiency: 99 GJ/h, 12%; Heat Integration: 72 GJ/h, 9%; Power & Shaft Work Efficiency: 394 GJ/h, 49%; Process: 237 GJ/h, 30%)

The achievable SEC for the BIPC and Tabriz olefin plants was calculated. The detailed gap analysis indicates the opportunities for energy saving in different categories, with different investments and payback periods. The feasible projects are listed in the following categories according to technical and economic parameters [11].
Non-investment projects, implementation
and optimization (operational)
Minor investment projects, design,
implementation and optimization
Investment projects, design, implementation
and optimization (Major Investment, Pay
back <3 years)
Investment projects, design, implementation
and optimization (Major Investment, pay
back <5 years)
Table-6 shows current SEC in BIPC and Tabriz olefin plants
and also the estimated achievable SEC related to different
categories of projects mentioned above.

Table 6 - Current SEC and estimated achievable SEC in the BIPC and Tabriz olefin plants (GJ/t HVP)

 Site     Plant     Current SEC   Achievable SEC   Achievable SEC       Achievable SEC        Achievable SEC
                                  (Operational)    (Minor Investment)   (Major Investment,    (Major Investment,
                                                                        payback < 3 years)    payback < 5 years)
 BIPC     Olefins   25.7          24.9             24.3                 22.2                  19.4
 Tabriz   Olefins   21.7          21.4             21.0                 20.2                  20.2


The following assumptions have been used in this calculation:
The meaning of import is utilities taken from outside
the plant's battery limits
The power imported has been converted to a primary
energy source. The energy value of the power
imported from the site has been calculated assuming
that an external power station would be generating
power at an efficiency of 35% equivalent to a fuel
consumption of 10.3 GJ per MWh of power.
Steam imported has been converted to a primary
energy source. The energy value of the steam
imported has been calculated assuming an external
generation efficiency of 92%.
Fuel consumption includes fuel imported and any off
gas from the process that is routed to the furnaces for
fuel. Steam internally generated and consumed has
been accounted for as fuel consumed.
Where fuel, steam or power is exported then an
energy credit is applied. There is also a credit if there
is a significant high temperature condensate return to
outside of battery limits.
The auxiliary utilities (cooling water, nitrogen, plant air, instrument air, potable water and fire water) have not been included in this equation, as these tend to be insignificant energy consumers (in comparison to fuel, steam and power), are normally in constant use and are often already included in the power import (Table 1).





Fig5- Economic saving and Capex





Energy saving and roadmap
The detailed study of the two plants and the SEC comparison for the other olefin production plants illustrate a high opportunity for energy saving. The total estimated energy saving for all olefin plants is about 757.9 MMNm³/yr of natural gas. The roadmap to achieve the estimated energy saving in BIPC is shown in Fig. 6. The SEC, economic saving and Capex cost of the BIPC olefin plant versus the payback period are illustrated in Fig. 5.




Fig6- roadmap BIPC olefin plant



Conclusion
Olefin plants are among the most energy intensive production plants of the chemical and petrochemical industries, using hydrocarbons as fuel and feed. This paper shows the existing situation of olefin production plants in Iran and also the energy saving opportunities at these sites. The SEC for new olefin plants was proposed and approved by IFCO (Iranian Fuel Conservation Company) as 20 GJ/ton of HVP. The economically attractive improvement opportunities have been identified within the two plants studied in detail. It is estimated that full implementation of these projects will save over 102 million Nm³/yr of equivalent natural gas. On the other hand, there is a potential energy reduction of around 656 million Nm³/yr in the other olefin production plants, which corresponds to a reduction of 1.36 million tons of CO2 in Iranian olefin plants.




Acknowledgement
The authors would like to thank Iranian Fuel Conservation
Company (IFCO) for supporting the bulk of this work and
project.



References

[1] IEA, Tracking Industrial Energy Efficiency and CO2
Emissions OECD/IEA, 2007
[2] Tao Ren, Martin Patel, Kornelis Blok, Olefins from
conventional and heavy feedstocks: Energy use in steam
cracking and alternative processes, Energy, Volume 31, Issue
4, 2006, pp. 425-451.
[3] Deger Saygın, Martin K. Patel, Cecilia Tam, Dolf J. Gielen, IEA, Chemical and Petrochemical Sector: Potential of best practice technology and other measures for improving energy efficiency, 2009.
[4] Tao Ren, Martin K. Patel, Kornelis Blok, Steam cracking and methane to olefins: Energy use, CO2 emissions and production costs, Energy, Volume 33, Issue 5, May 2008, pages 817-833.
[5] J. d. Beer, Potential for industrial energy efficiency
improvement in the long term, P. D. W. C. Turkenburg and
D. K. Blok, Eds.: Utrecht, 1998, pp. 278.
[6] L. F. Albright, B. L. Crynes, and S. Nowak, Novel
production methods for Ethylene, light hydrocarbons and
aromatics, New York: Marcle Dekker Inc., 1992.
[7] Hydrocarbon-Processing, Petrochemical Processes,
Hydrocarbon Processing, March 2003
[8] ChemSystems, Process Evaluation/Research Planning:
PERP 2002/2003 Program (Appendix II Production Cost
Tables), Chem System/Nexant Inc., 2002
[9] Douglas C White, Emerson Process Management,
OLEFIN PLANT ENERGY SAVINGS THROUGH
ENHANCED AUTOMATION, AIChE Paper Number 110f.
[10] Solomon, Worldwide Olefins Plant Performance
Analysis 1995, quoted in "Energy efficiency improvement in
ethylene and other petrochemical production" By D.
Phylipsen, et. al., report NW&S 99085, Dep. of Science,
Technology and Society at the Utrecht University, 1999,
Solomon Associates Ltd., Windsor 1995.
[11] Marianne Lindström, Mikko Attila, Jaana Pennanen, Finnish Environment Institute; Elise Sahivirta, Finnish Ministry of the Environment, "Authorities' Role in the Assessment of Energy Efficiency".


Performance evaluation of an anaerobic hybrid
reactor treating petrochemical effluent

M.T. Jafarzadeh*,Manager of Environment,
National Petrochemical Company, Tehran, I.R. Iran.,
Jafarzaadeh@yahoo.com
N. Jamshidi, HSE Training Manager,
National Petrochemical Company, Tehran, I.R. Iran.,
Naserjam@yahoo.com
L. Talebiazar, Senior Expert of Environment Lab.,
AmirKabir University of Technology, Tehran, I.R. Iran.,
ltalebiazar@yahoo.com
R.Aslaniavali, Senior Expert of Environment,
AFA Company, Tehran, I.R.Iran.
R_Aslani@gmail.com


Abstract—Organic loading rate (OLR), hydraulic retention time (HRT) and up-flow velocity are important parameters significantly affecting the microbial ecology and characteristics of anaerobic reactors. In this study, the performance of an anaerobic hybrid reactor (UASB/Filter) under mesophilic conditions was evaluated in a 15.4 L reactor receiving petrochemical wastewater. The temperature of the influent was adjusted by an inline heat exchanger to around 35 °C. The reactor was seeded with flocculent sludge from a UASB plant treating dairy wastewater. The sludge was acclimatized to petrochemical wastewater in a two-stage operation. After 39 weeks, a COD reduction of 70.3% was obtained at OLR = 2.0 kg m⁻³ d⁻¹ and HRT = 18 h.
Under steady state conditions, experiments were conducted at OLRs of between 0.5 and 24 kg TCOD m⁻³ d⁻¹, hydraulic retention times (HRT) of 4-48 h and up-flow velocities of 0.021-0.25 m h⁻¹. Removal efficiencies in the range of 42-86% were achieved at feed TCOD concentrations of 1000-4000 mg L⁻¹. The biogas production data were used for the determination of biogas production kinetics. The values of Gmax and GB were estimated as 11.173 L L⁻¹ d⁻¹ and 85.83 g L⁻¹ d⁻¹, respectively.
Keywords—Hybrid; Industrial wastewater; Petrochemical; Anaerobic treatment
I. INTRODUCTION
The petrochemical industry poses a significant
environmental impact by discharging effluent to receiving
waters containing (hardly) biodegradable organic matter.
Aerobic processes are not regarded as a suitable treatment
option because of high energy requirements for aeration,
limitations in liquid-phase oxygen transfer rates, and large
quantities of sludge production. Traditional anaerobic
processes are also limited by low rates of organic matter
removal, long hydraulic retention times (HRT), accumulation
of excessive residual organic matter and intermediate products,
and large reactor volume requirements. Recent developments
in anaerobic treatment processes, especially high retention of
biomass in the reactor, has made it possible to decouple solids
retention time (SRT) and hydraulic residence time in high-rate
anaerobic reactors. This has resulted in increased treatment
efficiency of these processes and gradual but steady
improvement of the common perception that anaerobic
processes are not suitable for treatment of various industrial
effluents.
The increase in yearly production capacity from 5.9 million tons in 1990 to 125 million tons by 2025, due either to the construction of new plants or to the expansion of existing petrochemical plants in Iran, has resulted in a greater quantity and higher strength of wastewater. The type of wastewater treated by anaerobic technology in the world is completely different from the wastewater produced by the petrochemical industries in Iran, in that the major part of the studies on anaerobic treatment or of the constructed anaerobic plants for petrochemical wastewater focused on PET or PTA plants, while there is only one plant in Iran that produces PET and PTA. On the other hand, some petrochemical complexes are concentrated in regions named petrochemical zones, where the wastewater from all complexes will be treated at one common wastewater treatment plant. This results in a greater difference between the qualities and compounds of the wastewater treated by anaerobic technology here and those in other countries.
Several anaerobic reactors have been successfully applied to the treatment of various industrial wastes [1]. According to a report published in 1990 by some companies that build anaerobic reactors, there were more than 1330 anaerobic reactors in the world [1]. It is important to note, however, that the majority of these reactors (76%) were used in the food industries; only in the last 10 years has their use in other industries, such as the petrochemical industry, started [1]. Of the 1330 reactors, only 80 are used for chemical wastewater treatment, and of these 80 reactors only 33 (less than 2.5% of all reactors) are used in petrochemical industries, of which 27 are used for PET and PTA wastewater treatment. Of the reactors used for the treatment of petrochemical wastewater, 9 were hybrid reactors, and 8 of them are used for PET and PTA wastewater treatment.
Several authors reported that up to a certain limit, the
treatment efficiency of complex wastewaters, in high rate
anaerobic reactors increases with increasing OLR. A further
increase of OLR will lead to operational problems like sludge
Proceedings of the 2013 International Conference on Environment, Energy, Ecosystems and Development
99
bed flotation and excessive foaming at the gas-liquid interface
in the gas-liquid-solid (GLS) separator, as well as
accumulation of undigested ingredients. As a result, the
treatment efficiency deteriorates [2,3,4]. Also accumulation of
biogas in the sludge bed was noticed, forming stable gas
pockets that lead to incidental lifting of parts of the bed and a
pulse-like eruption of the gas from this zone [4,5]. The applied
organic loading rate (OLR) is related to the hydraulic retention
time (HRT) and the COD concentration of the waste. For this
reason, the OLR alone is an inadequate design parameter to ensure
good performance of anaerobic reactors. Young [6] reported that
HRT was the most important parameter affecting COD removal
performance. Wang [7] reported that, during anaerobic sewage
treatment in a 170 m³ hydrolysis upflow sludge bed (HUSB) reactor,
HRTs in the range of 2.5-5 h did not seriously affect the removal
rate of suspended solids. In contrast, Gonçalves et al. [8] showed
that the removal efficiency decreased with decreasing HRT,
accompanied by an increase in upflow velocity. It might be argued
that the HRT is an inadequate parameter for describing solids
removal in upflow reactors. The effect of HRT could manifest
itself through its direct relation to the liquid upflow velocity
(Vup) and to the contact time of the solids in the reactor, and
hence to the opportunity for solids to coalesce or to be entrapped
in the sludge bed.
Moreover, the HRT is a major parameter determining the SRT [9]. The
SRT can indirectly influence solids removal by changing the
physical-chemical and biological characteristics of the sludge bed,
as well as the biogas production.
The upflow velocity is one of the main factors affecting the
efficiency of upflow reactors [8,10,11]. The upflow velocity
affects the sludge retention, since retention depends on the
settling characteristics of the sludge aggregates. The upflow
velocity could therefore be a restrictive factor with respect to
the required reactor volume when treating very low strength
wastewaters and wastewaters with high suspended solids [11].
The upflow velocity has two opposing effects. On the one hand,
increasing the upflow velocity increases the rate of collisions
between suspended particles and the sludge and thus might
enhance the removal efficiency. On the other hand, increasing
the upflow velocity could increase the hydraulic shearing
force, which counteracts the removal mechanism by exceeding the
settling velocity of more particles and detaching captured
solids, and consequently deteriorates the removal efficiency.
A wide range of organic and hydraulic loading rates has
been reported in the literature for anaerobic reactors, depending
on the substrate used and the quality and quantity of the
microbial community. Syutsubo et al. [12] reported a COD loading of
45 kg COD m⁻³ d⁻¹ with a COD removal efficiency of 90% at sludge
loading rates (SLRs) of up to 3.7 g COD g⁻¹ VSS d⁻¹ for
thermophilic reactors [13]. Organic loading rates (OLR) of up to
104 kg COD m⁻³ d⁻¹ have been reported for anaerobic digestion of
sugar substrate under thermophilic conditions [14]. According to
Soto et al. [15], excellent stability and high treatment efficiency
were achieved with hydraulic residence times as low as 2 h at an
OLR of 6 kg COD m⁻³ d⁻¹, with COD removals of 95% (30°C) and 92%
(20°C). Borja and Banks [16] reported COD removal efficiencies of
64-99% at OLR values of 12-17 kg COD m⁻³ d⁻¹. Higher OLR values of
up to 45 kg COD m⁻³ d⁻¹ have been reported only for hybrid reactors
using a combination of a UASB reactor and a bentonite packing as a
biomass support [17].
Gonçalves et al. [8] treated sewage anaerobically at 20°C in an
upflow anaerobic reactor (no GLS separator) operated at upflow
velocities of 3.2, 1.7, 1.6, 0.9, 0.75 and 0.6 m h⁻¹, corresponding
to HRTs of 1.1, 2.1, 2.3, 2.8, 3.3 and 4.3 h, respectively. They
showed a deterioration of the removal efficiency as the upflow
velocity increased, from 70% SS removal at 0.75 and 0.9 m h⁻¹ to
51% at 3.4 m h⁻¹. The removal efficiency at an upflow velocity of
0.60 m h⁻¹ was, in contrast to these observations, only 60%,
because methane production started as a result of the increased HRT
and, accordingly, the SRT. An increase in upflow velocity from 1.6
to 3.2 m h⁻¹ resulted in a relatively small loss in SS removal
efficiency, from 55% to nearly 50%, which indicates the role of
adsorption and entrapment [8,9].
Petrochemical wastewater contains some nondegradable, toxic or
inhibitory components that influence reactor performance and the
applicable organic loading rates. This may limit operation to OLRs
of less than 1.0 kg COD m⁻³ d⁻¹. Kleerebezem [18] reported 90% COD
removal efficiency for a reactor treating PET effluents at an OLR
of 22 kg COD m⁻³ d⁻¹. Others have reported 52-90% COD removal
efficiency for reactors treating PET effluents at OLRs of
4.8-9.0 kg COD m⁻³ d⁻¹ [19-22]. De Almeida Prado Montenegro et al.
[23] reported 97% COD removal efficiency for a hybrid reactor
treating PCP and some organic acids at PCP concentrations of
2-21 mg L⁻¹.
In this study, the effect of organic and hydraulic loading rates on
a hybrid reactor treating petrochemical effluent was investigated
at different influent COD concentrations. The effect of upflow
velocity on the reactor performance was also studied. These are
important parameters, and only limited information is available
about the steady-state performance of hybrid reactors treating
petrochemical effluents.

II. MATERIALS AND METHODS
A. Location
This study was conducted from December 2009 to June 2012 in a
petrochemical plant in the south of Iran. In this complex, a
variety of products, including chemicals and polymers, are
produced.
B. Experimental setup
In this study, a Plexiglas column (15 cm in diameter and 120 cm in
height) was used as the anaerobic hybrid reactor. The upper 20 cm
of the reactor was operated as a fixed bed of corrugated plastic
sheets with a specific surface area of 170 m² m⁻³. The total volume
of the reactor was 18.5 L and the
liquid volume was 15.4 L. The recycle line, designed only for
emergency conditions such as clogging of the distribution system,
was not used continuously during the experimental study. There are
no solids/liquid/gas separation devices in the reactor. A schematic
diagram of the model reactor is given in Fig. 1.
The reactor was operated under mesophilic conditions; the
temperature of the influent was adjusted to 35°C by a heat
exchanger before entering the reactor. Two automatically controlled
heating devices placed at the bottom and the middle of the reactor
maintained the temperature of the liquid inside the reactor.
C. Feed
There is an existing wastewater treatment plant (WTP) in this
petrochemical complex. The WTP consists of several physicochemical
units followed by an activated sludge system. The output of the API
oil separator was fed to the reactor. Because of the increase in
the production capacity of the existing plants, the flow and
strength of the wastewater have increased beyond the WTP design
criteria. The basic composition of the wastewater is presented in
Table 1.
Biological treatment processes require macronutrients such as
nitrogen, as nitrate or ammonium salts, and phosphorus, as
phosphorus salts, for bacterial metabolism, growth, activity and
process stability. In addition, all methanogens use ammonia as
their nitrogen source [24]. The TCOD:N:P ratio of the wastewater is
1726:45.2:1.5, i.e., 700:18.33:0.61, whereas the suitable TCOD:N:P
ratio for anaerobic treatment is about 700:5:1 [25]. Comparison of
these two ratios shows that the amount of phosphorus is low.
Phosphoric acid is therefore added to the wastewater to compensate
for the phosphorus deficit.
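As a worked illustration of this nutrient check (a sketch, not part of the original study; the conversion of the deficit to a phosphoric acid dose is an added assumption):

```python
# Hypothetical sketch of the TCOD:N:P check described above.
cod, n, p = 1726.0, 45.2, 1.5             # measured averages, mg/L (Table 1)
cod_ref, n_ref, p_ref = 700.0, 5.0, 1.0   # recommended TCOD:N:P ratio for anaerobic treatment [25]

p_required = cod * p_ref / cod_ref        # ~2.47 mg P/L needed at this COD level
p_deficit = max(0.0, p_required - p)      # ~0.97 mg P/L missing
h3po4_dose = p_deficit * 98.0 / 31.0      # mg H3PO4 per litre (molar masses 98 and 31 g/mol)

print(f"P required {p_required:.2f} mg/L, deficit {p_deficit:.2f} mg/L, "
      f"H3PO4 dose {h3po4_dose:.2f} mg/L")
```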
D. Seeding
The use of an appropriate seed is very important for the start-up
of the reactor, because sufficient seed quality results in process
stability and minimizes the start-up period. In Iran, anaerobic
processes are nowhere used to treat petrochemical waste; hence,
there is no seed culture acclimatized to this type of wastewater.
The reactor was therefore seeded with flocculent sludge from a UASB
plant treating dairy wastewater.
E. Start up
BOD tests at different dilutions, and comparison of the resulting
curves with typical BOD curves, showed a lag period and toxicity of
the petrochemical wastes towards the degrading bacteria; it is
therefore necessary to adapt the microbial cells to these wastes.
At the beginning of this study (before the BOD values were
measured), the reactor was run for 5 months without adaptation, but
this was unsuccessful. It was therefore decided to adapt the sludge
in two stages. In the first stage, a synthetic wastewater made from
dried milk was fed to the reactor. In the second stage, the
proportion of petrochemical wastewater COD in the feed was
increased in 10% increments per cycle until it reached 100%.

Fig 1. Schematic diagram of the hybrid model reactor.
TABLE 1. Basic composition of wastewater from the petrochemical complex after the API oil separator

Parameter            | Average | Standard deviation | Number of samples
pH                   | 6.12    | 3.46               | 590
T, °C                | 34.5    | 1.19               | 145
CODtot*, mg L⁻¹      | 2075    | 1075               | 590
CODtot, mg L⁻¹       | 1726    | 846                | 590
CODsus/CODtot        | 0.856   | 0.102              | 53
BOD5/COD             | 0.684   | 0.107              | 19
BOD20/COD            | 0.776   | 0.123              | 19
TDS, mg L⁻¹          | 672     | 232.5              | 53
TKN, mg L⁻¹          | 45.2    | 34.8               | 53
TP, mg L⁻¹           | 1.5     | 1.25               | 53
Alkalinity, mg L⁻¹   | 366     | 56.4               | 53
* Before API separator unit
F. Operational conditions
After successful start-up was completed in week 40, the influent
COD concentration was changed stepwise from 1000 to 4000 mg L⁻¹. At
each COD step, the HRT of the reactor was changed from 48 to 24,
12, 8 and 4 h, resulting in different OLRs. By changing the
hydraulic retention time and the influent COD concentration, 25
different operational conditions were applied, and the COD removal
efficiencies were measured after hydraulically steady-state
conditions were reached. Once steady state was reached, the next
HRT was tried. The influent and effluent COD concentrations over
the reactor operation time are shown in Fig. 2.
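The derived quantities in Table 2 follow directly from the influent COD, the HRT and the reactor geometry. The sketch below (not part of the original study) approximately reproduces them; it assumes the flow rate is referenced to the total reactor volume of 18.5 L, which yields values close to the tabulated upflow velocities.

```python
# Hypothetical sketch reproducing the derived operating parameters of Table 2.
import math

D = 0.15                      # reactor diameter, m
V_TOTAL = 18.5 / 1000         # total reactor volume, m^3 (assumed basis for the flow rate)
AREA = math.pi * D**2 / 4     # cross-sectional area, m^2

def operating_point(cod_in_mg_l, hrt_h):
    """Return (OLR in kg COD m-3 d-1, upflow velocity in m h-1)."""
    olr = (cod_in_mg_l / 1000.0) / (hrt_h / 24.0)   # g/L per day == kg/m3 per day
    q = V_TOTAL / hrt_h                             # volumetric flow rate, m3/h
    return olr, q / AREA

for hrt in (48, 24, 12, 8, 4):
    olr, v_up = operating_point(1000, hrt)          # phase 1, C0 = 1000 mg/L
    print(f"HRT={hrt:2d} h  OLR={olr:5.2f} kg COD m-3 d-1  Vup={v_up:.3f} m/h")
```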
The criteria for hydraulic steady state were the following:
(a) an operation period of more than 10 times the HRT (and
more than 2 weeks) [26]; and (b) variations in effluent
concentration lower than 10% [27]. Elmitwalli [28] and
Mahmoud [29] also considered these criteria satisfactory. A
real steady state would only be achieved in the sludge bed, and
consequently in the reactor, if the operation period is at least
three SRTs [30].
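A minimal helper expressing these two criteria (a hypothetical illustration, not code used in the study) could look as follows:

```python
# Hypothetical check of the hydraulic steady-state criteria quoted above.
def hydraulically_steady(effluent_cod, days_operated, hrt_h):
    """effluent_cod: recent daily effluent COD values (mg/L) for the current condition."""
    long_enough = days_operated >= max(10 * hrt_h / 24.0, 14)   # (a) >10 HRTs and >2 weeks
    mean_cod = sum(effluent_cod) / len(effluent_cod)
    stable = all(abs(c - mean_cod) / mean_cod < 0.10 for c in effluent_cod)  # (b) <10% variation
    return long_enough and stable

print(hydraulically_steady([420, 415, 430, 425], days_operated=20, hrt_h=24))  # True
```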
G. Analytical methods
Samples of the influent and effluent of the model reactor
were taken and analyzed according to Standard Methods for
the Examination of Water and Wastewater [31]. pH, COD,
alkalinity and biogas volume were measured daily. The COD
concentration was determined by the colorimetric method, using a
Hach DR2010 spectrophotometer at a wavelength of 640 nm. The pH
value was measured with a Metrohm 692 pH meter. Gas production
rates were measured using the volume displacement method.
H. Experimental design
The experimental protocol was designed to examine the effect of
different OLRs, HRTs and upflow velocities on the operation and
performance of the reactor. All experiments were performed under
hydraulically steady-state conditions.

III. RESULTS AND DISCUSSION
A. Startup
The startup of the reactor was relatively long because the
system had not been adapted to the petrochemical wastes
previously. After 30 weeks, adaptation period had been
completed and a COD removal of 70.3% was obtained at
OLR=2.0 kgm
-3
d
-1
and HRT=18 h.
Figure 2. Influent and effluent COD concentrations (mg/L) and HRT (h) over the reactor operation time (d).
B. Steady state performance
The influent and effluent COD of the reactor during the
operation period, and the results for different organic and
hydraulic loading rates along with performance indicators are
presented in Table 2.
1) Removal efficiency
The performance of the experimental hybrid reactor based
on total COD removals at various HRTs and OLRs is shown in
Figures 3 and 4, respectively. Also, the performance of the
reactor at various upflow velocities is shown in Fig. 5.
A COD reduction ranging from 42.1 to 85.9% was achieved. The
maximum COD reduction was obtained at an influent COD concentration
of 3000 mg L⁻¹, HRT = 24 h and OLR = 3.0 kg m⁻³ d⁻¹. The minimum
COD reduction was obtained at an influent COD concentration of
4000 mg L⁻¹, HRT = 4 h and OLR = 24 kg m⁻³ d⁻¹. At approximately
the average COD concentration of this petrochemical complex
(1726 mg L⁻¹), the COD reduction ranged between 43.4 and 80.9%,
depending on the operational conditions (Table 2).
2) Hydraulic retention time
The results of the reactor performance versus HRT (Fig. 3) showed
that the COD reduction reached a maximum at HRT = 24 h and then
decreased gradually as the HRT increased further. This can be
attributed to the decrease in biogas production and upflow
velocity, which results in less mixing and contact between
substrate and biomass. At a given HRT, the TCOD reduction increases
with increasing influent COD concentration, because the higher
biogas production results in more agitation and contact between
substrate and biosolids, as shown in Fig. 3.
Figure 3. Variation of TCOD removal efficiency with HRT for influent COD concentrations (S0) of 1000, 1500, 2000, 3000 and 4000 mg/L.
TABLE 2. Summary of the conditions during the operation period of the experimental setup

Phase | Time (d) | C0 (mg L⁻¹) | HRT (h) | OLR (kg m⁻³ d⁻¹) | Vup (m h⁻¹) | Effluent COD (mg L⁻¹) | COD Red. (%)
1 | 1-105   | 1000 | 48 | 0.50  | 0.021 | 381  | 61.9
1 | 1-105   | 1000 | 24 | 1.00  | 0.042 | 448  | 55.2
1 | 1-105   | 1000 | 12 | 2.00  | 0.083 | 423  | 57.7
1 | 1-105   | 1000 |  8 | 3.00  | 0.125 | 396  | 60.4
1 | 1-105   | 1000 |  4 | 6.00  | 0.250 | 568  | 43.2
2 | 106-200 | 1500 | 48 | 0.75  | 0.021 | 385  | 74.3
2 | 106-200 | 1500 | 24 | 1.50  | 0.042 | 353  | 76.5
2 | 106-200 | 1500 | 12 | 3.00  | 0.083 | 398  | 73.5
2 | 106-200 | 1500 |  8 | 4.50  | 0.125 | 408  | 72.8
2 | 106-200 | 1500 |  4 | 9.00  | 0.250 | 675  | 55.0
3 | 201-301 | 2000 | 48 | 1.00  | 0.021 | 456  | 77.2
3 | 201-301 | 2000 | 24 | 2.00  | 0.042 | 418  | 79.1
3 | 201-301 | 2000 | 12 | 4.00  | 0.083 | 383  | 80.9
3 | 201-301 | 2000 |  8 | 6.00  | 0.125 | 756  | 62.2
3 | 201-301 | 2000 |  4 | 12.00 | 0.250 | 1133 | 43.4
4 | 302-422 | 3000 | 48 | 1.50  | 0.021 | 493  | 83.6
4 | 302-422 | 3000 | 24 | 3.00  | 0.042 | 423  | 85.9
4 | 302-422 | 3000 | 12 | 6.00  | 0.083 | 681  | 77.3
4 | 302-422 | 3000 |  8 | 9.00  | 0.125 | 1248 | 58.4
4 | 302-422 | 3000 |  4 | 18.00 | 0.250 | 1614 | 46.2
5 | 423-560 | 4000 | 48 | 2.00  | 0.021 | 669  | 83.3
5 | 423-560 | 4000 | 24 | 4.00  | 0.042 | 608  | 84.8
5 | 423-560 | 4000 | 12 | 8.00  | 0.083 | 965  | 85.0
5 | 423-560 | 4000 |  8 | 12.00 | 0.125 | 1822 | 54.5
5 | 423-560 | 4000 |  4 | 24.00 | 0.250 | 2316 | 42.1
3) Organic loading rate
The results of the reactor performance versus OLR are shown in
Fig. 4. It can be seen from this figure that, up to a certain
limit, the treatment efficiency increases with increasing OLR,
depending on the influent COD concentration. The results showed
that the COD reduction reached a maximum at OLRs ranging from 2.5
to 3.7 kg m⁻³ d⁻¹. A further increase of the OLR, obtained by
decreasing the HRT and increasing the influent COD concentration,
resulted in a lower COD reduction because of biosolids washout.
As shown in Fig. 4, at a given OLR, as with the HRT effect, the
TCOD reduction increases with increasing influent COD
concentration, because the higher biogas production results in more
agitation and contact between substrate and biosolids. The applied
organic loading rate is related to the HRT and the influent
substrate concentration. Using the applied loading rate alone as a
process parameter, doubling the OLR while holding the influent
concentration constant would be expected to decrease the efficiency
by 5 to 43%. Young [6] found this value to be about 15-18%.
4) Upflow velocity
As shown in Fig. 5, at constant upflow velocity the removal
performance increases with increasing COD concentration, because of
the greater agitation and contact between biosolids and substrate
resulting from the higher biogas production. The maximum COD
reduction of about 85% was achieved at upflow velocities in the
range 0.02-0.04 m h⁻¹ and a COD concentration of 3000 mg L⁻¹.
Increasing the upflow velocity resulted in biomass washout in the
effluent, because the biosolids are of the flocculent type rather
than granular.

Figure 4. Variation of TCOD removal efficiency with OLR for influent COD concentrations (S0) of 1000, 1500, 2000, 3000 and 4000 mg/L.
At constant COD concentration, the removal performance decreases
with increasing upflow velocity, because a higher upflow velocity
increases the hydraulic shearing force, which counteracts the
removal mechanism by exceeding the settling velocity of more
particles and detaching captured solids, and consequently
deteriorates the removal efficiency.
5) Biogas production
Biogas production is an important parameter for anaerobic treatment
systems. The specific biogas production rate is plotted against the
organic loading rate in Fig. 6.

Figure 5. Variation of TCOD removal efficiency with influent TCOD at different upflow velocities (Vup = 0.021, 0.042, 0.083, 0.125 and 0.250 m/h).
Fig. 6 confirms that the biogas production rate was a function of
the organic loading rate and that it could be described similarly
to organic substrate removal kinetics [32].
The biogas production rate can be expressed as follows:

\[ G = \frac{G_{max}\,(QS_i/V_r)}{G_B + QS_i/V_r} \]

where G is the specific biogas production rate (L L⁻¹ d⁻¹), G_max
is the maximum specific biogas production rate (L L⁻¹ d⁻¹), QS_i/V_r
is the organic loading rate (g L⁻¹ d⁻¹) and G_B is a constant. When
the inverse of the biogas production rate is plotted against the
inverse of the OLR, a straight line is obtained whose intercept and
slope give 1/G_max and G_B/G_max, respectively.
Figure 6. Specific biogas production rate, G (L L⁻¹ d⁻¹), versus the organic loading rate (kg m⁻³ d⁻¹). A linear trend line of y = 0.0797x + 0.1899 (R² = 0.8522) is shown on the plot.
This graph is given in Fig. 7. From this figure, G_max and G_B can
be estimated as 11.173 L L⁻¹ d⁻¹ and 85.83 g L⁻¹ d⁻¹, respectively,
with a high correlation coefficient (R² = 0.90). Buyukkamaci and
Filibeli [33] found values of 33.3 L L⁻¹ d⁻¹ and 88.45 g L⁻¹ d⁻¹,
respectively, for a synthetic substrate made from molasses.
Figure 7. Determination of the biogas production kinetic constants: 1/G versus V_r/(QS_i); linear fit y = 7.6815x + 0.0895, R² = 0.9029.
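The double-reciprocal fit of Fig. 7 can be sketched as follows (a hypothetical illustration, with placeholder points generated from the fitted model itself rather than the measured data):

```python
# Hypothetical sketch of the linearization 1/G = (GB/Gmax)(Vr/QSi) + 1/Gmax.
import numpy as np

olr = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])       # OLR, g COD L-1 d-1 (placeholders)
g = 11.17 * olr / (85.83 + olr)                       # specific biogas rate, L L-1 d-1

slope, intercept = np.polyfit(1.0 / olr, 1.0 / g, 1)  # fit 1/G against Vr/(Q Si)
g_max = 1.0 / intercept                               # maximum specific biogas rate
g_b = slope * g_max                                   # saturation constant
print(f"Gmax = {g_max:.2f} L/L/d, GB = {g_b:.2f} g/L/d")
```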
With these constants, the biogas production equation takes the form:

\[ G = \frac{11.17\,(QS_i/V_r)}{85.83 + QS_i/V_r} \]
IV. CONCLUSIONS
The start-up results showed that there is a lag period when
starting up the reactor; it is therefore necessary to acclimatize
the seed sludge to the petrochemical feed. The results of the study
showed that petrochemical wastewater can be satisfactorily treated
by means of high-rate anaerobic processes, specifically with the
use of a hybrid reactor. High TCOD removals of between 42 and 86%
were achieved in this study at OLRs of 0.5-24.0 kg COD m⁻³ d⁻¹ and
HRTs between 4 and 48 h. The maximum specific biogas production
rate of 11.17 L L⁻¹ d⁻¹ was of the same order of magnitude as the
rates achieved in earlier studies.

REFERENCES
[1] Macarie H. Overview of the application of
anaerobic treatment to chemical and petrochemical
wastewaters, Journal of Water Science and Technology,
2000:42:5-6:201-214.
[2] Sayed S. K. I.,Anaerobic treatment of slaughterhouse
wastewater using the UASB process. Ph.D. thesis,
Department of Environmental Technology, Wageningen
University, Wageningen, The Netherlands, 1987.
[3] Ruiz I., Veiga M. C., Santiago P. de and Blázquez R.
Treatment of slaughterhouse wastewater in a UASB reactor
and an anaerobic filter. Biores. Technol., 1997:60:251-258.
[4] Kalyuzhnyi S., Santos L. E. de los and Martinez J. R.
Anaerobic treatment of raw and preclarified potato-maize
wastewaters in a UASB reactor. Biores. Technol.,
1998:66:195-199.
[5] Elmitwalli T. A., Zandvoort M., Zeeman G., Bruning
H. and Lettinga G. Low temperature treatment of domestic
sewage in upflow anaerobic sludge blanket and anaerobic
hybrid reactors. Water Sci. Technol., 1999:9(5):177-185.
[6] Young,J.C. Factors affecting the design and
performance of upflow anaerobic filters, Water Science and
Technology , 1991:24:8:133-155.
[7] Wang Kaijun, Integrated anaerobic and aerobic
treatment of sewage. Ph.D. thesis, Department of
Environmental Technology, Wageningen University,
Wageningen, The Netherlands, 1994.
[8] Gonçalves R. F., Charlier A. C. and Sammut F.
Primary fermentation of soluble and particulate organic
matter for wastewater treatment. Water Sci. Technol.,
1994:30 (6):53-62.
[9] Zeeman G., Sanders W. T. M., Wang K. Y. and
Lettinga G. (1996). Anaerobic treatment of complex
wastewater and waste activated sludge- application of an
upflow anaerobic removal (UASR) reactor for the removal
and pre-hydrolysis of suspended COD. IAWQ-NVA
conference for Advanced wastewater treatment, 23-
25.9.96.
[10] Metcalf and Eddy , Wastewater Engineering-
treatment and reuse, 4th edn., McGraw Hill, New York,
USA 2003.
[11] Wiegant W. M. , Experiences and potential of
anaerobic wastewater treatment in tropical regions. Water
Sci. Technol.,2001:44(8):107-113.
[12] Syutsubo K, Harada H, Ohashi A, Suzuki H.
Effective start-up of thermophilic UASB reactor by seeding
mesophilically-grown granular sludge. Water Sci Technol
1997;36(6-7):391-8.
[13] Syutsubo K, Harada H, Ohashi A. Granulation and
sludge retainement during start-up of a thermophilic UASB
reactor. Water Sci Technol 1998;38(8-9 part):349-57.
[14] Wiegant WM, Lettinga G. Thermophilic anaerobic
digestion of sugars in an upflow anaerobic sludge blanket
reactors. Biotechnol Bioeng 1985;27:1603-7.
[15] Soto M, Ligero P, Vega A, Ruiz I, Veiga MC,
Blazquez R. Sludge granulation in UASB digesters treating
low strength wastewaters at mesophilic and psychrophilic
temperatures. Environ Technol 1997;18(11):1133-41.
[16] Borja R, Banks CJ. Performance and kinetics of an
upflow anaerobic sludge blanket (UASB) reactor treating
slaughterhouse wastewater. J Environ Sci Health
1994;A29:2063-85.
[17] Borja R, Banks CJ, Wang Z. Performance of a
hybrid anaerobic reactor, combining a sludge blanket and a
filter, treating slaughterhouse wastewater. Appl Microbiol
Biotechnol 1995;43:351-7.
[18] Kleerebezem R. , J.Mortier , L.W.Hulshoff Pol and
G.Lettinga, Anaerobic Pretreatment of a Petrochemical
Wastewater - Terephthalic Acid Wastewater, Journal of
Water Science and Technology, 1997:36:2-3:237-248.
[19] Macarie H. Anaerobic Treatment of a Wastewater
of a Petrochemical plant producing an aromatic compound,
terephthalic acid, PhD thesis, 1992.
[20] Young,J.C., Kim, I.S., Page, I.C., Wilson, D.R.,
Brown, G.J. and Cocci, A.A. Two stage treatment of
purified terephthalic acid production wastewaters, Water
Science and Technology , 2000:42:5-6:277-282.
[21] Page, I.C., Cocci, A.A., Grant, S.R., Wilson, D.R.
and Landine, R.C. Single stage anaerobic hybrid treatment
of a polyester intermediate production wastewater. Preprint.
Int. Conf. Waste minimization and End of Pipe Treatment in
Chemical and Petrochemical industries, 14-18 November,
Merida, Yucatan, Mexico, 1999:529-532.
[22] Page, I.C., Wilson, D.R., Cocci, A.A.. and Landine,
R.C. Anaerobic hybrid treatment of terephthalic acid
wastewater. Proc. 71 st Annual Water Environment
Federation Conf., 3-7 October 1998 , Orlando, Florida: 575-
586.
[23] M. De Almeida Prado Montenegro, E. De Mattos
Moraes, H. Moreira Soares and R. Filomena Vazoller,
Hybrid reactor performance in pentachlorophenol (PCP)
removal by anaerobic granules, Journal of water Science
and Technology,Vol.44 ,No.4 , 2011, pp.137-144.
[24] Singh R.P., Surendra Kumar, Ojha C.S.P, Nutrient
requirement for UASB process, A review, Biochemical
Engineering Journal, 1999:3:35-54.
[25] Bitton, Gabriel , 2nd edition, Wastewater
Microbiology, John Wiley and sons Ltd, 1999,
ISBN:0471320471, 330-348.
[26] Noyola, A., Capdeville, B., and Roques, H.,
Anaerobic treatment of domestic sewage with a rotating
stationary fixed-film reactor, Water Research, 1988:
(22)12:1585-1592.
[27] Polprasert, C., Kemmadamnrong, P., and Tran, F.T.,
Anaerobic baffle reactor (ABR) process for treating
slaughterhouse wastewater, Environmental Technology
1992:13:857-865.
[28] Elmitwalli, T.A. Anaerobic treatment of domestic
sewage at low temperature, Ph.D. thesis, 2009,Wageningen
University, Wageningen, the Netherlands.
[29] Mahmoud, N.J.A., Anaerobic pre-treatment of
sewage under low temperature (15°C) conditions in an
integrated UASB-digester system, Ph.D. thesis, Wageningen
University, Wageningen, The Netherlands, 2002.
[30] van Haandel, A.C. and Lettinga, G. (1994),
Anaerobic sewage treatment. A practical guide for regions
with a hot climate, John Wiley & Sons Ltd., Chichester,
England.
[31] Standard Methods for the Examination of Water and
Wastewater, 22nd Edition. American Public Health
association (APHA), American Water Works Association
(AWWA) and Water Environment Federation (WEF).
Washington DC, USA, 2012.
[32] Yu H., Wilson F. and Tay J., Kinetic analysis of an
anaerobic filter treating soybean wastewater , Water
Research, 1998:32:11:3341-52.

[33] Buyukkamaci N. & Filibeli A. Determination of
kinetic constant of an anaerobic hybrid reactor, Process
Biotechnology, 2002:38:73-79.





Association Rules in the measurement of air
pollution in the city of Santiago de Chile

Santiago Zapata Caceres
Department of Informatics and Computation
Engineering Faculty
Metropolitan Technological University of Chile
szapata@utem.cl
Juan Torres Lopez
Department of Informatics and Computation
Engineering Faculty
Metropolitan Technological University of Chile
jtorres@utem.cl


Abstract—Some time ago, computing operated in isolation. In the
1970s there was a change in the mindset of companies and
organizations: continuously recorded data came to be recognized as
the raw material that would lead to a position in the market. The
needs changed, requiring additional storage capacity, data
processing, and new tools to exploit the available information. One
such tool is known as Knowledge Discovery in Databases (KDD).
In this paper we describe the main difficulties encountered in the
discovery process, whether fears inherent in making a change or
difficulties that arise from data coming from different sources.
Finally, we present a practical application using Air Quality data
for the city of Santiago de Chile from the years 2000-2012,
recorded at the monitoring stations intended for this purpose, and
we seek to establish a relationship between the 24-hour
concentrations of PM10 and PM2.5 particulate matter.
Keywords: Association Rules, Data Mining, Causation,
Extraction of Knowledge (KDD), Environmental Contamination
I. INTRODUCTION
This work is intended to apply the tools of Knowledge Discovery in
Databases, emphasizing the importance of the process of acquiring
new knowledge in order to support decision-making in companies,
organizations and institutions, and to create the conditions to
improve the decision-making process.
The project development explains the sequence of steps to be
executed in order to exploit the data and the different algorithmic
approaches that are part of the process.
The work devotes particular attention to one of the data mining
techniques, association rules, explaining the different types of
rules that exist and ending with fuzzy association rules, whose
importance lies in searching for association rules in databases
whose attribute values are numeric and categorical.
The KDD application uses data collected by the air quality
measurement stations of the city of Santiago de Chile.
The work makes a practical application of the KDD process with data
collected from the RED MACAM (Automatic Air Pollutant Monitoring
Network), from which we obtained data for various air pollutants
for the years 2000-2012, including particulate matter, tropospheric
ozone, and carbon monoxide. Using the Clementine data mining tool
and a MySQL database, the data are processed to establish a
relationship between the particulate materials through scatter
plots and other graphical tools.
II. WORK DEVELOPMENT
A. Objectives of the work

The objectives are to emphasize the importance of information in
today's society and the benefits provided by tools such as
Knowledge Discovery in Databases when performing exploratory data
analysis in the research field studied, and, similarly, to apply
the tools provided by data mining in the process of acquiring and
generating knowledge.
To achieve these objectives, we made a practical application of
data mining on a data set from the Metropolitan Environmental
Health Service (SESMA) containing indices of the contamination
levels detected in the city of Santiago de Chile, captured at air
monitoring stations in the years 2000-2012, using the Clementine
data mining software and a MySQL database to store the detected
contaminant levels.
The purpose of the application is to extract association rules from
data stored in a MySQL database that records information on the
environmental pollution levels of various pollutants, and to
establish a relationship between the particulate materials capable
of determining the relative hazard of PM2.5 with respect to PM10.
B. Problem
Pre-emergencies in Chile are declared when pollution levels exceed
the values indicated in the environmental law. This has a serious
impact on commerce and, in some critical cases, can become a
sanitary emergency because it is dangerous to people's health.
Methods are therefore needed that can study the historical
situation of the Santiago basin, which has been characterized by
problems with pollution and suspended dust, and predict anomalous
ventilation situations so that the authorities can act efficiently.

C. Current Prediction and measurement model
Currently, a predictive model created by Joseph Cassmassi is used.
The model was developed from the air quality information measured
by the Automatic Monitoring Network
of air quality (MACAM II Network) and from upper-air meteorological
information from the central zone of the country.
The forecasting methodology for PM10 concentrations is based on
calculation algorithms developed by applying multiple-regression
statistical techniques, focused on finding relations between
possible predictor variables and the variable to be predicted. The
possible predictors include observed weather variables, observed
weather condition indexes, observed concentrations and expected
variations in emission rates.
The Cassmassi model forecasts the maximum 24-hour average
concentration of respirable particulate matter (PM10) for the
00-24 h period of the following day, expressed in µg/m³, at each of
the MACAM 2 Network stations classified as PM10 monitoring stations
with demographic representativeness (EMRP). These are: Av. La Paz,
La Florida, Las Condes, Parque O'Higgins, Pudahuel, Cerrillos and
El Bosque, in accordance with resolution N° 11481 of 1998 from
SESMA.
The forecast concentration for the next day is calculated with
different equations for each air quality monitoring station. The
variables required to solve the equations are obtained from
information on the conditions expected for each day of the week,
from the PM10 concentrations measured in the MACAM II Network, from
upper-air meteorological information obtained from the radiosondes
of the Chilean Meteorological Directorate, and from the synoptic-
and regional-scale weather conditions observed and forecast for the
region.
The operational application of this methodology considers two
prediction algorithms for each monitoring station. The first
algorithm includes the index of meteorological potential forecast
for the next day. The second algorithm is based on observations
only (same day and previous day). In that way, if the first
algorithm cannot be applied, the second one is used.
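As a rough illustration of this kind of multiple-regression forecast (the actual Cassmassi equations and predictor sets are not reproduced in this paper; the predictors and values below are invented placeholders), an ordinary least-squares fit could be sketched as:

```python
# Hypothetical multiple-regression sketch for next-day maximum 24-h PM10.
import numpy as np

# Columns: intercept, today's max 24-h PM10 (ug/m3), meteorological potential index, weekday flag.
X = np.array([
    [1.0, 180.0, 0.3, 1.0],
    [1.0, 120.0, 0.8, 1.0],
    [1.0, 210.0, 0.2, 0.0],
    [1.0,  90.0, 0.9, 0.0],
    [1.0, 160.0, 0.5, 1.0],
])
y = np.array([195.0, 110.0, 230.0, 80.0, 150.0])   # observed next-day maxima (ug/m3)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)        # least-squares regression coefficients
tomorrow = np.array([1.0, 170.0, 0.4, 1.0])         # predictors observed today
print("Forecast PM10 (ug/m3):", round(float(tomorrow @ coef), 1))
```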
With the model previously described, environmental measures are
enacted according to the following table:

Table II.1: ICAP levels and environmental measures
ICAP level  | Air quality | Enacted measure
0-99        | Good        | None
100-199     | Regular     | None
200-299     | Bad         | Environmental Alert
300-399     | Critical    | Pre-emergency
400-499     | Dangerous   | Pre-emergency
500 or more | Exceeds     | Emergency
For example, at level 299 an Environmental Alert is enacted for
that day, whereas at 300 a Pre-Emergency (special and more
restrictive) is enacted, even though the air quality at level 299
is not very different from that at level 300. We believe that the
discrete levels should be replaced by linguistic variables with
values weighted by degrees of truth, so that the implemented system
can become a decision-making support tool at the time of enacting
environmental measures.
III. APPLICATION OF KDD DATA TO AIR QUALITY
The application is carried out using the Clementine tool to apply
the methods, generate models and discover the relationships among
the data. We used the MySQL database management system to create a
data warehouse that stores operational data on the air pollution of
the city of Santiago de Chile, from the Automatic Air Pollutant
Monitoring Network, for the period 2000 to 2010, with the following
metadata: CO (carbon monoxide), PM2.5 (particulate matter 2.5),
PM10 (particulate matter 10), NOX (oxides of nitrogen), O3
(tropospheric ozone), SO2 (sulfur dioxide).
A. History
For several years there has been continuous monitoring of air
quality, both in Santiago and in other regions of the country. The
first air monitoring stations were established in 1964 in order to
measure the blackening caused by particulate matter and the acidity
of gases; these were later joined by measurements of total
suspended particulates (TSP), sulfur dioxide and nitrogen dioxide,
made through a semi-automatic quality monitoring network. In 1988,
evaluation of PM10, CO, SO2, NOX and O3 began with an automatic
monitoring network of five stations and a central data capture
facility. In 1997 the network was expanded to eight online
stations, with monitoring also performed by two portable stations,
and, on the basis of the Environmental Framework Law, Santiago was
declared a saturated zone for PM10, TSP, CO and ozone, and a latent
zone for SO2. For this reason, decontamination plans are being
generated which from time to time adopt new emission control
measures and restrictions on certain activities.
B. Description of Pollutants

Particulate Matter PM10
One of the pollutants that causes most damage to health is PM10
particulate matter, which reaches the atmosphere from various
sources and whose degree of risk varies depending on the emitting
source.
Particulate Matter PM2.5
PM2.5 particulate matter causes more damage than PM10 due to its
smaller size; it is responsible for between 50% and 70% of the
deterioration of human health attributed to particulates. Its small
size makes it easier for it to enter houses, to remain suspended in
the atmosphere for longer, and to enter the lungs, damaging
people's defense mechanisms and causing respiratory infections,
cardiovascular disease and even death.


Carbon Monoxide (CO)
Carbon monoxide is produced by the incomplete combustion of natural
gas or carbon-containing products (kerosene, oil, etc.) in car
engines, stoves, portable indoor heating systems, and the exhaust
pipes of cars, trucks or buses. It causes thousands of deaths in
North America and is the leading cause of death by poisoning
through inhalation of this gas.
Nitrogen Dioxide (NOx)
Exposure to nitrogen dioxide generates acute and chronic effects on
people's health, as confirmed by studies of the WHO (World Health
Organization) and the Environmental Protection Agency (EPA). The
damage manifests as irritation of the lungs and/or reduced
resistance to respiratory infections, above all in people with
asthmatic problems.
Troposphere Ozone (O3)
This gas is generated at ground level as a result of the burning of
fossil fuels (gasoline, natural gas, coal, etc.), so a greater
volume is found in large cities and industrial areas. It is
injurious to health and the environment and, in periods of high
temperatures, generates the so-called "photochemical smog" when
combined with the exhaust of cars and factories.
Sulfur Dioxide (SO2)
Exposure to this pollutant has acute and chronic effects on
people's health, so its environmental emergency concentrations are
defined over one hour and for the areas surrounding the emission.
It reduces lung capacity, so its effects are amplified by physical
activity, by hyperventilation when breathing cold, dry air, or in
people with bronchial hyperreactivity.

C. Air quality
The Air Quality Index for particulate matter (ICAP) is used in
Chile to measure air quality. For a long time PM10 was considered
the most important pollutant; in fact, its 24-hour average
concentrations in Santiago often exceed what is considered normal.
However, new studies indicate that PM2.5 is the most dangerous,
because its small diameter allows it to have a greater presence in
the environment and also to enter the bloodstream more easily.
D. Data Collection
The collected data provide air quality indices referring to
particulate matter (ICAP) and come from different monitoring
stations spread over different districts of Santiago de Chile, with
a temporal archive and a measurement period that varies depending
on the associated station. The presence of pollutants determines
the impact of contamination in the measured area. These indices are
not captured by all monitoring stations.
E. Preparation of the data
The data, coming from different sources, require treatment to
maintain consistency for further manipulation. The process is
carried out through Clementine software functionality, which
accesses the data and brings them together in a single file,
maintaining the original data type of each attribute. Initially the
data are recorded in a database buffer; a series of transformations
then populates the data warehouse with historical data on the
environmental pollution levels detected by the various monitoring
stations distributed across the districts of the city of Santiago
de Chile during the period 2000-2012.
To maintain the relative normality of the data, they are subjected
to a validation process by the Metropolitan Health Ministerial
Secretariat, which runs a background check of the network stations
(power outages, monthly or bimonthly filter changes, telephone
network problems, preventive and corrective maintenance,
calibration of the analysis equipment, etc.) in order to obtain
validated operational data.
F. Visual Data Exploration

In the knowledge discovery process it is important to have a view
of the information the data can provide prior to handling them,
which is why data visualization tools (scatter plots, histograms,
etc.) are used.

1) Monitoring infrastructure.

For this purpose, graphs are generated with Microsoft Excel that
allow a visual comparison between PM10 and PM2.5 particulate
matter. Because in 2000 there was no law regulating the
concentration of PM2.5 particulate matter, the graphics include the
current standard for PM10 in micrograms per cubic meter, which, as
explained above, also covers PM2.5, since it considers all
particles with an aerodynamic diameter of less than 10 microns.
Figure 1: Measurement of Particulate Matter PM2.5 in EML
Monitoring Station (L - Florida) for 2000.
This measurement generally remains within the established standard
for the pollutant, with the highest PM10 concentration levels
reached between mid-April and late September, and peaks exceeding
the norm between the months of May and September.


2) Scatter chart: MP10 vs MP2.5

Figure 2: Scatter plot of PM10 and PM2.5 particulate matter pollution levels at the La Florida monitoring station.
Figure 2 illustrates the greater danger of PM2.5 particulate matter
compared with PM10, with reference to the regulated pollution
levels of the latter. It can be appreciated that PM2.5 reaches
levels considered bad with indices below 25 µg/m³, while PM10
requires levels greater than 200 µg/m³; moreover, with a PM2.5
amount close to 50 µg/m³, dangerous PM10 levels above 300 µg/m³
begin to appear. The graph thus shows the greater danger of PM2.5
compared with PM10.

These results define a trend, repeated at each of the monitoring
stations, of lower indices for the contaminant MP2.5 compared with
MP10, but with the same degree of hazard.
3) Relationship between particulates, pollution levels and
monitoring stations

Figure 3 shows the graphs for regular days and good days through
the MP2.5/MP10 ratio, which is higher on good days, unlike what
happens on days when the quality level is considered hazardous,
where the contamination levels of both particulate materials reach
a certain similarity. According to the graphs shown, MP2.5
contamination levels are present in the environment to a lesser
degree than MP10; therefore, when their concentrations become
similar, contamination levels classified as critical, dangerous,
bad or exceeded are reached.

Figure 3: Relationship between particulate materials for the La Florida monitoring station.
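A minimal sketch of how such a daily MP2.5/MP10 ratio could be derived from the warehouse records (the column names and values below are hypothetical placeholders, not the actual schema):

```python
# Hypothetical computation of the daily PM2.5/PM10 ratio by air-quality level.
import pandas as pd

daily = pd.DataFrame({
    "pm10":    [60.0, 180.0, 310.0, 45.0],           # 24-h concentrations, ug/m3
    "pm25":    [20.0,  95.0, 250.0, 12.0],
    "quality": ["Good", "Bad", "Critical", "Good"],  # ICAP level of the day
})
daily["ratio"] = daily["pm25"] / daily["pm10"]
print(daily.groupby("quality")["ratio"].mean())
```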
4) Contamination level meshes

As can be seen in Figure 4, corresponding to the records of the La
Florida monitoring station in 2000, the indices move through all
the regulated pollution levels during the year; however, the good
level is the one repeated most frequently, in both the winter and
the summer months. This can be explained by the higher rainfall in
the winter months and by the decrease in polluting vehicles during
the summer, which keeps the concentration of particulates lower.
Figure 4: Levels of contamination by season in 2000.
Figure 5 shows the monthly record of particulate matter
concentrations in 2000. The "dangerous", "exceeds", "critical" and
"bad" concentration levels are repeated throughout the year except
in the month of November, although no single contamination level
shows a markedly greater frequency in any particular month.
Figure 5: Months of 2000 in which pollution levels considered "bad" were recorded.
a) Mesh critical levels

Figure 6: Months of the year in which critical pollution levels were recorded.

According to the data recorded for the year 2000, the only months
in which there were no critical levels of contamination were
January, February and November.
b) Mesh dangerous levels.

Figure 7: Months of 2000 in which pollution levels considered "dangerous" were recorded.
During 2000, dangerous levels of contamination were recorded
between the months of March and December, except for October and
November.

c) Exceeds Levels of Particulate Matter

Figure 8: Months in which the "exceeds" level of particulate matter pollution was recorded.

The extreme values of contamination occurred in
the period March to December with the exception of
the months October and November.

d) Mesh stations contamination levels

Figure 9: Relation between contamination levels and seasons for the La Florida monitoring station.
In the commune of La Florida, "good" levels of contamination are
found during the summer.

IV. GENERATION OF ASSOCIATION RULES USING THE
CLEMENTINE SOFTWARE.

In order to generate association rules, the "Apriori" node of the
Clementine software is used, defining a minimum support and a
minimum confidence for the rules of 50%. Recall that it is the
"support" that defines the frequency with which the items appear
together, and the "confidence" that determines the cohesion of the
data.
Minimum rule support = 50%
Minimum rule confidence = 50%
Maximum antecedents = 5

The 24-hour moving-average concentrations are available for both
particulate materials (PM10 and PM2.5) from the year 2001, which is
used as the base year, continuing until 2012. For testing purposes,
the PM10 Air Quality Standard is applied to the PM2.5 particulate
matter, so as to have an approximate basis for regulating and
standardizing the latter contaminant.
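The same extraction can be sketched outside Clementine. The snippet below is a hypothetical illustration using the open-source mlxtend implementation of Apriori with the 50% support and confidence thresholds quoted above; the input columns are invented placeholders, not the warehouse schema.

```python
# Hypothetical Apriori sketch with mlxtend (not the authors' Clementine stream).
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# One row per 24-h record; boolean flags for discretized air-quality levels.
records = pd.DataFrame({
    "PM10_good": [True, True, False, True, False],
    "PM25_good": [True, True, False, True, True],
    "PM10_bad":  [False, False, True, False, True],
})

frequent = apriori(records, min_support=0.5, use_colnames=True, max_len=6)
rules = association_rules(frequent, metric="confidence", min_threshold=0.5)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```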


Table 1.2: Association rules extracted from the data recorded at La Florida. The highest confidence levels are achieved in the first three rows.
The way the data are structured explains why the supports of the
first and third rules match in proportion, and why both have the
same consequent, since both are governed by the Air Quality
Standard defined for PM10 particulate matter; when comparisons are
made between them, the difference lies in the percentage of
confidence that the Apriori algorithm assigns to the data.

Of the above rules, it is the third rule, which states "if the
contaminant is PM10 then the Air Quality level in La Florida is
good", that expresses the most tangible truth, as PM10 has its own
regulatory air quality standard. For this reason there is a
certainty of more than 97% that this holds for the contaminant
represented, but it is "unknown" whether it holds equally well for
PM2.5 particulate matter, since we are assuming that the latter and
PM10 are governed by the same rules. If PM2.5 is less polluting
than PM10, no major problem arises, because the confidence is very
close to 100% and the air quality is good; but if, with the same
level of support, a standard of its own for PM2.5 particulate
matter were to define the air quality as poor, a support of almost
100% would require taking steps to reduce pollution levels.

V. CONCLUSIONS

With regard to the results, a relationship is established between
PM10 and PM2.5 particulate matter which shows that when the
contaminant MP2.5 reaches an index higher than or close to
85 µg/m³, the contamination level of the PM10 particulate matter is
considered bad; this means that lower levels of this contaminant
could already be causing health problems in people. If this
observation is compared with the graphs of the relationship between
these particulate materials, it can be seen that when the
MP2.5/MP10 ratio approaches a straight line, the contamination
levels are classified as "dangerous", "bad" or "exceeds", and this
is because the indices of PM2.5 approach those of PM10, surpassing
it in threat and almost reaching those of the contaminant of less
than 10 microns in diameter.

VI. REFERENCES

[1] Aristóteles, La Metafísica.
[2] Mario Bunge. Causalidad. Editorial Universitaria, Buenos Aires, 1978.
[3] Hobbes, Thomas. 1996 (1651). Leviatán (México, Fondo de Cultura Económica).
[4] Jean Wahl. Introducción a la filosofía. Fondo de Cultura Económica, México, 1954.
[5] Fernando Berzal, Ignacio Blanco, Daniel Sánchez and María Amparo Vila, Measuring the accuracy and interest of association rules: A new framework, Department of Computer Science and Artificial Intelligence, University of Granada, E.T.S.I.I, March 2002.
[6] C. Silverstein, S. Brin, et al. [1998] Scaleable Techniques For Mining
Causal Structures," Proceedings. 1998 International Conference Very
Large Data Bases, NY, 594-605
[7] G. Cooper [1997] A Simple Constraint-Based Algorithm for Efficiently
Mining Observational For Causal Relationships in Data Mining and
Knowledge Discouvery, v 1, n 2, 203-224
[8] J. Pearl, J. [2000] Causality: Models, Reasoning, And Inference,
Cambridge University Press, NY.
[9] W. Frawley and G. Piatetsky-Shapiro and C. Matheus, Knowledge
Discovery in DataBases: An Overview. AI Magazine, Fall 1992, pgs
213-228.
[10] RED MACAM (Red de Monitoreo Automático de Calidad del Aire y Meteorología), www.conama.cl/rm/568/article-1114.html
[11] Universidad de Santiago de Chile, Normativa Ambiental en Aire (Fuentes Fijas).
[12] Geofísica de la Atmósfera: Pronosticando la Contaminación Atmosférica mediante Redes Neuronales, Departamento de Geofísica, Universidad de Chile.
[13] Secretaría Regional Ministerial de Salud Región Metropolitana (SESMA). Aire: Información General, http://www.asrm.cl/sitio/pag/aire/indexjs3aire.asp
[14] C. Arguedas. INFORME. Análisis de las Normas de Calidad del Aire en Chile, Estados Unidos, México y la Comunidad Europea, SESMA, Chile, 2002.
[15] Zapata Caceres Santiago, Escobar Ramirez Luis, Reyes Pastore Carlos,
Cortez Torres Jhons, Fuzzy Approach to the Management of the
Environmental Contamination in Santiago city of Chile, 2007
International Conference on Artificial Intelligence (ICAI 2007), vol.II,
2007, Las Vegas, Nevada, USA
[16] Zapata Caceres Santiago, Escobar Ramirez Luis, Intelligent Analysis to
the Contamination in the City of Santiago from Chile.Advances in
systems, computing sciences and software engineering, Proceedings of
SCSS 2005, Sobh, Tarek; Elleithy, Khaled (Eds.), 2006, XIV, 437 p., T.
Sobh, University of Bridgeport, Bridgeport, USA; K. Elleithy,
University of Bridgeport, Bridgeport, USA (Eds.)
[17] C. Kuok, A. Fu, M. Wong. Fuzzy Association Rules in Databases,
Department of Computer Science and Engineering, The Chinese
University of Hong Kong, Shatin, New Territories, Hong Kong. 1998.
[18] Geofísica de la Atmósfera: Pronosticando la Contaminación Atmosférica mediante Redes Neuronales, Departamento de Geofísica, Universidad de Chile, http://www.geofisica.cl/English/pics3/FUM6.htm
[19] L. X. Wang, J. Mendel. Generating Fuzzy Rules by Learning from
Examples, IEEE Transactions on Systems, Man, and Cibernetics 22,
1414 -1427, (1992).
[20] Zapata C. Santiago, Maruri B. Christian, Rojas B. Ronald, Creation of a
Data Warehouse using the F-Cube Factory Software to Resolve
Problems with Degrees of Truth, Proceedings of the 2nd European
Conference of Computer Science (ECCS '11), WSEAS, Puerto De La
Cruz, Tenerife, Spain, December 10-12, 2011




Effects of the advance ratio on the Evolution of
Propeller wake

D. G. Baek / J. H. Jung
: Department of Naval Architecture and Ocean Engineering
Pusan National University
San 30, Jangjeon 2-Dong, Gumjeong-Gu
Busan 609-735, Korea
pi@pusan.ac.kr / vof@pusan.ac.kr
H. S. Yoon
: Global Core Research Center for Ships and Offshore Plants
Pusan National University
San 30, Jangjeon 2-Dong, Gumjeong-Gu
Busan 609-735, Korea
lesmodel@pusan.ac.kr


Abstract—This study numerically carried out the propeller open
water test (POW) by solving the Navier-Stokes equations governing
three-dimensional unsteady incompressible viscous flow with the
k-ω SST turbulence closure model. Numerical simulations were
performed over a wide range of advance ratios. A great difference
in velocity magnitude between the inner region and the outer region
of the slipstream tube forms a thick, large velocity gradient which
originates from the propeller tip and develops downstream.
Eventually, a strong shear layer appears and plays the role of the
slipstream boundary. As the advance ratio increases, the vortical
structures originating from the propeller tips decay quickly. The
contraction of the vortex trace is considerable with decreasing
advance ratio.
Keywords—propeller; wake; tip vortex; slipstream; advance ratio; KP505
I. INTRODUCTION
As marine vehicles become larger and faster, the loading on
their propeller blades increases. This increased propeller
loading may lead to problems such as noise, hull vibration, and
cavitation at high speed. The geometry of a propeller should
therefore be designed to minimize them. In general, modern
propeller blades have a complicated geometry, making the
wake behind the propellers complicated too. Therefore, any
serious attempt to optimize the geometrical shape of modern
propellers will require a reliable wake analysis based on
detailed experimental measurements.
The flow field analysis around a propeller is complicated by many
factors, such as unsteadiness, three-dimensionality, and high
turbulence levels. These properties have been pointed out in many
previous experiments, such as Laser Doppler Velocimetry (LDV) and
PIV measurements. Stella et al. (1998) measured the axial velocity
component of a propeller wake, and Chesnakas and Jessup (1998)
investigated the tip vortex flow using LDV. Cotroni et al. (2000) have used
PIV and particle tracking velocimetry (PTV), respectively, to
investigate the near-wake of an isolated marine propeller in
longitudinal planes. Calcagno et al. (2002) investigated the
complicated 3-D flow behind a marine propeller in the
transverse and longitudinal planes using a stereoscopic PIV
(SPIV) technique. Lee et al. (2004) have compared the flow
structures of the same marine propeller for the cases of open
free surface and closed surface flows at a rather low Reynolds
number.
Recently, due to the improvements of computer
performances, Reynolds Averaged Navier Stokes (RANS)
solvers are becoming the practical tool. Abdel-Maksoud et al.
(1998) investigated viscous flow simulations for conventional
and high skew marine propellers. Chen and Stern (1999)
evaluated computational fluid dynamics of four-quadrant
marine-proposer flow. Watanabe et al. (2003) examined
simulation of steady and unsteady cavitation on a marine
propeller using a RANS code. Rhee and Joshi (2005) estimated
computational validation for flow around marine propeller
using unstructured mesh based Navier-Stokes solver.
Kawamura et al. (2006) investigated Simulation of unsteady
cavitating flow around marine propeller using a RANS CFD
code. Mitja Morgut and Enrico Nobile (2009) evaluated
Comparison of Hexa-Structured and Hybrid-Unstructured
Meshing Approaches for Numerical Prediction of the Flow
around Marine Propellers.
According to the authors' literature survey, there is little
research that provides correlations between the vortical structures
and the wake as a function of the advance ratio. Therefore, the
present study focuses on the propeller-induced flow structures,
such as the propeller slipstream and the tip vortex, according to
the advance ratio.
TABLE I. PRINCIPAL PARTICULARS OF KP505
Scale ratio: 31.6
Diameter, D (m): 0.250
Mean pitch/diameter: 0.950
Ae/A0: 0.800
Hub ratio: 0.180
No. of blades: 5
Section: NACA66
II. NUMERICAL DETAILS
A. Governing equations
Ship hydrodynamic problems are generally solved with a numerical
code in the framework of the Reynolds-Averaged Navier-Stokes (RANS)
equations. The continuity equation is

\[ \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \vec{v}) = 0 \qquad (1) \]

and the momentum equations are

\[ \frac{\partial}{\partial t}(\rho \vec{v}) + \nabla \cdot (\rho \vec{v}\vec{v}) = -\nabla p + \nabla \cdot \bar{\bar{\tau}} + \rho \vec{g} + \vec{F} \qquad (2) \]

where p is the static pressure, \(\bar{\bar{\tau}}\) is the stress
tensor, and \(\rho\vec{g}\) and \(\vec{F}\) are the gravitational
body force and external body forces (e.g., those arising from
interaction with the dispersed phase), respectively; \(\vec{F}\)
also contains other model-dependent source terms such as
porous-media and user-defined sources.

In the Reynolds averaging, the dependent variables in the
instantaneous (exact) Navier-Stokes equations are decomposed into
mean (ensemble-averaged or time-averaged) and fluctuating
components, so Eqs. (1) and (2) can be written in Cartesian tensor
form as

\[ \frac{\partial \rho}{\partial t} + \frac{\partial}{\partial x_i}(\rho u_i) = 0 \qquad (3) \]

\[ \frac{\partial}{\partial t}(\rho u_i) + \frac{\partial}{\partial x_j}(\rho u_i u_j) = -\frac{\partial p}{\partial x_i} + \frac{\partial}{\partial x_j}\left[ \mu \left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} - \frac{2}{3}\,\delta_{ij}\,\frac{\partial u_l}{\partial x_l} \right) \right] + \frac{\partial}{\partial x_j}\left( -\rho \overline{u_i' u_j'} \right) \qquad (4) \]

where \(\delta_{ij}\) is the Kronecker delta and
\(-\rho \overline{u_i' u_j'}\) are the unknown Reynolds stresses,
which are modelled as

\[ -\rho \overline{u_i' u_j'} = \mu_t \left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \right) - \frac{2}{3}\left( \rho k + \mu_t \frac{\partial u_l}{\partial x_l} \right) \delta_{ij} \qquad (5) \]

The equations are closed with a turbulence model; here the
k-\(\omega\) SST model is employed:

\[ \frac{\partial}{\partial t}(\rho k) + \frac{\partial}{\partial x_i}(\rho k u_i) = \frac{\partial}{\partial x_j}\left( \Gamma_k \frac{\partial k}{\partial x_j} \right) + G_k - Y_k + S_k \qquad (6) \]

\[ \frac{\partial}{\partial t}(\rho \omega) + \frac{\partial}{\partial x_i}(\rho \omega u_i) = \frac{\partial}{\partial x_j}\left( \Gamma_\omega \frac{\partial \omega}{\partial x_j} \right) + G_\omega - Y_\omega + D_\omega + S_\omega \qquad (7) \]

In these equations, \(G_k\) represents the generation of turbulence
kinetic energy due to mean velocity gradients and \(G_\omega\) the
generation of \(\omega\); \(\Gamma_k\) and \(\Gamma_\omega\) are
the effective diffusivities of k and \(\omega\), respectively;
\(Y_k\) and \(Y_\omega\) are the dissipations of k and \(\omega\)
due to turbulence; \(D_\omega\) is the cross-diffusion term; and
\(S_k\) and \(S_\omega\) are user-defined source terms.
B. Numerical methods
The RANS formulation is used and the equations are solved with a
sliding-interface method for the unsteady-flow mode. The
pressure-velocity coupling and the overall solution procedure are
based on the SIMPLEC algorithm. A second-order scheme is used for
the pressure and convection terms, and a second-order central
difference scheme for the diffusion terms.

C. Computational schemes
In the case of unsteady simulation, the whole domain should be
computed with the sliding mesh technique. The computational domain
is defined as a cylinder of 8.6D diameter surrounding the propeller
and hub. The inlet and outlet boundaries are located 2.4D upstream
and 5.3D downstream of the propeller center, respectively. The
domain is split into a global stationary part and a moving part,
the latter specified by a smaller cylinder enclosing the blades and
hub entirely. A trimmer mesh is employed for the global stationary
block.

Fig. 1. Geometry of KP505

Fig. 2. Coordinate system of KP505


Fig. 3. Schematic of the computational domain and boundary conditions.

(a) (b)
Fig. 4. Generated grid: (a) overview and (b) around the propeller.

D. Validation
The computed and measured open water data are compared in
Fig. 5, in which three groups of computed KT, KQ and η0 are
included. The computed propeller performance is similar to the
experimental results (Fujisawa et al. 2000), especially at smaller
J values: at J = 0.2~0.5 the differences are smaller than 7% for
KQ, and at all J values the differences are smaller than 5% for KT.
As J grows larger, the difference in performance becomes larger.
The reason for the larger differences may partly be the very small
absolute values of the performance coefficients at large J; doubts
about the experimental results at large values of J may also be
considered.
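As a minimal sketch of how such a comparison can be post-processed, the standard non-dimensional open-water coefficients KT = T/(ρ n² D⁴), KQ = Q/(ρ n² D⁵) and η0 = (J/2π)(KT/KQ) can be evaluated and compared with measurements; the thrust, torque and flow values below are purely illustrative (hypothetical), not results of this study.

import numpy as np

def open_water_coefficients(thrust, torque, rho, n, D, Va):
    """Non-dimensionalize propeller open-water results.
    thrust [N], torque [N*m], rho [kg/m^3], n [rev/s], D [m], Va [m/s]."""
    J = Va / (n * D)                      # advance ratio
    KT = thrust / (rho * n**2 * D**4)     # thrust coefficient
    KQ = torque / (rho * n**2 * D**5)     # torque coefficient
    eta0 = (J / (2.0 * np.pi)) * KT / KQ  # open-water efficiency
    return J, KT, KQ, eta0

def percent_difference(computed, measured):
    """Relative difference used to compare CFD with experiment."""
    return 100.0 * (computed - measured) / measured

# Illustrative (hypothetical) numbers for a model-scale propeller (D = 0.25 m)
J, KT, KQ, eta0 = open_water_coefficients(
    thrust=100.0, torque=4.0, rho=998.0, n=9.5, D=0.25, Va=0.7)
print(J, KT, KQ, eta0)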



Fig. 5. Validation of open water test and difference
III. RESULTS AND DISCUSSION
The typical instantaneous velocity vectors and the axial
velocity contours in the plane normal to the propeller plane at
phase angle θ = 0 are plotted in Fig. 6, where two representative
advance ratios of J=0.2 and 0.7 are considered. The propeller
axis is aligned with y/D = 0, and the propeller plane is located
at x/D = 0. In general, the flow behind the propeller is composed
of the slipstream tube and the tip vortices, which are more clearly
identified as the advance ratio decreases. At J=0.2, which is the
smallest advance ratio, or the heaviest load condition, among the
advance ratios considered in this study, the freestream and the
propeller-induced flow superpose: a relatively strong flow with
large velocity vectors forms inside the propeller tip radius,
outside of which the freestream velocity dominates the flow, as
shown in Fig. 6(a).
This stratification of the flow is clarified by the contours of
the axial velocity in Fig. 6(b). The velocity magnitude in the
slipstream tube is much larger than that outside of the slipstream.
This large difference in velocity magnitude between the inside and
the outside of the slipstream tube forms a thick region of large
velocity gradient which originates from the propeller tip and
develops downstream. Eventually, a strong shear layer appears and
plays the role of the slipstream boundary.
As the advance ratio increases, the propeller load becomes
lighter. Therefore, the difference in velocity magnitude between
the slipstream and the outside becomes minor, resulting in the
almost complete disappearance of the shear layer which clearly
appeared at the lower advance ratio of J=0.2 in Figs. 6(a) and 6(b).
These variations of the wake with increasing advance ratio are
clarified by the velocity vectors and the axial velocity contours
shown in Figs. 6(c) and 6(d), respectively.




(a)


(b)
Fig. 6. Typical instantaneous velocity vector fields and axial velocity contours
in the longitudinal plane at θ = 0: (a) J=0.2, (b) J=0.7.
The other component of the near wake of the propeller is
the tip vortices, which are shed successively from the tip of
each blade at a regular interval. Paik et al. (2007) used the
Galilean decomposition method to understand the coherent
vortex structure of the wake behind a rotating propeller. Thus,
the present study also adopts the Galilean decomposition
method to identify the regular appearance of the tip vortices
downstream of the propeller.
Fig. 7 shows the appropriately decomposed instantaneous
velocity field for the two advance ratios of J=0.2 and 0.7.
Several different convection velocities, taken as the translational
velocity of the vortices, were tried in order to obtain the proper
value of the convection velocity (Uc) that is subtracted from the
axial component of the instantaneous velocity already shown in
Fig. 6, so as to reveal the rotational flow corresponding to the
tip vortices. Eventually, Uc = 1.2U0 is adopted for the Galilean
decomposition in the present study.
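The decomposition itself is a simple frame shift of the measured or computed velocity field. The sketch below shows the operation under the assumption that the axial and radial velocity components are available on a 2-D grid of the longitudinal plane; the field values used here are placeholders, not data from the present simulations.

import numpy as np

def galilean_decompose(u_axial, v_radial, U0, c=1.2):
    """Subtract a uniform convection velocity Uc = c*U0 from the axial
    component so that vortices convecting with Uc appear as closed
    rotational patterns in the (u - Uc, v) vector field."""
    Uc = c * U0
    return u_axial - Uc, v_radial

# u, v: 2-D arrays sampled on the longitudinal plane (placeholder values)
u = np.random.rand(64, 64)   # axial velocity field
v = np.random.rand(64, 64)   # radial velocity field
u_rel, v_rel = galilean_decompose(u, v, U0=1.0, c=1.2)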


(a)


(b)
Fig. 7. Contours of vorticity (left) and typical instantaneous velocity vector
fields after subtraction of the convection velocity (Uc=1.2U0) for J=0.2 and
J=0.7 at phase angle θ = 0.

(a) (c)

(b) (d)
Fig. 8. KP505 iso-surface of λ2 = -5,000 for four different loading
conditions during transient acceleration (a) J=0.2, (b) J=0.7.
At both advance ratios of J=0.2 and J=0.7, the rotational
flow motion appears near the tip and a large velocity gradient
occurs in the wake sheet. The convection velocity of the wake
sheet is larger than that of the tip vortices in the slipstream region.
In particular, at the lower advance ratio, the convection velocity of
the wake sheet is much larger than that of the tip vortices in the
slipstream region.
Further downstream, the rotational flow motion is not clearly
captured, as shown in Fig. 7(a), even though the Galilean
decomposition is used to detect the rotational flow. Consequently,
the large difference in convective velocity between the tip vortices
and the wake sheet contributes to the distinct formation of the
shear layer and to the spatial evolution of the vortical structure.
This can be confirmed with the vorticity contours shown in
Fig. 7(a): in conformity with the preceding observations, the
vorticity shed from the tips is elongated and moves downstream.
As the load condition becomes lighter, the vorticity contour takes
a strongly asymmetric shape, like an oval with a short minor axis.
This asymmetry of the vorticity is caused by the interaction between
the tip vortex and the wake sheet, as shown in Fig. 7(b). The present
results on the formation of the rotational flow motion and the wake
sheet are consistent with the findings of Paik et al. (2007).
In order to identify the three-dimensional vortical structures
originating from the propeller tip, we adopted the method of
Jeong & Hussain (1995), who defined a vortical region as the
region where λ2, the second largest eigenvalue of S² + Ω², is
negative, where S and Ω are the strain-rate tensor and the
rotation-rate tensor, respectively.
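A compact way to evaluate this criterion from a velocity-gradient field is sketched below; the sample gradient tensor is hypothetical and only illustrates that a purely rotating region gives a negative middle eigenvalue.

import numpy as np

def lambda2(grad_u):
    """lambda2 criterion of Jeong & Hussain (1995).
    grad_u: (..., 3, 3) array of velocity-gradient tensors du_i/dx_j.
    Returns the middle eigenvalue of S^2 + Omega^2; a point is inside
    a vortex core where the returned value is negative."""
    S = 0.5 * (grad_u + np.swapaxes(grad_u, -1, -2))   # strain-rate tensor
    W = 0.5 * (grad_u - np.swapaxes(grad_u, -1, -2))   # rotation-rate tensor
    M = S @ S + W @ W                                   # symmetric tensor
    eig = np.linalg.eigvalsh(M)                         # ascending eigenvalues
    return eig[..., 1]                                  # second largest (middle)

# Hypothetical gradient tensor of a solid-body-like rotation
g = np.array([[0.0,  2.0, 0.0],
              [-2.0, 0.0, 0.0],
              [0.0,  0.0, 0.0]])
print(lambda2(g))   # negative inside a rotating region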
As the advance ratio increases, i.e. as the load condition
becomes lighter, the vortical structures originating from the
propeller tips decay quickly, as shown in Fig. 8. This result is
supported by the velocity vectors and the axial velocity contours
in Figs. 6 and 7, which showed that the shear layer formed by
the large velocity gradient between the slipstream and its outside
becomes stronger as the advance ratio decreases.


(a)


(b)
Fig. 9. Contours of the pressure coefficient in the longitudinal plane at θ = 0:
(a) J=0.2, (b) J=0.7.
Figure 9 shows the iso-surface of λ2 and the contours of the
pressure coefficient in the x-y plane at θ = 0 for J=0.2 and J=0.7.
As already shown in Fig. 8, the heavy load condition of J=0.2
sustains the vortical structures originating from each tip further
downstream than the light load condition of J=0.7, which can be
clarified by comparing Fig. 9(a) and Fig. 9(b) for J=0.2 and J=0.7,
respectively.
At the heavy load condition of J=0.2, the strong shear layer,
which derives from the big difference in velocity magnitude
between the inside and the outside of the slipstream tube as
observed earlier in Figs. 6(a) and 6(b), contributes to the addition
of rotational motion to the tip vortices and, eventually, to the long
survival of the propeller tip vortices farther downstream. Thus, as
the advance ratio decreases, a lower pressure forms at the centre of
each tip vortex, which can be clarified by comparing Fig. 9(a) and
Fig. 9(b) for J=0.2 and J=0.7, respectively. Additionally, in the
slipstream region, the pressure becomes much lower as the advance
ratio decreases.
The traces of the tip vortices shed from each propeller tip
are plotted in Fig. 10, where the location of each tip vortex is
identified by its centre (maximum vorticity, the place of lowest
pressure), as observed in Fig. 8. As the advance ratio decreases,
the contraction of the trace is considerable. In particular, the
slope of the contraction is steep in the near-wake region owing
to the stronger interaction between the tip vortices with higher
rotational energy, regardless of the advance ratio. The trace then
becomes saturated earlier as the advance ratio increases.


Fig. 10. Location of tip vortices in the X-Y plane and contraction rates (right)
for different advance ratios, J=0.2~0.8.

IV. CONCLUSIONS
This study numerically carried out the propeller open water
test (POW) by solving the Navier-Stokes equations governing the
three-dimensional unsteady incompressible viscous flow with the
k-ω SST turbulence closure model. Numerical simulations were
performed over a wide range of advance ratios. A great difference
in velocity magnitude between the inner and outer regions of the
slipstream tube forms a thick region of large velocity gradient
which originates from the propeller tip and develops downstream.
Eventually, a strong shear layer appears and plays the role of the
slipstream boundary. As the advance ratio increases, the vortical
structures originating from the propeller tips decay quickly. The
contraction of the vortex trace becomes considerable as the advance
ratio decreases.
ACKNOWLEDGMENT
This work was supported by a National Research Foundation
of Korea (NRF) grant funded by the Korea government (MSIP)
through GCRC-SOP (No. 2011-0030013).
This work was also supported by the Technology Innovation
Program (10033689, Technology development of propeller and
rudder for a ship with low vibration and high efficiency) funded
by the Ministry of Knowledge Economy (MKE, Korea).
REFERENCES
[1] Jeong, J. & Hussain, F. (1995). On the identification of a vortex. Journal of Fluid Mechanics, vol. 285, pp. 69-94.
[2] Abdel-Maksoud, M., Menter, F., and Wuttke, H. (1998). Viscous flow simulations for conventional and high-skew marine propellers. Ship Technology Research, 45:64-71.
[3] Stella, A., Guj, G., Di Felice, F., Elefante, M. (1998). Propeller wake evolution analysis by LDV. Proc. of 22nd Symposium on Naval Hydrodynamics, Washington D.C., pp. 171-188.
[4] Chesnakas, C., Jessup, S. (1998). Experimental characterisation of propeller tip flow. Proc. of 22nd Symposium on Naval Hydrodynamics, Washington D.C., pp. 156-169.
[5] Chen, B. and Stern, F. (1999). Computational fluid dynamics of four-quadrant marine-propulsor flow. Journal of Ship Research, 43(4):218-228.
[6] Cotroni, A., Di Felice, F., Romano, G.P., Elefante, M. (2000). Investigation of the near wake of a propeller using particle image velocimetry. Exp. Fluids 29:S227-S236.
[7] Calcagno, G., Di Felice, F., Felli, M., Pereira, F. (2002). Propeller wake analysis behind a ship by stereo PIV. Proc. of 24th Symposium on Naval Hydrodynamics, Fukuoka, 3:112-127.
[8] Watanabe, T., Kawamura, T., Takekoshi, Y., Maeda, M., and Rhee, S. H. (2003). Simulation of steady and unsteady cavitation on a marine propeller using a RANS CFD code. In Fifth International Symposium on Cavitation, CAV2003, Osaka, Japan.
[9] Sang Joon Lee, Bu Geun Paik, Jong Hwan Yoon, Choung Mook Lee (2004). Three-component velocity field measurements of propeller wake using a stereoscopic PIV technique. Experiments in Fluids 36, 575-585.
[10] Rhee, S. H. and Joshi, S. (2005). Computational validation for flow around a marine propeller using an unstructured mesh based Navier-Stokes solver. JSME International Journal, Series B, 48(3):562-570.
[11] Kawamura, T., Takekoshi, Y., Yamaguchi, H., Minowa, T., Maeda, M., Fujii, A., Kimura, K., and Taketani, T. (2006). Simulation of unsteady cavitating flow around a marine propeller using a RANS CFD code. In Sixth International Symposium on Cavitation, CAV2006, Wageningen, The Netherlands.
[12] Bu-Geun Paik, Jin Kim, Young-Ha Park, Ki-Sup Kim, Kwon-Kyu Yu (2007). Analysis of wake behind a rotating propeller using PIV technique in a cavitation tunnel. Ocean Engineering (34).
[13] Mitja Morgut, Enrico Nobile (2009). Comparison of hexa-structured and hybrid-unstructured meshing approaches for numerical prediction of the flow around marine propellers. First International Symposium on Marine Propulsors smp'09, Trondheim, Norway.
Economic and emission dispatch problems
using a new hybrid algorithm

MIMOUN YOUNES
Faculty of Technology
Djillali Liabes University
Sidi Bel Abbes, 22000, Algeria
younesmi@yahoo.fr

Fouad KHODJA / Riad Lakhdar KHERFENE
Faculty of Technology
Djillali Liabes University
Sidi Bel Abbes, 22000, Algeria
khodjafouad@gmail.com / rilakh@yahoo.fr
Abstract: Environmental legislation, with its increasing pressure on the
energy sector to control greenhouse gases, is a driving force to reduce CO2
emissions; it has forced power system operators to consider the emission
problem as a consequential matter beside the economic problems, so the
economic power dispatch problem has become a multi-objective optimization
problem. This paper sets up a new hybrid algorithm combining two algorithms,
the harmony search algorithm and ant colony optimization (HSA-ACO), to solve
the combined economic emission dispatch optimization. The problem has been
formulated as a multi-objective problem by considering both economy and
emission simultaneously. The feasibility of the proposed approach was tested
on 3-unit and 6-unit systems. The simulation results show that the proposed
algorithm gives comparatively better operational fuel cost and emission in
less computational time than other optimization techniques.
Keywords: Economic Power Dispatch (EPD); Harmony Search Algorithm (HSA); Ant Colony Optimization (ACO).
I. INTRODUCTION
The success of any stochastic search method heavily depends on
striking an optimal balance between exploration and exploitation.
These two issues are conflicting but very crucial for all metaheuristic
algorithms. Exploitation is to effectively use the good solutions found
in the past search, whereas exploration is expanding the search to the
unexplored areas of the search space for promising solutions. The
reinforcement of the pheromone trail by the artificial ants exploits
the good solutions found in the past. However, excessive reinforcement
may lead to premature convergence.
Many metaheuristic or optimization algorithms need some parameters
to be set in order to obtain good solutions. Usually, those values are
determined in an empirical (or heuristic) way, but this method is time
consuming and does not always land on good parameter values.
Our work contributes to this problem by applying another metaheuristic
method, the Harmony Search Algorithm (HSA), to find suitable values of
the parameters. To validate our work, we use the resulting hybrid to solve
a multi-objective optimization problem: combining the economic operation
of the power system and the gas emissions associated with the production
of electrical energy. This problem has received much attention; it is of
current interest to many utilities and has been marked as one of the most
important operational needs. In traditional economic dispatch, the operating
cost is reduced by the suitable attribution of the quantity of power to be
produced by the different generating units. However, the production cost
that is optimal in economic terms may not be the best in terms of the
environmental criteria. Recently, many countries throughout the world have
concentrated on reducing the quantity of pollutants emitted by fossil-fuelled
units in the production of electrical energy. The gaseous pollutants emitted
by power stations, such as sulphur dioxide (SO2), nitrogen oxides (NOx) and
carbon dioxide (CO2), cause harmful effects to human beings and the
environment. Thus, the optimization of the production cost should not be the
only objective; the reduction of emission must also be taken into account,
considering the difference in homogeneity of the two equations: the fuel cost
equation given in $/hr and the gas emission equation given in kg/hr.
This method was tested on 3-unit and 6-unit systems. The algorithm was
developed in the MATLAB programming environment.
The results of the proposed approach have been compared to those recently
reported in the literature. The results are promising and show the
effectiveness and robustness of the proposed approach.
II. ECONOMIC POWER DISPATCH FORMULATION
A. Problem formulation
1) Minimization of fuel cost
The goal of conventional EPD problem is to solve an
optimal allocation of generating powers in a power system [1].
The power balance constraint and the generating power
constraints for all units should be satisfied. In other words [2],
the EPD problem is to find the optimal combination of power
generations which minimize the total fuel cost while satisfying
the power balance equality constraint and several inequality
constraints on the system [3].
The total fuel cost function is formulated as follows [4]:
f(P_G) = \sum_{i=1}^{NG} f_i(P_{Gi})   (1)

f_i(P_{Gi}) = a_i P_{Gi}^2 + b_i P_{Gi} + c_i   (2)

where f(P_G) is the total production cost ($/h); f_i(P_{Gi}) is the fuel cost
function of unit i in $/h; P_{Gi} is the real power output of unit i in MW;
and a_i, b_i, c_i are the cost coefficients of the i-th generator.
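As a minimal sketch, equations (1)-(2) can be evaluated directly with the cost coefficients of the 3-unit system listed later in Table I; the sample dispatch below is chosen close to the values reported in Table IV and is only an illustration of the cost function.

import numpy as np

# Fuel-cost coefficients a_i, b_i, c_i of the 3-unit system (Table I)
a = np.array([0.001562, 0.00194, 0.00482])
b = np.array([7.92, 7.85, 7.97])
c = np.array([561.0, 310.0, 78.0])

def total_fuel_cost(PG):
    """Total production cost f(P_G) = sum_i (a_i*PG_i^2 + b_i*PG_i + c_i) in $/h."""
    PG = np.asarray(PG, dtype=float)
    return float(np.sum(a * PG**2 + b * PG + c))

# Example dispatch close to the one reported for the 850 MW demand
print(total_fuel_cost([435.2, 300.0, 130.7]))   # roughly 8345 $/h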
B. Minimization of pollutants emission
The most important emissions considered in the power generation industry,
due to their effects on the environment, are sulfur dioxide (SO2) and
nitrogen oxides (NOx) [5]. These emissions can be modeled through functions
that associate emissions with the power production of each unit. One approach
to represent SO2 and NOx emissions is to use a combination of polynomial and
exponential terms [6]:

EC(P_{gi}) = \alpha_i + \beta_i P_{gi} + \gamma_i P_{gi}^2 + \zeta_i \exp(\lambda_i P_{gi})   (3)

where \alpha_i, \beta_i, \gamma_i, \zeta_i and \lambda_i are the coefficients of the i-th
generator emission characteristics.
The bi-objective combined economic emission dispatch problem is converted
into a single optimization problem by introducing a price penalty factor h
as follows:

Minimise F = FC + h \cdot EC

subject to the power flow constraints of the equations above. The price penalty
factor h blends the emission with the fuel cost, and F is the total operating
cost in $/h. The price penalty factor h_i is the ratio between the maximum fuel
cost and the maximum emission of the corresponding generator [7]:

h_i = \frac{FC(P_{gi}^{max})}{EC(P_{gi}^{max})}
The following steps are used to find the price penalty factor for a
particular load demand:
1. Find the ratio between the maximum fuel cost and the maximum emission of each generator.
2. Arrange the values of the price penalty factor in ascending order.
3. Add the maximum capacity of each unit, P_{gi}^{max}, one at a time, starting from the unit with the smallest h_i, until \sum P_{gi}^{max} \geq P_d.
4. At this stage, the h_i associated with the last unit in the process is the price penalty factor h for the given load.
The above procedure gives an approximate value of the price penalty factor
for the corresponding load demand. Hence, a modified price penalty factor
(h_m) is introduced in this work to give the exact value for the particular
load demand. The first two steps of the h computation remain the same for the
calculation of the modified price penalty factor; it is then calculated by
interpolating the values of h_i corresponding to their load demand values.
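The four-step procedure above maps directly onto a short routine, sketched below; the per-unit maximum cost, maximum emission and capacity values in the example are hypothetical and only demonstrate the selection logic.

import numpy as np

def price_penalty_factor(fc_max, ec_max, pg_max, demand):
    """Approximate price penalty factor h (steps 1-4 of the text).
    fc_max, ec_max : fuel cost and emission of each unit at maximum output
    pg_max         : maximum capacity of each unit (MW)
    demand         : load demand P_d (MW)"""
    h = np.asarray(fc_max, float) / np.asarray(ec_max, float)  # step 1: ratio per unit
    order = np.argsort(h)                                      # step 2: ascending h_i
    capacity = 0.0
    for idx in order:                                          # step 3: add capacities
        capacity += pg_max[idx]
        if capacity >= demand:                                 # step 4: last unit added
            return float(h[idx])
    return float(h[order[-1]])                                 # demand exceeds total capacity

# Hypothetical unit data, for illustration only
h = price_penalty_factor(fc_max=[6000.0, 4200.0, 2100.0],
                         ec_max=[12.0, 9.5, 3.1],
                         pg_max=[600.0, 400.0, 200.0],
                         demand=850.0)
print(h)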
C. Problem constraints
1) Active power balance equation
For power balance, an equality constraint should be satisfied: the generated
power should equal the total load demand plus the total line losses. It is
represented as follows:

\sum_{i=1}^{NG} P_{Gi} = \sum_{j=1}^{ND} P_{Dj} + P_L   (4)

where \sum_{j=1}^{ND} P_{Dj} is the total system demand; \sum_{i=1}^{NG} P_{Gi} is the
total system production; P_L is the total transmission loss of the system in MW;
NG is the number of generator units in the system; and ND is the number of loads.
2) Active power generation limits
The generation power of each generator should lie between its maximum and
minimum limits, giving the following inequality constraints for each generator:

P_{Gi}^{min} \leq P_{Gi} \leq P_{Gi}^{max}   (5)

where P_{Gi}^{min} and P_{Gi}^{max} are the minimum and maximum generation limits
of the real power of unit i.
III. HARMONY SEARCH ALGORITHM (HSA)
Harmony search algorithm is a novel meta-heuristic
algorithm, which has been conceptualized using the musical
process of searching for a perfect state of harmony. This meta-
heuristic is based on the analogy with music improvisation
process where music players improvise the pitches of their
instruments to obtain a better harmony. In the optimization
context, each musician is replaced with a decision variable, and
the possible notes in the musical instruments correspond to the
possible values for the decision variables.
The harmony in music is analogous to the optimization solution vector, and
the musicians' improvisations are analogous to the local and global search
schemes in optimization techniques.
Musical performances seek to find pleasing harmony (a
perfect state) as determined by an aesthetic standard, just as the
optimization process seeks to find a global solution (a perfect
state) as determined by an objective function [8].
The parameters of HS method are: the harmony memory
size (HMS), the harmony memory considering rate (HMCR),
the pitch adjusting rate (PAR), and the number of
improvisations (NI). The harmony memory is a memory
location where a set of solution vectors for decision variables is
stored. The parameters HMCR and PAR are used to improve
the solution vector and to increase the diversity of the search
process. In HS, a new harmony (i.e., a new solution vector) is
generated using three rules: 1) memory consideration, 2) pitch
adjustment, and 3) random selection. It is convenient to note
that the creation of a new harmony is called improvisation. If
the new solution vector (i.e., new harmony) is better than the
worst one stored in HM, this new solution updates the HM.
This iterative process is repeated until the given termination
criterion is satisfied. Usually, the iterative steps are performed
until satisfying the following criterions: either the maximum
number of successive improvisations without improvement in
the best function value, or until the maximum number of
improvisations is satisfied [9].
A. Initialize the problem and algorithm parameters
The optimization problem is defined as follows: minimize f(x) subject to
x_i \in X_i, i = 1, \ldots, N, where f(x) is the objective function, x is the set
of decision variables (x_i), and X_i is the set of possible values for each
decision variable, that is, X_{iL} \leq x_i \leq X_{iU}, where X_{iL} and X_{iU} are
the lower and upper bounds of each decision variable.
The HSA parameters are also specified in this step. They are the harmony
memory size (HMS) [10], i.e. the number of solution vectors in the harmony
memory; the harmony memory considering rate (HMCR); the bandwidth (BW); the
pitch adjusting rate (PAR); the number of improvisations (NI), i.e. the
stopping criterion; and the number of decision variables (N).
B. Initialize the harmony memory (HM)
The harmony memory is a memory location where all the solution vectors
(sets of decision variables) are stored. The HM matrix is filled with as many
randomly generated solution vectors as the HMS:

HM = \begin{bmatrix} x_1^1 & x_2^1 & \cdots & x_N^1 & f(x^1) \\ x_1^2 & x_2^2 & \cdots & x_N^2 & f(x^2) \\ \vdots & \vdots & & \vdots & \vdots \\ x_1^{HMS} & x_2^{HMS} & \cdots & x_N^{HMS} & f(x^{HMS}) \end{bmatrix}   (6)
C. Improvise a new harmony
A new harmony vector, x' = (x'_1, x'_2, \ldots, x'_N), is generated based on
three rules: (1) memory consideration, (2) pitch adjustment and (3) random
selection. Generating a new harmony is called improvisation.
The value of the first decision variable x'_1 for the new vector can be chosen
from any of the values stored in the HM range (x_1^1, \ldots, x_1^{HMS}). Values of
the other decision variables (x'_2, \ldots, x'_N) are chosen in the same manner.
HMCR, which varies between 0 and 1, is the rate of choosing one value from the
historical values stored in the HM, while (1 - HMCR) is the rate of randomly
selecting one value from the possible range of values:

x'_i = \begin{cases} x'_i \in \{x_i^1, x_i^2, \ldots, x_i^{HMS}\} & \text{with probability } HMCR \\ x'_i \in X_i & \text{with probability } (1 - HMCR) \end{cases}   (7)
For instance, an HMCR of 0.95 indicates that the HSA will choose the decision
variable value from the historically stored values in the HM [16] with a
probability of 95%, or from the entire possible range with a probability of
(100 - 95)% = 5%. Every component of the new harmony vector,
x' = (x'_1, x'_2, \ldots, x'_N), is then examined to determine whether it should be
pitch-adjusted. This operation uses the PAR parameter, which is the rate of
pitch adjustment, as follows:

\text{Pitch adjusting decision for } x'_i = \begin{cases} \text{Yes} & \text{with probability } PAR \\ \text{No} & \text{with probability } (1 - PAR) \end{cases}   (8)

The value (1 - PAR) sets the rate of doing nothing. If the pitch adjustment
decision for x'_i is Yes, x'_i is replaced as follows:

x'_i \leftarrow x'_i \pm rand \cdot BW   (9)

where BW is an arbitrary distance bandwidth for the continuous design variable
and rand is a random number between 0 and 1. In this step, HM consideration,
pitch adjustment or random selection is applied to each variable of the new
harmony vector in turn.
D. Update harmony memory
If the new harmony vector x' = (x'_1, x'_2, \ldots, x'_N) is better than the worst
harmony in the HM from the point of view of the objective function value, the
new harmony [11] is included in the HM and the existing worst harmony is
excluded from it.
E. Check the stopping criterion
If the stopping criterion (i.e., the maximum number of improvisations) is
satisfied, the computation is terminated. Otherwise, Steps 3 and 4 are repeated.
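The steps A-E above can be condensed into a short continuous harmony search loop, sketched below under the assumption of a real-valued minimization problem; the objective function, bounds and parameter values in the example are placeholders and not those used in the paper.

import numpy as np

def harmony_search(f, lower, upper, HMS=20, HMCR=0.95, PAR=0.3, BW=0.01, NI=5000, seed=0):
    """Minimal continuous Harmony Search sketch: memory consideration,
    pitch adjustment, random selection, worst-member replacement."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    n = lower.size
    HM = rng.uniform(lower, upper, size=(HMS, n))          # initial harmony memory
    fit = np.array([f(x) for x in HM])
    for _ in range(NI):
        new = np.empty(n)
        for i in range(n):
            if rng.random() < HMCR:                        # memory consideration
                new[i] = HM[rng.integers(HMS), i]
                if rng.random() < PAR:                     # pitch adjustment
                    new[i] += (2 * rng.random() - 1) * BW * (upper[i] - lower[i])
            else:                                          # random selection
                new[i] = rng.uniform(lower[i], upper[i])
        new = np.clip(new, lower, upper)
        f_new = f(new)
        worst = np.argmax(fit)
        if f_new < fit[worst]:                             # update harmony memory
            HM[worst], fit[worst] = new, f_new
    best = np.argmin(fit)
    return HM[best], fit[best]

# Toy example: minimize a quadratic over two variables bounded in [1, 8]
x, fx = harmony_search(lambda p: (p[0] - 2) ** 2 + (p[1] - 5) ** 2,
                       lower=[1.0, 1.0], upper=[8.0, 8.0])
print(x, fx)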
IV. ANT COLONY OPTIMIZATION
Ant Colony Optimization (ACO) is another powerful technique to solve hard
combinatorial optimization problems. In ACO algorithms, a finite number of
artificial ants work together to search for the best solutions to the
optimization problem under consideration. Each ant builds a solution and
exchanges its information with other ants indirectly [12]. Although each ant
can build a solution, high quality solutions are only found through this
cooperation and information exchange [13].
In ACO algorithms, a structural neighbourhood is defined for the given
problem. Each ant builds a solution by moving in a sequence through the
neighbourhood architecture. While building a solution, each ant uses two
different information sources. The first source is private information,
namely the local memory of the ant; the second source is the publicly
available pheromone trail together with problem-specific heuristic
information [14].
To build a feasible solution, ants keep a tabu list of the previously visited
nodes. The publicly available pheromone trail provides knowledge about the
decisions of the ants from the beginning of the search process [15]. An
ant-decision table, defined as a functional combination of this pheromone
trail and the problem-specific heuristic values, is used to direct the search.
Pheromone evaporation strategies are used to avoid stagnation due to large
accumulations. Different ACO approaches such as Ant System, Ant Colony System
and Max-Min Ant System are available in the literature [16]. The general
structure of the ACO algorithms is as follows:
1. Initialize:
   Set t = 0 and NC = 0.
   For every edge (i, j) set an initial value \tau_{ij}(t) = c for the trail intensity and \Delta\tau_{ij} = 0.
   Place the m ants on the n nodes.
2. Set s = 1.
   For k = 1 to m: place the starting town of the k-th ant in tabu_k(s).
3. Repeat until the tabu list is full:
   Set s = s + 1.
   For k = 1 to m: choose the town j to move to with probability p_{ij}^k(t) given by equation (11); move the k-th ant to town j; insert town j in tabu_k(s).
4. For k = 1 to m: move the k-th ant from tabu_k(n) to tabu_k(1); compute the length L_k of the tour described by the k-th ant; update the shortest tour found.
   For every edge (i, j) and for k = 1 to m:
   \Delta\tau_{ij}^k = Q / L_k if (i, j) belongs to the tour described by tabu_k, and 0 otherwise;
   \Delta\tau_{ij} = \Delta\tau_{ij} + \Delta\tau_{ij}^k.
5. For every edge (i, j) compute \tau_{ij}(t+n) = \rho\,\tau_{ij}(t) + \Delta\tau_{ij}.
   Set t = t + n and NC = NC + 1.
   For every edge (i, j) set \Delta\tau_{ij}^k = 0.
6. If (NC < NC_{MAX}) and there is no stagnation behavior, then empty all tabu lists and go to step 2; else print the shortest tour and stop.
where:
t is the time counter;
NC is the cycle counter;
s is the tabu list index;

m = \sum_{i=1}^{n} b_i(t)   (10)

m is the total number of ants and b_i(t) the number of ants in town i at time t;

p_{ij}^k(t) = \begin{cases} \dfrac{[\tau_{ij}(t)]^{\alpha} [\eta_{ij}]^{\beta}}{\sum_{l \in allowed_k} [\tau_{il}(t)]^{\alpha} [\eta_{il}]^{\beta}} & \text{if } j \in allowed_k \\ 0 & \text{otherwise} \end{cases}   (11)

p_{ij}^k(t) is the transition probability from town i to town j for the k-th ant,
N is the set of towns, and allowed_k = \{N - tabu_k\};

\eta_{ij} = \frac{1}{d_{ij}}   (12)

\tau_{ij}(t+n) = \rho\,\tau_{ij}(t) + \Delta\tau_{ij}   (13)

\rho is a coefficient such that (1 - \rho) represents the evaporation of trail
between time t and t+n, \eta_{ij} is the visibility, and \alpha and \beta are parameters
that control the relative importance of trail versus visibility;

\Delta\tau_{ij} = \sum_{k=1}^{m} \Delta\tau_{ij}^k   (14)

\Delta\tau_{ij}^k is the quantity per unit of length of trail substance (pheromone in
real ants) laid on edge (i, j) by the k-th ant between time t and t+n; it is
given by \Delta\tau_{ij}^k = Q / L_k on the visited edges, where Q is a constant and
L_k is the tour length of the k-th ant.
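A compact sketch of the tour construction rule (11)-(12) and of the pheromone update (13)-(14) is given below; the distance matrix, trail initialisation and parameter values are placeholders chosen only so the snippet runs, not data from the dispatch problem.

import numpy as np

def ant_tour(dist, tau, alpha, beta, rng):
    """Build one tour with the transition rule p_ij ~ tau_ij^alpha * eta_ij^beta
    (eq. (11)) restricted to the towns not yet in the tabu list."""
    n = dist.shape[0]
    eta = 1.0 / (dist + np.eye(n))          # visibility eta_ij = 1/d_ij (eq. (12))
    tour = [int(rng.integers(n))]
    while len(tour) < n:
        i = tour[-1]
        allowed = [j for j in range(n) if j not in tour]
        w = np.array([tau[i, j] ** alpha * eta[i, j] ** beta for j in allowed])
        tour.append(allowed[rng.choice(len(allowed), p=w / w.sum())])
    return tour

def update_pheromone(tau, tours, lengths, rho=0.5, Q=1.0):
    """Evaporation and deposit (eqs. (13)-(14)): tau <- rho*tau + sum_k Q/L_k."""
    dtau = np.zeros_like(tau)
    for tour, L in zip(tours, lengths):
        for i, j in zip(tour, tour[1:] + tour[:1]):
            dtau[i, j] += Q / L
    return rho * tau + dtau

# Toy symmetric distance matrix (hypothetical) and a uniform initial trail
rng = np.random.default_rng(1)
dist = rng.uniform(1.0, 10.0, size=(6, 6))
dist = (dist + dist.T) / 2.0
np.fill_diagonal(dist, 0.0)
tau = np.full((6, 6), 0.1)
tour = ant_tour(dist, tau, alpha=1.0, beta=2.0, rng=rng)
L = sum(dist[i, j] for i, j in zip(tour, tour[1:] + tour[:1]))
tau = update_pheromone(tau, [tour], [L])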
V. APPROACH HSA-ACO
The reactive framework proposed in this paper focuses on α and β, which have
a great influence on the solution process.
The weight of the pheromone factor, α, is a key parameter for balancing
intensification and diversification. Indeed, the greater α is, the more
strongly the search is intensified around solutions containing components
with high pheromone trails, i.e., components that have previously been used
to build good solutions.
The weight of the heuristic factor, β, determines the greediness of the
search, and its best setting also depends on the instance to be solved.
Indeed, the relevance of the heuristic factor usually varies from one instance
to another; moreover, for a given instance, the relevance of the heuristic
factor may vary during the solution construction process.
The adaptation of the parameters α and β is performed by the HSA algorithm.
The proposed procedure steps are shown in Fig. 1.
The ACO parameters are q0 = 0.8, 1 ≤ α ≤ 8, 1 ≤ β ≤ 8, ρ = 0.5, and the
number of ants m = 57.

Fig. 1. Flow chart for EPD using HSA-ACO.
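A minimal sketch of the adaptation idea described in this section is given below: an outer HSA-style loop proposes (α, β) pairs inside their bounds, and each pair is scored by a full ACO run on the dispatch problem. The aco_run function here is an explicitly hypothetical stand-in (a smooth toy landscape) so the loop can execute; in the actual approach it would return the best total cost F = FC + h·EC found by the ACO with those parameter values.

import numpy as np

def aco_run(alpha, beta, rng):
    """Hypothetical stand-in for one full ACO solution of the combined
    dispatch problem with the given pheromone weight alpha and heuristic
    weight beta (here a toy quadratic landscape)."""
    return (alpha - 3.0) ** 2 + (beta - 4.0) ** 2 + 0.05 * rng.random()

rng = np.random.default_rng(2)
HMS, HMCR, PAR, BW, NI = 10, 0.9, 0.3, 0.25, 200
HM = rng.uniform(1.0, 8.0, size=(HMS, 2))                 # harmony memory of (alpha, beta)
fit = np.array([aco_run(a, b, rng) for a, b in HM])
for _ in range(NI):
    new = np.empty(2)
    for i in range(2):
        if rng.random() < HMCR:                           # memory consideration
            new[i] = HM[rng.integers(HMS), i]
            if rng.random() < PAR:                        # pitch adjustment
                new[i] += (2 * rng.random() - 1) * BW
        else:                                             # random selection in [1, 8]
            new[i] = rng.uniform(1.0, 8.0)
    new = np.clip(new, 1.0, 8.0)
    f_new = aco_run(new[0], new[1], rng)
    worst = np.argmax(fit)
    if f_new < fit[worst]:                                # replace worst harmony
        HM[worst], fit[worst] = new, f_new
best = np.argmin(fit)
print("adapted (alpha, beta):", HM[best])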
VI. SIMULATION RESULTS
To assess the efficiency of the HSA-ACO approach, the following two case
studies were carried out. The program was developed using MATLAB and run on a
3.0 GHz Pentium-IV machine with 256 MB of RAM.
A. Test system 1
The 3-generator test system [17], whose data are given below, with a system
demand of 850 MW, is considered as test system 1. The fuel and emission
coefficients, including the generation limits of the generators, are presented
in Tables I, II and III.
The results obtained by the proposed algorithm are compared to those reported
in the literature, namely Tabu search (Roa-Sepulveda et al., 1996) [18],
NSGA-II (AhKing & Rughooputh, 2003) [19] and DE/BBO [20]. From the comparison,
it is noticed that the proposed approach (HSA-ACO) gives a reduction in fuel
cost as well as in SO2 and NOx emissions (Table IV). The convergence profiles
of the best solution for the fuel cost and for the SO2 and NOx emissions are
shown in Figs. 2, 3 and 4, respectively. It is also noticed from these figures
that the convergence of the proposed approach (HSA-ACO) is promising; the
results were obtained after only 30 iterations.
TABLE I. FUEL COST COEFFICIENTS (3-UNIT SYSTEM)
Bus No | P_Gi^min (MW) | P_Gi^max (MW) | a_i      | b_i  | c_i
1      | 150.0         | 600           | 0.001562 | 7.92 | 561.0
2      | 100           | 400           | 0.00194  | 7.85 | 310.0
3      | 50            | 200           | 0.00482  | 7.97 | 78.0
TABLE II. SO2 EMISSION COEFFICIENTS (3-UNIT SYSTEM)
Unit i | quadratic term | linear term | constant term
1      | 1.6103e-6      | 0.00816466  | 0.5783298
2      | 2.1999e-6      | 0.00891174  | 0.3515338
3      | 5.4658e-6      | 0.00903782  | 0.0884504
TABLE III. NOX EMISSION COEFFICIENTS (3-UNIT SYSTEM)
Unit i | quadratic term | linear term   | constant term
1      | 1.4721848e-7   | -9.4868099e-5 | 0.04373254
2      | 3.0207577e-7   | -9.7252878e-5 | 0.055821713
3      | 1.9338531e-6   | -3.5373734e-4 | 0.027731524
TABLE IV. COMPARISON OF TEST RESULTS OF 3-UNIT SYSTEM USING DIFFERENT METHODS FOR BI-OBJECTIVE.
Variable               | DE/BBO [20] | NSGA-II [19] | Tabu [18] | Emission minimum HSA-ACO
PG1 (MW)               | 435.1978    | 436.366      | 435.69    | 411.951833
PG2 (MW)               | 299.9696    | 298.187      | 298.828   | 298.595129
PG5 (MW)               | 130.6604    | 131.228      | 131.28    | 153.490052
Cost ($/hr)            | 8344.58319  | 8344.651     | 8344.598  | 8342.952303
Emission SO2 (ton/h)   | 9.02194     | 9.02541      | 9.02146   | 8.983050
Emission NOx (ton/h)   | 0.098686    | 0.098922     | 0.09863   | 0.088011
PL (MW)                | 15.8289     | 15.781       | 15.798    | 14.0370
T (s)                  | /           | /            | /         | 0.62500
Fig. 2. Convergence characteristic for fuel cost minimization (3-unit system, demand 850 MW).
Fig. 3. Convergence characteristic for SO2 emission minimization (3-unit system, demand 850 MW).
Fig. 4. Convergence characteristic for NOx emission minimization (3-unit system, demand 850 MW).
B. Test system 2
A 6-generator test system [21], whose data are given below, with a system
demand of 1200 MW, is considered as test system 2. The fuel and emission
coefficients, including the generation limits of the generators, are presented
in Tables V and VI [21]. The results obtained by the HSA-ACO approach are
compared to those reported in the literature, namely QOTLBO, TLBO and DE [22].
From the comparison, it is noticed that the proposed approach (HSA-ACO) gives
a reduction in both fuel cost and emission (Table VII).
The convergence profiles of the best solution for the fuel cost and the
pollutant emission are shown in Figs. 5 and 6, respectively. It is also
noticed from these figures that the convergence of the proposed approach
(HSA-ACO) is better; the results were obtained after only 30 iterations.
These results clearly show the effectiveness and performance of the HSA-ACO
over other methods.
TABLE V. POWER GENERATION LIMITS AND COST COEFFICIENT DATA OF GENERATING UNITS OF 6-UNIT SYSTEM (f_i(P_{Gi}) = a_i P_{Gi}^2 + b_i P_{Gi} + c_i).
Bus No | P_Gi^min (MW) | P_Gi^max (MW) | a_i     | b_i     | c_i
1      | 10            | 125           | 0.15247 | 38.5390 | 756.7988
2      | 10            | 150           | 0.10587 | 46.1591 | 451.3251
3      | 35            | 210           | 0.03546 | 38.3055 | 1243.5311
4      | 35            | 225           | 0.02803 | 40.3965 | 1049.9977
5      | 125           | 325           | 0.01799 | 38.2704 | 1356.6592
6      | 130           | 325           | 0.02111 | 36.3278 | 1658.5696
TABLE VI. EMISSION COEFFICIENT DATA OF GENERATING UNITS OF 6-UNIT SYSTEM.
Bus No | quadratic term | linear term | constant term
1      | 0.00419        | 0.32767     | 13.8593
2      | 0.00419        | 0.32767     | 13.8593
3      | 0.00683        | -0.54551    | 40.2669
4      | 0.00683        | -0.54551    | 40.2669
5      | 0.00461        | -0.51116    | 42.8955
6      | 0.00461        | -0.51116    | 42.8955
TABLE VII. COMPARISON OF TEST RESULTS OF 6-UNIT SYSTEM USING DIFFERENT METHODS FOR BI-OBJECTIVE.
Variable          | QOTLBO [22] | TLBO [22] | DE [22]  | Emission minimum HSA-ACO
PG1 (MW)          | 107.3101    | 107.8651  | 108.6284 | 123.151188
PG2 (MW)          | 121.4970    | 121.5676  | 115.9456 | 142.179593
PG5 (MW)          | 206.5010    | 206.1771  | 206.7969 | 187.070320
PG8 (MW)          | 206.5826    | 205.1879  | 210.0000 | 168.631233
PG11 (MW)         | 304.9838    | 306.5555  | 301.8884 | 319.977087
PG13 (MW)         | 304.6036    | 304.1423  | 308.4127 | 297.412602
Cost ($/hr)       | 64912       | 64922     | 64843    | 64136.543135
Emission (lb/h)   | 1281        | 1281      | 1286     | 1280.107267
PL (MW)           | 51.4781     | 51.4955   | 51.700   | 38.4000
T (s)             | 1.91        | 2.18      | 3.09     | 0.20313

Fig. 5. Convergence of cost obtained for 6-unit test system.

Fig. 6. Convergence of emission for 6-unit test system.
VII. CONCLUSION
In this article we have applied a new approach based on the combination of
two metaheuristic methods, the Harmony Search Algorithm (HSA) and the Ant
Colony Optimization (ACO) algorithm. The proposed approach was tested on
3-unit and 6-unit systems.
The obtained results were compared to those of other researchers. The results
clearly show the robustness and efficiency of the proposed approach in terms
of precision and convergence time.
REFERENCES
[1] R. D. Zimmerman, C. E. Murillo-Sanchez, and R. J. Thomas, "Matpower's extensible optimal power flow architecture," Power and Energy Society General Meeting, 2009 IEEE, July 26-30 2009, pp. 1-7.
[2] H. W. Dommel, "Optimal power dispatch," IEEE Transactions on Power Apparatus and Systems, Vol. PAS-93, 1974, pp. 820-830.
[3] O. Alsac, J. Bright, M. Prais, and B. Stott, "Further developments in LP-based optimal power flow," IEEE Transactions on Power Systems, Vol. 5, 1990, pp. 697-711.
[4] J. Nanda, D. P. Kothari, and S. C. Srivatava, "New optimal power-dispatch algorithm using Fletcher's quadratic programming method," in Proceedings of the IEE, Vol. 136, 1989, pp. 153-161.
[5] M. Basu, "Dynamic economic emission dispatch using nondominated sorting genetic algorithm II," Electr. Power Energy Syst., 2008, 30(2):140-149.
[6] X. Jiang, J. Zhou, H. Wang, Y. Zhang, "Dynamic environmental economic dispatch using multi-objective differential evolution algorithm with expanded double selection and adaptive random restart," Int. J. Electr. Power Energy Syst., 2013, 49:399-407.
[7] R. Zhang, J. Zhou, L. Mo, S. Ouyang, X. Liao, "Economic environmental dispatch using an enhanced multi-objective cultural algorithm," Electr. Power Syst. Res., 2013, 99:18-29.
[8] X. S. Yang, "Harmony Search as a Metaheuristic Algorithm," in Music-Inspired Harmony Search Algorithm: Theory and Applications, SCI 191, Zong Woo Geem (Ed.), pp. 1-14, Springer-Verlag, Berlin, Germany, 2009, ISBN 978-3-642-00184-0.
[9] M. Fesanghary, M. Mahdavi, M. Minary and Y. Alizadeh, "Hybridizing harmony search algorithm with sequential quadratic programming for engineering optimization problems," Comput. Methods Appl. Mech. Eng., 2008, 197(33-40), pp. 3080-3091.
[10] X.-S. Yang, "Harmony search as a metaheuristic algorithm," Music-inspired harmony search algorithm, 2009, pp. 1-14.
[11] Z. W. Geem, J. H. Kim and G. V. Loganathan, "A new heuristic optimization algorithm: harmony search," Simulation, 2001, 76, pp. 60-68.
[12] M. Dorigo, G. Di Caro, "The ant colony optimization metaheuristic," in D. Corne, M. Dorigo, F. Glover (Eds.), New Ideas in Optimization, McGraw-Hill, 1997, pp. 11-32.
[13] M. Den Besten, T. Stützle, M. Dorigo, "Ant colony optimization for the total weighted tardiness problem," Proc. 6th Int. Conf. Parallel Problem Solving from Nature, Berlin, 2000, pp. 611-620.
[14] D. Merkle, M. Middendorf, "An ant algorithm with a new pheromone evaluation rule for total tardiness problems," Proceedings of the EvoWorkshops 2000, Berlin, Germany: Springer-Verlag, Lecture Notes in Computer Science, Vol. 1803, 2000, pp. 287-296.
[15] A. Colorni, M. Dorigo, V. Maniezzo, M. Trubian, "Ant system for job-shop scheduling," Belgian J. Oper. Res., Statist. Comp. Sci. (JORBEL), Vol. 34, No. 1, 1994, pp. 39-53.
[16] B. Allaoua, A. Laoufi, "Collective Intelligence for Optimal Power Flow Solution Using Ant Colony Optimization," Leonardo Electronic Journal of Practices and Technologies, ISSN 1583-1078, Issue 13, 2008, pp. 88-105.
[17] A. J. Wood and B. F. Wollenberg, Power Generation, Operation, and Control, John Wiley & Sons, Inc., 1984.
[18] C. A. Roa-Sepulveda, E. R. Salazar-Nova, E. Gracia-Caroca, U. G. Knight, and A. Coonick (1996). "Environmental economic dispatch via Hopfield neural network and tabu search," in UPEC'96, pp. 1001-1004.
[19] R. T. F. AhKing and H. C. S. Rughooputh (2003). "Elitist multi-objective evolutionary algorithm for environmental/economic dispatch," Congress on Evolutionary Computing, 2, 1108-1114.
[20] Aniruddha Bhattacharya, P. K. Chattopadhyay, "Hybrid differential evolution with biogeography-based optimization algorithm for solution of economic emission load dispatch problems," Expert Systems with Applications 38 (2011) 14001-14010.
[21] M. Basu, "Economic environmental dispatch using multi-objective differential evolution," Appl. Soft Comput. 2011;11(2):2845-2853.
[22] Provas Kumar Roy, Sudipta Bhui, "Multi-objective quasi-oppositional teaching learning based optimization for economic emission load dispatch problem," Electrical Power and Energy Systems 53 (2013) 937-948.
Modeling of Thermophilic Anaerobic Digestion of
Municipal Sludge Waste using Anaerobic Digestion
Model No. 1 (ADM1)

Taekjun Lee and Young Haeng Lee
Center for Water Resource Cycle
Korea Institute of Science and Technology
Seoul, South Korea
younglee@kist.re.kr


Abstract: The anaerobic digestion model no. 1 (ADM1) of the International
Water Association was applied to a lab-scale thermophilic anaerobic digestion
process for the treatment of activated sludge wastes originating from a
municipal wastewater treatment plant. The aim of the present study is to
compare the results obtained from the simulation with the experimental values.
The simulated results showed a good fit for the cumulative produced methane
gas volume and the concentration profile of total volatile fatty acids (VFAs).
Keywords: anaerobic digestion; ADM1; thermophilic; methane gas; municipal sludge wastes
I. INTRODUCTION
Anaerobic digestion has been used worldwide for the treatment of numerous
types of organic wastes [1]. Anaerobic digestion of municipal sludge wastes
under mesophilic or thermophilic conditions can contribute efficiently to
organic waste reduction and biogas production [2]. Because of the importance
of anaerobic digestion as an organic waste treatment process, the anaerobic
digestion model no. 1 (ADM1) was developed by an International Water
Association (IWA) specialist group [3, 4, 5, 6, 7]. Its main feature is the
consideration of the main steps of the anaerobic digestion process, namely
disintegration (a non-biological step), hydrolysis, acidogenesis, acetogenesis
and methanogenesis, with seven different microbial groups [6, 8].
In the present study, the ADM1 was applied for the
simulation of a dynamic behavior of a lab-scale thermophilic
anaerobic digestion process which treated municipal sludge
wastes.
II. METHODS
A. Anaerobic digestion model no.1 (ADM1)
The ADM1 model, developed by the IWA group with the objective of building a
full mathematical model based intimately on the phenomenological model, was
used in order to simulate the thermophilic anaerobic digestion process.
The ADM1 model includes, as a first step, the
disintegration of organic solid complexes (non-biological step)
into carbohydrates, lipids, proteins and inert materials (soluble
and particulate inert). The second step is the hydrolysis process
of the disintegration products under an enzymatic action to
produce sugars, amino acids and long chain fatty acids
(LCFA), successively. Then, amino acids and sugars are
fermented to produce VFAs, hydrogen and carbon dioxide gas
(acidogenesis). Then LCFA, propionic acid, butyric acid and
valeric acid are anaerobically biotransformed into acetic acid,
carbon dioxide and hydrogen gas (acetogenesis). Finally,
methane gas can be produced through two paths: the first one is
based on acetate whereas the second one is through the
reduction of carbon dioxide by molecular hydrogen.
B. Reactor and experimental monitoring
As mentioned above, experimental data were obtained from the monitoring of
the anaerobic digestion of municipal sludge wastes carried out in a lab-scale
batch digester (total liquid volume of 10 L, total headspace volume of 2.3 L)
which was operated under thermophilic conditions (54±1 °C) at an organic
loading of 10.7 gCOD/L of reactor.
During the operation period, the digestion process was monitored by analyzing
pH, total solids (TS) and volatile solids (VS), chemical oxygen demand (COD),
volatile fatty acids (VFAs), biogas volume produced, and biogas composition
(CH4, CO2 and H2) [9].
III. RESULTS AND DISCUSSION
A. Characterization of municipal sludge waste
The lab-scale batch thermophilic anaerobic digestion
reactor was monitored for around 2 months. Typical
characteristics of municipal sludge waste were 37 gTS/L,
VS/TS ratio of 74%, 49 gTCOD/L, and VFAs of 1.6 gCOD/L.
The substrate was characterized according to the ADM1.
Therefore, the model input data were calculated on the basis of
characteristics of municipal sludge waste.
B. Estimation of kinetic parameters
Kinetic parameters concerning the disintegration and hydrolysis reactions
were first estimated in an experimental batch study. The observed kinetic rate
constants were 0.8 d^-1 and 1.6 d^-1 for the disintegration and hydrolysis
reactions, respectively (see Table I). All other values of the kinetic and
stoichiometric constants were maintained as in the ADM1 model.
C. Modeling of reactor performance
After the estimation of the kinetic rate constants of the disintegration and
hydrolysis reactions, the obtained experimental data of acetic acid
concentration, total VFA concentration, and cumulative produced methane volume
were used for the estimation of the remaining kinetic parameters (see Fig. 1)
using the Simplex minimization algorithm [10]. The values of the kinetic
parameters involved in the uptake rates of the VFAs, estimated after the model
fitting, are listed in Table I. They are also compared with the ADM1 parameter
values suggested in the scientific and technical report of ADM1, which applied
the model to the case of anaerobic digestion of sludge waste [4].
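To illustrate the type of fitting procedure described here, the sketch below couples a strongly reduced surrogate of the ADM1 reaction chain (first-order disintegration and hydrolysis followed by a single Monod uptake step) with a Nelder-Mead (Simplex) minimization of the error against a measured methane curve. This is not the full ADM1; the surrogate model, the synthetic "measured" series and the initial guesses are assumptions made only so the example runs, while the actual study fitted the complete ADM1 to the monitored reactor data.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def simplified_digestion(t, y, kdis, khyd, km_ac, Ks_ac, X_ac):
    """Reduced surrogate of the ADM1 chain: composites -> particulate substrate
    -> acetate (as total VFA) -> methane, with first-order disintegration and
    hydrolysis and a Monod uptake of acetate."""
    Xc, Xp, S_ac, CH4 = y
    r_dis = kdis * Xc
    r_hyd = khyd * Xp
    r_ac = km_ac * S_ac / (Ks_ac + S_ac) * X_ac
    return [-r_dis, r_dis - r_hyd, r_hyd - r_ac, r_ac]

def residual(params, t_obs, ch4_obs):
    kdis, khyd, km_ac, Ks_ac = params
    sol = solve_ivp(simplified_digestion, (0.0, t_obs[-1]), [10.7, 0.0, 1.6, 0.0],
                    args=(kdis, khyd, km_ac, Ks_ac, 1.0), t_eval=t_obs)
    return np.sum((sol.y[3] - ch4_obs) ** 2)

# Placeholder "measured" cumulative methane series (the real study used the
# monitored batch reactor data instead)
t_obs = np.linspace(0.0, 50.0, 11)
ch4_obs = 12.3 * (1.0 - np.exp(-0.12 * t_obs))

fit = minimize(residual, x0=[0.8, 1.6, 4.7, 0.25], args=(t_obs, ch4_obs),
               method="Nelder-Mead")
print(fit.x)   # fitted kdis, khyd, km_ac, Ks_ac of the surrogate model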
TABLE I. ESTIMATED VALUES OF KINETIC PARAMETERS
Kinetic parameter | Description (unit)                                              | Default value in ADM1 | Estimated value (this study)
Kdis              | Composites disintegration rate (d^-1)                           | 0.5                   | 0.8
Khyd              | Hydrolysis rate (d^-1)                                          | 10                    | 1.6
km_pro            | Propionate uptake rate (d^-1)                                   | 13                    | 17
Ks_pro            | Half saturation coefficient for propionate uptake (kgCOD/m^3)   | 100                   | 92
km_ac             | Acetate uptake rate (d^-1)                                      | 8                     | 4.7
Ks_ac             | Half saturation coefficient for acetate uptake (kgCOD/m^3)      | 150                   | 251
Ypro              | Propionate degraders yield on substrate                         | 0.04                  | 0.05
Yac               | Acetate degraders yield on substrate                            | 0.05                  | 0.04

Fig. 1. Comparison of model simulated data with the experimental data of cumulative produced methane gas volume (left) and concentration profile of total VFAs (right).

The obtained results concerning the cumulative produced methane gas volume
and the concentration profile of total VFAs are presented in Fig. 1. Total
VFAs quickly disappeared within 6.5 days, while methane gas was produced up
until 55 days. As shown in Fig. 1, the simulation results with the optimized
parameters were in good agreement with the experimental data over the whole
operation period. However, there was a slight difference for the cumulative
produced methane gas volumes. Similar to a previous report [11], it is
possible that the gas-liquid transfer and separation of methane gas took some
time due to the high viscosity of the sludge wastes, which would explain the
time gap between methane gas production and VFA utilization.
IV. CONCLUSION
The present study focused on anaerobic fermentative methane gas production
from municipal sludge waste under thermophilic conditions. The application of
the ADM1 model to the experimental data obtained from the lab-scale batch
methanogenic reactor was successful over the whole operation period.
Therefore, the ADM1 can be a valuable tool for managing and designing
anaerobic digestion processes, even under thermophilic conditions.
ACKNOWLEDGMENT
This work was supported by the Green City Technology
Flagship Program funded by the Korea Institute of Science and
Technology (KIST-2013-2E23992).
REFERENCES
[1] J. Mata-Alvarez, S. Macé, and P. Llabrés, "Anaerobic digestion of organic solid wastes. An overview of research achievements and perspectives," Bioresour. Technol., vol. 74, pp. 3-16, 2000.
[2] D. Bolzonella, L. Innocenti, P. Pavan, P. Traverso, and F. Cecchi, "Semi-dry thermophilic digestion of the organic fraction of municipal solid wastes: focusing on the start-up phase," Bioresour. Technol., vol. 86, pp. 123-129, 2003.
[3] D.J. Batstone and J. Keller, "Industrial applications of the IWA anaerobic digestion model No. 1 (ADM1)," Water Sci. Technol., vol. 47, pp. 199-206, 2003.
[4] D.J. Batstone, J. Keller, I. Angelidaki, S.V. Kalyuzhnyi, S.G. Pavlostathis, A. Rozzi, W.T.M. Sanders, H. Siegrist, and V.A. Vavilin, Anaerobic Digestion Model No. 1, International Water Association (IWA) Publishing, London, UK, 2002.
[5] F. Blumensaat and J. Keller, "Modelling of two-stage anaerobic digestion using the IWA anaerobic digestion model no. 1 (ADM1)," Water Res., vol. 39, pp. 171-183, 2005.
[6] J. Lauwers, L. Appels, I.P. Thompson, J. Degreve, J.F. Van Impe, and R. Dewil, "Mathematical modelling of anaerobic digestion of biomass and waste: Power and limitations," Prog. Energy Combust., vol. 39, pp. 383-402, 2013.
[7] T.S.O. Souza, A. Carvajal, A. Donoso-Bravo, M. Pena, and F. Fdz-Polanco, "ADM1 calibration using BMP tests for modeling the effect of autohydrolysis pretreatment on the performance of continuous sludge digesters," Water Res., vol. 47, pp. 3244-3254, 2013.
[8] K. Derbal, M. Bencheikh-lehocine, F. Cecchi, A.-H. Meniai, and P. Pavan, "Application of the IWA ADM1 model to simulate anaerobic co-digestion of organic waste with waste activated sludge in mesophilic condition," Bioresour. Technol., vol. 100, pp. 1539-1543, 2009.
[9] Y.H. Lee, Y.-C. Chung, and J.-Y. Jung, "Effects of chemical and enzymatic treatments on the hydrolysis of swine wastewater," Water Sci. Technol., vol. 58, pp. 1529-1534, 2008.
[10] J.A. Nelder and R. Mead, "A simplex method for function minimization," Comput. J., vol. 7, pp. 308-313, 1965.
[11] H.-S. Jeong, C.-W. Suh, J.-L. Lim, S.-H. Lee, and H.-S. Shin, "Analysis and application of ADM1 for anaerobic methane production," Bioprocess Biosyst. Eng., vol. 27, pp. 81-89, 2005.

Sustainable pneumatic transport systems of cereals

Mariana Panaitescu
1,a
, Gabriela Simona Dumitrescu
1,b
, Andrei Alexandru Scupi
1,c

1
Department of Engineering Sciences in Mechanical and Environmental Field
Constanta Maritime University
Constanta, Romania
a
marianapan@yahoo.com
b
mona22februarie@yahoo.com
c
andrei.scupi@gmail.com

Abstract: Technological pneumatic transport installations are designed to
move materials from one place to another in various phases of the production
process, for example: loading and unloading materials (cereals) in rail and
marine transport, air tunnel container transport, or supplying combustion
installations with coal dust for burning. The main parameter in pneumatic
transport installations is the air velocity. For the regime of motion with
material particles in suspension, for a given material flow, the higher the
velocity the greater the pressure loss, and thus the energy consumption for
transportation increases. In horizontal pipes, at the beginning of motion the
flow is in a compact regime; then, as the air velocity decreases, a continuous
layer regime forms. This is the apparent motion in which the pressure losses
increase as the velocity decreases. By reducing the air velocity, the
thickness of the deposited material increases, the real air passage section
decreases and therefore the real air velocity increases, which explains the
increase in pressure loss. In vertical pipes, if the air velocity decreases
below the lower limit of volant transport, after a critical area of
instability a fluidized-bed transport regime is established, the pressure
losses being much larger than in the particles-in-suspension mode. If the
velocity decreases further, the particles can no longer be entrained by the air.
Keywords: transport, pneumatic, sustainable, installations, cereal, aspiration, silo, cyclone, pipeline.
I. INTRODUCTION
In pneumatic transport installations [1] the air circulation is done in order
to transport solid materials under the dynamic pressure effect of the air flow
in pipes. Transportation of materials can be done mechanically (conveyors,
buckets, etc.), pneumatically through pipelines using air as the carrier, or
by combined mechanical and pneumatic means. The units consist of two parts: a
device for picking up the material (along with the air) into a transportation
network, and a retaining (separation) device for the transported material.
The material must fulfil a number of requirements to be transported in good
condition: to present a size composition and a density for which transport and
separation are economical, not to adhere to the surface of the pipes, not to
degrade by crushing during transport, the temperature required for transport
should not affect the resistance of the pipes and of the equipment used, not
to emit explosive or corrosive vapours, and not to change its chemical
properties during transport.
Pneumatic transport is based on the principle of entraining solid material
particles by a current of air or other gas moving with a certain velocity
through a pipe. With this type of installation we can transport small solid
elements: wheat, corn, oats, barley, ash, clay, cement, wood chips, sawdust,
cellulose. The material can be moved horizontally over a distance of 350-400 m
or vertically over a distance of 45 m.

II. PNEUMATIC TRANSPORT SYSTEMS
The construction of a pneumatic transport system and its required equipment,
along with its economic indicators, vary from one unit to another depending on
the system pressure.
From the pressure point of view we can distinguish:
- discharge systems with low, medium or high pressure;
- aspiration (suction) systems (Fig. 1);
- combined (mixed) systems, open or closed.
In pneumatic transport systems by aspiration (Fig. 1), the material is
transported with the help of an exhauster mounted at the end of the pneumatic
unit, so that the whole unit operates under air depression. The exhauster
produces a depression of 0.5-0.6 bar, necessary for the transport of the
material. The granular material is sucked in along with the air through the
suction head and transported through the discharge pipe to the silo.
Separation of the grains from the entrained air is done in a cyclone. The
depression is adjusted according to the nature and grain size of the material
and the friction losses that occur along the entire length of the
installation. Pneumatic conveying by aspiration is effective for unloading
materials from cars, platforms, trailers etc. at distances up to 120 m.
Figure 1. Pneumatic transport system with suction [1]: 1 - suction head,
2 - transport pipeline, 3 - silo, 4 - cyclone, 5 - exhauster.
III. MATHEMATICAL MODELING OF THE
PROCESS
A. The speed of material in air transport pipes
In the pipeline, the transported material behaves differently from the air,
especially due to its higher mass. The forces exerted on a particle (e.g.
friction with the pipe walls, the impact of particles with one another, its
spin, weight, drag, etc.) produce acceleration or slowing down of the
material, so that maintaining the required transport speed requires an
additional energy consumption.
a) operating speed of the material - is the speed limit that
the particles in dynamic equilibrium have. The relationships
are calculated for a rarefied flow (volant transport).
For relative speeds: v
r
= v
a
v
m
and a Re>210
5
, the
relation is :
2
1
] ) 1 (
2
[ 1
* *
5 , 0
* * * *
Fr
Fr
Fr
Fr
Fr Fr
v
v
a
m

+
= (1)
For high speed transport, when β = 0 is considered, the
relation becomes:

v_m / v_a = 1 / [ 1 + (λ*·Fr*/2)^0.5 ]      (2)
in which:
v_a, v_m - velocity of the air and of the material in the pipe [m/s];
v_p - floating velocity of the material [m/s];
λ* - initial friction coefficient, depending on the material transported;
β - coefficient of friction between the moving material particles and the pipeline; β = 1 for vertical pipes, β = v_p/v_a for horizontal pipes;
Fr, Fr* - Froude criteria, Fr = v_a²/(gD) and Fr* = v_p²/(gD);
D - diameter of the pipe [m].
For all types of particles, including dust, the following
relation is recommended:

(ψ'/ψ)·(v_r/v_p)² - (λ*/2)·v_m²/(gD) - 1 = 0      (3)

where ψ' and ψ are the aerodynamic drag coefficients.
b) velocity of the material during acceleration - between the
moment the material is placed in the pneumatic pipeline and the
moment it reaches its operating velocity, there is a period of
acceleration of the material, to which corresponds an acceleration
length. On the particle act the ascension (drag) force F_A (Fig. 2),
the gravity force G and the retention force F_i due to collisions
between particles. All these forces create a resultant which
represents the Newtonian acceleration force F_acc.
F_A = c·S·ρ_a·v_r²/2 ;      (4)

G = m·g ;      (5)

F_i = i·m·v_m²/2 ;   F_acc = m·dv_m/dt      (6)

in which:
c - drag coefficient;
S - cross-section of the particle [m²];


Fig. 2. The action of the forces of acceleration on the particle[1]

v_r = v_a - v_m - relative speed [m/s];
ρ_a - air density [kg/m³];
m - mass of the transported material [kg];
i - impact coefficient, depending on the size and nature of the particle.
By projecting the forces on the vertical, the equilibrium equation is obtained:

F_acc - F_A + G + F_i = 0      (7)

m·dv_m/dτ - c·S·ρ_a·v_r²/2 + m·g + i·m·v_m²/2 = 0      (8)

m·dv_m/dτ = c·S·ρ_a·(v_a - v_m)²/2 - m·g - i·m·v_m²/2      (9)
If we denote:

A = (c·S·ρ_a - i·m)/2 ;   B = -c·S·ρ_a·v_a ;   C = c·S·ρ_a·v_a²/2 - m·g

the differential equation can be written as:

m·dv_m/dτ = A·v_m² + B·v_m + C      (10)
Admitting that v_a is constant, we can separate the variables
and then integrate:

∫ (from 0 to v_m) m·dv_m / (A·v_m² + B·v_m + C) = ∫ (from 0 to t) dt      (11)

After integration, the following is obtained:
t = [ m / √(B² - 4AC) ] · ln[ (2A·v_m + B - √(B² - 4AC)) / (2A·v_m + B + √(B² - 4AC)) ] + K_0      (12)
The value of the constant K_0 is obtained by introducing the
initial values t = 0, v_m = 0:

K_0 = - [ m / √(B² - 4AC) ] · ln[ (B - √(B² - 4AC)) / (B + √(B² - 4AC)) ]      (13)
Replacing K_0 in equation (12) with its expression, we obtain:

t = [ m / √(B² - 4AC) ] · ln[ ( (B + √(B² - 4AC))·v_m + 2C ) / ( (B - √(B² - 4AC))·v_m + 2C ) ]      (14)
The equation is solved using the notations:

σ = √(B² - 4AC) / m ;   φ = (B - mσ) / (B + mσ)      (15)
The material velocity during the acceleration period results as:

v_m = v_a · (1 - e^(-σt)) / (1 - φ·e^(-σt))      (16)
The acceleration time is determined from this relationship,
generally as an inverse function of the velocity:

τ_acc = (1/σ) · ln[ (1 + v_m/v_a) / (1 - v_m/v_a) ]      (17)
The acceleration length results from the integration of the equation:

l_acc = ∫ (from 0 to τ_acc) v_m dτ      (18)
Length of straight pipelines for acceleration is determined
by the time required for the velocity of material to reach 95%
of the operating velocity. Pressure difference required to
accelerate the material is determined by the relationship:

Δp_m = G_m · (v_m2 - v_m1) / (0.785 · D²)      (19)
in which:
G_m - mass flow of the material [kg/s];
v_m1, v_m2 - initial velocity and final velocity (after acceleration) of the material [m/s];
D - diameter of the transport pipe [m].

c) velocity of the material in curved pipelines
In curved pipelines, the centrifugal force causes separation between
the air and the material: the particles form a layer lining the outer
wall of the pipeline and, due to friction with the wall, this layer of
material is slowed down drastically. The velocity decreases, while the
pressure changes very little, the loss being only a small percentage
due to the passage of clean air. After the curved pipeline the material
has to be accelerated again on the straight sections of the pipeline.
The forces acting on the particle (Fig. 3) are: the normal force
at the pipe wall (N), the friction force (F_f) and the inertial force
(F_i) (mass x acceleration). Projecting the forces, we obtain:

F_i = m·dv/dτ ;   F_f = μ·N = μ·m·v²/R      (20)

For an element of length ds = v·dτ = R·dφ, so that dv/dτ = (v/R)·dv/dφ.
Substituting into the equation, simplifying and imposing the boundary
conditions, we obtain:

∫ (from v1 to v2) dv/v = -μ · ∫ (from 0 to φ) dφ ;   ln(v2/v1) = -μ·φ ;   v2/v1 = e^(-μφ)      (21)

in which:
φ - curve angle [radians];
μ - coefficient of friction between the material and the pipe wall, experimentally determined;
R - radius of the curve [m].

v2 = v1 · e^(-μφ)      (22)

Note that the velocity at the exit of the curve does not
depend on the radius of curvature and is lower the higher the
friction coefficient is; this is why, in these installations, the
curved segments are enamelled or otherwise treated to reduce the
coefficient of friction. The velocity v2 is also smaller the larger
the opening angle of the curve is, which leads to the recommendation
of using a set of two 45° curves with a straight section inserted
between them, which allows the material to recover its operating
speed, instead of a single 90° curve.
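A small worked example of relation (22) illustrates this recommendation. It compares a single 90° bend with two 45° bends separated by a straight section on which the material is assumed to re-accelerate back to v1; the friction coefficient μ = 0.4 and v1 = 15 m/s are illustrative assumptions, not values from the paper.

```python
# Worked example of v2 = v1*exp(-mu*phi) for one 90 deg bend vs. two 45 deg bends
# with full re-acceleration to v1 between them (illustrative values).
import math

mu, v1 = 0.4, 15.0
v2_single_90 = v1 * math.exp(-mu * math.pi / 2)   # one 90 deg bend
v2_two_45 = v1 * math.exp(-mu * math.pi / 4)      # exit of the second 45 deg bend,
                                                  # assuming re-acceleration to v1 between the bends
print(f"single 90 deg bend: v2 = {v2_single_90:.2f} m/s")
print(f"two 45 deg bends:   v2 = {v2_two_45:.2f} m/s")
```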


Fig. 3. Particle motion in a horizontal elbow [1]

IV. NUMERICAL SIMULATION OF THE PROCESS
USING FINITE VOLUME METHOD
For the numerical simulation we have used the Ansys-Fluent v.13.0
software, which is based on the finite volume method.
The geometry created is similar to Fig. 3 and has the following
dimensions (Fig. 4):
- the length of each pipeline arm = 3 m;
- the pipeline diameter = 0.6 m.


Fig. 4 Geometry representation

After the creation of the geometry we have discretised the
body into 4159 cells with 8836 nodes (Fig. 5). Special
attention was given to the zone where the pipe bends, which
received a larger number of cells, because here the air and
the wheat change direction and form eddies.


Fig. 5 Body discretisation

On one side of the pipe, air and wheat enter with a volume
fraction of 50% each, at a speed of 15 m/s. The two phases mix
and reach the other side at the atmospheric pressure of
101,325 Pa. The density of the air was 1.225 kg/m³ and the
wheat density was 800 kg/m³. The pipeline lies in a horizontal
plane, so gravity is not used.
To calculate the two-phase transportation (air and wheat)
we have used the volume fraction method. The volume
fraction method relies on the fact that two or more fluids are
not interpenetrating. For each additional phase that is added to
the model, a variable is introduced: the volume fraction of the
phase in the computational cell. In each control volume, the
volume fraction of all phases sum to unity. Thus, the variables
and properties in any given cell are either purely
representative of one of the phases, or representative of a
mixture of the phases, depending upon the volume fraction
values [5].
(1/ρ_q)·[ ∂(α_q·ρ_q)/∂t + ∇·(α_q·ρ_q·v_q) ] = S_αq + Σ (p=1..n) ( ṁ_pq - ṁ_qp )

where ṁ_qp is the mass transfer from phase q to phase p and ṁ_pq is
the mass transfer from phase p to phase q; α_q is the volume fraction
of phase q and S_αq is a source term (a specified constant).
For the k-epsilon turbulence model we used the equation for the
turbulent kinetic energy k (23), the equation for the dissipation
rate epsilon (24) and the energy equation (25) [5]:

∂(ρ_m·k)/∂t + ∇·(ρ_m·v_m·k) = ∇·[ (μ_t,m/σ_k)·∇k ] + G_k,m - ρ_m·ε      (23)

∂(ρ_m·ε)/∂t + ∇·(ρ_m·v_m·ε) = ∇·[ (μ_t,m/σ_ε)·∇ε ] + (ε/k)·(C_1ε·G_k,m - C_2ε·ρ_m·ε)      (24)

∂(ρE)/∂t + ∇·[ v·(ρE + p) ] = ∇·[ k_eff·∇T - Σ_j h_j·J_j + (τ_eff·v) ] + S_h      (25)

where:

ρ_m = Σ (i=1..N) α_i·ρ_i      (26)

v_m = Σ (i=1..N) α_i·ρ_i·v_i / Σ (i=1..N) α_i·ρ_i      (27)

μ_t,m = ρ_m·C_μ·k²/ε      (28)

G_k,m = μ_t,m·( ∇v_m + (∇v_m)^T ) : ∇v_m      (29)

k_eff is the effective conductivity; J_j is the diffusion flux of
species j; S_h is the heat due to chemical reactions. In equation (25) we have:

E = h - p/ρ + v²/2      (30)

h - enthalpy; for ideal fluids (31) and for real fluids (32):

h = Σ_j Y_j·h_j      (31)

h = Σ_j Y_j·h_j + p/ρ      (32)

h_j = ∫ (from T_ref to T) c_p,j·dT      (33)
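As a small illustration of the mixture relations (26)-(27), the sketch below (not part of the original simulation) evaluates the mixture density and the mass-averaged mixture velocity from the phase volume fractions; the 50/50 air/wheat split and the velocities mirror the inlet conditions stated above.

```python
# Sketch: mixture density (26) and mass-averaged mixture velocity (27)
# for the 50/50 air/wheat inlet used in the simulation.
import numpy as np

alpha = np.array([0.5, 0.5])          # volume fractions: air, wheat
rho   = np.array([1.225, 800.0])      # phase densities [kg/m^3]
v     = np.array([15.0, 15.0])        # phase velocities [m/s]

rho_m = np.sum(alpha * rho)                              # eq. (26)
v_m   = np.sum(alpha * rho * v) / np.sum(alpha * rho)    # eq. (27)

print(f"mixture density rho_m = {rho_m:.3f} kg/m^3")
print(f"mass-averaged mixture velocity v_m = {v_m:.2f} m/s")
```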
The velocity variation shows an increase of the mixture velocity
from 15 m/s to 17 m/s where the mixture changes direction.

Fig. 6 Velocity variation

The density representation actually shows the volume fraction
distribution of the phases (Fig. 7). The blue color is associated
with the air density, while the red color is associated with the
wheat density. The green color shows that in that specific region
the solver averaged the density values depending on the volume
fractions of the phases.

Fig. 7 Density variation
V. CALCULATION FOR PNEUMATIC GRAIN
TRANSPORT SYSTEM

A) INITIAL DATA
v_a = 14 - air velocity [m/s];
d = 2·10^-3 ... 6·10^-3 - diameter of the particle [m];
Q = 250 ... 350 [t/h] - material flow of the system.

B) RESULTS
1. Necessary diameter of the pipe

d_i = 0.6 · √( Q / (μ·ρ_a·v_a) )      (34)

Q = 250 ... 350 [t/h] - material flow of the system;
ρ_a = 1.2 [kg/m³] - air density;
μ = 15 - concentration of the mixture, for systems with grain suction;
Table 1. Calculation of the necessary diameter to transport the
grains for a certain air velocity

Nr. Crt.   Q [t/h]   μ    v_a [m/s]   ρ_a [kg/m³]   d_i [m]
1          250       15   14          1.2           0.597614
2          260       15   14          1.2           0.609449
3          270       15   14          1.2           0.621059
4          280       15   14          1.2           0.632456
5          290       15   14          1.2           0.64365
6          300       15   14          1.2           0.654654
7          310       15   14          1.2           0.665475
8          320       15   14          1.2           0.676123
9          330       15   14          1.2           0.686607
10         340       15   14          1.2           0.696932
11         350       15   14          1.2           0.707107
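The d_i column of Table 1 can be reproduced directly from relation (34) with the input data above; the short sketch below is only a numerical check, not part of the original calculation.

```python
# Check of Table 1: d_i = 0.6*sqrt(Q/(mu*rho_a*v_a)), with Q in t/h as in the paper.
import math

mu, v_a, rho_a = 15, 14.0, 1.2
for Q in range(250, 351, 10):
    d_i = 0.6 * math.sqrt(Q / (mu * rho_a * v_a))
    print(f"Q = {Q:3d} t/h  ->  d_i = {d_i:.6f} m")
```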
2. Transport velocity

v_p = k · √( 28.4 · d · γ_m / ρ_a )      (35)

where: d = 4·10^-3 [m] - diameter of the particle;
k = 0.57 - shape coefficient;
ρ_a = 1.2 [kg/m³] - air density;
γ_m = 7000 ... 8500 [N/m³] - specific weight of the grain.

Table 2. Calculation of transport velocities for different
specific weights

Nr.   γ_m [N/m³]   d [m]   ρ_a [kg/m³]   v_p [m/s]
1     7000         0.004   1.2           14.67312
2     7100         0.004   1.2           14.77755
3     7200         0.004   1.2           14.88126
4     7300         0.004   1.2           14.98424
5     7400         0.004   1.2           15.08653
6     7500         0.004   1.2           15.18812
7     7600         0.004   1.2           15.28904
8     7700         0.004   1.2           15.3893
9     7800         0.004   1.2           15.4889
10    7900         0.004   1.2           15.58788
11    8000         0.004   1.2           15.68622
12    8100         0.004   1.2           15.78396
13    8200         0.004   1.2           15.88109
14    8300         0.004   1.2           15.97763
15    8400         0.004   1.2           16.0736
16    8500         0.004   1.2           16.16899
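Similarly, the v_p column of Table 2 follows from relation (35) in the reconstructed form given above; the sketch below is only a numerical check under that assumption.

```python
# Check of Table 2: v_p = k*sqrt(28.4*d*gamma_m/rho_a) (form assumed from the data).
import math

k, d, rho_a = 0.57, 0.004, 1.2
for gamma_m in range(7000, 8501, 100):
    v_p = k * math.sqrt(28.4 * d * gamma_m / rho_a)
    print(f"gamma_m = {gamma_m} N/m^3  ->  v_p = {v_p:.5f} m/s")
```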
3. Necessary air flow to transport the grains - Q_a

Q_a = Q / (4.3·μ)      (36)

Q = 250 ... 350 [t/h] - material flow of the system;
μ = 15 - concentration of the mixture, for systems with grain
suction;

Table 3. Calculation of the necessary air flow to transport the
grains for different material flows of the system

Nr.   Q [t/h]   μ    Q_a [t/h]
1     250       15   3.875969
2     260       15   4.031008
3     270       15   4.186047
4     280       15   4.341085
5     290       15   4.496124
6     300       15   4.651163
7     310       15   4.806202
8     320       15   4.96124
9     330       15   5.116279
10    340       15   5.271318
11    350       15   5.426357

4. Necessary power of the pneumatic transport installation

P = k·L / (102·η)  [kW]      (37)

L = 10000 · Q_a · ln( 10000 / (10000 - h_tot) )  [daN·m/s]      (38)

k = 1.1 - coefficient that takes into account the leakage losses;
h_tot - total pressure drop;
Q_a - air mass flow;
η - efficiency of the pump.
We have calculated the necessary power of the pneumatic
transport for all 11 cases of the necessary air flow, thus
resulting in 11 tables. That is why we present here only the
first two tables and the last one.

Table 4. Calculation of the necessary power for Q_a = 3.875969 [t/h]

Nr.   Q_a [t/h]   L [N·m/s]   η     k     P [kW]
1     3.875969    514468.5    0.8   1.1   7.073942
2     3.875969    514496.3    0.8   1.1   7.074324
3     3.875969    514524      0.8   1.1   7.074705
4     3.875969    514551.8    0.8   1.1   7.075087
5     3.875969    514579.6    0.8   1.1   7.075469
6     3.875969    514607.3    0.8   1.1   7.075851
Table 5. Calculation of the necessary power for Q_a = 4.031008 [t/h]

Nr.   Q_a [t/h]   L [N·m/s]   η     k     P [kW]
1     4.031008    535047.3    0.8   1.1   7.3569
2     4.031008    535076.1    0.8   1.1   7.357297
3     4.031008    535105      0.8   1.1   7.357694
4     4.031008    535133.9    0.8   1.1   7.358091
5     4.031008    535162.8    0.8   1.1   7.358488
6     4.031008    535191.7    0.8   1.1   7.358885

...
Table 6. Calculation of the necessary power for Q_a = 5.426357 [t/h]

Nr.   Q_a [t/h]   L [N·m/s]   η     k     P [kW]
1     5.426357    699677.2    0.8   1.1   9.90352
2     5.426357    699714.9    0.8   1.1   9.904054
3     5.426357    699752.7    0.8   1.1   9.904588
4     5.426357    699790.4    0.8   1.1   9.905123
5     5.426357    699828.2    0.8   1.1   9.905657
6     5.426357    699866      0.8   1.1   9.906192
VI. CONCLUSIONS

We can observe that an increase of the grain dimensions leads to
an increase of the transport velocity, for a given specific weight.
We can also notice that the necessary power of the system increases
with increasing air flow and, implicitly, with the velocity of the
transported material. This shows that a compromise has to be made
between the transport velocity that should be chosen (the higher,
the better) and the power or energy available. By varying these
parameters the transportation system can be optimized.

References

[1] O. Bancea, Industrial ventilation, Politehnica Publishing, Timisoara, 2009.
[2] O. Bancea, Ventilation and air conditioning, lecture notes, Politehnica Publishing, Timisoara, 1996.
[3] W. Barth, Absetzung, Transport und Wiederaufwirbelung von staubförmigem Gut im Luftstrom, Chemie Ing. Technik, no. 3/1963.
[4] E. Carafoli, T. Oroveanu, Fluids mechanics, Romanian Academy Publishing, Bucharest, vol. I, pp. 295-387, 1952, vol. II, pp. 489-552, 1955.
[5] Ansys-Fluent v.12 Theory guide, 2007.
A new consideration about floating storage and
regasification unit for liquid natural gas


Mihai Sagau, Mariana Panaitescu, Fanel-Viorel Panaitescu, Scupi Alexandru-Andrei
Department of Engineering Sciences in Mechanical and Environmental Field
Constanta Maritime University
Constanta, Romania
marianapan@yahoo.com


Abstract - In this paper we present the details of a new
project for a floating liquid natural gas (LNG) regasification
terminal based on the conversion of an existing LNG carrier. LNG is
sent from the tanks to the regasification skid located forward. The
regasification skid essentially comprises booster pumps and
vaporizers. This project can boost both the transport and the economy
sector of Central European countries by introducing a less
expensive fuel, more environmentally friendly and with a good
perspective for the future. The project consists in building an LNG
import terminal in Constanta, a harbor from where the
merchandise (LNG in this case) can easily be delivered on the
Danube's basin and reach Central European countries.
Keywords - floating storage; regasification; liquid natural
gas; process; diagram; modelling flow
I. INTRODUCTION
Up to now there has been little or no import and use of LNG
in countries like Germany, Austria, Slovakia, Hungary, Serbia and
Bulgaria, unlike other European countries that have over 20
years of experience in using LNG. The reason these countries did not
profit from the benefits of LNG usage is that there was little access to
imported LNG and, where there was, it was very expensive. In order to
deliver LNG to Austria, railway or truck transport in special
containers would have had to be organized from an LNG terminal
near the sea to Austria [1].
Through the terminal built in Constanta, specialized vessels
could transport LNG at lower prices to all countries upstream on the
Danube. The project is in accordance with the EUROPEAN
TRANSPORT STRATEGY TEN-T INTEGRATION (AXIS
NO. 18) and is built in the context of similar functional LNG
terminals in the EU: SPAIN (6), FRANCE (3), UNITED
KINGDOM (3), ITALY (2), PORTUGAL (1), BELGIUM (1),
SWEDEN (1), GREECE (1). As we can observe, the Constanta
LNG terminal is the only one on the Black Sea and, except for the
one in Greece, the only one in Central-Eastern Europe.
The general goal of this project is to extend the energy
sources towards ones that maintain a cleaner and more sustainable
environment, a source of energy that is less voluminous and
can be transported in large quantities.
In this paper we present more information about the
regasification process [2].
II. FLOATING STORAGE AND REGASIFICATION UNIT FOR
LNG
A. Technical specifications for the FSRU in LNG import
terminal in Constanta-South-Agigea
The FSRU will be placed offshore, 4 [km] from the shore;
Storage capacity: 135,000 [m³]; FSRU specifications: Length - 289 [m];
Beam (moulded breadth) - 48 [m]; Deadweight - 82,100 [t]; Draft -
12.5 [m]; Gross tonnage - 115,000 [t]; Net tonnage - 34,550 [t];
Minimum depth for offshore mooring of an FSRU - 15 [m]; Minimum
depth for pierside or jetty mooring - > 14 [m]; Designed for benign
waters - the Agigea waters are considered very suitable; Design
lifetime: 30 years at location; Regas send-out requirements: minimum
rate 50 [t/h], maximum rate 250 [t/h]; Vaporizers: with propane and
sea water; Steam from the existing ship boilers used for regas; Fuel
consumption 50 [t] LNG per day at 100% regas capacity; DNV
class notation: REGAS-1. The FSRU is supplied with LNG by
shuttle tankers berthing side by side to it, through 4 loading
arms at a flow rate of 8000 [m³/h]; this means that a maximum of
20 [h] is required to download a vessel of 135,000 [m³] capacity at
the maximum discharging rate, including mooring and de-mooring time.
The terminal is designed to perform 3-4 discharging
operations/month. The loading lines are kept in a
refrigerated (cold) condition between two dischargings through a
small recirculation of LNG, so as to shorten the cool-down
phase and therefore reduce the overall duration of the unloading
operation. The FSRU is equipped with 3 Tri-Ex type units
using large quantities of seawater as a heating medium.
Operational condition limits for LNGC unmooring and
loading arm disconnection: max. wind speed 27 [knots] ~ 44
[m/s]; max. sea state height 2.5 [m]; max. surface current 2
[knots] ~ 1 [m/s].
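The quoted unloading duration follows directly from the loading-arm flow rate and the storage capacity given above; the short check below assumes a few hours for mooring and de-mooring (the 3 h allowance is an illustrative assumption, not a figure from the project).

```python
# Quick check of the "max 20 h" unloading time: 135,000 m^3 at 8,000 m^3/h,
# plus an assumed mooring/de-mooring allowance of 3 h (illustrative only).
cargo_volume = 135_000          # m^3
transfer_rate = 8_000           # m^3/h
mooring_allowance = 3.0         # h, assumption

pumping_time = cargo_volume / transfer_rate
total_time = pumping_time + mooring_allowance
print(f"pumping time ~ {pumping_time:.1f} h, total ~ {total_time:.1f} h (max 20 h quoted)")
```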
B. LNG regasification process
The floating storage and regasification unit (FSRU) is
the first floating LNG regasification terminal based on the
conversion of an existing LNG carrier. The FSRU must
receive LNG from offloading LNG carriers, and the onboard
regasification system will provide gas send-out through
flexible risers and a pipeline to shore. The FSRU will be
designed in compliance with DNV class rules and relevant
international standards (Figure 1) [3], [4].



Fig. 1. Floating storage and regasification unit - FSRU

LNG is sent from the tanks to the regasification skid located forward.
The regasification skid essentially comprises booster pumps
and vaporizers. The booster pumps increase the pressure
to about 90 [bar] before the high pressure LNG is vaporized,
after which the gas passes a fiscal metering unit and is sent to
the subsea pipeline via the gas swivel and flexible risers. The
regasification system shall have the following key design data:
maximum gas send-out pressure - 85 [bar]; maximum gas
send-out flow - 240 [tonnes/h]; gas send-out temperature (min) -
0 [°C] (Figure 2) [4].
The FSRU will be permanently moored approximately
600 [m] from the port side and crew facilities [5].
The regasification equipment is arranged in parallel trains,
assembled as separate modules, or the whole plant can be built
in one module to fit to the available space. Each train consists
of one or two LNG booster pumps and one shell and tube heat
exchanger.
The booster pumps are fed by a common LNG buffer tank.
The regasification system can be designed for open or closed
loop operation, and the heating medium for LNG vaporization
can either be sea water or steam [6]. The system can be
designed to match any requirements to flow and pressure.



Fig. 2. The regasification process

The PT - Oman LNG diagram for the thermodynamics of
regasification [1] is presented in Figure 3:


Fig. 3. The PT-Oman LNG diagram

The equipment of the regasification process is presented in
Figure 4 [6]:

Fig. 4. The equipment of the LNG regasification process

C. Regasification facilities equipment
The parameters of LNG storage on the FSRU are: a) Cargo
tank operating pressure (vapour pressure) - normal: 0.2 - 0.5
[bar]; SRV set point: 0.7 [bar]; b) Hold space pressure - the
pressure set point relative to the atmosphere is unchanged; the hold
space pressure set point relative to the cargo tank pressure is
unchanged; c) In-tank pump configuration - cargo tanks 1, 2
and 3 with one new 500 [m³/h] LNG transfer pump installed
in place of one of the existing LNG cargo transfer pumps; no
change to the pumps in cargo tanks 4 and 5.
The process of liquefying the natural gas involves
compression and cooling of the gas to cryogenic temperatures
(e.g. -160 °C). Prior to liquefaction the gas is first treated
to remove contaminants, such as carbon dioxide, water and
sulphur, to avoid them freezing and damaging equipment when
the gas is cooled. At its destination, the LNG is offloaded to
special tanks onshore (Figure 5) [6], before it is either
transported by road or rail on LNG carrying vehicles or
revaporized and transported by e.g. pipelines. In many
instances it is more advantageous to revaporize the natural gas
aboard the seagoing carrier before the gas is off-loaded into
onshore pipelines. LNG is sent from the tanks to the
regasification skid situated forward. The regasification skid
essentially comprises booster pumps and steam heated
vaporizers.


Fig. 5. LNG storage as FSRU

The regasification facilities for the tank pump are: a)
Purpose: supply of LNG to the booster pump suction drum
via a new independent liquid header. b) Location: replaces
one cargo pump in tanks 1, 2 and 3. c) Redundancy: 3 x 50%,
but the existing cargo pumps may also be used. d) Rated flow:
500 [m³/h]; e) Rated head: 150 [m].
The regasification facilities for the booster pumps suction
drum are: a) Purpose: acts as a buffer volume during start-up,
unexpected shut-downs and capacity changes, and as a heat sink
for the booster pumps during start-up. b) Location: on the forward
deck, starboard of the regas modules. c) Technical data: volume
14 [m³], design pressure 10 [bar] (Figure 6) [6,7].



Fig. 6. The booster pumps suction drum


The most commonly used regasification vaporisers are:
vaporizers in the system (VDS), Fig. 7 (a), (b); submerged
combustion vaporizers (VCS), Fig. 7 (c).

Fig. 7 a. Vaporizers in the system

Fig. 7 b. Vaporizers in the system

Fig. 7 c. Submerged combustion vaporizers

The following equipment is essential for each of the 3
regasification trains (Figure 8) [7,8]: 2x50% booster pumps; 1
LNG vaporizer (printed circuit heat exchanger, propane), in which
the LNG is heated from -160 °C to approx. -15 °C; 2x100% trim
heaters (Figure 9) [7,8], shell/tube, sea water; propane loop:
1 propane tank, 1 propane circulation pump, 2 propane
evaporators with semi-welded titanium plates.



Fig. 8. Regasification trains


Fig. 9. Trim heater, shell/tube



III. THE MODELLING OF FLOW THROUGH A MULTITUBULAR
IN SHELL COUNTER CURRENT HEAT EXCHANGER
Before we begin modeling the flow, we must make some
assumptions: 1) we have two fluids, both single-phase and
incompressible, in a counter-current shell-and-tube heat exchanger;
the density is constant and there is no accumulation of
mass, dm/dt = 0; 2) for the extratubular (shell-side) fluid, T_s,in
and m_s are considered independent variables for a number of nodes N; 3)
the metal pipe has a high thermal conductivity, so that the axial
temperature gradients are small, and energy accumulation
and convective heat transfer are written for average values of the heat
transfer coefficients; 4) for the tube-side fluid, T_t,in and m_t are the
independent variables, n is the number of nodes, the flow occurs in the
direction of increasing node number, and the entry of a node is
the output of the previous node.
We must build source files and result files as follows:
source files for writing the geometrical, constructive and material
parameters and the values used in the heat transfer
coefficients; result files to calculate the geometric
characteristics and the heat transfer coefficients for the N and S tube
sections. The model is presented in Figure 12.
The input data are: the dimensions of the metal pipes (Figure 12
a) and of the tubular fluid (water parallelepiped) (Fig. 12 b). Horizontal
pipes = 10 [m] (two of them); vertical pipes = 5 [m] (nine of them); water
circulating through the parallelepiped of dimensions 10x2x1 [m].

Fig.12a. The geometry of study for metal pipes and fluid (side view)

Fig.12b. The geometry of study for metal pipes and fluid (rotated view)

Mesh network: 123,744 elements, of which 112,044 tetrahedrons
and 11,700 hexahedrons (Figure 13). The mesh contains 38,544 nodes.



Fig. 13. Mesh network

Boundary conditions: water enters with a speed of 5 [m/s]
at a temperature of 303 [K] and comes out at atmospheric
pressure at a temperature of 278 [K]. LNG enters with a speed
of 2 [m/s] at a temperature of 113 [K] and comes out at
atmospheric pressure at a temperature of 278 [K].
The turbulence model used is k-ε.

A. Equations
For the flow we used the Navier-Stokes equations for gases and
liquids. For the k-epsilon turbulence model we used the equation for
the turbulent kinetic energy k (1), the equation for the dissipation
rate epsilon (2) and the energy equation (3):

∂(ρ_m·k)/∂t + ∇·(ρ_m·v_m·k) = ∇·[ (μ_t,m/σ_k)·∇k ] + G_k,m - ρ_m·ε      (1)

∂(ρ_m·ε)/∂t + ∇·(ρ_m·v_m·ε) = ∇·[ (μ_t,m/σ_ε)·∇ε ] + (ε/k)·(C_1ε·G_k,m - C_2ε·ρ_m·ε)      (2)

∂(ρE)/∂t + ∇·[ v·(ρE + p) ] = ∇·[ k_eff·∇T - Σ_j h_j·J_j + (τ_eff·v) ] + S_h      (3)

where:

ρ_m = Σ (i=1..N) α_i·ρ_i      (4)

v_m = Σ (i=1..N) α_i·ρ_i·v_i / Σ (i=1..N) α_i·ρ_i      (5)

μ_t,m = ρ_m·C_μ·k²/ε      (6)

G_k,m = μ_t,m·( ∇v_m + (∇v_m)^T ) : ∇v_m      (7)

k_eff is the effective conductivity; J_j is the diffusion flux of
species j; S_h is the heat due to chemical reactions. In equation (3) we have:

E = h - p/ρ + v²/2      (8)

h - enthalpy; for ideal fluids (9) and for real fluids (10):

h = Σ_j Y_j·h_j      (9)

h = Σ_j Y_j·h_j + p/ρ      (10)

h_j = ∫ (from T_ref to T) c_p,j·dT      (11)
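As a minimal numerical illustration of relation (11), the sensible enthalpy change of the water stream between its outlet (278 K) and inlet (303 K) temperatures can be evaluated by quadrature; the constant specific heat of 4186 J/(kg K) for liquid water is an assumption used only for this illustration.

```python
# Sketch: h_j = integral of c_p,j dT between 278 K and 303 K for water,
# with an assumed constant c_p of 4186 J/(kg K).
import numpy as np

T_ref, T_in = 278.0, 303.0            # K, from the boundary conditions above
T = np.linspace(T_ref, T_in, 101)
cp = np.full_like(T, 4186.0)          # J/(kg K), assumed constant
h = np.trapz(cp, T)                   # sensible enthalpy change [J/kg]
print(f"enthalpy change of water between {T_ref:.0f} K and {T_in:.0f} K: {h/1e3:.1f} kJ/kg")
```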
B. Results
The graphic interpretation of the simulation results is given
for the velocity (Figure 14), the pressure (Figure 15) and the
temperature (Figure 16):

Fig. 14. The velocity

Fig. 15. The pressure

Fig. 16. The temperature.


IV. CONCLUSIONS
The program was run for didactic use and not for research,
since the imposed conditions do not include the heat exchange
through the walls between the LNG and the water, nor the thermal
radiation [W/m³], whose amount would require a separate program.
We propose, in another paper, to simulate the flow of water
through the pipe for other imposed conditions (thermal radiation,
porosity, etc.). The inlet temperatures and flow rates that were
used can be employed to obtain, by calculation, the parameters of
the operating point in steady flow.
REFERENCES
[1] Sagau, M., LNG import facility consisting of existing LNG Moss type vessel into floating terminal with regasification unit, Journal of Sustainable Energy, vol. 4, no. 1-2, Romania, Oradea, May 2013 [Conference of Energy Engineering, CEE 2013, vol. 4, no. 1-2, p. 206, 2013].
[2] Balan, M., et al., Refrigeration plants, pp. 66-68, pp. 73-76, Todesco Publishing, 2000.
[3] Hans Y.S. H., JungHan L., YongSoo K., Design Development of FSRU from LNG Carrier and FPSO Construction Experiences, OTC 14098, Offshore Technology Conference, 2002.
[4] Vessel, IEEE Conference in Systems, 2009.
[5] Hochung, K., JungHan, L., Design and construction of LNG regasification vessel, Proceedings of Gastech 2005, Spain, pp. 3-12.
[6] Groves, T., "Terminal LNG tank management system promotes storage safety", World Refining, May 2001, Vol. 11, Iss. 4, pp. 46-48.

[7] GOLAR FREEZE FSRU PROJECT - MOSS MARITIME brochure, Floating offshore LNG terminal, pdf, pp. 4-6, 2012.
[8] Zellouf, Y., Portannier, B., First step in optimizing LNG storages for offshore terminals, Journal of Natural Gas Science and Engineering, Volume 3, Issue 5, pp. 582-590, 2011.







Pattern Recognition on Seismic Data for Earthquake
Prediction Purpose

Adel Moatti
Industrial Engineering
Tarbiat Modares University
Tehran, Iran
Adel.moatti@modares.ac.ir
Mohammad Reza Amin-Nasseri
Industrial Engineering
Tarbiat Modares University
Tehran, Iran
Amin_nas@modares.ac.ir
Hamid Zafarani
International Institute of Earthquake
Engineering and Seismology
Tehran, Iran
H.zafarani@iiees.ac.ir


Abstract - Earthquakes are known as a destructive natural
disaster. Due to the high human casualties and economic losses,
earthquake prediction appears critical. The b-value of the
Gutenberg-Richter law has been considered as a precursor for
earthquake prediction. The temporal variation of the b-value before
earthquakes equal to or greater than Mw = 6.0 has been examined in
the south of Iran, on the Qeshm island and its surroundings, from
1995 to 2012. Clustering with the k-means algorithm has been
performed to find the pattern of variation of the b-value. Three
clusters are obtained as the optimum number of clusters by the
Silhouette Index. Before all of the mentioned earthquakes equal to
or greater than Mw = 6.0, cluster 1, which corresponds to a decrease
in b-value, has been seen, so a decreasing b-value before main shocks
has been considered a distinctive pattern. An approximate time of
the decrease has also been determined.
Keywords - earthquake prediction, long-term seismic hazard
analysis, pattern recognition, clustering, seismicity rate, b-value.
I. INTRODUCTION
Earthquake prediction, as a promising solution to reduce the
number of victims, has been pursued since 70 years ago, starting
with Ishimoto and Iida [1]. Efforts in this field are
divided into long-term and short-term prediction. Short-term
prediction is based on precursors such as foreshocks,
seismic quiescence, decreases in radon concentration and other
geochemical phenomena [1,2]. In long-term prediction, historical
earthquake data are used along with some empirical equations, like
the Gutenberg-Richter law, to discover seismic patterns. In fact, in
many earthquake prone areas of the world, the time and location of
earthquake sequences and also the magnitude of major main shocks
follow distinct patterns. So extracting seismic patterns from the
earthquake parameters (e.g. times, locations and magnitudes) may be
useful for long-term prediction [3-5]. One of the empirical
relationships which has been used frequently in long-term
prediction is the Gutenberg-Richter law. This equation
expresses the relationship between the earthquake magnitude and the
total number of events as follows:

log N = a - bM      (1)
The a and b parameters are constants, M is the earthquake magnitude
and N is the total number of earthquakes equal to or
greater than M [6]. Spatial and temporal variations of the b-value
have been known as an indicator for predicting strong main shocks,
because the b-value reflects the tectonic setting and geophysical
characteristics of an area. The b-value over long times and large
areas is usually reported to be around 1, but it can vary from 0.5 to
1.5 as the explored area decreases [7]. The study of
temporal and spatial variations of the b-value was started by
Mogi and Scholz in 1968 [8,9], and many researchers have used
this parameter in order to find the pattern of medium-large
earthquakes. By careful inspection of 15 large earthquakes in
the west of Indonesia, Nuannin et al. [10] have reported a
significant reduction of the b-value before all of these
events. In southern Iran, temporal b-value variations from
2005 to 2011 show that a significant reduction of the b-value
occurred before two earthquakes with magnitude greater than M = 6.0 [11].
Using three different approaches to study the seismicity
variations within a radius of 30 km around the epicenter of the
largest shock (Mw = 6.4), Tsukakoshi and Shimazaki [12] found
a reduction of the b-value from 1.2 to 0.7. Applying the sliding
time and space windows method, temporal and spatial
variations of the b-value in the Andaman-Nicobar islands before
two major shocks in 2002 (Ms = 7.0) and 2004 (Mw = 9.0)
showed two significant drops of the b-value in time and a low
b-value in space [13].
In all of the previous research, only the variations of the b-value
were investigated, without specifying the approximate time of these
changes. It is very important to know how the sequence of b-value
changes develops [14]. In this paper, a clustering method has been
used to investigate the sequence and time of the b-value variation
on the Qeshm island of Iran before earthquakes equal to or greater
than Mw = 6.0, from 1995 to 2012, by means of the method presented
in [14].
In the second part of this paper, the seismic catalog is
introduced. In the third part, the methodology for the temporal
b-value estimation, the k-means algorithm and the method for
selecting the optimal number of clusters are explained. Finally, by
representing each cluster before the major earthquakes, the pattern
of these events is presented.
II. TECTONIC SETTING
The Zagros Mountains in southwestern Iran are the
largest mountain range in this country. The Zagros fold and
thrust belt is bounded to the NE by the Main Zagros Thrust and
to the southeast (SE) by the Zagros Frontal Fault [15]. Based on
geomorphology and topography, the Zagros is divided into two
different zones: the High Zagros zone in the north-east, bordering
the Iranian plateau, and the Simply Folded Belt in the
south-western zone, bordering the Persian Gulf [16]. It
spreads for about 1500 kilometers from the southwestern Iranian
plateau to the Strait of Hormoz and was formed by the collision of
the Eurasian and Arabian Plates [17]. The Zagros fold and
thrust belt is one of the most rapidly deforming and seismically
active belts in the world. The most active zone of the Zagros is the
Simply Folded Belt [11,16], Fig. 1. In the south-east of the
Zagros, the Qeshm Island, 110 km in length and between
10 km and 35 km in width, is the largest island in the Persian
Gulf.

Fig. 1. Topographic map of the southern Zagros Mountains in Iran, showing
major faults and epicenters of earthquakes with magnitudes equal to or greater
than Mw = 6.0, between 1995 and 2012, extracted from the ISC catalog.
It is separated from the mainland of Iran by the Strait of Khoran,
which trends ENE along the northern Strait of Hormoz [16].
Although there is no major fault trace on Qeshm island, it has
experienced high seismic activity [11], Fig. 2.
III. DATA CATALOG
In this study, the seismic data from the International
Seismological Center (ISC) catalog have been used. The
examined region is limited by latitudes 26.5° to 30°N and
longitudes 54° to 57.5°E, includes the Qeshm Island and
spans the period 1995.01.01 to 2012.06.19. During the
period of study and for the studied region, 2046 earthquakes
have been reported in the ISC catalog. The magnitude and depth of
the earthquakes range from 1.8 to 6.5 Mw and from 1 to 256 km,
respectively.
The minimum detectable magnitude in a region is
known as the threshold magnitude or completeness magnitude and
is denoted by Mc [18]. The minimum magnitude of
completeness, Mc, is considered an important parameter in most
seismicity studies. In a seismicity study, to obtain higher
quality results it is necessary to use the maximum number of
events, so the lower Mc is, the better [19]. By maximum
likelihood estimation and 90% probability, Mc is calculated.
Mc = 3.7 has been considered as the threshold magnitude and all
events smaller than M = 3.7 have been deleted. In the end, 1065
events remained. The overall b-value and a-value are
estimated at 0.89 +/- 0.04 and 6.16, respectively, by the maximum
likelihood method, Fig. 3.

Fig. 2. Topographic map of the study area with major faults and earthquakes
with magnitude equal to or greater than Mw = 3.0, for the Qeshm island and
the surrounding region between 1995 and 2012. The earthquakes with Mw >
6.0 are marked with red stars.
In many seismicity studies, declustering methods are
performed to eliminate foreshocks and aftershocks in order to obtain
independent data [13,17]. In this paper, declustering was not
performed, due to the lack of data [20,21].

Fig. 3. Frequency-magnitude distribution with respect to Mw. The straight line
is the best fit by (1).
IV. METHOD
A. Temporal b-value calculation
The temporal variation of the b-value of the Gutenberg-Richter
relationship is calculated by the sliding time window method. A
constant window size or a constant number of events in each
window are the two options that can be employed. Since a large
difference between the numbers of events in the windows may
lead to uncertainty, a constant number of events in each window
has been adopted [9]. The b-value has been calculated using
the maximum likelihood method, introduced by Aki in
1965 [22], given in (2):

b = log10(e) / (M_mean - M_min)      (2)

where M_mean denotes the mean magnitude in each window and M_min is
the minimum magnitude of the sampled earthquakes, determined as
M_min = M_c - ΔM/2, where ΔM is the magnitude bin, selected here as
ΔM = 0.1 [13]; M_c is calculated separately in each window. Different
numbers of events per sliding window have been tried, namely 40, 50,
60, 70, 80, 90 and 100. Finally, to achieve the best time resolution,
70 events per window with an overlap of 1 event have been selected.
According to the above description, M_c and the b-value have been
calculated from 1995 to 2012. Fig. 4 shows the temporal variation of
the completeness magnitude M_c in each window. The standard
deviation is also calculated with the bootstrapping method and is
shown by the black curve.
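A compact sketch of this sliding-window maximum likelihood estimate (not the authors' code) is given below. The window of 70 events advanced one event at a time follows the description above; taking M_c simply as the minimum magnitude of each window is a simplifying assumption, since the paper estimates M_c by maximum likelihood with 90% probability.

```python
# Sketch: sliding-window b-value from relation (2), b = log10(e)/(M_mean - M_min),
# with M_min = Mc - dM/2; the Mc estimate per window is simplified here.
import numpy as np

def sliding_b_values(magnitudes, window=70, dM=0.1):
    mags = np.asarray(magnitudes, dtype=float)
    b_values = []
    for start in range(0, len(mags) - window + 1):   # window advanced by one event
        w = mags[start:start + window]
        Mc = w.min()                                  # simplified Mc estimate (assumption)
        M_min = Mc - dM / 2.0
        b = np.log10(np.e) / (w.mean() - M_min)       # Aki (1965), relation (2)
        b_values.append(b)
    return np.array(b_values)

# usage with synthetic magnitudes (illustration only):
rng = np.random.default_rng(0)
demo_mags = 3.7 + rng.exponential(scale=1/2.3, size=500)   # Gutenberg-Richter-like sample
print(sliding_b_values(demo_mags)[:5])
```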
Fig. 4. Temporal variation of the magnitude of completeness, M_c, obtained
with the sliding time window method between 1/1/1995 and 19/6/2012.
The temporal variations of the b-value are also shown
in Fig. 5. It can be observed that the b-value changes between 0.8
and 1.4. According to Scholz 1968 and Gibowicz 1974, a high
b-value indicates low stress in the seismogenic zone, while a
low b-value is related to high stress conditions [8,23].

Fig. 5. Temporal variation of the b-value of earthquakes with M_c = 3.7,
from 1/1/1995 to 19/6/2012.
B. The new dataset
The temporal b-value variation calculated in the previous section
has been used to build a new dataset, following Morales-Esteban et
al. (2010) [14], for clustering purposes. Each earthquake in the
seismic catalog is represented by three features: the b-value b_i,
the date of occurrence T_i, and the magnitude M_i. Each earthquake
is thus written as:

e_i = (b_i, T_i, M_i)      (3)
After this, the earthquakes have been grouped chronologically in
groups of five, and the following quantities have been calculated for
each group [14]. Each group A_i contains the difference between the
b-values at the beginning and at the end of the group, Δb_i, the mean
magnitude of the five earthquakes, M̄_i, and the time elapsed over
the five earthquakes, ΔT_i. Thus,

A_i = (M̄_i, Δb_i, ΔT_i),   i = 1, ..., N/5      (4)

so that

M̄_i = (1/5) · Σ (k = j-4 .. j) M_k,   j = 5i      (5)

Δb_i = b_j - b_(j-4),   j = 5i      (6)

ΔT_i = T_j - T_(j-4),   j = 5i      (7)

where N is the number of earthquakes in the seismic catalog.
As a final point, the new dataset, ND, is formed by all the
chronologic A_i determined by (5), (6) and (7):

ND = {A_1, A_2, A_3, ..., A_(N/5)}      (8)
Finally, the ND dataset has been used for clustering, for the
pattern recognition of earthquakes with Mw equal to or greater
than 6.0.
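A small sketch of this dataset construction (not the authors' code) is given below: earthquakes e_i = (b_i, T_i, M_i) are grouped chronologically in fives and each group is summarized by its mean magnitude, b-value change and elapsed time, as in relations (4)-(8). The toy values in the usage example are purely illustrative.

```python
# Sketch: building the ND dataset of relations (4)-(8) from per-event b, T, M series.
import numpy as np

def build_nd(b, T, M, group=5):
    b, T, M = map(np.asarray, (b, T, M))
    n_groups = len(M) // group
    nd = []
    for i in range(n_groups):
        j0, j1 = i * group, i * group + group - 1      # first and last event of group i
        mean_M  = M[j0:j1 + 1].mean()                   # relation (5)
        delta_b = b[j1] - b[j0]                         # relation (6)
        delta_T = T[j1] - T[j0]                         # relation (7)
        nd.append((mean_M, delta_b, delta_T))
    return np.array(nd)                                 # relation (8): the ND dataset

# usage with toy values (illustration only):
nd = build_nd(b=np.linspace(1.2, 0.9, 20),
              T=np.linspace(2000.0, 2001.0, 20),
              M=np.full(20, 4.0))
print(nd)
```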
C. Clustering
1) The k-means algorithm
Cluster analysis is the task of grouping objects in such a way
that objects in the same group are more similar to each other than
to those in other clusters. One of the most popular clustering
methods is the k-means algorithm, which was first introduced by
MacQueen [24]. First the algorithm selects k points as the initial
centroids. After this, the algorithm collects the rest of the objects
into k groups with the aim of increasing the intra-cluster similarity.
Each object is assigned to the cluster with the closest centroid.
This similarity is measured with respect to the centroid of each
cluster and the aim is to reduce the intra-cluster distance. To
reduce the intra-cluster distances, the squared error function is
used as follows:

SSE = Σ (j = 1 .. k) Σ (X ∈ C_j) ||X - m_j||²      (9)
2) The silhouette index
In most unsupervised clustering methods, i.e. k-means, selecting
the optimum number of clusters is a crucial challenge. It is also very
important to evaluate how accurate the result is. There are many
quality measures to evaluate a clustering result. The Silhouette
index, one of the common indices, has been used in this paper. This
validity index computes the silhouette width for each object, the
average silhouette width for each cluster and the overall silhouette
width for the whole dataset [25]. The following formula is used to
measure the silhouette width for each data point:

S_i = (b_i - a_i) / max(a_i, b_i)      (10)

where a_i is the average dissimilarity of the i-th object to all other
objects in the same cluster, and b_i is the minimum of the average
dissimilarities of the i-th object to all objects in the other
clusters. S_i takes values between -1 and 1. If S_i is close to 1, it
means that the object is assigned to the proper cluster. If S_i is
close to zero, it means that the object could also be assigned to the
nearest neighbouring cluster, and if S_i is -1, it means that the
object is assigned to an improper cluster. The average of all S_i is
the overall average silhouette width for all objects in the dataset.
Finally, the largest overall average indicates the clustering with the
highest accuracy [26].
In this paper, different numbers of clusters have been tried in
order to find the optimum one. Figure 6 shows that the maximum
overall average silhouette width is 0.5938 and corresponds to 3
clusters. So the optimum number of clusters has been chosen as
three.
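A brief sketch of this cluster-number selection (not the authors' code) is shown below: k-means is run for k from 3 to 8 on the ND dataset and the mean silhouette width is compared, keeping the k with the largest value. The use of scikit-learn, the 500 restarts mirroring the repeated runs mentioned below, and the synthetic demo data are assumptions for illustration.

```python
# Sketch: choosing k by the mean silhouette width, as described above.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def choose_k(nd, k_range=range(3, 9), n_init=500, seed=0):
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=n_init, random_state=seed).fit_predict(nd)
        scores[k] = silhouette_score(nd, labels)      # mean silhouette width
    best_k = max(scores, key=scores.get)
    return best_k, scores

# usage with synthetic data (illustration only):
rng = np.random.default_rng(0)
demo_nd = np.vstack([rng.normal(c, 0.05, size=(40, 3)) for c in (-0.2, 0.0, 0.2)])
print(choose_k(demo_nd))
```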
Fig. 6. The mean silhouette index values versus number of clusters for 3 to 8
clusters
According to the silhouette index result, the ND dataset has been
clustered by the k-means algorithm, repeated 500 times, and Table 1
shows the obtained centroids of the clusters.
TABLE I. CENTROIDS OF THE CLUSTERS OBTAINED BY K-MEANS

Δb       ΔT      M̄       Cluster
-0.023   0.062   3.987   1
 0.003   0.067   4.235   2
+0.039   0.062   4.531   3
It can be seen that cluster 1 represents a decrease in b-value and
earthquakes with low magnitudes. The time interval in this
cluster is approximately 22 days. Cluster 2 shows that there are
no changes in b-value. Cluster 3 demonstrates an increase in
b-value, with large magnitude earthquakes, in a time interval similar
to cluster 1.
V. PATTERN RECOGNITION
According to the clusters obtained from the k-means algorithm,
each five-earthquake group in the new dataset has been plotted
chronologically, with respect to its cluster label, versus time. As
can be seen from Fig. 2, four major earthquakes with magnitude
Mw ≥ 6.0 occurred in the study area between 1995 and 2012. Table 2
shows the four main shocks.
After clustering, it can be seen that all earthquakes with
magnitude Mw ≥ 6.0 are represented by five-earthquake groups
belonging to cluster 3, in accordance with cluster 3 representing
increases in b-value. Accordingly, Fig. 7 shows the patterns
extracted from the b-value variation. The blue circles identify the
five-earthquake groups and the stars mark the main shocks with
Mw ≥ 6.0.
TABLE II. FOUR MAIN SHOCKS WITH MAGNITUDE EQUAL TO OR GREATER
THAN MW = 6.0 FROM 1/1/1995 TO 19/6/2012

Earthquake   Magnitude   Date        Latitude   Longitude
1            6.0         2003/7/10   28.30      54.15
2            6.1         2006/2/28   28.10      56.82
3            6.0         2006/3/25   27.55      55.66
4            6.1         2008/9/10   26.77      55.83
Figure 7 represents how the clusters of the five-earthquake groups
change in the 2000-2005 interval. It can be seen that, before the
earthquake with Mw = 6.0, a sequence of cluster 1 occurred.
According to Table 1, cluster 1 represents a decrease in b-value over
approximately 22 days. Indeed, the main shock, shown by a star, is
preceded by the cluster sequence 3-2-1. This means that a significant
decrease in b-value occurred before the main shock. The time interval
of this reduction is approximately 3 months.
As in Fig. 7, the trend of the cluster changes from 1/1/2005 to
29/12/2008 is reported in Fig. 8. In this period, two large
earthquakes with magnitudes Mw = 6.1 and Mw = 6.0 occurred within
less than one month. For both events, the cluster changes represent
a decrease in b-value. As can be seen in Fig. 8, the first event is
preceded by cluster 1. The second event is also preceded by
five-earthquake groups classified in cluster 1. In fact, the clusters
change from 3 to 1 for both main shocks. This means that a
significant reduction in b-value occurred before the main shocks.
Finally, the interval between 2008 and 2012 has been
investigated. In this period, an earthquake with magnitude
Mw = 6.1 occurred. According to Fig. 9, it can be seen that
patterns similar to the previous results occurred. A sequence of
cluster 1 occurred before the main shock of 10/9/2008. This trend
happened over roughly 3 months and indicates a significant
decrease in b-value.

VI. CONCLUSIONS
In this paper, a clustering method according to Morales-Esteban
et al. (2010) [14] has been applied for the pattern recognition of
earthquakes with magnitude equal to or greater than Mw = 6.0 in the
south of Iran, on the Qeshm island and the surrounding area. With
respect to the clusters obtained from the k-means, a reduction of the
b-value before all events with Mw ≥ 6.0 has been observed. This
result is similar to past studies that report a b-value decrease as a
precursor of large earthquakes.

Fig. 7. Changes of clusters between 2000 and 2005

Fig. 8. Changes of clusters between 2005 and 2008

Fig. 9. Changes of clusters between 2008 and 2012
REFERENCES

[1] A. Panakkat and H. Adeli, "Recent efforts in earthquake prediction
(1990-2007)," Natural Hazards Review, vol. 9, 2008, pp. 70-80.
[2] M. Wyss and D. C. Booth, "The IASPEI procedure for the evaluation
of earthquake precursors," Geophysical Journal International, vol.
131, 1997, pp. 423-424.
[3] M. Wyss, F. Pacchiani, A. Deschamps, and G. Patau, "Mean
magnitude variations of earthquakes as a function of depth: Different
crustal stress distribution depending on tectonic setting," Geophysical
research letters, vol. 35, 2008, p. L01307.
[4] B. CHEN, P. BAI, and T. LI, "THE b-VALUE AND
EARTHQUAKE OCCURRENCE PERIOD," Chinese Journal of
Geophysics, vol. 46, 2003, pp. 736-749.
[5] V. S. Rani, K. Srivastava, D. Srinagesh, and V. Dimri, "Spatial and
temporal variations of b-value and fractal analysis for the Makran
Region," Marine Geodesy, vol. 34, 2011, pp. 77-82.
[6] B. Gutenberg and C. F. Richter, Seismicity of the earth and
associated phenomena: Hafner New York, 1965.
[7] C.-H. Chan, Y.-M. Wu, T.-L. Tseng, T.-L. Lin, and C.-C. Chen,
"Spatial and temporal evolution of b-values before large earthquakes
in Taiwan," Tectonophysics, vol. 532, 2012, pp. 215-222.
[8] C. Scholz, "The frequency-magnitude relation of microfracturing in
rock and its relation to earthquakes," Bulletin of the Seismological
Society of America, vol. 58, 1968, pp. 399-415.
[9] O. Kulhanek, "Seminar on b-value," Dept. of Geophysics, Charles
University, Prague, 2005.
[10] P. Nuannin, O. Kulhánek, and L. Persson, "Variations of b-values
preceding large earthquakes in the Andaman-Sumatra subduction
zone," Journal of Asian Earth Sciences, vol. 61, 2012, pp. 237-242.
[11] M. R. Sorbi, F. Nilfouroushan, and A. Zamani, "Seismicity patterns
associated with the September 10th, 2008 Qeshm earthquake, South
Iran," International Journal of Earth Sciences, 2012, pp. 1-9.
[12] Y. Tsukakoshi and K. Shimazaki, "Decreased b-value prior to the M
6.2 Northern Miyagi, Japan, earthquake of 26 July 2003," Earth
Planets and Space (EPS), vol. 60, 2008, p. 915.
[13] P. Nuannin, O. Kulhanek, and L. Persson, "Spatial and temporal b
value anomalies preceding the devastating off coast of NW Sumatra
earthquake of December 26, 2004," Geophysical research letters, vol.
32, 2005, p. L11307.
[14] A. Morales-Esteban, F. Martínez-Álvarez, A. Troncoso, J. Justo, and
C. Rubio-Escudero, "Pattern recognition to forecast seismic time
series," Expert Systems with Applications, vol. 37, 2010, pp. 8333-
8342.
[15] K. Hessami, F. Nilforoushan, and C. J. Talbot, "Active deformation
within the Zagros Mountains deduced from GPS measurements,"
Journal of the Geological Society, vol. 163, 2006, pp. 143-148.
[16] E. Nissen, F. Yamini-Fard, M. Tatar, A. Gholamzadeh, E. Bergman,
J. Elliott, J. Jackson, and B. Parsons, "The vertical separation of
mainshock rupture and microseismicity at Qeshm island in the Zagros
fold-and-thrust belt, Iran," Earth and Planetary Science Letters, vol.
296, 2010, pp. 181-194.
[17] A. Zamani and M. Agh-Atabai, "Temporal characteristics of
seismicity in the Alborz and Zagros regions of Iran, using a
multifractal approach," Journal of Geodynamics, vol. 47, 2009, pp.
271-279.
[18] P. A. Rydelek and I. S. Sacks, "Testing the completeness of
earthquake catalogues and the hypothesis of self-similarity," Nature,
vol. 337, 1989, pp. 251-253.
[19] S. Wiemer and M. Wyss, "Minimum magnitude of completeness in
earthquake catalogs: examples from Alaska, the western United
States, and Japan," Bulletin of the Seismological Society of America,
vol. 90, 2000, pp. 859-869.
[20] T. Parsons, "Forecast experiment: Do temporal and spatial b value
variations along the Calaveras fault portend M 4.0 earthquakes?,"
Journal of geophysical research, vol. 112, 2007, p. B03308.
[21] Y.-M. Wu, C.-C. Chen, L. Zhao, and C.-H. Chang, "Seismicity
characteristics before the 2003 Chengkung, Taiwan, earthquake,"
Tectonophysics, vol. 457, 2008, pp. 177-182.
[22] K. Aki, "17. Maximum Likelihood Estimate of b in the Formula
logN= a-bM and its Confidence Limits," 1965.
[23] S. J. Gibowicz, "Frequency-magnitude, depth, and time relations for
earthquakes in an island arc: North Island, New Zealand,"
Tectonophysics, vol. 23, 1974, pp. 283-297.
[24] J. MacQueen, "Some methods for classification and analysis of
multivariate observations," in Proceedings of the fifth Berkeley
symposium on mathematical statistics and probability, 1967, p. 14.
[25] P. J. Rousseeuw, "Silhouettes: a graphical aid to the interpretation and
validation of cluster analysis," Journal of computational and applied
mathematics, vol. 20, 1987, pp. 53-65.
[26] Z. Ansari, M. Azeem, W. Ahmed, and A. V. Babu, "Quantitative
Evaluation of Performance and Validity Indices for Clustering the
Web Navigational Sessions," World of Computer Science and
Information Technology Journal, vol. 1, 2011, pp. 217-226.


The Natural Gas Addiction and Wood Energy Role in
Latvia Today and Future

Ginta Cimdina, Andra Blumberga, Ivars Veidenbergs, Dagnija Blumberga, Aiga Barisa
Institute of Energy Systems and Environment
Riga Technical University
Riga, Latvia

Abstract - The paper analyzes strategies for restricting Latvia's
dependence on fossil fuel imports, in line with an increasing
challenge to follow the leading EU Member States in greening the
energy sector. The availability of local biomass resources ensures the
necessary framework for building an orderly, environmentally
and climate-friendly economy. Primary attention is paid to the
historical pattern of wood fuel use, which shows a reduction in the
wood fuel share of primary energy consumption, while an
energy efficiency improvement is not observed. A hypothesis for the
development of wood fuel consumption up to 2020 is proposed. The
analysis indicates the potential of reaching a 43% share of wood fuel
in the national energy mix by 2020.
Keywords - Wood fuel, green growth, primary energy use,
hypothesis
I. INTRODUCTION
Transition from fossil fuel-based to renewable energy
systems is challenging both at the European Union (EU) and
Member State level. Over the past decade, leading Member
States of the EU have taken practical steps to introduce the
concept of energy sustainability in their political agendas by
developing and adopting national long-term energy strategies,
see, e.g. [1, 2]. Particular attention is paid to feasible
utilization of available bioenergy (and especially wood fuel)
potential as the first step towards increasing the share of
energy produced from renewable sources in the national
energy mix [3, 4].
In terms of environmental, economic and technological
provisions, Latvia has all the necessary preconditions to
follow the challenge of the leading European countries and build
an orderly, environmentally and climate-friendly economy.
Four types of international commitments directly impact
Latvia's orientation towards green growth:
Obligation to improve end-use energy efficiency by at
least 9% by 2016 compared to 2009 level in the
framework of the National Energy Efficiency Plan;
Obligation to increase the share of renewable energy
sources (RES) in final energy consumption to 40% by
2020 according to Directive 2009/28/EC (33.6% in 2010);
Obligation to meet at least 10% RES share in the transport
sector by 2020 under the Directive 2009/28/EC (3.3% in
2010), and;
Obligation to reduce greenhouse gas (GHG) emissions by
21% over the period 2013-2020 compared to 2005 level
among participants of the EU Emissions Trading Scheme
(ETS) and allow no more than 17% GHG emission
increase in sectors outside the EU ETS.
Currently, the increased attention of countries worldwide,
including Latvia, is focused on two aspects of energy: energy
dependence and climate change.
At the same time, scientists are engaged in research on the
use of renewable energy resources and the creation of new
technologies. Such actions help to reduce the dependence on
imported energy.
Global experience has proven that an increase in energy
consumption causes an energy deficit, and government officials
think about increasing energy resource imports instead of
reducing energy consumption.
The transition from a fossil fuel economy to a renewable
energy economy is a complex process and requires a long-term
development strategy and a serious approach to its
implementation.
It is hard to think in the long term when facing a short-term
problem: an increase in energy consumption. There is a great
temptation to solve this problem using the simplest of
solutions: importing more energy. The more attention is paid
to the short-term solution, the less is dedicated to the long-term
one: the use of RES.
This problem (and other addiction problems) has been described
from the perspective of systems thinking by Peter Senge, who is one
of the founders of systems thinking. In his book The Fifth Discipline:
The Art and Practice of the Learning Organization [5], published in
1990, he developed several archetypes, one of which, "distraction",
reflects the adverse long-term effects of the transport sector's energy
dependence and of short-term planning:
using a symptomatic (short-term) solution, or;
a fundamental (long-term) solution.
The system archetype is schematically given in Figure 1,
which illustrates a causal loop diagram. The archetype hypothesis is:
if a symptomatic solution is used at least once, this reduces
the symptoms of the problem and the need to implement a
fundamental solution. Using a symptomatic solution
repeatedly, attention is diverted from the fundamental solution.
The symptomatic solution creates a side effect: by using it
multiple times, the ability to apply the fundamental solution,
for example the additional purchase of renewable energy in a
situation of growing energy consumption, decreases.
Fig.1. Causal loop diagram of the system archetype "distraction" (the
diagram links energy demand, the symptomatic solution - import, the
fundamental solution - renewable energy, and the dependence on import;
P - contrary processes, V - equal processes, B1, B2 - balancing loops, P3 - a
balancing loop; the delays indicate that processes do not happen at the same
time, but with a delay relative to one another)
The best illustration of the circumstance mentioned above is the specific situation with energy production from renewable energy resources in Latvia. Despite the fact that it is already possible to achieve a 70-80% share of renewable resources in district heating energy supply in this decade, existing regulatory measures and policy instruments hinder this development. The government should change the existing mandatory procurement system, under which the sources of renewable energy must compete on an equal basis with the fossil fuel-powered co-generation power plants. The energy tariffs of the natural gas cogeneration (condensation) stations incorporate their installation cost, essentially discriminating against the set-up of renewable energy power stations.
II. METHODOLOGY
To transfer the national economy from fossil to renewable primary energy resources, it is necessary to define the directions, objectives and principles of such a transformation. The algorithm of the methodology comprises four modules, as shown in Figure 2.
A. Module 1. Tracks and directions
Economically reasonable costs and technological development determine how the commitments of the country can be met and which sectors are the priority. Therefore, the strategy must have at least three parallel tracks, which converge towards achieving independence from fossil fuels.
1) Track 1
Transition to more energy-efficient energy consumption and use of renewable energy sources:
• more energy-efficient use of energy;
• increased use of biomass and biogas, and;
• more renewable wind energy offshore and on land.
2) Track 2
Integration of new solutions in the energy sector and transport system:
• a green transport sector;
• a new era of energy-policy management, and;
• transition from fossil fuels to independent energy systems at the national and regional levels.

Fig.2 Algorithm of methodology
3) Track 3
Research, development and demonstration:
• national energy system modelling and establishment of a development evaluation, control and monitoring system;
• comprehensive demonstration and preparation for a wide range of market demands, and integration in transport and energy systems, and;
• development of the green energy system based on research, demonstration and preparation for market demand.
One of the most important aspects determining the implementation of Track 1 is the availability of wood fuel resources in Latvia. Research results of forest specialists [6] show a high potential of bioenergy (see Table I).
TABLE I. WOOD FUEL RESOURCES

Resources                         Potential volumes,    Real volumes,
                                  mil. loose-m³/year    mil. loose-m³/year
Wood logs                         6.26                  4.33
Forest residue                    6.80                  4.70
Stems                             8.10                  7.37
Other resources                   3.10                  2.13
Byproducts from wood processing   12.92                 8.92
Total                             37.18                 27.45
B. Module 2. Principles
The principles incorporated in the Green Energy
Development Strategy of the national energy industry sector
comprise financial, environmental, climate, socio-economic
and management aspects.
1) Cost Efficiency
Cost efficiency measures are economically justified; they
ensure a maximum energy supply security and a high
reduction of the use of fossil fuel in relation to each LVL
invested. Consequently, there is no emphasis on large-scale
equipment requiring high investment.
Contrary to the use of expensive technologies, full attention
should be paid to research, development and demonstration
which could increase the competitiveness of these technologies in the long term with lower inputs.
2) Minimum Impact on Public Funding
The cost and benefit distribution for transition to the so-
called Green Growth should not cause adverse effects on the
government budget. All outlays should be fully covered by
consumers of energy resources (both enterprises and households).
3) Renewed Competitiveness.
When implementing the transition to Green Growth, the
impact the latter will exercise on the competitiveness of the
business environment in Latvia should be noted. The
mechanisms of price and tariff formation and factors
influencing them should be transparent and predictable in the
long-term to facilitate making the relevant decisions for
consumers and investors targeted at further growth.
4) Flexibility Principle
The forecasts of energy sector development are based on
regular analyses of the operation and management of the
energy sector, furthermore, implementation of new technology
solutions should be adjusted to processes taking place in the
European Union and global energy market, e.g., in case of the
price increase for biogas, a transition should be made to less
costly renewable resources (wind, solar power, etc.) with the
help of higher investment.
5) Full-bodied Use of International Cooperation
The transition process should make use of all opportunities
available globally and within the European Union. An isolated
country from the power resource point of view is not our goal.
Latvia should start the uptake of advantages offered by the
international market.
6) Security of the Energy Supply System
There should be a secure national energy supply system in
place; however ensuring security measures must be
commercially justified. The current practice of achieving
security through investment into high power energy units
should be discontinued. The security of energy supply must be
achieved through modern diffused production and smart
network technologies. Energy prices will be stabilised by
lower primary power consumption and essential reduction of
fossil fuel.
7) Bottom-up Model
The involvement and activity modelling of every energy consumer is more suitable for Latvia's energy sector as a whole, as it provides better opportunities for the implementation of realistic energy efficiency and renewable energy resource projects on the consumer side, as well as for achieving a reduction of the use of fossil fuel.
8) Support to Centralization Principle in Heating Supply
A centralized heating supply through the heat load centres
is commercially more advantageous and friendlier to the
environment and climate than the construction of individual
boiler systems for heating. The business economy principle
should be observed, i.e., wholesale goods should cost less than
retail goods.
9) Flexibility of Energy Industry System
Renewable resources that are commercially justified from
the circulation cycle analysis point of view, as well as are
environmentally and climate friendly should be selected. The
amounts of their use should vary depending upon: prices,
availability of resources, development of innovative
technologies and impact on the climate change mitigation
policies.
10) Market Model
During the period of transition to Green Growth,
competitive electric power and energy resources markets (e.g.
natural gas) should be established. The progress of the country
towards a united electrical energy market of the Nordic
countries and towards a revision of the transmission system
operator surcharges, which are higher in Latvia than in other
countries, will make a beneficial impact on the
competitiveness of this country.
11) Business Model
The production, management and marketing of the heating
power and electrical power should be based on a business
economy. Within the transition period to Green Growth, co-
funding is available exclusively in case of full withdrawal of
subsidies for fossil energy resources in any form of power
production and transition to the use of renewable energy
resources.
12) Gradual Approach
The issues related to the use of renewable energy resources
and the increase of energy efficiency should be addressed step
by step observing priorities and avoiding implementation of
all measures simultaneously.
13) Sustainable Development Model
The development of an energy network system must be a
long-term process thinking 30-40 years ahead in regard to
natural resources, mitigation of the climate change, socio-
economic development etc.
14) Level Mark Model
The regulation model of heat energy tariff should be
urgently adjusted to motivate owners of energy sources to
increase energy efficiency and seek commercially sound
solutions for the operation of energy sources. This would be a long-term approach.
15) Assessment Model
In order to reach the desired result or implement intended
measures, it is not enough to stop at setting the goal. Principle
of good governance mode must be applied to analyse and
assess the success of the intended operational measures and to
make appropriate adjustments for further steps towards the
Green Growth.
C. Module 3. Calculations
Technologies, which are now relatively expensive, can be
very important in the long run. Among them are electric cars,
solar energy, wind energy accumulation, as well as carbon
capture and storage systems.
The relationship between biomass, wind and solar energy
will depend on various factors which are currently not known.
• Option 1: the amount of biomass will grow and it will remain comparatively 'not costly'. This will reduce the need for wind energy.
• Option 2: the increase in energy efficiency will not happen as planned. This will increase the need for larger amounts of biomass.
• Option 3: there is a rapid development of other renewable energy technologies; for example, the use of solar energy technology will become economically the most advantageous.
All of this emphasizes the need for a flexible strategy, one that remains open to opportunities in technology development.
Problems relevant to energy supply safety and impact on
climate change can also be resolved in different ways.
This paper only presents results of Option 1 analysis (see
below).
D. Module 4. Selection of scenario
Renewable energy development scenarios are affected by a
variety of factors: distribution among different renewable
energy sources, technology parameters, economic indicators
etc.
In the period up to 2020, the main emphasis must be on a
more complete use of biomass, without neglecting the use of
wind energy after 2020 and the use of solar energy primarily
in multi-apartment houses, thus ensuring hot water supply.
III. RESULTS AND DISCUSSION
A. Analysis of results
The distribution of renewable energy resources is
evaluated on the basis of technological, economic and
environmental aspects. Each type of renewable energy
resource has its own niche both in respect to energy users, and
the energy conversion and transmission aspect.
Accordingly, during the time period till 2020 wood energy
will have the main role; therefore it is an important aspect to
analyse. The use of wood energy mainly depends on the
technological equipment and its energy efficiency.
Analysis of the current situation serves as raw data for
prediction models. Statistical data are not only a source of
information, but also provide development indicators for
current technological solutions and for the development of
state economy in general.
To understand the restrictions on wood fuel use in the Latvian energy sector and to develop a baseline upon which the future vision is built, statistical data for the last four years were analyzed.
As shown in Figure 3, the data on annual primary energy consumption and energy end-use reach a maximum in 2010, followed by a decline. However, the import of primary energy increased dramatically, by 21.6%, in 2011 in comparison with the previous year. The same graph also shows Latvian energy resource production volumes, which have been holding at the same level for the last three years.

Fig.3 Changes in energy use in Latvia over the period 2008-2011 [6]
There could be several reasons for the observed decrease in energy consumption in 2011. The most commonly agreed reason for this reduction is the improvements in energy efficiency introduced within the energy sector. Therefore, the energy efficiency of primary energy use in Latvia was determined; it is presented in Figure 4.

Fig.4 Energy efficiency of primary energy use in Latvia
Figure 5 illustrates the relationship between the primary energy consumption during the period 2008-2011 and the efficiency of primary energy use.

Fig.5. Relation between primary energy consumption and energy efficiency of
fuel use
According to the results, the highest energy efficiency of primary energy use in the Latvian energy sector was reached in 2009 (90.5%), while the lowest energy consumption was observed in 2011. A reduction in energy efficiency was observed in the last two years (89.8%); in 2011 the energy consumption decreased as well. The data in Figure 5 show only a weak correlation between the primary energy consumption and the efficiency of primary energy use (R² = 0.0429 for all energy resources and R² = 0.1883 for wood fuel). The results indicate that the reduction of primary energy consumption is not caused by
a more efficient use of energy resources. Such tendencies indicate that the development of the energy sector runs contrary to governmental statements.
A slightly different situation in the past years has emerged
in relation to the use of wood energy in Latvia (see Figure 6).
The maximum consumption of wood fuel was reached in
2009. An evaluation of the data illustrates an unprecedented
decrease of wood fuel use in 2010 and 2011, which could be
explained by the replacement of wood fuel with imported
fossil fuel.
The energy efficiency of wood fuel use is less than that of
primary energy use in the energy sector of Latvia. The highest
energy efficiency of wood fuel use (88.2%) was reached in
2009 (see Figure 4). Later on, in 2010 and 2011, a decrease in
energy efficiency was observed (87.2%). This means that the
reduction of wood energy consumption cannot be explained
by increased energy efficiency in the energy sector.

Fig.6. Wood fuel use in past and future in Latvia
B. Hypothesis of wood energy development
The matrix for creating a configuration of wood fuel
development hypothesis is built on green growth calculation
data. The test example presents results for Track 1; they are based on principles 1, 2, 3, 6, 7, 8, 11 and 15 from the above "Modules". The forecast of the wood fuel production amounts and of the share of wood fuel in primary energy consumption and energy end-use is presented in Figures 6 and 7. It presents a future development scenario with a gradual growth in wood energy consumption.
Based on statistical data on historical wood energy
consumption pattern, the share (%) of wood fuel in primary
energy consumption for the last four years was determined.
(see Figure 7). The difference in the shape of curves for wood
fuel share in primary energy and energy end-use originates
from changes in the amounts of supply and energy efficiency
of wood fuel use. Mathematical modelling of the wood fuel
share is based on green growth calculation data and results of
historical wood fuel share assessment. A gradual step-by-step
increase over the next years will allow reaching a 43% wood
fuel share in energy end-use balance of Latvia.

Fig.7. Historical data and forecast of wood fuel share
IV. CONCLUSIONS
Analysis of the statistical data on annual consumption of
primary energy resources and energy end-use in Latvia
presents a maximum for both in 2010. However, the import of
primary energy increased dramatically (21.6%) in 2011 in
comparison with the previous year. Volumes of primary
energy resource production have been holding at the same
level in the last three years. A decrease in energy efficiency of
primary energy use is observed in the last two years (89.8%).
The reduction of primary energy consumption is not caused by
more efficient use of energy resources, which is in conflict
with governmental legislative documents.
Maximum consumption of wood fuel was reached in 2009.
Evaluation of data illustrates unpredicted decrease in wood
fuel consumption in 2010 and 2011, which could be explained
by the replacement of wood fuel by imported fossil fuel.
Energy efficiency of wood fuel use is less than that of primary
energy use in the energy sector. The highest energy efficiency
of wood fuel use in the Latvian energy sector was reached in
2009, 88.2 % respectively. Reduction of energy efficiency was
observed in the last two years (87.2 %). This indicates that the
decrease in wood energy consumption cannot be explained by
improved energy efficiency in the energy sector.
A gradual step-by-step increase over the years will allow
reaching a 43% share of wood fuel in the energy end-use
balance of Latvia by 2020.
REFERENCES
[1] Energy Strategy 2050 - from coal, oil and gas to green energy. Summary. The Danish Government, 2011, 15 p.
[2] Färdplan för ett Sverige utan klimatutsläpp 2050. Sammanfattning. 2013, 5 p.
[3] E. Perednis, V. Katinas, A. Marcinkevicius, "Assessment of wood fuel use for energy generation in Lithuania," Renewable and Sustainable Energy Reviews, vol. 16, issue 7, pp. 5391-5398, September 2012.
[4] K. Cemil, A. Y. Balaban, "Wood fuel trade in European Union," Biomass and Bioenergy, vol. 35, issue 4, pp. 1588-1599, April 2011.
[5] P. Senge, The Fifth Discipline: The Art and Practice of the Learning Organization, 1990.
[6] D. Dubrovskis, "Vietējo koksnes resursu mobilizācijas iespēju izvērtēšana jaunu augstākas pievienotās vērtības produktu un bioenerģijas ražošanai," Apvienotais pasaules latviešu zinātnieku 3. kongress un Letonikas 4. kongress, tēžu krājums, 2011. (in Latvian)
[7] Central Statistical Bureau of Latvia, Energy - Key Indicators, 2011.
The Small Hydropower Plant Income Maximization
Using Game Theory

Antans Sauhats, Professor,
Institute of Power Engineering
Riga Technical University, RTU
Riga, Latvia
sauhatas@eef.rtu.lv
Renata Varfolomejeva, Inga Umbrasko, Hasan Coban,
PhD students
Institute of Power Engineering
Riga Technical University, RTU
Riga, Latvia
renata_varfolomejeva@inbox.lv


Abstract—This paper considers the use of the energy of small rivers, the current system of support for relevant projects, its drawbacks and the opportunities to remove them. A cooperative game theory approach is used for the analysis of the regime management of small-scale hydro power plants (SHPP). Technical and economic aspects of the issue are observed. The obtained results demonstrate the validity of cooperation for obtaining additional income.
Keywords—hydroelectric power generation, power generation planning, smart grids, game theory
I. INTRODUCTION
The rise in energy consumption, the growing dimensions of
power systems, their degree of complication and significance,
the increase in the prices of energy carriers, the influence of
occasional factors and uncertainty all of the aforementioned
has sharpened a number of serious energy-related problems.
• Efficiency and availability of power supply.
Unfortunately, the standards of living for different
layers of population differ even in developed countries
that are well provided with energy. Still larger are the
differences in living standards between industrially
developed countries and developing countries.
Provision with energy resources is very inhomogeneous
at various places of the world. As a result of this, one
fourth of the world's population still has no access to
electric energy sources and, consequently, to most of
the benefits offered by modern civilization. The main
reason for this is the energy price, which is inaccessible
to the poorer layers of population. The growth in the
energy prices hampers the development of industrial
production and consequently limits the opportunities to
solve many social and environmental problems.
• Reliability of power supply. Humanity has gradually
got accustomed to conditions that are unthinkable
without guaranteed energy supply and has adapted its
way of living accordingly. Even in case of short-term
power cuts, modern-day cities, industrial enterprises and
transportation systems suffer damage and large-scale
economic loss, emergency and catastrophe threats arise,
possibly even with large casualties.
• Environmental impact. Energy production is practically
impossible without influencing climate, the air and
water basin, the natural sceneries and, as a result, the
human living environment.
• Sustainability. This concept is linked to the limited
amount of basic resources available to modern society.
Although the amount of energy produced from
renewable sources has increased considerably over the
last decade, yet it is expected that almost 85 % of the
increase in the energy production amount will be related
to an increase in the consumption of fossil fuel.
The acuity of the above problems has resulted in decisions
on an international scale regarding the restructuring of power
systems and the use of market conditions and mechanisms in
the management of the development and operation of power
systems. The power system is divided into a number of legally
independent parts that compete with one another. Competition
is the main factor that can ensure rational development of
power systems. At competition conditions, it is inevitable that
those companies that make correct, technically and
economically substantiated decisions are more likely to
survive.
Division of a system into a number of parts diminishes the
dimensions of the objects to be managed. It seems that the
models and algorithms for management and decision-making
are simplified, yet at the same time, new problems emerge. Two main ways are generally recognized to solve the problems described above:
1. Use of distributed generation. Preference is given to
renewable energy sources.
2. Application of smart grid technologies, which uses
information and communications technology to gather and act
on information about the behaviors of suppliers and consumers,
in an automated fashion.
Both ways have to be used to improve the efficiency, reliability and sustainability, and to decrease the environmental impact of the production and distribution of electricity. The presence of the four above-mentioned problems in the formulation of the optimization task generates an equal number of targets. Very often there are contradictions, with some problems being eased at the expense of others. Proof of this is the use of renewable energy sources, particularly small-scale hydropower plants, the
implementation of which, in many cases is supported by
legislation.
This paper is devoted to the use of the energy of small rivers, the current system of support for relevant projects, its drawbacks and the opportunities to remove them.
II. SUPPORTING SCHEMES OF RENEWABLE GENERATION
Renewable energy in Latvia is promoted through different
support schemes as in the other European countries. Favorable
national feed-in support scheme for renewable generation in
Latvia provided for a guaranteed purchase price, which is
significantly higher than the electricity market price. Electricity
Market Law prescribes, that a producer who generates
electricity from renewable energy sources may acquire the
right to sell the produced electricity to a public trader within
the framework of the mandatory procurement at a guaranteed
purchase price (feed-in tariff). Feed-in tariff depends on the
kind of used energy resource and the installed capacity of the
power plant.
A significant drawback of this mechanism is that producer revenues are independent of market price fluctuations. Producers that sell electricity under the mandatory procurement are not interested in harmonizing their power generation schedule with the market price schedule, as the produced energy has the same price at all times.
There are more than 200 medium and small rivers in Latvia
and more than 140 small-scale hydropower plants (5 MW or
less).
It would be possible to increase output by 10-20% by
modernizing the production process. Total hydroenergy
potential of small and middle size rivers is at least 4 times
bigger.
III. COALITION ESTABLISHMENT AND SHAPLEY VALUE
In game theory, the Shapley values a solution concept in
cooperative game theory [11], [12]. To each cooperative game
it assigns a unique distribution (among the players) of a total
surplus generated by the coalition of all players. A coalition of
players cooperates, and obtains a certain overall gain from that
cooperation. The Shapley value provides one possible answer
to this question.
The main idea of the paper is based on the creation of a coalition between the SHPPs and the public trader. In that case, as shown below, there appear:
• the ability to obtain additional income;
• the problem of equitable distribution of the income among the members of the coalition [11], [12], [13].
This problem can be solved by using the methods of co-
operative games theory.
A coalition does not require the repeal of the existing
legislation on support of renewable energy sources. At the
same time the results of this work can be considered as an
argument for amendments of legislation in the future.
To formalize this situation, we use the notion of a coalitional game: we start out with a set N (of n players) and a function $v : 2^N \rightarrow \mathbb{R}$ with $v(\emptyset) = 0$, where $\emptyset$ denotes the empty set. The function v that maps subsets of players to real numbers is called a characteristic function.
The function v has the following meaning: if S is a coalition of players, then v(S), called the worth of coalition S, describes the total expected sum of payoffs the members of S can obtain by cooperation.
The Shapley value is one way to distribute the total gains to the players, assuming that they all collaborate. According to the Shapley value, the amount that player i gets in a coalitional game $(v, N)$ is

$$\varphi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n - |S| - 1)!}{n!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr), \qquad (1)$$
where n is the total number of players and the sum extends
over all subsets S of N not containing player i. The formula can
be interpreted as follows: imagine the coalition being formed
one actor at a time, with each actor demanding their
contribution $v(S \cup \{i\}) - v(S)$ as a fair compensation, and then
for each actor take the average of this contribution over the
possible different permutations in which the coalition can be
formed.
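For readers who wish to experiment with formula (1), the following short Python sketch (illustrative only, not code from the paper) computes the Shapley value of a game whose characteristic function is given as a dictionary over frozensets of players; the two-player game at the end is a made-up example.

```python
# Shapley value via formula (1); v must be defined for every subset of players.
from itertools import combinations
from math import factorial

def shapley(players, v):
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                S = frozenset(S)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (v[S | {i}] - v[S])   # weighted marginal contribution
        phi[i] = total
    return phi

# Hypothetical two-player example: cooperation creates a surplus of 10.
v = {frozenset(): 0, frozenset({'a'}): 20, frozenset({'b'}): 30, frozenset({'a', 'b'}): 60}
print(shapley(['a', 'b'], v))   # {'a': 25.0, 'b': 35.0}
```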
If the coalition is formed by all participants and is known, it is not necessary to determine the mathematical expectation over different coalition variants, and expression (1) can be written as [18]:
$$\varphi_i(v) = \frac{1}{|N|!} \sum_{R} \bigl( v(P_i^R \cup \{i\}) - v(P_i^R) \bigr), \qquad (2)$$

where the sum ranges over all $|N|!$ orders R of the players and $P_i^R$ is the set of players in N which precede i in the order R.
The Shapley allocation has an inherent and significant drawback: in the general case, the volume of calculations needed to determine the Shapley value increases catastrophically with the number of players [11]. The task discussed below is indeed formulated for a large number of players, but the specific features of their unification into a coalition lead to an ultimate simplification of the Shapley distribution calculations.
IV. SPECIFIC FEATURES OF SHPP UNIFICATION INTO A
COALITION
Consider a simplified description of a power system business. We have an owner (power system operator) o, which does not produce energy but provides regime planning and operation, meaning that without it no gains can be obtained. Then we have k SHPPs $w_1, \ldots, w_k$, each i of which contributes an amount $s_i p$ to the total profit. The contribution of each power plant is only possible in a coalition with the operator and is independent of the participation or non-participation of the other hydropower plants in the coalition. So $N = \{o, w_1, \ldots, w_k\}$, $v(S) = 0$ if o is not a member of S, and $v(S) = \sum_{w_i \in S} s_i p$ if S contains the owner. A coalition of SHPPs without the operator is not possible because it does not give additional profit. Computing the Shapley value for this coalition game leads to a value of $(s_1 p + s_2 p + \ldots + s_k p)/2$ for the owner and $s_i p / 2$ for each SHPP.
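This closed-form split can be checked directly by averaging marginal contributions over all orderings, as in formula (2). The short self-contained sketch below is illustrative only (the contribution values are hypothetical, not from the paper):

```python
# Check: with an operator 'o' and SHPPs whose contributions only count when 'o'
# is present, the operator's Shapley value is half of the total contribution and
# each SHPP keeps half of its own contribution. Values are hypothetical.
from itertools import permutations

contrib = {'w1': 100.0, 'w2': 40.0, 'w3': 25.0}   # hypothetical s_i * p values

def v(S):
    return sum(contrib[w] for w in S if w != 'o') if 'o' in S else 0.0

players = ['o'] + list(contrib)
phi = {p: 0.0 for p in players}
orders = list(permutations(players))
for order in orders:                 # formula (2): average marginal contribution
    seen = set()
    for p in order:
        phi[p] += v(seen | {p}) - v(seen)
        seen.add(p)
phi = {p: val / len(orders) for p, val in phi.items()}
print(phi)   # 'o': 82.5 (half of 165), 'w1': 50.0, 'w2': 20.0, 'w3': 12.5
```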
V. SHPP REGIME OPTIMIZATION
The idea of the regulation process is that in some periods of time the SHPP works with water consumption exceeding the inflow, consuming water from the reservoir before the dam, while in other periods the SHPP uses less water than the inflow and fills up the reservoir before the dam.
The time interval from the beginning of one period of reservoir drawdown to the next drawdown period after its filling up is called a regulation cycle.
The small capacity of water reservoir (without opportunity
of long regulation) does not allow using regular changes in
seasonal water inflow. According to this condition planned
drawdown and filling up of water reservoir with small capacity
can be made only in connection with regular changes of the
total electrical load in the power system, which has the daily
and weekly periodicity.
The change of water pressure on SHPP is caused by the
change of water level in upstream and downstream. This is due
to the water use through the turbines of SHPP. Hence, the
change of the water level should be restricted by $H_{max}$ from the top and by $H_{min}$ from the bottom, i.e.

$$H_{min} \le H \le H_{max}, \qquad (3)$$
The capacity of a hydro unit is determined with the expression:

$$P_{SHPP\,j} = 9.81\, Q_j H_j \eta_{HA}, \qquad (4)$$

where $P_{SHPP}$ is the SHPP capacity, kW; $Q$ is the water flow through the turbine, m³/s; $H$ is the difference between water levels at the SHPP, m; $\eta_{HA}$ is the efficiency factor of the hydro unit in relative units: $\eta_{HA} = \eta_{turb}\,\eta_{G}$, where $\eta_{turb}$ is the turbine efficiency factor in relative units and $\eta_{G}$ is the generator efficiency factor in relative units [4], [5], [6], [7], [17].
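As a purely illustrative check of formula (4) (the numbers are hypothetical and not taken from the paper): with a flow of Q = 4.8 m³/s, a head of H = 8 m and an overall efficiency of η_HA = 0.8, the capacity is P ≈ 9.81 · 4.8 · 8 · 0.8 ≈ 301 kW, i.e. of the same order as the nominal capacities of the plants considered below.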
The mathematical task of SHPP maximal income deriving
in market conditions can be formulated as follows. It is
required to determine the SHPP operating schedule by
providing maximum income for the regulation cycle T.
$$I(P_1, P_2, \ldots, P_J) = \sum_{j=1}^{J} I_j(c_j, P_j) \rightarrow \max, \qquad (5)$$
under condition (3) and the condition of using the set amount of water $W_J$ in the water reservoir

$$\sum_{j=1}^{J} Q_j\, t_j = W_J, \qquad (6)$$
where $I_j(c_j, P_j)$ is the income from the sale of electricity produced at the SHPP during the time interval $t_j$ at the known market price $c_j$, €; the regulation cycle duration is $T = \sum_{j=1}^{J} t_j$; $Q_j$ is the water flow through the SHPP flap during the time interval $t_j$, m³/s; $W_J$ is the set amount of water that can be passed through the SHPP flap per regulation cycle (day, week, etc.).
The time interval equals $t_j = 1$ hour for the daily regulation cycle of the SHPP. The power generation at the SHPP during the j-th interval $t_j$ is defined as $P_j\, t_j$. At the known natural inflow of the river $Q_{flow}$ (due to which the reservoir is filled up), the used water flow in each time interval of regulation is determined by the value $Q_j$, which depends on the usage of the water reservoir capacity (m³) [5], [6].
The water pressure in the dam at the SHPP in the j-th time interval varies depending on the amount of water worked through the turbine:

$$H_j = H_{j-1} - \Delta H_j, \qquad (7)$$

where $H_{j-1}$ is the water pressure in the $t_{j-1}$ time interval, m; $\Delta H_j$ is the water pressure change depending on the worked-out water amount, m³ (or $\Delta h_j = var$, m), and on the natural water inflow amount of the river $Q_{piepl} = const$, m³ (or, expressed as the water level increase of the dam surface, $\Delta h_{piepl} = const$, m).
The operability of the developed algorithm is illustrated on the example of two SHPP regime optimizations.
1. The main data of the first SHPP, which allow its regulation, are as follows: the maximal level of the water reservoir is 8.2 m; the nominal capacity is 300 kW; the year-average inflow into the water reservoir is 2.4 m³/s. Due to the environmental protection regulations in Latvia, the minimal level of the water in the SHPP reservoir should not be less than 7.9 m [2], [3].
2. The main data of the second SHPP, which allow its regulation, are as follows: the maximal level of the water reservoir is 8.3 m; the nominal capacity is 500 kW; the year-average inflow into the water reservoir is 3.0 m³/s. Due to the environmental protection regulations in Latvia, the minimal level of the water in the SHPP reservoir should not be less than 8.0 m.
The SHPP income is found for regime regulation: considering the water inflow and the water level restrictions, the SHPP optimizes its working regime.
The optimization was made for the summary income (the objective function) over the whole day period (24 hours). The results have been obtained by using the nonlinear programming generalized reduced gradient (GRG) method [9]. Genetic algorithms (evolutionary methods) and dynamic programming (DP) can also be used for this task. The GRG method can provide a more accurate result than DP, because the GRG method does not depend on the discretization, i.e. the water level step value. The superiority of the GRG method over the DP method in such a task is considered in [14]. The use of genetic algorithms is discussed in [15], [16].
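To make the formulation (3)-(6) concrete, the following minimal Python sketch solves a simplified daily version of the problem. It uses SciPy's SLSQP solver rather than the GRG implementation applied by the authors, assumes a constant head, and uses a hypothetical efficiency, price pattern and usable reservoir volume; none of these numbers are taken from the paper.

```python
# Simplified daily SHPP income maximisation (illustrative sketch, SLSQP instead of GRG).
import numpy as np
from scipy.optimize import minimize

hours = 24
price = np.array([0.03] * 7 + [0.06] * 10 + [0.03] * 7)   # hourly price, EUR/kWh (assumed)
q_in = 2.4        # natural inflow, m3/s (assumed)
head = 8.0        # simplified constant head, m (assumed)
eta = 0.85        # overall hydro-unit efficiency (assumed)
q_max = 6.0       # maximal turbine flow, m3/s (assumed)
v_max = 20000.0   # usable storage between H_min and H_max, m3 (assumed)

def power_kw(q):                       # formula (4): P = 9.81 * Q * H * eta
    return 9.81 * q * head * eta

def neg_income(q):                     # negative daily income, EUR (1-hour intervals)
    return -np.sum(price * power_kw(q) * 1.0)

def storage(q):                        # usable storage trajectory above H_min, m3
    return np.cumsum((q_in - q) * 3600.0)

cons = [
    {"type": "ineq", "fun": lambda q: storage(q)},           # level never below H_min
    {"type": "ineq", "fun": lambda q: v_max - storage(q)},   # level never above H_max
    {"type": "eq",   "fun": lambda q: storage(q)[-1]},       # use exactly the daily inflow, eq. (6)
]
res = minimize(neg_income, x0=np.full(hours, q_in), bounds=[(0.0, q_max)] * hours,
               constraints=cons, method="SLSQP")
print("optimal hourly flows, m3/s:", np.round(res.x, 2))
print("daily income, EUR:", round(-res.fun, 2))
```

The solver shifts the discharge into the high-price hours while respecting the storage limits, which is the behaviour reported below for the market-price optimization.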
Considering that for the first 20 years from the date of the decision granting the SHPP the right to sell the produced electricity within the scope of the mandatory procurement the SHPP sells electricity at the feed-in tariff, it is relevant to optimize the power station operation regime at a constant price value (0.18 €/kWh) [8]. In this case, the SHPP increases its income by maximizing power production (Fig. 2, Fig. 4). The water level change charts (accumulation and drawdown of water) are presented in Fig. 1 and Fig. 3.
Fig. 1. Water level chart for the first SHPP; the fixed price is used in the optimization.
Fig. 2. The price and generated power graphs for the first SHPP; the fixed price is used in the optimization.
Fig. 3. Water level chart for the second SHPP; the fixed price is used in the optimization.
Fig. 4. The price and generated power graphs for the second SHPP; the fixed price is used in the optimization.
The income at the feed-in tariff (from optimization considering the natural inflow and the ability to store up water, Fig. 1 - Fig. 4) is about 703.24 € for the first SHPP and 891.42 € for the second SHPP.
The public trader (AS Latvenergo) buys and sells electricity on the Nord Pool Spot power exchange [1] and must buy all electricity produced under the mandatory procurement. As previously mentioned, SHPPs are not interested in harmonizing their power generation schedule with the market price schedule, as the produced energy has the same price at all times. They produce electricity at their own discretion and can work at full capacity in the hours with minimal load, which adversely affects the public trader. That is why it is important to optimize the SHPP regime considering the price changes in the market (for example, Nord Pool Spot).
Such an approach can lead to additional income for the public trader. To motivate the SHPPs to work according to the market price schedule, the public trader shares this additional income with the SHPPs. Naturally, the SHPPs still sell the produced electricity to the system operator at the feed-in tariff.
The obtained results (Fig. 5 - Fig. 8) show that when the regime is optimized by the market price, the SHPP accumulates water when the electricity price at the market is relatively low and exhausts water at the high price level, considering restriction (3) and the maximum power restriction.
The income at the market price (from optimization considering that the SHPP produces power depending on the market price but sells the produced electricity at the fixed tariff, Fig. 5 - Fig. 8) is about 692.94 € for the first SHPP and 875.76 € for the second SHPP.
Fig. 5. Water level chart for the first SHPP; the market price is used in the optimization.
Fig. 6. The price and generated power graphs for the first SHPP; the market price is used in the optimization.
Fig. 7. Water level chart for the second SHPP; the market price is used in the optimization.
Fig. 8. The price and generated power graphs for the second SHPP; the market price is used in the optimization.
So, how should all the additional income be distributed if a coalition of market participants is formed?
The public trader (player 3) buys electricity from the SHPPs, and if it is not in a coalition with them, $v(3) = 0$. If the SHPPs are not in a coalition with the public trader, they get the income from selling electricity at the feed-in tariff: the first SHPP (player 1) $v(1) = 703.24$, and the second SHPP (player 2) $v(2) = 891.42$. If there is a coalition of the two SHPPs, the summary income is $v(1,2) = 1594.65$. The coalition of the first SHPP with the public trader brings an income of $v(1,3) = 713.43$; accordingly, the coalition of the second SHPP with the public trader brings an income of $v(2,3) = 910.51$. The coalition of all three companies would provide the income $v(1,2,3) = 1623.936$. In that way, the worth of all the coalitions can be determined as:

$$v(S) = \begin{cases} 703.24, & S = \{1\}; \\ 891.42, & S = \{2\}; \\ 0, & S = \{3\}; \\ 1594.65, & S = \{1, 2\}; \\ 713.43, & S = \{1, 3\}; \\ 910.51, & S = \{2, 3\}; \\ 1623.936, & S = \{1, 2, 3\}. \end{cases}$$
The co-operative game $(v, N)$ is called essential if

$$\sum_{i \in N} v(\{i\}) < v(N). \qquad (8)$$

A division (imputation) of the game $(v, N)$ is a vector $x = (x_1, x_2, x_3)^T$ which meets the following conditions:
$\sum_{i \in N} x_i = v(N)$ (condition of co-operative expediency);
$x_i \ge v(\{i\})$ for all $i \in N$ (condition of individual expediency).
In this game the divisions are the vectors $x = (x_1, x_2, x_3)^T$ which meet the following conditions:
$$\begin{cases} x_1 + x_3 \ge 713.427; \\ x_2 + x_3 \ge 910.509; \\ x_1 + x_2 + x_3 = 1623.936; \\ x_1 \ge 703.237; \\ x_2 \ge 891.415; \\ x_3 \ge 0. \end{cases}$$
For a small number of players n, the calculation process of the Shapley value is easy to describe in table form.
TABLE I. PARTICIPANT INCOME DETERMINATION

Variations        The Participants' Income, €
                  1          2          3
1, 2, 3           703.237    891.415    29.283
1, 3, 2           703.237    910.509    10.189
2, 1, 3           703.237    891.415    29.283
2, 3, 1           713.427    891.415    19.094
3, 1, 2           713.427    910.509    0
3, 2, 1           713.427    910.509    0
Average income    708.332    900.962    14.642

The result (the Shapley vector) is given in the last row of Table I:

$$x = (x_1, x_2, x_3)^T = (708.33,\ 900.96,\ 14.64)^T.$$
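Table I and the Shapley vector can be reproduced mechanically. The short Python sketch below is illustrative only (it is not the authors' code); it recomputes the marginal contributions for all six orderings from the characteristic function given above, taking $v(\{1,2\})$ as the exact sum 1594.652 to match the table's precision.

```python
# Recompute Table I and the Shapley vector from the characteristic function v(S).
from itertools import permutations

v = {frozenset(): 0.0, frozenset({1}): 703.237, frozenset({2}): 891.415,
     frozenset({3}): 0.0, frozenset({1, 2}): 1594.652, frozenset({1, 3}): 713.427,
     frozenset({2, 3}): 910.509, frozenset({1, 2, 3}): 1623.936}

phi = {1: 0.0, 2: 0.0, 3: 0.0}
orders = list(permutations([1, 2, 3]))
for order in orders:
    S = frozenset()
    for p in order:                     # marginal contribution of player p in this ordering
        phi[p] += v[S | {p}] - v[S]
        S = S | {p}
phi = {p: round(val / len(orders), 3) for p, val in phi.items()}
print(phi)   # {1: 708.332, 2: 900.962, 3: 14.642}
```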
CONCLUSION
The maximal income of an SHPP can be obtained by conducting its regime according to the schedule of electricity price changes.
In terms of operation, in a period of low prices an SHPP can be shut down, accumulating water. It is required to consider the restrictions on the natural water flow of small rivers and the possible amount of water that may be consumed by an SHPP during the day.
The example of this paper shows that participants could get
the additional income from cooperation in the game.
REFERENCES
[1] Nord Pool Spot home page: http://www.nordpoolspot.com/Market-data1/Elspot/Area-Prices/ALL1/Hourly/
[2] Small Hydropower Association home page: http://www.mhea.lv/component/content/article/62/73-brzes-dzirnavu-hes.html/ (in Latvian)
[3] Latvijas Vides, ģeoloģijas un meteoroloģijas centrs home page: http://www.meteo.lv/en/ (in Latvian)
[4] V. M. Gornshteyn, The Most Profitable Operating Regimes of Hydro Power Plants in Power Systems. Moscow: Gosenergoizdat, 1959, 248 p. (in Russian)
[5] J. Gerhards, A. Mahnitko, The Power System Regime Optimization. Riga: Riga Technical University, 2005, 249 p. (in Latvian)
[6] L. P. Mikhailov, B. N. Feldman, T. K. Markanova et al., Small Hydroenergetics. Moscow: Gosenergoizdat, 1989, 184 p. (in Russian)
[7] J. Balodis, Small Hydropower Plants. Riga: Latvian State Publishing, 1951, 155 p. (in Latvian)
[8] Cabinet Regulation No. 262 of 16 March 2010, "Regulations Regarding the Production of Electricity Using Renewable Energy Resources and the Procedures for the Determination of the Price".
[9] R. Varfolomejeva, I. Umbrasko, A. Mahnitko, "The Small Hydropower Plant Operating Regime Optimization by the Income Maximization," in Proc. PowerTech Grenoble 2013, Grenoble, 2013, pp. 1-6.
[10] R. Tiainen, T. Lindh, J. Ahola, M. Niemelä, V. Särkimäki, "Energy price-based control strategy of a small-scale head-dependent hydroelectric power plant," in Proc. International Conference on Renewable Energies and Power Quality, 2008.
[11] E. Faria, L. A. Barroso, R. Kelman, S. Granville, M. V. Pereira, "Allocation of firm-energy rights among hydro plants: an Aumann-Shapley approach," IEEE Transactions on Power Systems, vol. 24, no. 2, pp. 541-551, 2009.
[12] M. Zima-Bockarjova, J. Matevosyan, M. Zima, L. Söder, "Sharing of profit from coordinated operation planning and bidding of hydro and wind power," IEEE Transactions on Power Systems, vol. 25, no. 3, pp. 1663-1673, August 2010, doi: 10.1109/TPWRS.2010.2040636.
[13] M. Bockarjova, M. Zima, G. Andersson, "On allocation of the transmission network losses using game theory," in Proc. 5th International Conference on the European Electricity Market (EEM 2008), 2008, pp. 1-6.
[14] R. Varfolomejeva, I. Umbrasko, A. Mahnitko, "Algorithm of intellectual control system operation of small hydropower plant," in Proc. 12th International Conference on Environment and Electrical Engineering (EEEIC 2013), Wroclaw, Poland, May 2013, pp. 414-418.
[15] P. Sangsarawut, A. Oonsivilai, T. Kulworawanichpong, "Optimal reactive power planning of doubly fed induction generators using genetic algorithms," in Proc. 5th IASME/WSEAS International Conference on Energy and Environment, UK, 2010, pp. 278-282.
[16] S. Sanaye, A. Mohammadi Nasab, "Modeling and optimization of a natural gas pressure reduction station to produce electricity using genetic algorithm," in Proc. 6th WSEAS International Conference on Energy, Environment, Sustainable Development and Landscaping, Romania, 2010, pp. 62-70.
[17] M. Grigoriu, M. Popescu, "Hydropower preventive monitoring action plan," in Proc. 5th IASME/WSEAS International Conference on Energy and Environment, UK, 2010, pp. 265-270.
[18] Y. Narahari, Lecture Notes, Game Theory: Cooperative Game Theory. Department of Computer Science and Automation, India, 2009, pp. 1-12.
The Eastern Baltic LNG terminal as a prospect to
improve security of regional gas supply
Kati Kõrbe Kaare, Ott Koppel, Ando Leppiman
Department of Logistics and Transport
Tallinn University of Technology
19086 Tallinn, Estonia
E-mail: {kati.korbe, ott.koppel, ando.leppiman}@ttu.ee


Abstract—One of the crucial issues in Europe at the moment
is securing reliable gas supply. Achieving security of gas supply
implies diversifying gas sources, while having enough supply,
transportation, and storage capacity to meet demand peaks and
supply interruptions. In 2013, the Baltic States still remain
disintegrated from the rest of Europe in one crucial way: their
natural gas infrastructure isolates them into energy islands.
The Eastern Baltic Sea European Union (EU) member states of
Finland, Estonia, Latvia and Lithuania are the only ones which
remain isolated from the present integrated EU natural gas
transmission system. The gas demand in these isolated member
states is approximately ten billion cubic meters (bcm) of natural
gas per year. The third energy package of EU proposes a new
series of measures to promote competition and create a single
European energy market. Estonia, Latvia, Lithuania and Finland
now for the first time have a chance to secure their energy
independence by connecting their natural gas systems with those
of their European allies and evolving them into market-based
trading systems. Liquefied natural gas (LNG) is an important
energy source that contributes to energy security and diversity,
therefore a concept of a regional LNG terminal has been
proposed. In this paper the authors give an overview of the
current situation and present possible future scenarios with the
development of Eastern Baltic regional LNG terminal. 2013 is a
crucial time, as in September the decision will be made regarding whether the regional LNG terminal will be chosen as a project of
common interest in the trans-European energy networks.
Keywords—LNG; security of supply; regional terminals
I. INTRODUCTION
More than two decades after the end of the Soviet
occupation and eight years after the Baltic States joined NATO
and the EU, they remain disintegrated from the rest of Europe
in one crucial way: their natural gas infrastructure isolates them
into energy islands. As a Soviet-era legacy, the natural gas
networks of Estonia, Latvia, Lithuania and Finland are supplied
only by Gazprom through links to the grids of Belarus,
Russia's Kaliningrad Oblast, and mainland Russia.
The isolation of those states from the EU's natural gas networks is incompatible both with these states' individual economic needs and with the EU's collective vision of a unified European energy market. In 2013, Estonia, Latvia, Lithuania and Finland are closer than ever to making concrete steps towards securing their energy independence by connecting their natural gas systems with those of their European allies. This is the ultimate goal of the third energy package of the EU: to promote competition and create a single European energy market. In parallel with infrastructure planning, Lithuania, Latvia, Estonia and Finland are advancing market liberalization with the intention of introducing market-based trading systems.
EU energy policy now aims to couple Baltic natural gas
networks with those of their EU allies in pursuit of two key
strategic goals: creation of a single unified energy market in
Europe; and completion of a post-Cold War Europe that is
whole and free [1]. Cooperation in the framework of Baltic
Energy Market Interconnection Plan (BEMIP) between eight
Baltic Sea EU Member States is being carried out and a
Memorandum of Understanding with an Action Plan was
signed on 17 June 2009 and is going to be fulfilled.
In order to link the isolated East-Baltic region to the
European natural gas market, thus enhancing security of
supply, ending single supplier dependency and increasing
diversification, BEMIP identified key gas infrastructure
investments, including a regional LNG terminal for Estonian,
Latvian, Lithuanian and Finland's needs [2].
The European Commission (EC) has considered two
options for connecting the Baltic States to the European natural
gas network either by interconnector and/or by an LNG import
terminal. Numerous LNG projects have been proposed in
recent years for the Eastern Baltic region [2]. In September
2013 according to the Action Plan concrete measures in
infrastructure development are going to be agreed upon and
selection into the list of Projects on Common Interest (PCI) in
accordance with Trans-European Energy Networks (TEN-E)
delegated acts are going to be made.
This paper focuses on comparing the final three projects
proposed to develop the Eastern Baltic regional LNG terminal
and their role in improving the security of gas supply of the
mentioned region.
II. BACKGROUND
European experts have predicted European gas demand to stay largely flat over the next twenty years due to a heavy emphasis on renewable energy. Europe is expected to become an increasingly significant importer of gas, as at the same time gas production in Europe itself is set to fall because of the depletion of the United Kingdom and Dutch reserves. With rising imports, gas prices are expected to rise as well, and according to experts Asian price levels will be reached from 2025 onwards [3].
One of the crucial issues in Europe at the moment is
reliable gas supply. This subject became even more important
after gas supply interruptions and limitations took place in
January of 2009 in some EU countries. At present security of
supply is on top of the agenda of the EC [4].
The Eastern Baltic gas market (Finland, Estonia, Latvia and Lithuania) currently has an aggregated demand of about 10 bcm per year (Table I), which is expected to remain flat (compound annual growth rate 0.3%) unless major discontinuities take place. If gas supply diversification were enhanced and the required infrastructures were developed accordingly, the market could grow up to 16 bcm, with an additional upside of 1.5 bcm for LNG bunkering [3].
TABLE I. NATURAL GAS ANNUAL CONSUMPTION (MILLION m³) [5], [6]
Country 2007 2008 2009 2010 2011
Estonia 1,003 962 653 701 632
Latvia 1,645 1,665 1,528 1,821 1,604
Lithuania 3,720 3,245 2,727 3,115 3,398
Finland 4,587 4,728 4,446 4,701 4,105
Total 10,955 10,183 9,585 10,338 9,739

Currently, the Great Baltic area relies entirely on Russian gas supplies and only Latvia and Finland are compliant with the N-1 rule, which refers to the security of supply. Several projects have been proposed to end the isolation of the Eastern Baltic market, and some of them are included in BEMIP.
These projects can be clustered in three groups [3]:
• upgrades of the existing interconnections - Intra-Baltic connections;
• new pipeline connections, such as the Balticconnector and the Gas Interconnection Poland-Lithuania (GIPL);
• a new LNG terminal (six projects proposed in different port locations).
A joint implementation of Intra-Baltic connections,
Balticconnector and GIPL would help the area to achieve some
degree of supply diversification (about 33% of diversified
gas, mainly in Latvia and Lithuania), but the security of supply
in Lithuania would only marginally improve [3].
To expand supply options and achieve security of supply, an LNG terminal of 4 bcm per year can be considered, with potential for future scalability. According to experts' simulation, in a base case demand this terminal will probably be utilized at 50% of its capacity and Russian contracts might be utilized at the minimum quantity intake. The remaining LNG capacity could provide flexibility for peak shaving. This could help to further diversify the Baltic supply mix (ca 60% Russian gas, 20% LNG, 20% gas imported from the European network). A larger terminal would be almost unutilized in the base case demand [3].
With the assumption that each Baltic country would have to
achieve the same diversification target and equally comply
with N-1 rule (see below), the location that minimizes further
network upgrades and optimizes gas grid flows is Estonia [3].
Numerous LNG projects have been proposed in recent
years for the Eastern Baltic region. Different port locations
might be eligible for the realization of the LNG terminal (Muuga, Paldiski and Sillamäe in Estonia, Riga and Ventspils in Latvia, Klaipeda in Lithuania) [3].
Klaipeda LNG terminal is the only project in the early
stages of implementation, potentially allowing for a detailed
assessment of the project cost. The adopted technical solution
for Klaipeda terminal is a Floating Storage Regasification
Units (FSRU) facility leased for ten years; the lease fee of 43
million euros per year covers for rent, financing cost and
overheads. The total cash-out over the lease period would be
430 million euros. Project promoter Klaipedos Nafta reports
the overall investment (discounted lease fees and buy-back
option) to be 250 million euros [3].
The combined market of Estonia, Latvia and Finland
amounts to approximately 6 bcm per year until 2020. The
market opening in Estonia combined with the expiry of the
Gazprom contracts implies an increasing need and opportunity
for shippers to diversify their sourcing portfolio. It will further
enable easier access to new entrants as new supply options
become available [2].
Russian imports will, however, still play an important role
in Estonian gas supply and the gas supplies from LNG terminal
will supplement the existing import source [2].
III. SECURITY OF SUPPLY
The security of energy supply (SOS) is one of the main objectives of EU energy policy [7]. Energy security is defined as the availability of a regular supply of energy at an affordable price; its dimensions are availability, accessibility, affordability and social acceptability. Energy security comes at a cost, and it is not a question of achieving it at any cost [8]. From a European perspective, energy security is most often discussed in terms of SOS, in other words with reference to the avoidance of sudden changes in the physical availability of energy relative to demand [9].
The definition has physical, economic, social and
environmental dimensions. A physical disruption can occur
when an energy source is exhausted or production is stopped,
temporarily or permanently. Economic disruptions are caused
by erratic fluctuations in the price of energy products on the
world markets, which can be caused by a threat of a physical
disruption of supplies. Recent energy market trends show that
there is another cause for concern, linked to speculative price
movements in anticipation of a potential disruption of supplies.
If commercial energy services and electricity are available,
income is the main factor that appears to influence a
household's choice of fuel. The measures of SOS can be
grouped into two categories: dependence, and vulnerability,
represented both in physical and economic terms. Physical
measures describe the relative level of imports or the prospects
for shortages and disruptions. Economic measures describe the
cost of imports or the prospects for price shocks [10].
It is therefore a question of guarding against such changes.
Energy security involves developing strategies to reduce, or
protect against, risks stemming from insufficient production
capacity, imported energy and also, in the case of network
industries, from transmission infrastructures (and thus transit
problems). The main concern here is the risk of interruption of
supplies and, more particularly, whether the energy is available
in sufficient quantities to meet demand [8].
The recent developments in the energy markets have
heightened concerns about the feasibility of supply security,
usually defined as a continuous availability of energy at
affordable prices. EU countries buy more than half of their
energy from non-EU sources. Since the demand for energy in the EU is growing, dependence on foreign suppliers will increase over time [7].
Energy security has risen in importance on the international
policy agenda during recent decades due to growing
dependence of industrialized economies on imported energy
consumption and the increased frequency of disruptions in
supply. In this context, the current European domestic energy
system is not sufficiently reliable or affordable to support
sustained economic growth.
Organization for Economic Co-operation and Development
(OECD) European countries are consuming more and more
energy and importing more and more energy products. As a
result, external energy dependence for all sectors of the
economy is constantly increasing, especially for oil and natural
gas. For the future, it is vitally important to be able to
implement measures that will allow an orderly and effective
response to the threat from energy insecurity [10].
IV. ESTONIAN SECURITY OF SUPPLY
One possibility to describe the SOS of natural gas in the Baltic States is through the EU regulation concerning measures to safeguard the security of gas supply [11].
According to the regulation, the N-1 criterion means an assessment of the situation in the event of a disruption of the single largest gas infrastructure delivery connection (1). If, in the event of interruption, it is possible to rearrange deliveries without any supply disruption, the N-1 criterion is met.
N-1 = (EPm + Pm + Sm + LNGm − Im) / Dmax · 100% ≥ 100%   (1)
where:
EPm – technical capacity of entry points (in million cubic meters per day), other than production, LNG and storage facilities covered by Pm, Sm and LNGm, means the sum of the technical capacity of all border entry points capable of supplying gas to the calculated area;
Pm – maximal technical production capability (in million cubic meters per day) means the sum of the maximal technical daily production capability of all gas production facilities which can be delivered to the entry points in the calculated area;
Sm – maximal technical storage deliverability (in million cubic meters per day) means the sum of the maximal technical daily withdrawal capacity of all storage facilities which can be delivered to the entry points of the calculated area, taking into account their respective physical characteristics;
LNGm – maximal technical LNG facility capacity (in million cubic meters per day) means the sum of the maximal technical daily send-out capacities at all LNG facilities in the calculated area, taking into account critical elements like offloading, ancillary services, temporary storage and re-gasification of LNG as well as the technical send-out capacity to the system;
Im – technical capacity of the single largest gas infrastructure (in million cubic meters per day) with the highest capacity to supply the calculated area; when several gas infrastructures are connected to a common upstream or downstream gas infrastructure and cannot be separately operated, they shall be considered as one single gas infrastructure;
Dmax – total daily gas demand (in million cubic meters per day) of the calculated area during a day of exceptionally high gas demand occurring with a statistical probability of once in twenty years.
Based on the calculations in the Joint risk assessment of
security of gas supply of Estonia, Latvia and Lithuania 2012
[12] the infrastructure standard N-1 for Estonia was 59.7%, for
Latvia 153.9% and for Lithuania 27.4%. Considering all three countries as a whole, in the event of a disruption of the single largest gas supply infrastructure, the natural gas supply line Minsk–Vilnius, the infrastructure standard N-1 was 129.7%.
On 7 November 2012 the Estonian transmission system operator EG Võrguteenus presented that, in accordance with the latest calculations, the N-1 criterion for Estonia is fulfilled (2), due to the increased pressure after the reconstruction works in Russia on the St. Petersburg–Narva pipeline [13].

N-1 = (14 + 0 + 0 + 0 − 7) / 6.7 · 100% = 104% ≥ 100%   (2)
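As a minimal illustration of how the N-1 criterion of equation (1) is evaluated, the short Python sketch below reproduces the calculation of equation (2); the function is a generic restatement of the regulation's formula, and the figures are those quoted for Estonia, in million cubic meters per day.

def n1_criterion(ep_m, p_m, s_m, lng_m, i_m, d_max):
    """Infrastructure standard N-1, eq. (1): remaining technical capacity
    after losing the single largest infrastructure, as % of peak demand."""
    return (ep_m + p_m + s_m + lng_m - i_m) / d_max * 100.0

# Estonian figures of eq. (2), in million cubic meters per day
n1 = n1_criterion(ep_m=14, p_m=0, s_m=0, lng_m=0, i_m=7, d_max=6.7)
print("N-1 = %.0f%% -> criterion %s" % (n1, "met" if n1 >= 100 else "not met"))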
Although infrastructure standard N-1 calculations show that
in the event of the largest capacity disruption the capacity of
the remaining infrastructure should be able to satisfy total gas
demand, response scenarios demonstrate that there will be a gas
shortage in the region due to internal bottlenecks. The main
bottlenecks in the system are the capacity of meter stations on
the borders as well as the Inčukalns underground gas storage
facility (UGS) send-out capacity in the spring.
To mitigate this risk and to improve the security of supply of natural gas to Estonia, the critical infrastructure given in the BEMIP Action Plan is needed, but most of it will become effective only after 2020. Therefore the only option to achieve real security of gas supply before 2020 is to build, at the earliest opportunity, the LNG supply option together with the storage of gas for vulnerable customers, in accordance with the EU Regulation concerning measures to safeguard security of gas supply.
According to the regulation, EU Member States or TSOs are required to guarantee 30 days of gas supply to vulnerable customers. Such customers make up approximately 5% of the Estonian market and, in accordance with the regulation, the demand of protected customers should be taken as one million m³ per day over 30 days.
An alternative supply facility such as an LNG terminal, however, would significantly lower the amount of gas that needs to be stored. The existence of an LNG terminal opens the possibility to supply the region with gas within a very short timeframe. Several possibilities have been analyzed for quick supply, but in the current situation the best options are Świnoujście and Nynäshamn. Therefore the LNG terminal capacity needs to be sufficient to hold only a five-day reserve.
V. LIQUEFIED NATURAL GAS
LNG is the liquid state of natural gas (NG), useful for transport/storage since LNG occupies about 1/600 of the volume of NG when the latter is in the gaseous state under normal conditions. LNG is an important energy source that contributes to energy security and diversity [14].
Since the first LNG ship arrived in Europe in 1964, the
LNG industry has been steadily growing, driven by rising
natural gas demand in countries where domestic production
inadequately covers local needs. Initially, Asia and Africa
produced the majority of LNG, but now the Middle East and
Trinidad have contributed to the production of LNG, and more recently the USA has started to import LNG via Sabine Pass while more terminals are waiting to get export approval [15].
In 2006, Qatar became the largest LNG producer in the
world [16]. The largest consuming regions for LNG include
Asia and Europe and they are expected to support substantial
new LNG demand growth. There is a unique advantage on
liquefaction as there is a volume reduction of about 630 times
on liquefaction [17], and LNG handling is more like handling
oil.
The ability to convert natural gas to LNG, which can be
shipped on specially built ocean-going ships, provides
consumers with access to vast natural gas resources worldwide.
LNG is ideally transported in cryogenic tankers by road, ships
and rail wagons. Further, tremendous cost reductions [18] have
been accomplished in all parts of the LNG chain in recent
years. The fall in tanker prices over the last decade led to a
much wider economic reach of LNG transportation. The
dramatic cost reductions for LNG liquefaction trains made
LNG projects viable even if only part of the capacity is secured
by long-term sales, so that the remainder could be sold on a
flexible or spot basis.
As most of the undeveloped gas reserves are located far
away from OECD markets, it is clear that LNG will play a key
role to bring this gas to the market, when distance or natural
obstacles make pipeline transport impossible. Hence the
increasing supplies of LNG, accompanied by the increased
flexibility in LNG trade are adding to the security of gas
supply. Like all natural gases, LNG is cleaner than coal or oil
and it offers an opportunity to diversify energy supplies.
Future use of LNG is expected to grow. By 2030, the LNG market is expected to change substantially, with a five-fold increase in volume to nearly 75 billion cubic feet per day, representing about 15% of the total gas market, up from about 5% in 2000
[19]. Two paradigm shifts in the world gas markets have resulted in European gas prices that are now about five times US levels, and Asian prices about eight times US levels.
The shale gas boom in the USA since 2010 has reduced the country's formerly large net gas imports to a marginal level. As a result, a significant amount of demand has exited the market and, furthermore, US LNG re-gasification terminals are starting to be converted into liquefaction plants and export terminals. The first operational LNG export facility, the Sabine Pass terminal, is expected to be ready for exports in 2015.
After the Fukushima catastrophe, Japan phased out nuclear energy, which has heavily increased the use of natural gas for power production. Having no natural gas of its own and no pipelines, this has had a significant impact on global LNG markets and has resulted in LNG cargoes for Europe being diverted to Asia. Additionally, gas demand in Europe has fallen significantly due to the drop in coal prices and the near collapse of the EU Emission Trading System. Coal consumption in Europe has risen significantly in the last two years.
As recent advances in technology have made LNG the quickest way for many countries to diversify their supplies of natural gas, the Baltic States and Finland should do the same. Price-wise, in the short term, while oversupply is an issue in Germany, supplying an Eastern Baltic LNG terminal could be attractive if prices are quoted on European hub prices. A commercially viable LNG terminal serving all three Baltic countries and Finland would ensure a year-round diversified supply of gas, which is the most fundamental element required for the emergence of a liquid trading hub.
Such diversification of supply would also partially undercut Gazprom's monopolistic tactics, even if GIPL fails to materialize and if Inčukalns remains under Gazprom's control [1].
VI. REGIONAL TERMINAL PROJECTS IN THE EASTERN BALTIC
There are several factors affecting the scope and services of
the terminal, including supply possibilities, development in
demand, and functionality of the terminal in terms of the type
of services provided. These factors are closely interrelated and
together determine the boundary conditions for the technical
parameters of the terminal.
The concept of the terminal is to cover the following areas
[2]:
- SOS in the form of long-term capacity reservations for the Estonian and Finnish Transmission System Operators (TSOs);
- commercial capacity for the interested gas shippers operating in Estonia, Latvia, Lithuania, and Finland;
- servicing of the off-grid market in Estonia, hereunder district heating plants not connected to the network, supplied via trucks;
- re-fuelling LNG-driven ships via re-loading facilities for bunker barges, which could then bring their cargo to the old port in Tallinn or other locations in Estonia or along the Finnish and Swedish coast.
The overall plan is to be able to service the shipping industry already from 2015, when the Sulphur Emission Control Areas (SECA) requirements enter into force. This will be done by implementing a small storage capacity to accommodate only the bunkering demand from the maritime sector. In the following, the key technical parameters, comprising send-out rates and tank size, which are needed to accommodate the increasing demand from these sectors, are discussed [2].
The overall investment for the LNG terminal and the proposed pipeline projects (Balticconnector, Intra-Baltic connections and GIPL) would be around 1.3 billion euros, covering the whole Eastern Baltic area for an addressable demand of 11 bcm per year, with an estimated increase of the regional transportation tariff of about 0.5 US cents per million British Thermal Units (MMBtu).
This would help the area to reach a diversification target of 63% by accessing the LNG market and western European gas hubs. Additional benefits are [3]:
- increased attractiveness of Inčukalns storage, granting access to Poland and Finland;
- an incremented role of the Baltic countries as a transit market for Russian gas to Europe;
- a balanced grid.
Two possible implementation strategies have been identified that might grant incremental benefits for the area. Those two options have been developed with the objective of granting security of supply and supply diversification equally to all involved countries [3].
The first option considers the implementation of GIPL and the Intra-Baltic connections: the overall investment spending would be in the range of 690–815 million euros and would address an overall demand pool of 5.5 bcm per year (Lithuania, Latvia and Estonia), with an estimated impact on the regional transportation tariff of about 0.65 US cents per MMBtu. This would help the area to reach a diversification target of 63% by accessing western European gas hubs. Additional benefits are [3]:
- increased attractiveness of Inčukalns storage, granting access to Poland and Finland;
- an incremental role of the Baltic countries as a transit market for Russian gas to Europe.
The second option considers the implementation of the LNG terminal, the Intra-Baltic connections and Balticconnector: the overall investment spending would be about 860 million euros, covering the whole Eastern Baltic area for an addressable demand of 11 bcm per year, with an estimated impact on the regional transportation tariff of about 0.3 US cents per MMBtu. This would help the area to reach a diversification target of 33% by accessing LNG markets. Additional benefits are [3]:
- increased attractiveness of Inčukalns storage, granting access to Finland;
- a balanced grid.
In conclusion, an integrated approach to infrastructure development may balance the value from pipelines and from LNG [3]:
- the proposed BEMIP pipeline investments alone do not allow all Baltic countries to fully meet the N-1 rule; conversely, an LNG terminal in Estonia with additional investments in interconnections would meet the target;
- the diversification opportunity offered by the LNG terminal would cap the Russian gas price, although it should be considered that, at current international LNG prices, this sourcing option might not be competitive compared to historical Russian price levels;
- a 4 bcm terminal would be the optimal size to meet the limited demand of the Eastern Baltic area and to support gas market growth through scalable investments; this dimension would also allow using storage capacity to further manage high peak demand;
- the countries involved have to take full responsibility that the initiators, owners and future operators of all the projects are independent of the existing dominant supplier in all aspects, so that the terminal serves as a real source of diversification.
A joint assessment of the required investments shows that Estonia (in particular Paldiski port, in case Balticconnector lands there) is the location that helps minimize the additional investments needed to connect the terminal to the main transmission system and to equalize the benefits of supply diversification and supply security [3].
In addition to the project recommendation, as requested by the EC's Directorate-General for Energy during the BEMIP High Level Group meeting held in Brussels in September 2012, experts conducted a high-level strategic assessment of Finland (the Finnish regasification terminal, the FinGulf project, as proposed as a PCI candidate) as a possible location for the Eastern Baltic regional LNG terminal, initially out of the project's scope [3].
The FinGulf LNG terminal would fit within the strategic goal set by the European Commission to improve both SOS and diversification in the Baltic region. It would bring the same benefit to the region as an LNG terminal located in Estonia. Furthermore, an LNG terminal in Finland has the advantage of being closer to the centre of the biggest gas consumption in the region, namely Finland. However, this consumption is fully covered with supplies from Gazprom, and it is therefore unrealistic to expect a real need for LNG in Finland before the maturity of the existing take-or-pay contract in 2025. Hence, the Balticconnector would become a sister project that would grant SOS to Estonia and would enable supply diversification for the Baltic region [3].
VII. ANALYSIS OF PROJECTS
A. Finland
In Finland, Gasum Oy is the sole importer of gas; however, a secondary gas market has been established on a day-ahead trading basis. Since Fortum is 50.8% owned by the State of Finland, it can be stated that the Finnish State still has a simple majority among the Gasum shareholders. This situation is unique in the region. The Finnish TSO is not unbundled, nor is there a clear plan in Finland to do so before the exemption from the Third Energy Package runs out.
There have been talks of government discussions to unbundle, but that cannot be verified from any official source. Finnish gas demand is among the least seasonally influenced in the region, mainly due to the large proportion of industrial gas clients in the market. Many off-grid import terminals are already planned in Finland, and now a regional LNG terminal project in Inkoo is also promoted. It is in an early stage of development, as the project is in the middle of planning and environmental impact assessment procedures.
The Inkoo project location, as currently proposed, has a capacity of 19.2 million m³ per day; hence 7.2 million m³ per day could be dedicated to serving Estonia, Latvia and Lithuania. Inkoo port is kept open in wintertime by the icebreakers of the Finnish Maritime Administration. The ice conditions at Inkoo are easy during normal winters, and thus the channel is almost always ice-free. In conclusion, a regasification terminal in Finland would grant the Baltic area the same benefits as the Estonian one [3].
B. Lithuania
Lithuania has made the most progress. In 2011, when Lithuania adopted the EU's Third Energy Package, it immediately announced plans for a floating LNG terminal at Klaipėda. To move quickly, Lithuania contracted for a floating LNG regasification unit, which was assessed in the 2012 Balti Gaas–Pöyry report as the most advanced of any LNG project in the Baltic States [20].
C. Latvia
Latvia is seeking EU support for its own LNG terminal in Riga. Latvia's main argument has been that its terminal would reduce construction costs compared with a terminal in Lithuania or Estonia, since Riga's proximity to the Inčukalns storage facility obviates the need to build gas storage for the terminal. However, Latvia's Baltic neighbors worry that Gazprom's control of Inčukalns would negate the strategic value of an LNG terminal in Riga. Estonia's leaders have expressed worry that, as long as the Inčukalns gas storage facility remains under Gazprom's control, a regional LNG terminal in Latvia would not enhance Estonia's security of gas supply [1].
D. Estonia
The Estonian government has therefore argued that public control of strategic projects like the proposed LNG terminal is crucial to strengthening Estonia's energy security. Accordingly, in May 2012, the state-owned companies Elering and the Port of Tallinn announced a joint feasibility study for an LNG terminal at Muuga harbor in Tallinn. Elering, the government-owned electricity transmission company that co-owns the Estlink-1 cable between Estonia and Finland, and which is currently constructing a second connection, Estlink-2, plans to connect its proposed LNG terminal at Muuga with a sub-sea pipeline to Finland known as Balticconnector. In addition to Elering's project at Muuga port, two separate consortia are also pursuing LNG terminals linked to Balticconnector: Sillgas in Sillamäe and Alexela at Paldiski [3].
The Balticconnector's link to Finland is crucial to the commercial viability of any Estonian LNG terminal. Estonia, with its modest natural gas demand of 0.7 bcm, is too small a market to ensure the commercial viability of an LNG terminal, no matter where in the country it is located. This remains true even if an Estonian terminal is connected to the markets of Latvia and Lithuania, where demand totals only 4.8 bcm. By contrast, with Finland's demand of five bcm, the combined market of the Baltic States and Finland is 10.5 bcm, some fifteen times larger than the Estonian domestic market. The EC, in a report it commissioned from the international consulting company Booz & Co to determine which Baltic state should receive EU financial support for an LNG terminal, concluded that a market of this size can support a regional LNG facility [3].
Regarding the specific location of an LNG project in Estonia, the EU's Booz & Co report concluded that the Sillamäe project is the weakest of the three, due to being in a very early stage of development, while the other two already have clear and well-defined projects. The choice is therefore effectively between the latter two options. Differences between the Muuga and Paldiski projects are explained by supporters as follows [3]:
- Muuga's urban location poses a lower environmental threat but a higher safety threat compared with the more remote Paldiski port;
- Muuga is closer to Estonia's existing domestic gas distribution network than the Paldiski site, reducing the cost of the pipeline connection to Estonia's national grid; but
- Paldiski is closer to the Finnish port of Inkoo, reducing the length of the future Balticconnector pipeline.
The Muuga project has the additional advantage of being
co-developed with Royal Vopak, a company that can integrate
the Estonian terminal into its commercially attractive Baltic
LNG delivery network operating from Rotterdam [3].
Ramboll has, together with the Elering/Vopak/Port of
Tallinn Working Group, considered various scenarios with
regard to facility scope and phasing of terminal development,
including scenarios involving coverage of varying supply
security service area needs (i.e. national vs. regional solutions)
and consideration of phasing of commercial capacity scope [2].
Becoming a fully regional terminal means connecting to Finland via the Balticconnector. The concept behind the Balticconnector was originally to connect Finland with the gas storage in Latvia and to allow the export of gas to Estonia from Finland. The interconnector was put on hold after the introduction of LNG as a supply possibility, as the routing and sizing of the interconnector would be very dependent on the location and capacity of the LNG terminal [2].
It is clear that the Balticconnector will not move forward
until a decision has been taken with regard to the location of
the LNG terminal [2].
VIII. CONCLUSIONS
The perpetuation of energy islands such as currently exist among the Eastern Baltic countries poses a threat not just to the energy security of the Baltic countries but to their national security as well. The risks of gas price hikes or cutoffs also provide Moscow significant geo-economic and geopolitical leverage, which it has not shied away from using against the Baltic republics and other EU members.
The Eastern Baltic gas market (Finland, Estonia, Latvia and Lithuania) currently has an aggregated demand of about ten bcm per year, which is expected to remain flat. If gas supply diversification were enhanced and the required infrastructure developed accordingly, the market could grow up to 16 bcm, with an additional upside of 1.5 bcm for LNG bunkering.
Even though Latvia and Finland, and according to the latest information also Estonia, are currently compliant with the N-1 criterion, which refers to the SOS, this criterion only covers the infrastructure part and does not address supply interruptions caused by non-infrastructure means.
Therefore several infrastructure projects have been proposed to end the isolation of the Eastern Baltic market from the rest of the EU and to improve security of supply. These projects are described as a package in the framework of BEMIP: upgrading of the Intra-Baltic connections, new pipeline connections such as Balticconnector and GIPL, and a regional LNG terminal on the shores of the Gulf of Finland.
At the moment Inkoo in Finland and Muuga and Paldiski in Estonia are the three competing locations for the regional terminal. A joint assessment of the required investments shows that Estonia (in particular Paldiski port, in case Balticconnector lands there) is the location that helps minimize the additional investments needed to connect the terminal to the main transmission system and to equalize the benefits of supply diversification and supply security.
REFERENCES
[1] M.J. Bryza, and E.C. Touhy, Connecting the Baltic states to Europe's
gas market. Tallinn: International Centre for Defence Studies, 2013.
[2] Pre-feasibility study for an LNG terminal in Tallinn. Tallinn: Ramboll,
2012.
[3] Analysis of costs and benefits of regional liquefied natural gas solution
in the East-Baltic area, including proposal for location and technical
options under the Baltic Energy Market Interconnection Plan. Milano:
Booz & Company, 2012.
[4] A. Davis, A. Jesinska, A. Kreslins, V. Zebergs, and N. Zeltins,
Evaluation of a risk level of gas supply of the Baltic countries and risk
criteria of UGS, in Proc. 24th World Gas Conf., Buenos Aires, 2009.
[5] A. Mäe, Liquefied Natural Gas (LNG) Terminal for Eastern Baltic,
Geopolitika, June 5th, 2012 [E-journal] Available:
http://www.geopolitika.lt/?artc=6077.
[6] M. Roodi, and Ü. Ehrlich, Renewable Electricity in Estonia – Discrepancy between State Subsidies and Private Demand, Recent Advances in Energy and Environmental Management, 13, pp. 66-71, 2013.
[7] C. Le Coq, and E. Paltseva, Measuring the security of external energy
supply in the European Union, Energy Policy, 37, pp. 4474-4481, 2009.
[8] C. Clastres, and C. Locatelli, European Union energy security: the
challenges of liberalisation in a risk-prone international environment,
in Proc. 9th int. conf. European energy market (EEM 2012), Florence,
Italy, 2012.
[9] C. Winzer, Conceptualizing energy security, Energy Policy, 46, pp.
36-48, 2012.
[10] V. Constantini, F. Gracceva, A. Markandya, and G. Vicini, Security of
energy supply: comparing scenarios from a European perspective,
Energy Policy, 35, pp. 210-226, 2007.
[11] Regulation (EU) No 994/2010 of the European Parliament and of the
Council of 20 October 2010 concerning measures to safeguard security
of gas supply and repealing Council Directive 2004/67/EC, Official J.
European Union, L295/1, 2010.
[12] Joint risk assessment of security of gas supply of Estonia, Latvia,
Lithuania 2012. Baltic Energy Market Interconnection Plan (BEMIP)
Focus Group on Regional Cooperation, 2013.
[13] R. Bogdanovitsh, Natural gas infrastructure in Estonia and LNG, (in
Estonian), 2012 [Online] Available:
http://ftp.jlp.ee/public/Energeetikafoorum/.
[14] P.-M. Spanidis, Lessons Learnt from Establishing Liquefied Natural
Gas Facilities in Countries of North Mediterranean Sea, Recent
researches in environmental and geological sciences, 4, p. 21, 2012.
[15] G.A. Chamberlain, Management of large LNG hazards, in Proc. 23rd
World Gas Conf., Amsterdam, 2006.
[16] A.S.R. Kuramoto, N. Magalhães Bueno, W. Da Silva Frazão Filho, E.M.
Dias, and C.F. Fontana, Automation of Port Facilities for Import of
GNL, in Proc. 13th WSEAS Int. Conf. on Systems, Stevens Point,
2009.
[17] T. Shukri, and F. Wheeler, LNG technology selection, Hydrocarbon
Engineering, 2, pp. 71-76, 2004.
[18] S. Cornot-Gandolphe, LNG cost reductions and flexibility in LNG
trade add to security of gas supply, Energy Prices & Taxes, 1, pp. xxix-
xxvi, 2005.
[19] H.B. Gooi, P.L. So, E.K. Chan, E. Toh, and H. Gan, Strait ahead.
Toward a sustainable, economic, and secure electricity supply in
Singapore, IEEE Power & Energy Magazine, pp. 65-74, July/August
2012.
[20] Liberalisation of the Estonian Gas Market. Tallinn: Pöyry Management Consulting (UK) Ltd, 2011.
Use of a Simulator for Decisions on the Reuse of Industrial Effluents
Case study: the use of process condensate as boiler make-up water
Ana Cecilia Correia dos Santos
Professor at the Federal Institute of Bahia (IFBA / Campus
Salvador).
Researcher of Group Clean Technologies of Federal
University of Bahia (TECLIM-UFBA)
Salvador, Bahia, Brazil
anacecilia@ifba.edu.br
Elias Andrade Braga
Researcher of Group Clean Technologies of Federal
University of Bahia (TECLIM-UFBA)
Salvador, Bahia, Brazil.
Igor L. S. Rodrigues
Researcher of Group Clean Technologies of Federal
University of Bahia (TECLIM-UFBA).
Ricardo de Araújo Kalid
Associate Professor, Department Chemical Engineering
Federal University of Bahia.
Asher Kiperstok
Associate Professor, Department of Environmental
Engineering, Federal University of Bahia.
Abstract - The need to reduce operational inefficiency through the non-generation of effluents encourages industries to search for improvements in procedures and technologies. However, when that is not technically or economically feasible, water reuse is a good option. The removal of volatile organic compounds (VOC) from liquid streams at a nitrogen fertilizer industry is carried out in a stripper that generates treated condensate which can be reused. The present work aims at the reuse of the treated condensate to produce steam and was conducted according to the following methodology: collection and reconciliation of data in the industrial plant; simulation of the operating conditions and of changed conditions for the highest contaminant removal; tests in loco; and analysis of physico-chemical parameters to assess the quality of the treated condensate. The simulation was performed in steady state for two scenarios: the first using operating conditions with steam pressure between 3.2 bar and 6.0 bar and a saturated steam flow rate between 1.5 t/h and 5.0 t/h; the second considering the removal of the D process stream from the stripper feed, which has a high concentration of ammonia and methanol. In the first scenario, it was observed that manipulating the pressure to 6.0 bar and the steam flow to 2.0 t/h improves the ammonia removal efficiency. The second scenario also shows greater efficiency in the removal of both ammonia and methanol. The tests in loco and the physico-chemical analyses showed that removal of the D process stream can enable the reuse of the condensate as boiler make-up water to produce steam of up to 41 bar after additional treatment to reduce the conductivity and iron concentration. The reuse of this treated condensate provides an economic gain of approximately US$ 500,000 per year, reducing the cost of demineralized water and of wastewater treatment. The use of the simulator allowed different scenarios to be studied, reduced the number of experimental tests in loco and established routes for the reuse of industrial wastewater.
Keywords — Stripper, ammonia, condensate, reuse.
I. INTRODUCTION
In recent decades the need to reduce waste generation and water use has been encouraging industries to research better procedures and new technologies. Studies on reducing the use of water in industrial plants have been reported in the literature [1,2].
The Clean Technology Group of the Federal University of Bahia (TECLIM-UFBA) has shown in its work the importance of reducing contaminants at the source as a priority for reducing the volume of wastewater generated. In its various projects in industrial plants, the group has adopted preventive practices focused on minimizing waste at all stages of the production process [3].
Although action at the source is the most environmentally recommended approach, some end-of-pipe equipment is crucial for bringing effluents into compliance in industrial operations.
The stripping column, or stripper, is equipment designed to remove high levels of volatile organic compounds (VOC) from liquid streams. In the ammonia production process, this equipment is used to bring the process condensate within the environmental parameters accepted in the operating license of the plant. This article proposes an evaluation, by mathematical simulation, of the factors that affect the efficiency of an ammonia stripper during operation, together with its validation in an industrial plant, in addition to the possibilities for reuse of the treated condensate as make-up water in medium-pressure boilers (up to 41 bar).
II. MATERIAL AND EXPERIMENTAL PROCEDURES
A. Simulation in Steady State
The methodology proposed for this work has four stages, as shown in Fig. 1. The first stage was a study of the operating conditions of the equipment and its process, through engineering and process flowcharts and descriptive process manuals, in order to obtain the data needed for the simulation.
Fig. 1. Methodology used for the steady-state simulation of a stripper column (flowchart: Stage 1 – data collection; Stage 2 – mathematical simulation; Stage 3 – analysis of results; Stage 4 – plant tests).
The stripper column is fed with a low-pressure steam flow at 3.2 bar and with the process condensate, which is composed of four streams (A, B, C and D) with different concentrations of ammonia and methanol, as shown in Fig. 2. Stream A is the condensate removed in the condensate separator of the 1st compression stage of the ammonia synthesis gases; this separator also receives the unreacted gas stream of the ammonia synthesis process, which is recovered through a reprocessing unit.
Fig. 2. Schematic design of the stripper column (feed streams A, B, C and D, stripping steam, vent, and treated condensate).
Stream B is the condensate removed in the condensate separator of the 2nd compression stage of the ammonia synthesis gases.
As the process unit is integrated with the urea production unit, it burns natural gas to generate steam and energy and to obtain CO2 for the production of urea. The resulting condensate constitutes stream C, which is removed in the condensate separators of the CO-to-CO2 converters and is treated in the same stripper. Finally, there is stream D, which is the condensate generated in the CO2 compression process.
Due to the limited information available, it was necessary to carry out chemical analyses of samples upstream of the column to determine the concentrations of ammonia and methanol in the feed. For the intermediate process feed streams, the results of the water balance reconciled through the objective function proposed by [3] were used, as shown in Table 1:
TABLE I. RELATIVE VALUES OF FLOWS AND CONCENTRATIONS OF COLUMN FEED.

Stream | Relative flow (%) | Relative ammonia concentration (% weight) | Relative methanol concentration (% weight)
A      | 6.23              | 0.06                                       | 7.15
B      | 0.02              | 0.01                                       | 0.00
C      | 76.40             | 99.35                                      | 7.02
D      | 17.35             | 0.58                                       | 85.83
From the obtained data, the second stage was performed, which consisted of the simulation of the equipment. A thermodynamic model was selected to represent the vapor-liquid equilibrium between the components. The GC-EOS model was applied to the two phases, owing to the temperature and pressure conditions of operation and the type and size of the molecules present in the streams [4]. The packing used for contact was two-inch HyPak metal rings.
With the aid of UNISIM, the simulation was performed in steady state for two scenarios: the first using the typical operating conditions, in which the vapor pressure varied between 3.2 bar and 6.0 bar and the steam flow between 1.5 t/h and 5.0 t/h.
The second simulation scenario considered the removal of the D process stream, since this stream has a pH of approximately 7.5, indicating stabilization of ammonia in the form of its salts, and a high methanol concentration; another treatment should therefore be studied for this stream in order to allow its reuse, because the unit operation is not appropriate for the removal of ammonia in this form.
In both scenarios an efficiency of 60% was assumed for each packed section, a value adjusted to bring the simulation result close to the composition of the bottom stream of the column known from physico-chemical analysis. For both scenarios the model was validated by comparing the simulation results with the laboratory analyses of the constituents present in the bottom stream of the column. After the simulation, in a third step, the results were analyzed and interpreted.
B. Test in loco
To establish the process parameters, the test in loco was carried out for the scenario corresponding to the actual operating conditions of the process with the removal of the D process stream, owing to its physico-chemical characteristics. The test was conducted over seven hours, with a steam flow rate of 2.4 t/h and a pressure of 3.2 bar, to evaluate the new parameters of the treated condensate and new possibilities for its reuse.
To evaluate the physical and chemical characteristics of the treated condensate, analyses were conducted of ammonia, methanol, partial alkalinity, total alkalinity, pH, conductivity and a scan of ions (acetate, formate, nitrite, nitrate, bromide, phosphate, sulfate, lithium, sodium, fluoride, chloride, potassium, magnesium, calcium, zinc and iron) in a Metrohm Compact IC Pro chromatograph.
III. RESULTS AND DISCUSSIONS
A. Mathematical Simulation in Steady State
To determine the best operating conditions of this equipment, a simulation was carried out and the results supported decision making aimed at improving the performance of the column, i.e., identifying the operation that promotes the highest ammonia removal within the operating limits of the equipment.
1) Scenario 1: typical operating conditions. This scenario reflects the operating conditions in which the column receives the four streams (A, B, C and D) and the separation is promoted by a stream of saturated steam at a pressure of 3.2 bar and a flow rate of 2.0 t/h. For this scenario, the simulation and a sensitivity analysis were performed for the following variables: pressure and flow rate of the steam. The removal of ammonia and methanol after treatment may be observed in Fig. 3 and Fig. 4.
Fig. 3. Concentration of ammonia in the bottom of the stripper versus the steam flow.
Fig. 3 shows that, by manipulating the flow and pressure of the saturated steam fed to the column, an operating condition with greater ammonia removal efficiency can be achieved, for example by maintaining the steam flow at 2.0 t/h and increasing the pressure to 6.0 bar. For methanol, the operating condition with a pressure of 6.0 bar and a flow rate of 5.0 t/h has the greatest removal efficiency (Fig. 4).
The removal percentages of ammonia and methanol relative to the feed, for a saturated steam flow of 1.5 t/h at the simulated pressures, were highest at the higher steam pressure of 6.0 bar: 99% removal of ammonia and 16% removal of methanol. For a saturated steam flow of 5.0 t/h, the removal percentages of ammonia and methanol were also highest at a steam pressure of 6.0 bar, at 100% and 85%, respectively. This enhanced ammonia and methanol removal can be attributed to the larger amount of energy provided by the higher-pressure steam. The removal of methanol is, however, less efficient than that of ammonia.
Fig. 4. Concentration of methanol in the bottom of the stripper versus the steam flow.
2) Scenario 2: after removal of the D process stream. The D process stream has a high concentration of ammonia stabilized as salts, and of methanol. The presence of methanol affects the operation of the stripper, since this unit operation was not designed for its removal, although this possibility was studied.
For this new scenario, a simulation was performed for the typical operating conditions with the D process stream removed from the stripper feed, together with a sensitivity analysis of the variables mentioned in the previous scenario. The results can be seen in Fig. 5 and Fig. 6.
It can be observed in Fig. 5 that the removal of the D process stream allows greater ammonia removal at any saturated steam pressure; at a steam pressure of 4.0 bar or 4.4 bar, an ammonia removal of approximately 100% of the feed was achieved with a steam flow rate of approximately 2.0 t/h.
Fig. 5. Concentration of ammonia in the bottom of the stripper versus the steam flow after removal of the D process stream.
Fig. 6 shows a significant reduction of the methanol concentration, since stream D is the stream with the largest contribution of methanol to the system.
The removal of ammonia and methanol relative to the feed without the D process stream, for a saturated steam flow of 1.5 t/h and a pressure of 6.0 bar, was 100% and 51%, respectively, indicating a better-quality treated condensate with lower steam consumption. For a saturated steam flow of 5.0 t/h, no further gain in ammonia removal is observed, since even under the least favorable energetic conditions the system already shows maximum removal. For methanol, at a steam pressure of 6.0 bar, a removal of 98% relative to the feed was observed.
Fig. 6. Concentration of methanol in the bottom of the stripper versus the steam flow after removal of the D process stream.
The second scenario is characterized by a change in process operation instead of the end-of-pipe measure that is normally practiced, which leads to greater energy efficiency, because a higher level of separation is achieved for the same energy consumption. As this stream represents 20% of the full flow of the column, its removal also allows alternative forms of treatment, for example by biotechnology, that can enable reuse elsewhere in the process.
B. Test in loco - Reuse of the Treated Condensate
The simulation results for the current operating situation (steam flow rate of 2.0 t/h and pressure of 3.2 bar) showed that removing the D process stream provides a reduction of 86% (from 237 mg/L to 32 mg/L) in the concentration of ammonia in the treated condensate, which is significant; with this stream removed, the methanol content shows a reduction of 57% (from 132 mg/L to 76 mg/L), although this is still above the maximum allowable concentration for reuse as boiler make-up water. According to [6], the quality of the condensate leaving the stripper should be 7 mg/L of ammonia and 250 mg/L of methanol, for direct use or as part of the make-up water to the demineralizer. Customers can use this condensate, after treatment, as boiler make-up water to produce steam of up to 41 bar.
As this scenario was indicated as the most promising of the two simulated scenarios, the plant test was performed under the conditions described, with the D process stream removed, for seven hours. Three samples were collected and the average concentrations of ammonia and methanol, pH, conductivity, and total and partial alkalinity of the treated condensate were determined; the mean percentage reductions are shown in Table 2.
Table 2 shows that the removal of the D process stream provides a significant quality improvement in various parameters of the condensate. The ion scan showed that only the iron concentration was not acceptable for reuse: its concentration was around 0.1 mg/L, whereas the recommended value is less than 0.02 mg/L [8]; the conductivity is also higher than the maximum recommended for the intended reuse.
TABLE II. QUALITY PARAMETERS OF THE TREATED CONDENSATE BEFORE AND AFTER THE REMOVAL OF THE D PROCESS STREAM FROM THE STRIPPER FEED – TEST IN LOCO.

Parameter                 | Before removal of stream D | After removal of stream D | Mean reduction (%) | Maximum value for reuse in demineralisers | Possibility of reuse
Ammonia (mg/L)            | 101.0 | 22.0  | 78 | 7 mg/L (a)    | No
Methanol (mg/L)           | 32.0  | 8.7   | 73 | 250 mg/L (a)  | Yes
pH                        | 9.5   | 9.4   | 2  | 9.5 (b)       | Yes
Conductivity (µS/cm)      | 647.0 | 179.0 | 72 | < 5 µS/cm (c) | No
Partial alkalinity (mg/L) | 172.4 | 37.2  | 78 | -             | -
Total alkalinity (mg/L)   | 400.1 | 139.9 | 65 | 200 mg/L (c)  | Yes
Iron (mg/L)               | 0.18  | <0.1  | 44 | 0.02          | No

(a) values adopted from [6]; (b) values adopted from [7]; (c) values adopted from [8].
The subsequent treatment of this condensate in the demineralizer removed the ammonia from the treated condensate, but studies indicate that the presence of ammonia in the stream would not be a limiting condition for reuse, because ammonia and amines are components that can be used as corrosion inhibitors added to the feedwater and condensate lines [7]. Thus, to use this effluent to produce demineralized water, only additional treatment to remove iron and to adjust the conductivity is necessary.
C. Environmental and Economic Gains
An assessment was made of the economic benefits, considering a cost of demineralized water for steam production of US$ 1.50/m³. Operation under the second scenario allows the reuse of 30 m³/h of treated condensate, which provides an annual cost reduction for demineralized water purchases of about 400 thousand dollars. Besides the gain from water reuse, there is also a reduction in the volume of effluent to be treated, which currently costs US$ 0.36/m³, giving a reduction in the annual cost of wastewater treatment of approximately 100 thousand dollars. This totals an annual gain from the reuse of this treated condensate on the order of 500 thousand dollars.
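A back-of-the-envelope check of these figures is sketched below, assuming continuous reuse over 8,760 hours per year; the rates are the ones quoted above, and the rounding to 400, 100 and 500 thousand dollars follows the text.

flow = 30.0            # m3/h of treated condensate reused
hours = 8760           # assumed hours of operation per year
demin_cost = 1.50      # US$/m3 of demineralized water for steam production
effluent_cost = 0.36   # US$/m3 of wastewater treatment

water_saving = flow * hours * demin_cost        # ~394,000 US$/year (~400 thousand)
effluent_saving = flow * hours * effluent_cost  # ~95,000 US$/year (~100 thousand)
print("total annual gain: ~US$ %.0f" % (water_saving + effluent_saving))  # ~489,000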
The economic gains are noticeable and attractive to the enterprise management system, since the cost of disposing of this effluent stream after treatment in the stripper corresponds to approximately 18% of the costs. In addition, the environmental benefits of condensate reuse should be considered, as it reduces the water uptake of this industrial activity by approximately 4.0%. In scenarios of water scarcity and rising prices for this natural resource, such savings will have an even greater future impact on activities that require large amounts of water.
IV. CONCLUDING REMARKS
The simulation scenarios with saturated steam pressure between 3.2 bar and 6.0 bar indicate that a lower flow rate (1.5 t/h) may be used when the column operates at a higher saturated steam pressure (6.0 bar), a situation that cannot be tested and implemented because the tower was designed to operate at atmospheric pressure.
For the test with the D process stream removed, the results showed a significant reduction of the ammonia and methanol concentrations, indicating that the removal of this stream reduces the contaminant load on the column. However, the limitation regarding the ammonia concentration in this effluent stream prevents it from simply being removed, indicating the need for studies on its treatment.
The study of the process together with the chemical analyses showed that the B process stream has a small flow rate and methanol concentration and a high ammonia concentration in comparison with the other streams, approximately 10%, indicating an opportunity for reuse, for example as water added to the final ammonia product, recovering the ammonia present in the stream.
The physico-chemical analysis of the treated condensate indicated that the conductivity and iron content did not allow its direct reuse as make-up water for boilers of up to 41 bar.
The tests in loco, with physico-chemical analysis of the treated condensate under the condition proposed in the second scenario, indicate the need for additional treatment, but only for the two parameters assessed above. It is therefore suggested that the plant operate with the D process stream removed, together with an additional study of the individual treatment of that stream for its reuse.
The UNISIM process simulator allowed the study of scenarios with little capital investment and the development of an effective experimental design, because only the most promising scenario was tested in the industrial plant. The combination of modeling/simulation with validation in the industrial plant thus proved an efficient and consistent environmental-economic practice for studying possibilities for the reuse of industrial effluents, reducing the need for water uptake and the cost of water and wastewater treatment inside the manufacturing process.
ACKNOWLEDGMENT
We are grateful to the Engineering and Laboratory Operations teams and for the support provided by the company during the course of the activities, and to the ANP for financing the research project.
REFERENCES
[1] J. FÉRES, A. REYNAUD, A. THOMAS, R. S. MOTTA, Competitiveness and effectiveness concerns in water charge implementation: a case study of the Paraíba do Sul River Basin, Brazil.
Water Policy, vol. 10, pp. 595-612, 2008.
[2] K. P. OLIVEIRA-ESQUERRE, A. KIPERSTOK, R. A. KALID, E. A.
SALES, L. TEIXEIRA, V. M. PIRES, Water and Wastewater
Management in a Petrochemical Raw Material Industry. Computer
Aided Chemical Engineering vol. 27, pp. 1047-1052, 2009.
[3] M. A. F. MARTINS, C. A. AMARO, L. S. SOUZA, R. A. KALID, A.
KIPERSTOK, New objective function for data reconciliation in water
balance from industrial processes. Journal of Cleaner Production. vol.
18, pp. 1181-1189, 2010.
[4] S. SKJOLD-JORGENSEN, Group Contribution Equation of State (GC-
EOS): a Predictive Method for Phase Equilibrium Computations over
Wide Ranges of Temperatures and Pressures up to 30 MPa, Ind. Eng.
Chem. Res., vol. 27, pp. 110-118, 1988.
[5] A. KIPERSTOK, Clean technologies and waste minimization. Course
Notebook of Specialization in Environmental Management and
Technologies in Industry Federal University of Bahia. Salvador,
Bahia, Brazil, 2001.
[6] E. CASTORINA, Treat BFW for NH3 plants, 12th Pullman Kellogg,
Denver, Colo, 1977.
[7] KURITA W. IND. LTDA. Kurita Handbook of Water Treatment.
Tokyo, 1985.
[8] BETZ INC., Handbook of Industrial Water Conditioning, Trevose, PA: Betz Dearborn, 1991.
Environmental/Economic Power Dispatch Problem with Renewable Energy Using the Firefly Algorithm
MIMOUN YOUNES
Faculty of Technology
Djillali Liabes University
Sidi Bel Abbes, 22000, Algeria
younesmi@yahoo.fr

Riad Lakhdar KHERFENE / Fouad KHODJA
Faculty of Technology
Djillali Liabes University
Sidi Bel Abbes, 22000, Algeria
rilakh@yahoo.fr / khodjafouad@gmail.com
Abstract — The exploitation and development of renewable energy, such as solar and wind energy, is a very important alternative for reducing gas emissions and reducing the cost of power generation. This paper examines the implications of deploying renewable energy in power generation alongside the classical energy system, managed by an intelligent method, to minimize the cost of producing electric energy and also to reduce gas emissions. Simulation results on the 10-unit power system prove the efficiency of this method, thus confirming its capacity to solve the environmental/economic power dispatch problem with renewable energy.
Keywords — Economic Power Dispatch (EPD); renewable energy; environmental; intelligent method.
I. INTRODUCTION
Electricity is regarded as the invention that changed the world; some centuries ago the world was in total darkness, whereas electricity is now present in virtually all our activities. With advancing technology on the one hand and population growth on the other, these two factors have given the world a voracious appetite for electricity. Most predictions put the growth of energy consumption in developed countries at about 1% per year, whereas for developing countries consumption now grows by more than 5% per year [1]. The increasing negative effects of fossil fuel combustion on the environment, in addition to the limited stock of fossil fuels, have forced many countries to look into and change to environmentally friendly, renewable alternatives to sustain the increasing energy demand. Energy policy plays a vital role in mitigating the impacts of global warming and the crisis of energy availability [2]. This problem has received much attention; it is of current interest to many utilities and has been marked as one of the most important operational needs. In traditional economic dispatch, the operating cost is reduced by the suitable allocation of the quantity of power to be produced by the different generating units.
However, the optimal production cost may not be the best in terms of the environmental criteria. Recently, many countries throughout the world have concentrated on reducing the quantity of pollutants emitted by each unit in the production of electrical energy from fossil fuels. The gaseous pollutants emitted by power stations, such as sulphur dioxide (SO2), nitrogen oxides (NOx) and carbon dioxide (CO2), cause harmful effects on human beings and the environment. Thus, the optimization of the production cost should not be the only objective; the reduction of emissions must also be taken into account, considering the difference in homogeneity of the two equations: the fuel cost equation, given in $/hr, and the equation of gas emissions from the production of electrical energy, given in kg/hr. Algeria has substantial and inexhaustible renewable energy resources, i.e., exceptional solar radiation covering an area of 2,381,745 km², with over 3,000 hours of sunshine per year, and the existence of significant wind energy potential. Moreover, these energies are clean and renewable, they are used where they are produced, and their decentralized nature is well suited to scattered areas of low population density. Consequently, they can contribute to environmental protection, reduce the emission of greenhouse gases (particularly a successful CO2 reduction), combat global warming, be considered as a future alternative to conventional energy, increase energy independence and preserve raw materials. Our work revolves around two main axes: the injection of the maximum power produced from renewable energy sources into the Algerian network, and the optimal management of the power produced by conventional power plants using an improved firefly algorithm (FFA). Simulation results on the 10-unit power system prove the efficiency of this method, thus confirming its capacity to solve the environmental/economic power dispatch problem with renewable energy.
II. PROBLEM FORMULATION AND OPTIMIZATION WITH SOLAR AND WIND ENERGY
A. Solar Energy
The maximum power provided by a solar panel is given by
the following characteristic [3]:
Ps = P1 · Ec · [1 + P2 · (Tj − Tjref)]   (1)
where Ec is the solar radiation, Tjref is the reference temperature of the panels at 25 °C, Tj is the cell junction temperature (°C), P1 represents the characteristic dispersion of the panels, whose value for one panel lies between 0.095 and 0.105, and the parameter P2 = 0.47 %/°C is the drift with panel temperature [3].
The addition of a third parameter P3 to the characteristic gives more satisfactory results:
Ps = P1 · Ec · [1 + P2 · (Tj − Tjref)] + P3   (2)
This simplified model makes it possible to determine the maximum power provided by a group of panels for a given solar radiation and panel temperature, with only three constant parameters P1, P2 and P3 and a simple equation to apply. A solar thermal power station consists of a solar heat production system that feeds turbines in a thermal cycle of electricity production.
B. Wind Energy
The power contained in the wind in the form of kinetic energy, P (W), is expressed by:

P = (1/2) · ρ · A · v³   (3)

where A is the area swept by the wind (m²), ρ is the density of air (ρ = 1.225 kg/m³) and v is the wind speed (m/s).
The wind generator can recover some of this wind power
and represents the power produced by wind generator:
3 3
10 . . . .
2
1

= v A C P
e el
(4)
C
e
is the efficiency factor, which depends on the wind
speed and the system architecture [4].
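A small numerical sketch (not from the paper) of Eqs. (3) and (4) follows; the efficiency factor used in the example is an arbitrary assumption, since in practice it depends on the wind speed and on the machine architecture [4].

import math

RHO_AIR = 1.225  # air density (kg/m3), as given in the text

def wind_power_available(area_m2, v_ms):
    """Kinetic power of the wind through the swept area, in W (Eq. 3)."""
    return 0.5 * RHO_AIR * area_m2 * v_ms ** 3

def wind_generator_power(area_m2, v_ms, ce):
    """Electrical power produced by the generator, in kW (Eq. 4)."""
    return 0.5 * ce * RHO_AIR * area_m2 * v_ms ** 3 * 1e-3

# Example: swept area of a 40 m diameter rotor, 10 m/s wind, Ce = 0.40 (assumed)
area = math.pi * (40.0 / 2) ** 2
print(wind_power_available(area, 10.0), wind_generator_power(area, 10.0, 0.40))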
C. Economic Dispatch
The optimization of the generation cost is formulated on the basis of the classical OPF with line-flow constraints. The detailed problem is given as follows [5]:

$\min F = \sum_{i=1}^{NG} f(P_{Gi})$   (5)

The cost function $f(P_{Gi})$ is usually expressed as a quadratic polynomial [6]:

$f(P_{Gi}) = a_i P_{Gi}^2 + b_i P_{Gi} + c_i$   (6)

The minimization of the daily total cost of active power generation may be expressed by:

$\min F = \sum_{t=1}^{24} \sum_{i=1}^{NG} f(P_{Gi})$   (7)

The minimum value of the above objective function has to be found while satisfying the following constraints [7]:

$\sum_{i=1}^{NG} P_{Gi} + \sum_{k=1}^{NGR} P_{GRk} - P_D - P_L = 0$   (8)

The generation capacity of each generator has limits, which can be expressed as [8]:

$P_{Gi}^{min} \le P_{Gi} \le P_{Gi}^{max}$   (9)

In minimizing the cost, the equality constraint (power balance) and the inequality constraints (power limits) should be satisfied. The transmission loss can be represented by the B-coefficient method as

$P_L = \sum_i \sum_j P_{Gi} B_{ij} P_{Gj}$   (10)

where $B_{ij}$ is the transmission loss coefficient and $P_{Gi}$, $P_{Gj}$ are the power generations of the i-th and j-th units. The B-coefficients are found through the Z-bus calculation technique.

Here $P_{Gi}^{min}$, $P_{Gi}^{max}$ are the lower and upper limits of active power generation at bus i; $a_i$, $b_i$, $c_i$ are the cost coefficients of the i-th generator; $P_{Gi}$ is the active power output of generator i in MW; $P_D$ is the total active power load; $P_{GRk}$ is the active renewable power generation at bus k; $P_L$ represents the real losses; NG is the number of thermal generators connected to the network; and NGR is the number of renewable generators.
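A minimal sketch (not the authors' MATLAB code) of the quantities defined in Eqs. (5), (6), (8) and (10) is given below; the generator outputs, cost coefficients and B-matrix passed to these helpers would come from the system data (e.g. Tables I and II) and are not reproduced here.

import numpy as np

def fuel_cost(p_g, a, b, c):
    """Total fuel cost in $/h: sum of a_i*P_Gi^2 + b_i*P_Gi + c_i (Eqs. 5-6)."""
    p_g, a, b, c = map(np.asarray, (p_g, a, b, c))
    return float(np.sum(a * p_g**2 + b * p_g + c))

def transmission_loss(p_g, b_matrix):
    """Real losses P_L = sum_i sum_j P_Gi * B_ij * P_Gj (Eq. 10)."""
    p_g = np.asarray(p_g)
    return float(p_g @ np.asarray(b_matrix) @ p_g)

def power_balance_residual(p_g, p_gr, p_demand, b_matrix):
    """Equality constraint of Eq. (8); a feasible dispatch drives this to zero."""
    return float(np.sum(p_g) + np.sum(p_gr) - p_demand
                 - transmission_loss(p_g, b_matrix))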
D. Minimization of pollutants emission
The most important emissions considered in the power generation industry, owing to their effects on the environment, are sulfur dioxide (SO2) and nitrogen oxides (NOx) [9]. These emissions can be modeled through functions that associate emissions with the power production of each unit [10, 11]. One approach to represent SO2 and NOx emissions is to use a combination of polynomial and exponential terms [12]:

$EC(P_{gi}) = \alpha_i + \beta_i P_{gi} + \gamma_i P_{gi}^2 + \eta_i \exp(\delta_i P_{gi})$   (11)

where $\alpha_i$, $\beta_i$, $\gamma_i$, $\eta_i$ and $\delta_i$ are the coefficients of the i-th generator emission characteristics.
The bi-objective combined economic emission dispatch problem is converted into a single optimization problem by introducing a price penalty factor h as follows:

Minimise F = FC + h·EC

subject to the power flow constraints of the above equations [13]. The price penalty factor h blends the emission with the fuel cost, and F is the total operating cost in $/h. The price penalty factor $h_i$ is the ratio between the maximum fuel cost and the maximum emission of the corresponding generator:

$h_i = \frac{FC(P_{gi}^{max})}{EC(P_{gi}^{max})}$
The following steps are used to find the price penalty factor for a particular load demand:
1. Find the ratio between the maximum fuel cost and the maximum emission of each generator.
2. Arrange the values of the price penalty factor in ascending order.
3. Add the maximum capacity of each unit, $P_{gi}^{max}$, one at a time, starting from the unit with the smallest $h_i$, until $\sum P_{gi}^{max} \ge P_D$.
4. At this stage, the $h_i$ associated with the last unit in the process is the price penalty factor h for the given load.
The above procedure gives an approximate value of the price penalty factor for the corresponding load demand. Hence a modified price penalty factor ($h_m$) is introduced in this work to give the exact value for the particular load demand; a sketch of the basic procedure is given below. The first two steps of the h computation remain the same for the calculation of the modified price penalty factor, which is then obtained by interpolating the values of $h_i$ corresponding to their load demand values.
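The following sketch (not the authors' code) implements steps 1-4 for the basic penalty factor h; the cost and emission functions follow Eqs. (6) and (11), and the coefficient tuples are placeholders to be filled from Tables I and II.

import math

def penalty_factor(p_max, cost_coef, emis_coef, p_demand):
    """p_max[i]: capacity of unit i; cost_coef[i] = (a, b, c);
    emis_coef[i] = (alpha, beta, gamma, eta, delta)."""
    def fc(p, a, b, c):
        return a * p**2 + b * p + c                     # fuel cost, Eq. (6)
    def ec(p, alpha, beta, gamma, eta, delta):
        return alpha + beta * p + gamma * p**2 + eta * math.exp(delta * p)  # Eq. (11)

    # Step 1: ratio of maximum fuel cost to maximum emission for each unit
    h = [fc(pm, *cc) / ec(pm, *ee)
         for pm, cc, ee in zip(p_max, cost_coef, emis_coef)]
    # Step 2: consider the units in ascending order of h_i
    order = sorted(range(len(h)), key=lambda i: h[i])
    # Steps 3-4: accumulate capacities until the load demand is covered
    covered = 0.0
    for i in order:
        covered += p_max[i]
        if covered >= p_demand:
            return h[i]
    return h[order[-1]]  # demand exceeds total installed capacity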
III. FIREFLY ALGORITHM (FFA)
Fireflies (lightning bugs) use their bioluminescence to
attract mates or prey. They live in moist places under debris
on the ground, others beneath bark and decaying vegetation.
Firefly Algorithm (FFA) was developed by Xin-She Yang
at Cambridge University in 2007. It uses the following three
idealized rules: 1) All fireflies are unisex so that a firefly will
be attracted to other fireflies regardless of their sex. 2)
Attractiveness is proportional to their brightness; thus, for any two flashing fireflies, the less bright one will move towards the
brighter one. The attractiveness is proportional to the
brightness and they both decrease as their distance increases.
If there is no brighter firefly than a particular one it will move
randomly. 3) The brightness of a firefly is affected or
determined by the landscape of the objective function. On the
first rule, each firefly attracts all the other fireflies with weaker
flashes [14]. All fireflies are unisex so that one firefly will be
attracted to other fireflies regardless of their sex. Secondly,
attractiveness is proportional to their brightness, which is inversely related to their distance. For any two flashing
fireflies, the less bright one will move towards the brighter
one. The attractiveness is proportional to the brightness and
they both decrease as their distance increases. If there is no
brighter one than a particular firefly, it will move randomly.
Finally, no firefly can attract the brightest firefly and it moves
randomly. The brightness of a firefly is affected or determined
by the landscape of the objective function. For a maximization
problem the brightness can simply be proportional to the value
of the objective function. Other forms of brightness can be
defined in a similar way to the fitness function in genetic
algorithms based on these three rules.
1) Attractiveness
In the firefly algorithm there are two important issues: the
variation of light intensity and the formulation of the
attractiveness. For simplicity, we can always assume that the
attractiveness of a firefly is determined by its brightness which
in turn is associated with the encoded objective function [15].
In the simplest case for maximum optimization problems, the
brightness I of a firefly at a particular location x can be chosen
as I(x) corresponding to f(x). However, the attractiveness is
relative; it should be seen in the eyes of the beholder or judged
by the other fireflies [16]. Thus, it will vary with the distance
$r_{ij}$ between firefly i and firefly j. In addition, light intensity
decreases with the distance from its source and light is also
absorbed in the media so we should allow the attractiveness to
vary with the degree of absorption. In the simplest form, the
light intensity I(r) varies according to the inverse square law $I(r) = I_s / r^2$, where $I_s$ is the intensity at the source. For a given medium with a fixed light absorption coefficient $\gamma$, the light intensity I varies with the distance r [17].
That is, $I = I_0 e^{-\gamma r}$, where $I_0$ is the original light intensity. In order to avoid the singularity at r = 0 in the expression $I(r) = I_s / r^2$, the combined effect of both the inverse square law and absorption can be approximated using the following Gaussian form:

$I(r) = I_0 e^{-\gamma r^2}$   (12)

Sometimes we may need a function which decreases monotonically at a slower rate. In this case we can use the following approximation:

$I(r) = \frac{I_0}{1 + \gamma r^2}$   (13)
At short distances the above two forms are essentially the same. This is because the series expansions about r = 0,

$e^{-\gamma r^2} \approx 1 - \gamma r^2 + \tfrac{1}{2}\gamma^2 r^4 - \dots, \qquad \frac{1}{1 + \gamma r^2} \approx 1 - \gamma r^2 + \gamma^2 r^4 - \dots$   (14)

are equivalent to each other up to the order of $O(r^3)$.
Since a firefly's attractiveness is proportional to the light intensity seen by adjacent fireflies, we can now define the attractiveness of a firefly by:

$\beta(r) = \beta_0 e^{-\gamma r^2}$   (15)

where $\beta_0$ is the attractiveness at r = 0. As it is often faster to calculate $1/(1 + r^2)$ than an exponential function, the above expression can, if necessary, be conveniently replaced by $\beta = \beta_0 / (1 + \gamma r^2)$. Equation (15) defines a characteristic distance $\Gamma = 1/\sqrt{\gamma}$ over which the attractiveness changes significantly from $\beta_0$ to $\beta_0 e^{-1}$.
In the implementation, the actual form of the attractiveness function $\beta(r)$ can be any monotonically decreasing function, such as the following generalized form:

$\beta(r) = \beta_0 e^{-\gamma r^m}$, with $m \ge 1$   (16)

For a fixed $\gamma$, the characteristic length becomes $\Gamma = \gamma^{-1/m} \to 1$ as $m \to \infty$. Conversely, for a given length scale $\Gamma$ in an optimization problem, the parameter $\gamma$ can be used as a typical initial value, that is, $\gamma = 1/\Gamma^m$.
2) Distance and Movement
The distance between any two fireflies i and j, located at $x_i$ and $x_j$, is the Cartesian distance given by [18] as follows:

$r_{ij} = \|x_i - x_j\| = \sqrt{\sum_{k=1}^{d}(x_{i,k} - x_{j,k})^2}$   (17)

where $x_{i,k}$ is the k-th component of the spatial coordinate $x_i$ of the i-th firefly. The movement of a firefly i attracted to another, more attractive (brighter) firefly j is determined by

$x_i^{t+1} = x_i^{t} + \beta_0 e^{-\gamma r_{ij}^2}(x_j - x_i) + \alpha\left(\mathrm{rand} - \tfrac{1}{2}\right)$   (18)
Here the first term is the current position of the firefly, the second term accounts for the firefly's attraction towards the light intensity seen by adjacent fireflies, and the third term is the random movement of the firefly in case there are no brighter ones. The coefficient $\alpha$ is a randomization parameter determined by the problem of interest, while rand is a random number generator uniformly distributed in [0, 1]. In this implementation of the algorithm we use $\beta_0$ = 0.1, $\alpha \in$ [0, 1] and the absorption coefficient $\gamma$ = 1.0, which guarantees a quick convergence of the algorithm to the optimal solution (see Fig. 1).
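As an illustration only (the authors' implementation is in MATLAB and is not reproduced here), the sketch below performs one firefly update following Eqs. (15), (17) and (18) with the parameter values quoted above; brightness is assumed to be "larger is better", so for cost or emission minimization the negated objective would be used.

import numpy as np

def firefly_step(x, brightness, alpha=0.5, beta0=0.1, gamma=1.0, rng=None):
    """x: (n, d) float array of firefly positions; brightness: (n,) objective
    values (larger means brighter). Returns the updated positions after one sweep."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = x.shape
    x_new = x.copy()
    for i in range(n):
        for j in range(n):
            if brightness[j] > brightness[i]:           # i moves towards the brighter j
                r2 = float(np.sum((x[i] - x[j]) ** 2))   # squared Cartesian distance, Eq. (17)
                beta = beta0 * np.exp(-gamma * r2)       # attractiveness, Eq. (15)
                x_new[i] += beta * (x[j] - x[i]) + alpha * (rng.random(d) - 0.5)  # Eq. (18)
    return x_new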




Fig. 1. Flow chart for EPD using the Firefly algorithm.

IV. SIMULATION RESULTS
The new firefly algorithm (FFA) was coded in the MATLAB environment. The test was performed on the Algerian 59-bus system. This network consists of 59 buses, 10 generators, 36 loads totalling 684.10 MW and 83 branches. Tables I and II show the technical parameters of the 10 generators; these parameters were determined by curve fitting techniques based on real test data, with PL = 19.6490 MW. To demonstrate the effectiveness of the proposed technique, two different cases have been considered, as follows:
Case 1: calculate the total cost and emission for the Algerian electrical network without renewable energy.
Case 2: minimize the total cost function and the emission with renewable energy.
It is noticed that the proposed method (FFA) yields reductions in fuel cost and emission in case 1, without renewable energy (Table III). The convergence profiles of the best solution for the fuel cost, the emission, and the combined fuel cost and emission are shown in Figs. 2, 3, 4 and 5, respectively. From Table IV, which takes the renewable energy into account (case 2), we can see that the optimization has been greatly improved (see Figs. 6, 7, 8 and 9). It is also noticed from these figures that the convergence of the proposed approach (FFA) is promising; the results were obtained after only 50 iterations.
TABLE I. POWER GENERATION LIMITS AND COST COEFFICIENT DATA OF THE GENERATING UNITS OF THE 10-UNIT SYSTEM.
Bus No   Pmin (MW)   Pmax (MW)   a_i      b_i    c_i
1        8           72          0.0085   1.50   0
2        10          70          0.0170   2.50   0
3        30          510         0.0085   1.50   0
4        20          400         0.0085   1.50   0
5        15          150         0.0170   2.50   0
6        10          100         0.0170   2.50   0
7        10          100         0.0030   2.00   0
8        15          140         0.0030   2.00   0
9        18          175         0.0030   2.00   0
10       15          140         0.0030   2.00   0
11       0           30          /        /      /
12       0           10          /        /      /
TABLE II. EMISSION COEFFICIENT DATA OF THE GENERATING UNITS OF THE 10-UNIT SYSTEM.
Bus No   αi        βi        γi        ηi         δi
1        4.091     -5.554    6.490     2.00e-04   2.857
2        2.543     -6.047    5.638     5.00e-04   3.333
3        4.258     -5.094    4.586     1.00e-06   8.000
4        5.326     -3.550    3.380     2.00e-03   2.000
5        4.258     -5.094    4.586     1.00e-06   8.000
6        6.131     -5.555    5.151     1.00e-05   6.667
7        4.091     -5.554    6.490     2.00e-04   2.857
8        2.543     -6.047    5.638     5.00e-04   3.333
9        4.258     -5.094    4.586     1.00e-06   8.000
10       5.326     -3.550    3.380     2.00e-03   2.000
TABLE III. BEST COMPROMISE OUTPUT FOR THE 10-GENERATOR SYSTEM (CASE 1)

                    minimum cost    minimum emission   minimum cost and emission
PG1 (MW)            27.651275       34.905620          63.912720
PG2 (MW)            10.236654       44.582265          28.630876
PG3 (MW)            98.577976       78.694207          150.449182
PG4 (MW)            164.521511      134.069683         137.938443
PG5 (MW)            25.823325       68.474983          19.607090
PG6 (MW)            10.010182       32.812131          17.749961
PG7 (MW)            67.760025       51.259948          78.499539
PG8 (MW)            129.423035      105.642829         112.362983
PG9 (MW)            83.473542       119.680957         23.007701
PG10 (MW)           85.693984       33.202681          71.361280
PGR1 (MW)           0.00            0.00               0.00
PGR2 (MW)           0.00            0.00               0.00
PD (MW)             684.10          684.10             684.10
PL (MW)             19.1715         19.3253            19.4198
Cost ($/h)          1723.830137     1781.153845        1744.324163
Emission (ton/h)    0.454346        0.381361           0.401380
T (s)               0.82813         0.78125            0.81250

Fig. 2. Convergence characteristic for fuel cost minimization for case 1
(minimum cost)



Fig. 3. Convergence characteristic for emission minimization for case 1
(minimum emission)

Fig. 4. Convergence characteristic for fuel cost minimization for case 1 (minimum cost and emission)

Fig. 5. Convergence characteristic for emission minimization for case 1 (minimum cost and emission)

TABLE IV. BEST COMPROMISE OUTPUT FOR THE 10-GENERATOR SYSTEM (CASE 2)

                    minimum cost    minimum emission   minimum cost and emission
PG1 (MW)            35.126728       26.927115          44.607584
PG2 (MW)            40.630506       51.606233          41.527364
PG3 (MW)            112.232408      81.774188          58.197489
PG4 (MW)            109.720341      54.829520          116.908146
PG5 (MW)            23.952401       35.894921          45.133375
PG6 (MW)            24.829224       67.783741          17.569297
PG7 (MW)            53.465187       95.082396          15.254955
PG8 (MW)            122.255830      92.478793          128.300546
PG9 (MW)            46.207456       84.254893          149.534607
PG10 (MW)           94.906182       72.301011          45.194946
PGR1 (MW)           30.000000       30.000000          30.000000
PGR2 (MW)           10.000000       10.000000          10.000000
PD (MW)             684             684                684
PL (MW)             19.3263         10.2572            18.2283
Cost ($/h)          1644.965062     1680.608566        1658.962885
Emission (ton/h)    0.362192        0.290030           0.31325
T (s)               0.98438         1.024581           1.01563



Fig. 6. Convergence characteristic for fuel cost minimization for case 2
(minimum cost)



Fig. 7. Convergence characteristic for emission minimization for case 2
(minimum emission)


Fig. 8. Convergence characteristic for fuel cost minimization for case 2 (minimum cost and emission)



Fig. 9. Convergence characteristic for emission minimization for case 2 (minimum cost and emission)
V. CONCLUSION
Most countries are investing in renewable energy technologies to meet emission targets and to increase the share of power generated from renewable sources. Our work strengthens this idea and provides a method for integrating renewable energies into the classical dispatch system.
REFERENCES
[1] Muneer T, Asif M, Munawwar S. Sustainable production of solar
electricity with particular reference to the Indian economy. Renewable
and Sustainable Energy Reviews 2005;9:444-73
[2] R. Saidur, M.R. Islam, N.A. Rahim, K.H. Solangi, A review on global
wind energy policy, Renewable and Sustainable Energy Reviews
2010;14:1744-1762.
[3] Faisal A. Mohamed, Heikki N. Koivo., (2007), Online Management of
Micro Grid with Battery Storage Using Multiobjective Optimization,
POWERENG 2007, April 12-14,
[4] Nadine May. Eco-balance of a Solar Electricity Transmission from
North Africa to Europe. Diploma Thesis, Faculty for Physics and
Geological Sciences, Technical University of Braunschweig; 2005
[5] O. Alsac, J. Bright, M. Prais, and B. Stott, Further developments in LP-
based optimal power flow, IEEE Transactions on Power Systems, Vol.
5, 1990, pp. 697-711.
[6] J. Nanda, D. P. Kothari, and S. C. Srivatava, New optimal power-
dispatch algorithm using Fletchers quadratic programming method, in
Proceedings of the IEE, Vol. 136, 1989, pp. 153-161.
[7] R. D. Zimmerman, C. E. Murillo-Sanchez, and R. J. Thomas, "Matpower's extensible optimal power flow architecture," Power and Energy Society General Meeting, 2009 IEEE, July 26-30 2009, pp. 1-7.
[8] H. W. Dommel, Optimal power dispatch, IEEE Transactions on Power
Apparatus and Systems, Vol. PAS93, 1974, pp. 820-830.
[9] Basu M. Dynamic economic emission dispatch using nondominated
sorting genetic algorithm II. Electr Power Energy Syst 2008;30(2):140
9.
[10] Jiang X, Zhou J, Wang H, Zhang Y. Dynamic environmental economic
dispatch using multi-objective differential evolution algorithm with
expanded double selection and adaptive random restart. Int J Electr
Power Energy Syst 2013;49:399-407.
[11] Zhang R, Zhou J, Mo L, Ouyang S, Liao X. Economic environmental
dispatch using an enhanced multi-objective cultural algorithm. Electr
Power Syst Res 2013;99:18-29.
[12] Basu M. Economic environmental dispatch using multi-objective
differential evolution. Appl Soft Comput 2011;11(2):2845-2853.
[13] Provas Kumar Roy, Sudipta Bhui, Multi-objective quasi-oppositional
teaching learning based optimization for economic emission load
dispatch problem, Electrical Power and Energy Systems 53 (2013)
937-948.
[14] Fraga, H. (2008). Firefly luminescence: A historical perspective and recent developments, Journal of Photochemical & Photobiological Sciences, vol. 7, pp. 146-158.
[15] Yang, X. S. (2009). Firefly algorithms for multimodal optimization, Stochastic Algorithms: Foundations and Applications, Lecture Notes in Computer Science, vol. 5792, pp. 169-178.
[16] Yang, X. S., Firefly algorithm, stochastic test functions and design optimisation, International Journal of Bio-Inspired Computation, vol. 2, no. 2, pp. 78-84, 2010.
[17] X. S. Yang, Firefly Algorithm, in Engineering Optimization. Hoboken, New Jersey: Wiley, 2010, pp. 221-230.
[18] Xin-She Yang, Firefly Algorithm, Engineering Optimization:An
Introduction with Metaheuristic Applications, pp 221-230, Wiley,2010.

Hydrogen Production From Steam Gasification of
Palm Kernel Shell Using Sequential Impregnation
Bimetallic Catalysts

Anita Ramli
Department of Fundamental and Applied Sciences,
Universiti Teknologi PETRONAS,
31750 Tronoh, Perak, Malaysia
anita_ramli@petronas.com.my
Siti Eda Eliana Misi, Mas Fatiha Mohamad, Suzana
Yusup
Department of Chemical Engineering,
Universiti Teknologi PETRONAS
31750 Tronoh, Perak, Malaysia


Abstract: Zeolite supported bimetallic Fe and Ni catalysts have been prepared using a sequential impregnation method and calcined at temperatures between 500-700 °C. The catalytic activity of these catalysts in the steam gasification of palm kernel shell was tested in a fixed-bed quartz micro-reactor at 700 °C. The active metals present in the FeNi/BEA and NiFe/BEA catalysts correspond to the Fe2O3 and NiO phases. Different calcination temperatures and a different sequence of metal addition have a significant effect on the catalytic activity, with FeNi/BEA (700) producing more hydrogen than the other catalysts.
Keywords: bimetallic catalyst; sequential impregnation method; hydrogen production; palm kernel shell
I. INTRODUCTION
The development of biomass gasification systems is an important strategy for future green technology to protect the environment from CO2 emissions. The conversion of biomass to hydrogen is a promising route, since hydrogen can be used as an alternative fuel for transportation and power generation. On the other hand, if the process produces syngas, it may be utilized to produce methanol and Fischer-Tropsch oil [1-2]. Generally, the gasification of biomass at high temperatures yields a product gas composed of CO, CO2, H2O, H2, CH4, higher hydrocarbons, tars, char, and ash [2-3]. The formation of tar and char is undesirable because these components could limit the hydrogen production and reduce the efficiency of the gasification process [4]. The nature of the tar produced is principally affected by the type of biomass, the gasification process, the gasifying agent and the operating conditions [3].
The application of metal-based catalysts such as nickel (Ni), cobalt (Co), iron (Fe), ruthenium (Ru) and platinum (Pt) in biomass gasification is an effective method of reducing the tar content. Among these catalysts, a supported rhodium (Rh) catalyst showed the best performance in steam gasification, whereby a catalyst having a Rh loading of 1.2 x 10^-4 Rh atoms per gram of catalyst can convert 98-99% of the carbon in biomass to products at 873 K [5]. However, Ni and Fe based catalysts are the preferred choice due to their wide availability and low cost [4, 6-7]. Moreover, Ni and Fe based catalysts allow both methane reforming and water gas shift activity during the gasification process, thus providing adjustment of the H2/CO ratio in the product gas [8-9].
Nevertheless, the activity of Ni based catalysts is influenced by the Ni loading and Ni dispersion [4]. This is due to the migration of metallic particles to form larger aggregates, which reduces the dispersion of the catalyst and consequently its activity [10]. Some studies have demonstrated that nickel sintering can be limited when nickel oxide has a strong interaction with a promoter [9] or support [11] and has a well defined structure such as a perovskite [10]. Dolomite and olivine, which contain Fe, help stabilize Ni in the support and have an important effect on precursor reducibility as well as on the catalytic properties [6]. Chaiprasert and Vitidsant [9] also verified that the addition of noble metals as promoters may help to improve the metallic dispersion, decrease sintering and enhance the thermal stability.
In this study, zeolite (BEA) supported Fe and Ni catalysts with a different sequence of addition of the second metal are proposed for the steam gasification of palm kernel shell (PKS) for hydrogen production. The effects of the second metal and of the calcination temperature of the catalysts on the composition of the gaseous product were investigated.
II. METHODOLOGY
A. Biomass Preparation
The biomass considered in this study is PKS, collected from the palm plantation industry at Felda Nasaruddin, Perak. The PKS was dried at 110 °C before being crushed and sieved to 500 μm.
B. Catalyst Preparation
The bimetallic catalysts were prepared via a sequential
impregnation method. First, BEA was calcined at 500 °C for 16 h, and 5% Ni/BEA and 5% Fe/BEA catalysts were prepared as described previously [12]. The second metal was then introduced in a second impregnation step using another 5 wt% of Fe or Ni, yielding 5%Fe5%Ni/BEA and 5%Ni5%Fe/BEA, which are designated as FeNi/BEA (T) and NiFe/BEA (T). In general, the catalysts are denoted YX/BEA (T), where metal X was impregnated first, followed by metal Y, and T is the calcination temperature in °C.
C. Catalyst Characterization
N2 adsorption-desorption isotherms (Quantachrome ASAP 2000) were used to determine the surface properties of the bimetallic catalysts. Powder X-ray Diffraction (XRD) patterns were obtained with a Bruker D8 Advance diffractometer using Cu-Kα radiation to identify the crystalline phases of
FeNi/BEA and NiFe/BEA catalysts. Temperature
Programmed Reduction (TPR) experiments were performed
on Thermo Finnigan TPDRO 1100 to determine the
reducibility of the metal present on the catalyst surface and to
investigate the interaction between the metals and support.
D. Catalyst Testing
Experiments were performed in a fixed bed quartz micro
reactor (15 mm i.d.) in an electric furnace. The mixture of
PKS and catalyst bed was held in place by quartz wool in the
tubular reactor. The steam gasification reaction was performed at 700 °C with a catalyst/PKS ratio of 1:3, a steam/PKS ratio of 4:1 and a steam/argon ratio of 1:6 (vol.). Two thermocouples were used to measure the temperature; one was placed at the centre of the bed in the tubular reactor and the other on the outer surface of the reactor. Helium and nitrogen were used as diluent gases, and their flow was regulated by a mass flow meter in the range of 20-30 ml min^-1. Water was introduced by a liquid pump, quickly evaporated at elevated temperature and then carried to the tubular reactor by the inert gas flow.
The outlet gas was passed through an iced-water condenser to condense most of the water before it entered the gas chromatograph (GC). The gaseous products were analysed using an online gas chromatograph (VARIAN CP-3800) equipped with a thermal conductivity detector (TCD) and fitted with a TDX-01 column, with argon as the carrier gas. The analysis focuses only on the main products, namely H2, CH4, CO2 and CO.
III. RESULTS AND DISCUSSION
A. Catalyst Characterization
The physicochemical properties of the FeNi/BEA catalysts have been reported in our previous work [13]. Therefore, this section only discusses them in general, in order to better understand the comparison between FeNi/BEA and NiFe/BEA as well as their effect in the biomass steam gasification. Based on N2 adsorption-desorption, the textural properties (BET surface area, pore volume and pore diameter) of the prepared catalysts are summarized in Table 1, where all the bimetallic catalysts have a lower surface area and a smaller pore volume than the bare BEA. An increase in calcination temperature does not have a significant effect on the surface area of the bimetallic catalysts. However, the average pore diameter of the bimetallic catalysts is larger than that of bare BEA.

Table 1. Surface properties of the catalysts

Catalysts         BET Surface Area (m2/g)   Pore Volume (cm3/g)   Average Pore Diameter (nm)
BEA (500)         529                        0.15                  4.30
FeNi/BEA (500)    445                        0.12                  5.71
NiFe/BEA (500)    447                        0.13                  5.88
FeNi/BEA (600)    441                        0.12                  5.83
NiFe/BEA (600)    454                        0.13                  5.61
FeNi/BEA (700)    449                        0.09                  5.34
NiFe/BEA (700)    434                        0.09                  6.23

Fig. 1 shows the diffraction patterns displayed by the FeNi/BEA and NiFe/BEA catalysts. The presence of both Ni and Fe was detected in the prepared catalysts, with diffraction peaks corresponding to the NiO phase and the α-Fe2O3 phase, respectively. The peaks for the NiO phase are represented by the appearance of the Ni(111) and Ni(200) planes at 2θ = 37.3° and 43.3°, while the diffraction peaks corresponding to the α-Fe2O3 phase are represented by the Fe(104) and Fe(110) planes positioned at 2θ = 33.1° and 35.6°. These planes are in agreement with the data reported in the JCPDS cards and with those reported in previous studies [14-15].



Fig. 1. The diffraction patterns of the bimetallic catalysts

Furthermore, the presence of the NiO and α-Fe2O3 phases in the bimetallic catalysts affects the diffraction peaks of BEA, which are shifted to slightly higher 2θ values with reduced intensity. This may be due to the formation of interacting species between Fe and Ni and the Al2O3 or SiO2 in BEA. However, the nickel aluminate (NiAl2O4) and iron aluminate (FeAl2O4) phases were not detected, which could be due to a lack of crystallinity, as formerly observed by Salagre et al. [16].
The variation in the TPR profiles of the FeNi/BEA and NiFe/BEA catalysts (Fig. 2) shows the combined reduction of the nickel and iron phases. The reduction of free NiO was observed at 400-500 °C [17]. On the other hand, the reductions of the nickel and iron phases in the 500-800 °C region overlapped into a broad peak, which suggests the stabilization of Fe3+ and Ni2+ ions in the lattice. The reduction process transforms Fe2O3, FeAl2O4 and NiAl2O4 [17-18].







Fig. 2. TPR profile of (a) Ni/BEA and Fe/BEA as a reference (b) FeNi/BEA
and (c) NiFe/BEA at different calcination temperatures

The presence of free Fe2O3 is noticeable as a small peak at 380 °C in the TPR profile of FeNi/BEA (500). However, owing to the strong interaction of Fe and Ni at high temperature, the reduction peak of free Fe2O3 is markedly attenuated and eventually disappears. This type of oxide is easily reduced, and its existence can cause several difficulties during the reaction, such as sintering and carbon deposition on the catalyst surface, which lead to catalyst deactivation [19]. Apart from that, the reduction peak associated with the reduction of free NiO to Ni0 shifted towards lower temperature as the calcination temperature increased, while the reduction peak representing the reduction of several phases shifted to higher temperature.
In the case of the NiFe/BEA catalysts (Fig. 2c), the reduction peak representing the reduction of several phases is divided into two peaks, particularly after the catalyst was calcined at 700 °C. For NiFe/BEA (700), the peak representing the reduction of Fe3+ is shifted to lower temperature, while the peak corresponding to species having a strong interaction with the support (NiAl2O4 and FeAl2O4) is shifted to higher temperature. This indicates that the addition of Ni as the second metal in this catalyst results in a higher reducibility of the Fe2O3 phase, which becomes reducible at lower temperature and active for the reaction. It is also observed that the reduction temperature of free NiO does not change even though the calcination temperature of the catalyst was increased; however, the intensity of the reduction peak is different.
B. Catalytic Steam Gasification
Fig. 3 shows the concentrations of the gases evolved from the steam gasification of PKS in the presence of the BEA supported bimetallic catalysts. The results for the monometallic Ni/BEA and Fe/BEA catalysts from previous work [12] are also reported for comparison. The addition of Fe to Ni/BEA (500) to form FeNi/BEA (500) resulted in a slight decrease in the H2 and CO produced, which in turn increased the concentration of CO2, while there was no significant change in the CH4 evolved (Fig. 3a). This indicates that FeNi/BEA (500) is slightly less reactive in steam gasification, thus promoting combustion of PKS to CO2. This could be due to the presence of both fixed nickel oxide (NiAl2O4) and fixed iron oxide (FeAl2O4), as seen in the TPR profiles (Fig. 2b), which suppresses the reduction of Fe2O3 and NiO with a different state of interaction with BEA. Furthermore, the presence of free iron oxide, reducible at temperatures between 380-400 °C, in the catalyst could also be a reason for the low catalytic activity of the FeNi/BEA (500) catalyst, since it promotes the formation of CO2 as well as the loss of active phase during the reaction [8, 19].






Fig. 3. Performance of the catalysts calcined at (a) 500 °C, (b) 600 °C and (c) 700 °C in steam gasification of PKS

The addition of Ni to Fe/BEA (500) to form NiFe/BEA (500), on the other hand, enhances steam methane reforming, leading to an improvement in the concentration of H2 produced and a reduction of the CO2 concentration in the product gas, although there is a slight increase in the CO concentration. This could be due to the reaction between CH4 and H2O in a 1:1 molar ratio on the catalyst active sites to form CO and H2 in a 1:3 molar ratio during steam methane reforming [2-3]. The high performance of this catalyst may be attributed to the bigger reduction peak at 500-700 °C in the H2-TPR profile as compared to Fe/BEA (500), a peak attributed to the reduction of Fe2O3. A bigger reduction peak means more H2 consumption, which means more Fe2O3 is available within the catalyst system; the catalyst thus becomes more active for steam gasification due to the availability of more active sites for the reaction to take place [20].
The addition of Fe to Ni/BEA (600) to form FeNi/BEA (600) follows the same trend as the catalyst calcined at 500 °C, in that it is no longer effective in promoting the steam gasification of PKS to produce H2. Instead, FeNi/BEA (600) promotes the oxidation of CO or the combustion of PKS, which is shown by an increase in the CO2 concentration. FeNi/BEA (600) also promotes the formation of CH4, presumably via the reduction of CO, as the CH4 concentration slightly increases while the CO concentration slightly decreases. This may be due to an insufficient steam content during the reaction and to the presence of metallic Fe from over-reduction of Fe2O3, which results in the reduction of CO with H2 to produce CH4 [21]. Indeed, Chaiprasert and Vitidsant [22] studied the effect of steam during the gasification of biomass and found that increasing the steam feed resulted in more H2 evolved, lower CO2 formation and a slight decrease in CH4 because of the water-gas shift reaction and methane reforming.
Furthermore, the addition of Ni to Fe/BEA (600) to form NiFe/BEA (600) resulted in a reduction of the H2, CO and CH4 evolved. This trend is similar to that of FeNi/BEA (600), in that the CO2 concentration increases, indicating that NiFe/BEA (600) is less reactive in steam gasification and thus promotes the oxidation of CO from the PKS to produce more CO2.

The addition of Fe to Ni/BEA (700) to form FeNi/BEA (700), in contrast, exhibited a higher concentration of H2. A slight decrease in the CO concentration indicates that FeNi/BEA (700) promotes the water gas shift reaction, even though a slight decrease in the CO2 concentration was observed. However, there is no significant difference in the concentration of CH4 evolved. As stated in the H2-TPR analysis, the addition of Fe as the second metal with calcination at 700 °C significantly improves the reducibility of the NiO phase, which is reduced at lower temperature. As a result, more active metal reacts with the PKS to produce H2 gas and facilitate the water gas shift reaction. This indicates that FeNi/BEA (700) is active in the steam gasification reaction and hence able to prevent the PKS from undergoing oxidation. The results are consistent with the work reported by Chaiprasert and Vitidsant [9], whereby the presence of Fe as the second metal in a Ni based catalyst enhances the water-gas shift reaction and amplifies the H2 production.
The addition of Ni to Fe/BEA (700) to form NiFe/BEA (700) also results in an increase in the concentration of H2 and a reduction of CO2 in the product gas. A slight increase in the CO concentration indicates that NiFe/BEA (700) follows the same trend as the NiFe/BEA (500) catalyst, in that it facilitates the reduction of Fe3+ and enhances steam methane reforming. However, the high concentration of CH4 observed may be because NiFe/BEA (700) also promotes the formation of CH4 through methanation. This is possibly attributed to the majority of the Ni metal being at the surface of the catalyst, Ni being the first component to be reduced, at 450 °C, as reported in the H2-TPR. Hence, this favours methane steam reforming and the methanation reaction on the Ni surface [8], as opposed to the water gas shift reaction on Fe surfaces [9].
The variations in the trends indicate that the concentration of H2 gas for the FeNi/BEA catalysts decreases in the following order of calcination temperatures: FeNi/BEA (700) > FeNi/BEA (500) > FeNi/BEA (600). This is because doping Fe into Ni/BEA at different calcination temperatures resulted in a slightly higher surface area for FeNi/BEA (700), followed by FeNi/BEA (500) and FeNi/BEA (600), as observed by N2 adsorption-desorption. According to Chaiprasert and Vitidsant [9], a catalyst with a high surface area can provide a large contact area for the reactants and consequently enhance the reaction activity. However, FeNi/BEA (700) has a lower reducibility compared with FeNi/BEA (500) and FeNi/BEA (600).
The order of H2 formation for the NiFe/BEA catalysts is NiFe/BEA (500) > NiFe/BEA (700) > NiFe/BEA (600), whereby the catalytic activity decreases with increasing calcination temperature. This behaviour is expected, since NiFe/BEA (500) shows a higher reducibility than NiFe/BEA (600) and NiFe/BEA (700). This can be explained by the integration of Ni and Fe in the BEA structure, as observed in the TPR analysis. The strong interaction of Ni and Fe at higher calcination temperature (700 °C) causes the stabilization of Fe3+ and Ni2+ ions in the lattice, resulting in less reduction of the metals. It is notable that the BET surface area of the NiFe/BEA catalysts follows the order NiFe/BEA (600) > NiFe/BEA (500) > NiFe/BEA (700). Even though calcination at 600 °C leads to a larger surface area, NiFe/BEA (500) still shows a higher catalytic activity due to its reducibility at lower temperature.
IV. CONCLUSION
It can be inferred that a different sequence of Fe and Ni addition as the second metal in bimetallic catalysts results in differences in the catalyst properties and in the interaction between the active metals and the support, as well as in the catalytic activity. Furthermore, both Fe and Ni are active for the steam gasification of PKS when they act as the second metal, provided the precursors are calcined at a suitable calcination temperature. The highest concentration of H2 evolved in the steam gasification of PKS was achieved in the presence of FeNi/BEA (700) and NiFe/BEA (500). Incorporation of Fe as the second metal in the bimetallic catalyst, with calcination at 700 °C, improves the reducibility of the NiO phase and enhances the water-gas shift reaction, while the addition of Ni as the second metal facilitates the reduction of the Fe2O3 phase and promotes steam methane reforming. Hence, the second metal plays an important role and may act as a promoter to amplify the steam gasification reaction.
ACKNOWLEDGMENT
The authors are grateful for the financial support and facilities
provided by UTP for this research and for granting a
postgraduate scholarship to Siti Eda Eliana Misi.
REFERENCES
[1] Kimura, T., Miyazawa, T., Nishikawa, J., Kado, S., Okumura, K., Miyao, T., Naito, S., Kunimori, K., Tomishige, K. Development of Ni catalyst for tar removal by steam gasification of biomass. Appl. Catal. B. Vol. 68, pp. 160-170, 2006.
[2] Mohammed, M. A. A., Salmiaton, A., Wan Azlina, W. A. K. G.,
Mohammad Amran, M. S., Fakhrul-Razi, A., Taufiq-Yap, Y. H.
Hydrogen rich gas from oil palm biomass as a potential source of
renewable energy in Malaysia. Renew. and Sustain. Ener. Rev. Vol. 15,
pp. 1258-1270, 2011.
[3] McKendry, P. Energy production from biomass (Part 3): Gasification technologies. Bioresource Technol. Vol. 83, pp. 55-63, 2002.
[4] Sutton, D., Kelleher, B., Ross, J . R. H. Review of literature on catalyst
for biomass gasification. Fuel Process. Technol. Vol. 73, pp. 155-173,
2001.
[5] Asadullah, M., Miyazawa, T., Ito, S, Kunimori, K., Tomishige, K.
Demonstration of real biomass gasification drastically promoted by
effective catalyst. Appl. Catal. A. Vol. 246, pp. 103-116, 2003.
[6] Zhang, R., Wang, Y., Brown, R. C. Steam reforming of tar compounds over Ni/olivine catalysts doped with CeO2. Energy Conv. & Manage. Vol. 8, pp. 68-77, 2007.
[7] Uddin, M. A., Tsuda, H., Sasaoka, E. Catalytic decomposition of
biomass tars with iron oxide catalyst. Fuel. Vol. 87, pp. 451-459, 2008.
[8] Swierczynksi, D., Libs, S., Courson, C., Kiennemann, A. Steam
reforming of tar from biomass gasification process over Ni/olivine
catalyst using toluene as a model compound. Appl. Catal. B. vol. 74, pp.
211-222, 2007.
[9] Chaiprasert, P., Vitidsant, T. Effect of promoters on biomass gasification
using nickel/dolomite catalyst. Korean J . Chem. Eng. Vol. 26, pp. 1545-
1549, 2009.
[10] Rapagna, S., Provendier, H., Petit, C., Kiennemann, A., Foscolo, P. U.
Development of catalyst suitable for hydrogen or syn-gas production
frombiomass gasification. Biomass Bioenergy. Vol. 22, pp. 377-388,
2002.
[11] Swaan, H. M., Kroll, V. C. H., Martin, G. A., Mirodatos, C.
Deactivation of supported nickel catalysts during the reforming of
methane by carbon dioxide. Catal. Today. Vol. 21, pp. 571-578, 1994.
[12] Ramli, A., Misi, S. E. E., Mohamad, M. F., Yusup, S. H2 Production from Steam Gasification of Palm Kernel Shell in the Presence of 5% Ni/BEA and 5% Fe/BEA Catalysts, Advanced Science Letters, Vol. 19, pp. 950-954, 2013.
[13] Misi, S. E. E., Ramli, A. and Rahman, F. H. Characterization of
structure feature of bimetallic Fe-Ni catalysts. J . Appl. Sci. Vol. 11(8),
pp. 1297-1302, 2011.
[14] Kang, S. H., Bae, J. K. W. Fischer-Tropsch synthesis using zeolite-supported iron catalysts for the production of light hydrocarbons. Catal. Lett. Vol. 125, pp. 264-270, 2008.
[15] Rynkowski, J. M., Paryjczak, T., Lenik, M. On the nature of oxidic nickel phases in NiO/γ-Al2O3 catalysts. Appl. Catal. A. Vol. 106, pp. 73-82, 1993.
[16] Salagre, P., Fierro, J. L. G., Medina, F., Sueiras, J. E. Characterization of nickel species on γ-alumina supported nickel samples. J. Mol. Catal. A. Vol. 106, pp. 125-134, 1996.
[17] Cheng, Z. X., Zhao, X. G., Li, J ., Zhu, Q. M. Role of support in CO2
reforming of CH4 over Ni/Al2O4 catalyst. Appl. Catal. A. Vol. 205, pp.
31-36, 2001.
[18] Wan, H. J., Wu, B. S., Zhang, C. H., Xiang, H. W., Li, Y. W., Xu, B. F., Yi, F. Study of Fe-Al2O3 interaction over precipitated iron catalyst for Fischer-Tropsch synthesis. Catal. Commun. Vol. 8, pp. 1538-1545, 2007.
[19] Virginie, M., Libs, S., Courson, A., Kiennemann, A. (2008).
Iron/olivine catalysts for tar reforming: comparison with nickel/olivine.
[Online]. Available: http://gdricatal.univlille1.fr/GDRI%20FR/21-28.pdf
[20] Wang, L., Li, B., Koike, M., Koso, S., Nakagawa, Y., Xu, Y. Catalytic
performance and characterization of Fe-Ni catalysts for the steam
reforming of tar from biomass pyrolysis to synthesis gas. Appl. Catal. A.
Vol. 392, pp. 248-255, 2011.
[21] Aznar, M. P., Caballero, M. A., Corella, J., Molina, G., Toledo, J. M. Hydrogen production by biomass gasification with steam-O2 mixtures followed by a catalytic steam reformer and a CO-shift system. Energy and Fuels. Vol. 20, pp. 1305-1309, 2006.
[22] Chaiprasert, P., Vitidsant, T. Promotion of coconut shell gasification by steam reforming on nickel-dolomite. Am. J. Appl. Sci. Vol. 6(2), pp. 332-336, 2009.

The impact of the economic crisis on the
environmental responsibility of the companies

Miras, M.M., Escobar, B., Carrasco, A.
Accounting and Financial Economics Department
University of Seville
Seville (Spain)
mmiras@us.es


Abstract: The severe economic crisis is significantly affecting the environment in which companies have to conduct their business. Consequently, academics and managers are worried about what is going to happen to Social Responsibility, and particularly to Environmental Responsibility, given the decrease in the financial performance of companies. The aim of this paper is to study the effect of the crisis on the environmental behavior of Spanish companies through an explanatory study, focusing on the comparison between the years 2006 and 2010. The results show that Spanish companies continue to behave in an environmentally friendly way, since their Environmental Scores keep growing despite the decline in Financial Performance. Moreover, a change is identified in the factors that affect environmental behavior, which now depends less on corporate financial performance.
Keywords: Environmental Responsibility; Crisis; Financial Performance.
INTRODUCTION
The current financial and economic crisis has been singular in its intensity, its complexity and the difficulties that developed countries are finding in overcoming it. Probably, if organizations had taken the CSR approach seriously, we would not be involved in the current economic crisis [1], or at least not to such a degree; but the crisis is a reality, and organizations are suffering great consequences, ranging from the closing down of several firms and heavy losses to, at best, a large reduction of profits.
Since the current economic crisis arose, business priorities have changed and liquidity management has become one of the most important aspects. Moreover, all actions are being conditioned by the financial difficulties [2]. Given the uncertain business environment, companies have been forced to redefine their business and implement austerity plans as the only alternative to survive; in particular, they are encouraged to reduce expenses [3], which could imply renouncing their social and environmental responsibilities because they generate costs [4], or delaying or cancelling many CSR initiatives [5].
However, it is no less true that social needs have increased during these rough times, so CSR actions are more necessary than ever [3], and it is more necessary than ever to emphasize the relevance of the CSR actions carried out by organizations for societal well-being. Hence, society asks companies to be more involved in supporting social and environmental causes [6].
In this context, both academics and practitioners are asking how Corporate Social Responsibility (hereinafter CSR) and all its dimensions according to the Triple Bottom Line approach (Social, Environmental and Economic) are going to be influenced by these extraordinary circumstances. In addition, these circumstances may well allow a better and clearer understanding of the real motivations or interests firms have in conducting Social or Environmental policies, and they provide a perfect opportunity to test the real commitment of companies to the CSR approach [7]. If companies only implement CSR actions looking for legitimacy or direct benefits (a short-term vision), CSR should be drastically affected by the crisis. However, if organizations are really engaged with these issues and have truly integrated CSR in their business strategy, they could take advantage of the crisis as an opportunity instead of considering it a threat [1, 8]. Therefore, the present crisis may not directly mean the disappearance of CSR actions, although their amount could be reduced [9].
Despite the importance of this issue and the large number of explanations found in the literature, there is little empirical evidence about what is happening in the different countries. While [3] analyzed whether, for the companies listed in the Fortune 500, there was a change in the number and extent of CSR projects in 2008 (in the depth of the crisis), others investigated the influence on the CSR performance of some companies included in the GRI report list from 2007 until 2010 [10]. Others evaluated the CSR behavior of companies from 2007 until 2009 [11]. Likewise, [5] examined how multinational firms in Kenya were being affected by the economic downturn, whereas [12] studied the impact on US companies. Indeed, the general conclusion we can report from the articles analyzed is that firms improved their CSR scores in spite of the consequences of the economic downturn. Nevertheless, there seems to be a significant drop in the level of CSR during the last period studied (2009-2010).
Nevertheless, it was found that the relationship between CSR and FP in France had changed due to the uncertain environment of the crisis, going from a strong relationship between the variables to an insignificant connection, with the second semester of 2007 as the break-point [13].
The present economic crisis has been deeper and longer in Spain than in other countries, so the results and the evidence from the articles discussed previously cannot be extrapolated. There are several reasons why we are interested in studying the consequences of the financial crisis on social and environmental behavior in Spain, and why it could be different from other countries [14].
One of the reasons why the Spanish economy has been severely affected by the present situation is the lack of balance generated during the boom phase. This has made the Spanish economy particularly vulnerable to changes in macroeconomic and financial conditions, so the consequences of the global international crisis are worse than in other European countries.
Due to the pronounced expansion experienced by the Spanish economy in the previous period, with annual GDP growth above 4%, the sharp decline in employment (with an unemployment rate of around 25%), the difficulties facing the recovery and the higher risks of a further fall are all the more marked. In this sense, the adjustment phase is being conditioned by certain idiosyncratic features of the Spanish economy (the shocks have affected Spain more than neighboring countries) and by certain institutional characteristics that affect the adjustment mechanisms.
Moreover, according to [14], the crisis has had a direct impact on business activity in Spain. That report reveals that at least 65% of Spanish firms maintained or increased their investment in CSR in 2010, although it also shows that one in three companies stopped performing CSR as a direct consequence of the crisis. It is also noted in the report that the behavior of big companies and small ones differs considerably, the latter being those that have absorbed most of the reductions in CSR.
Additionally, the report shows that in Spain environmental actions have been identified as one of the highest priorities when deciding whether a company is committed to the CSR approach. Despite the difficult situation the Spanish economy is going through, it seems that the commitment of firms to socially and environmentally responsible behavior has not lost its strength.
Therefore, taking into consideration that Spain is one of the countries most affected by the current financial situation, and that the effect of the crisis on the environmental dimension of CSR in Spain could be different from other countries, we are going to test whether Spanish companies continue to behave in an environmentally friendly way, through the comparison between the years 2006 (before the crisis started) and 2010 (when the crisis was wreaking havoc).
MATERIAL AND METHOD
The evolution that Corporate Social Responsibility has undergone in importance and significance over the last decades is undeniable [15, 16]. It has changed from an irrelevant or fashionable idea into one of the most widely accepted concepts in the business world [17, 18].
Although the idea that firms have some responsibilities to society beyond making profits has been around for centuries [15], it was not until the end of the last century that CSR became a reality in business and one of the determinant factors taken into account in decision-making [19, 20]. This is why most of the international organizations have established guidelines (e.g. the Global Reporting Initiative, GRI) and recommendations on how to be a socially and environmentally responsible company, which explains the large increase in the number of voluntary social disclosure reports by companies as well as the creation of Sustainability Stock Indexes (Dow Jones Sustainability Index, KLD Domini, FTSE4Good, among others) [21].
One of the main debates about CSR is the one concerning its relationship with FP. In that sense, there are several theories that try to explain this complex relationship [22]. In this regard, this paper focuses on the effect of Financial Performance (FP) on Environmental Responsibility, since the current economic crisis is seriously affecting the financial outcomes of companies. Hence, it is necessary to describe the different approaches that try to explain this relation.
On the one hand, the Slack Resources Hypothesis argues that companies will be more or less environmentally responsible depending on their availability of financial resources [23]. Achieving a better performance allows making larger investments in environmental projects. Consequently, CSR will only be viable in companies with solid and sustainable financial results; some authors, such as [24], emphasize that CSR is a luxury that can only be borne by buoyant companies.
Moreover, the Managerial Opportunism Hypothesis, reported by [25, 26], argues that the purposes of managers may be different from those of the shareholders and other stakeholders. This is because managers' objectives are oriented towards the short term and immediate profitability, while the owners' objectives are more linked to the long term.
In accordance with these hypotheses, the high cost of environmental initiatives would be responsible for a drastic reduction of this kind of action, and even more so under the second theoretical approach. This is because managers worried by the financial situation prefer to cut all costs whose short-term benefits they are unsure about, since their main concern is their survival in the company. So the present financial situation would trigger a large reduction of environmental activities or policies, and our first hypothesis would be:
H1: Companies are less environmentally responsible due to the economic crisis.
Proceedings of the 2013 International Conference on Environment, Energy, Ecosystems and Development
183
This conflict of interest between managers (agents) and owners (principals) has also been developed by Agency Theory [27]. Notwithstanding, in order to avoid managerial opportunism [28], some mechanisms (financial rewards, shares) were specified so that the interests of the shareholders are taken into account [29, 30]. In addition, during a crisis period, directors and shareholders should come to an agreement about the strategic decisions of the companies, so managers pressured by shareholders could choose to continue with CSR policies because they understand that this could be a good way to manage the economic crisis, and they could be more concerned about long-term repercussions. So our second hypothesis would be:
H2: Despite the crisis, companies continue to behave in an environmentally responsible way.
In addition, it must be kept in mind that the impact of the crisis on CSR actions can have time lags [4], so it is important to analyze the influence both in the current year and in the next one.
H3: The environmentally responsible behaviors of the companies are affected by the performance obtained in the previous year.
In order to achieve our aims, we first carried out a descriptive analysis that helps us to understand the evolution of the Environmental Scores and the financial measures from 2006 to 2010. After that, to test the first and second hypotheses, we carried out two linear regressions to evaluate whether the environmentally friendly behaviors have been affected by the FP obtained, in 2006 (before the crisis started) and in 2010 (when the crisis was wreaking havoc), so as to compare whether the influence of performance on behavior has changed due to the crisis. Moreover, to test the third hypothesis, we ran two extra regressions to see whether the FP of the previous year influenced the environmental behavior of the firms in 2006 and 2010.
The sample was initially composed of all the Spanish firms included in the IBEX-35, although seven of them had to be excluded due to the lack of data availability, so the final sample comprised 28 companies whose data were provided by the DataStream Professional database and the ASSET4 database.
The variable used in the study to measure Environmentally Responsible Behavior is the Environmental Score (ranging from 0 to 100) provided by the ASSET4 database [31]. This Score measures a company's impact on living and non-living natural systems, including the air, land and water, as well as complete ecosystems.
Due to the lack of agreement in the literature about the best indicator to measure FP, we use ROA and ROE (as traditional indicators) and the Economic Score (a value between 0 and 100) provided by the ASSET4 database (Appendix 1), because the latter includes some intangible measures of client loyalty, performance and shareholder loyalty in addition to the traditional financial measures.
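As an illustration of the estimation of Model 1 (a sketch only, not the authors' actual estimation code), a simple OLS regression of the Environmental Score on one FP measure can be fitted year by year as follows; the variable names and data are placeholders for the 28-firm sample described above.

import numpy as np

def ols_simple(x, y):
    """Fit y = b0 + b1*x by least squares; return (b0, b1, R^2)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1.0 - resid.var() / y.var()
    return beta[0], beta[1], r2

# Example (placeholder arrays of length 28):
# b0, b1, r2 = ols_simple(eco_score_2006, env_score_2006)   # Model 1, year 2006
# b0, b1, r2 = ols_simple(eco_score_2009, env_score_2010)   # Model 2, one-year lag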
RESULTS AND DISCUSSION
As mentioned previously, the aim of the paper is to analyze empirically the influence of the crisis on the environmental responsibilities of the most representative firms in Spain. The results are divided into two parts: firstly, the evolution of the four variables studied has been examined (using mean values) in order to get a general idea of the increase or decrease of the scores when the crisis began; secondly, we carry out linear regressions to test the influence of the FP on the environmental responsibilities (Model 1), comparing the situation in 2006 with 2010 and taking into account the time lags (Model 2).
The evolution of all the variables considered from 2006 to 2010 is shown in Figures 1 and 2. The Environmental Score grows year by year in spite of the economic downturn, which can easily be identified in the behavior of the three different measures of the FP. This is relevant because the CSR behaviors of the companies have not been interrupted or delayed by the financial crisis, as they were in other countries, where significant reductions were identified in the last two studied periods [3, 10, 11].
This disagrees with the evidence found on the Spanish Savings Banks [7], showing that the conclusions of a particular industry cannot be generalized. Notwithstanding, this evidence agrees with [14], because this report argues that large companies in Spain continue to have a commitment to CSR, although the situation of small firms is not the same.
Fig. 1: Evolution of the Environmental and Economic Scores


Fig. 2: Evolution of the ROA and ROE indicators


TABLE I. ESTIMATION OF MODEL 1.
                ENV Score 2006      ENV Score 2006      ENV Score 2006      ENV Score 2010      ENV Score 2010      ENV Score 2010
Constant        46.375 (5.56)***    72.294 (10.19)***   75.783 (6.07)***    82.401 (12.04)***   85.505 (25.44)***   87.028 (25.22)***
Eco_Score t     0.449 (3.84)***     -                   -                   0.056 (0.64)        -                   -
ROA t           -                   0.366 (0.51)        -                   -                   0.22 (0.40)         -
ROE t           -                   -                   -0.271 (-0.06)      -                   -                   -0.025 (-0.15)
F               14.76***            0.26                0                   0.41                0.16                0.02
R2              0.3622              0.0101              0.0001              0.0155              0.0061              0.0009
*** p < 0.005, ** p < 0.01, * p < 0.05, p < 0.1. Student's t statistics in brackets.

Table 1 reports the results of the linear regressions carried out in order to see the influence of the different measures of FP on the Environmental Score (Model 1), as well as to compare the scenario before and during the crisis.
Firstly, it is relevant to mention the differences between the analyses using the different measures of FP. While the Economic Score explains 36% of the Environmental Score in 2006, the predictions of the other measures are not significant. Moreover, in 2010 none of the financial measures yields a good prediction of the Environmental Score. This could be explained by a change in the factors that influence the environmental behavior of the Spanish companies, which would now be more committed to the environment itself than to its potential financial rewards. These results are consistent with the evidence found by [13] in France, who reported that since 2007 there has been a change in the behavior of this relationship.
Table 2 presents the estimation of Model 2, which focuses on the time lags. In this test, given the results of the Model 1 estimation, we only test the time lags with the Economic Score. It is worth highlighting that the FP of the previous year has a larger and more significant effect on the Environmental Score than the performance of the current year, regardless of the year tested. Moreover, the percentage of the Environmental Score explained is higher than in Model 1. Hence, the environmental behavior of the companies is more strongly influenced by the FP of the previous year.
In spite of the influence of the previous performance on CSR, the difference between the two years considered in the percentage of CSR explained by the FP remains. This fact further supports the results shown by Model 1, confirming that there is a change in the factors that determine the environmentally responsible behavior of the companies.
TABLE II. ESTIMATION OF MODEL 2.

                 Model 2               Model 2
                 ENV Score 2006        ENV Score 2010
Constant         42.536 (5.25)***      71.411 (11.67)***
Eco_Score t      0.196 (1.18)          -0.284 (-2.58)*
Eco_Score t-1    0.336 (2.03)*         0.457 (3.98)***
F                10.32***              8.24***
R2               0.4522                0.3973
*** p < 0.005, ** p < 0.01, * p < 0.05, p < 0.1. Student's t statistics in brackets.
IV. CONCLUSION
The results are surprising on the one hand, because no other empirical work had reported CSR increases during 2010, but on the other hand they confirm the evidence shown by [14]. Therefore, the idea is reinforced that large Spanish companies continued to take the CSR approach into consideration during the crisis, trying to behave in an environmentally responsible way.
It is important to note that the Environmental Score is close to the maximum level, so the scope for improvement each year is smaller and it therefore becomes more difficult to obtain significant improvements in the Score. Even so, the results confirm that the commitment of the Spanish companies to the environment is increasing year by year.
After the descriptive analysis of the findings, and trying to connect them with the theoretical framework, we could say that the results show that shareholders are influencing the strategic decisions of the companies, which leads us to accept Hypothesis 2 and reject Hypothesis 1. Thus, in the light of the results, we can conclude that listed Spanish firms have adopted a long-term approach to managing the environmental dimension of CSR. They are trying to continue with their environmental policies, although in most cases this involves making significant changes in their strategies to adapt to the new financial circumstances.
A relevant change has been identified in the factors that affect the environmental behavior. While in 2006 the Economic Score of the same year explained at least 36% of the Environmental Scores obtained, it explained only 1.5% of the Scores in 2010. Hence, we can clearly conclude that the FP has ceased to be the most influential factor on CSR. The direct implication is that we have to search for the factors that nowadays explain the environmental behavior of the companies.
Regarding the paper's limitations, only the companies listed in the IBEX-35 Index have been analyzed, so the conclusions cannot be extrapolated to all Spanish companies, and particularly not to small and medium-sized companies, because they are completely different from the studied firms.
REFERENCES
[1] B. Fernandez, Crisis and Corporate Social Responsibility: Threat or
Opportunity? International Journal of Economic Sciences and Applied
Research. 2009; 2(1): 36-50.
[2] N. Yelkikalan and C. Köse, The Effects of the Financial Crisis on Corporate Social Responsibility, International Journal of Business and Social Science. 2012; 3(3): 292-300.
[3] Y. Z. Karaibrahimoglu, Corporate social responsibility in times of financial crisis, Afr J Bus Manag. 2010; 4(4): 382-389.
[4] M. Orlitzky, F.L. Schmidt and S.L. Rynes, Corporate Social and Financial Performance: A Meta-Analysis, Organ Stud. 2003; 24(3): 403-441.
[5] J. Njoroge 2009, Effects of the global financial crisis on corporate
social responsibility in multinational companies in Kenya, Covalence
Intern Analyst Papers, available at: www.covalence.ch/docs/Kenya-
Crisis.pdf accessed on 30 December, 2009.
[6] F.G. Grigore, Corporate Social Responsibility and Marketing, in Developments in Corporate Governance and Responsibility. 2011; 2: 41-58.
[7] B. Escobar and M.M. Miras, Spanish Savings Banks Social
Commitment: just pretty words?, Social Responsibility Journal. 2013;
9(3), 427-440.
[8] D. Kralj, Leading Greenovate Change, In: V. Perminov, J. Nunes and N. Mohareb, editors. Advances in Environmental Science and Sustainability. WSEAS Press, 2012; pp. 81-86.
[9] A. Branca, J. Pina, M. Catalao-Lopes, Corporate Social Responsibility and Macroeconomic Environment, In: M. Mastorakis, V. Mladenov, J. Savkovic-Stevanovic, editors. Recent Researches in Sociology, Financing, Environment and Health Sciences. WSEAS Press, 2011; pp. 223-228.
[10] C. Giannarakis and I. Theotokas, The Effect of Financial Crisis in
Corporate Social Responsibility Performance, International Journal of
Marketing Studies. 2013; 3(1): 2-10.
[11] G. Charitoudi, G. Giannarakis and T.G. Lazarides Corporate Social
Responsibility Performance in Periods of Financial Crisis, European
Journal of Scientific Research. 2011; 63(3): 447-455.
[12] J.A. Arevalo and D. Aravind, The impact of the crisis on corporate
responsibility: the case of UN global compact participants in the USA,
Corporate Governance. 2010; 10(4): 406-420.
[13] I. Ducassy, Does Corporate Social Responsibility Pay Off in Times of
Crisis? An Alternate Perspective on the Relationship between Financial
and Corporate Social Performance, Corporate Social Responsibility and
Environmental Management. 2013; 20(3):157-167.
[14] Forética Report, Evolución de la Responsabilidad Social de las empresas en España, 2011.
[15] A.B. Carroll and K.M. Shabana, The Business Case for Corporate Social Responsibility: A Review of Concepts, Research and Practice, Int J Manag Rev. 2010; 12(1): 85-105.
[16] F. Schultz and S. Wehmeier, Institutionalization of corporate social responsibility within corporate communications: Combining institutional, sensemaking and communication perspectives, Corporate Communications: An International Journal. 2010; 15(1): 9-29.
[17] B. Argandoña, La Responsabilidad Social de la Empresa a la luz de la Ética, Contabilidad y Dirección. 2007; 27-37.
[18] M.D.P. Lee, A review of the theories of corporate social responsibility: Its evolutionary path and the road ahead, Int J Manag Rev. 2008; 10(1): 53-73.
[19] M. Nieto and R. Fernández, Responsabilidad social corporativa: la última innovación en management, Universia Bus Rev. 2004; 1: 28-39.
[20] E. Garriga and D. Melé, Corporate Social Responsibility Theories: Mapping the Territory, J Bus Ethics. 2004; 53(1): 51-71.
[21] M. Maletič, D. Maletič and B. Gomišček, An organizational sustainability performance measurement framework, In: R.A. Rodrigues Ramos, I. Straupe, T. Panagopoulos, editors. Recent Researches in Environment, Energy Systems and Sustainability. WSEAS Press, 2012; pp. 220-225.
[22] L.E. Preston and D.P. O'Bannon, The Corporate Social-Financial Performance Relationship: A Typology and Analysis, Bus Soc. 1997; 36: 419-429.
[23] S.A. Waddock and S.B. Graves, The Corporate Social Performance-Financial Performance Link, Strategic Manage J. 1997; 18(4): 303-319.
[24] J.A.M. Izquierdo, Responsabilidad social corporativa y competitividad: una visión desde la empresa, Revista Valenciana de Economía y Hacienda. 2004; 12: 9-50.
[25] O.E. Williamson, The Economics of Discretionary Behavior:
Managerial Objectives in a Theory of the Firm, Chicago: Markham,
1967.
[26] O.E. Williamson, The Economic Institutions of Capitalism, New
York, Free Press, 1985.
[27] S.A. Ross, The Economic Theory of Agency: The Principal's Problem, Am Econ Rev. 1973; 63(2): 134-139.
[28] J.L. Miller, The Board as a Monitor of Organizational Activity: The Applicability of Agency Theory to Nonprofit Boards, Nonprofit Management and Leadership. 2002; 12(4): 429-450.
[29] M.C. Jensen and W.H. Meckling, Theory of the firm: managerial behavior, agency costs, and ownership structure, J Financ Econ. 1976; 3: 305-360.
[30] K.M. Eisenhardt, Agency Theory: An Assessment and Review, Acad
Manage Rev. 1989; 14(1): 57-74.
[31] I. Ioannou and G. Serafeim, What drives corporate social
performance?: The role of nation-level institutions, J Int Bus Stud.
2012; 43(9): 834-864.

Hydroinformatic Tools for Flood Risk Map
Achievement

Erika Beilicci, Robert Beilicci, Ioan David


Abstract: The Water Framework Directive and the Flood Directive of the European Commission establish the need to prepare flood risk maps for each member country and each important hydrographic basin. Based on these, the flood risk management plan (which must be finalized by the end of 2015) is established; it acts as a communicator and disseminator of the knowledge gained during the two previous stages across the horizontal structures of governmental and non-governmental bodies dealing with flood protection, flood mitigation and flood struggle in general. Flood risk management plans mainly include proposals on how to reduce the losses of lives, property and environment through flood prevention, protection of vulnerable areas and increased flood preparedness in each river basin. The processing of these flood risk management plans on IT platforms changes the information stream flow. Future development plans of regions and cities will get proper guidance and platforms for future feasibility studies. In Romania, each state institution wants to improve the skills of its employees. There is a lack of specialists with sufficient knowledge of hydroinformatics, so in everyday work such tools are used only to a very limited extent, while work with complex problems has recently generated a need to use these valuable tools.
Keywords: management, flood, flood risk management plan, hydroinformatic tools
I. INTRODUCTION
Floods are natural phenomena which cannot be prevented.
Some human activities and climate change contribute to an
increase in the likelihood and adverse impacts of flood events.
In order to have available an effective tool for information, as well as a valuable basis for priority setting and further technical, financial and political decisions regarding flood risk management, it is necessary to provide for the establishing of flood hazard maps and flood risk maps showing the potential adverse consequences associated with different flood scenarios, including information on potential sources of environmental pollution as a consequence of floods [1].

This work was supported under Leonardo da Vinci Project LLP-LdV-ToI-2011-RO-002, POLITECHNICA University of Timisoara, Department of Hydrotechnical Engineering, George Enescu 1/A, 300022 Timisoara, Romania.
Erika Beilicci is with the POLITECHNICA University of Timisoara, Department of Hydrotechnical Engineering, George Enescu 1/A, 300022 Timisoara, Romania (e-mail: beilicci_erika@yahoo.com).
Robert Beilicci is with the POLITECHNICA University of Timisoara, Department of Hydrotechnical Engineering, George Enescu 1/A, 300022 Timisoara, Romania (e-mail: beilicci@yahoo.com).
Ioan David is with the POLITECHNICA University of Timisoara, Department of Hydrotechnical Engineering, George Enescu 1/A, 300022 Timisoara, Romania (e-mail: Ioan.David@gmx.net).
EU Member States should assess activities that have the effect of increasing flood risks. To avoid and reduce the adverse impacts of floods in the area concerned, it is appropriate to provide flood risk management plans. Flood
risk management plans should therefore take into account the
particular characteristics of the areas they cover and provide
for tailored solutions according to the needs and priorities of
those areas, whilst ensuring relevant coordination within river
basin districts and promoting the achievement of
environmental objectives laid down in Community legislation
[1].
Flood risk management plans should focus on prevention,
protection and preparedness. With a view to giving rivers more
space, they should consider where possible the maintenance
and/or restoration of floodplains, as well as measures to
prevent and reduce damage to human health, the environment,
cultural heritage and economic activity. The elements of flood
risk management plans should be periodically reviewed and if
necessary updated, taking into account the likely impacts of
climate change on the occurrence of floods [1].
Today, many national development and investment programs need to be carried out in relation to flood mitigation, adaptation and protection, as well as to water scarcity and drought. This means huge infrastructural investments are and will be running in these fields, particularly but not solely in the newly acceded countries and in the water sector.
Each EU member state implementing the Water Framework Directive (WFD) and the Flood Directive (FD) 2007/60/EC needs wide and interdisciplinary knowledge to be able to create area-adjusted solutions which meet local needs by understanding the national/country-specific environmental processes.
In Europe, water managers must address the
key requirements of the FD.
Flood risk management planning represents the most important element of the EU Flood Directive. This is a
communicator and disseminator of the knowledge gained
during two previous stages across the horizontal structures of
governmental and non-governmental bodies dealing with flood
protection, flood mitigation and flood struggle in general, including public involvement in this process. These plans must
be finalised, as the final round of the first planning cycle of the EU Flood Directive, by the end of the year 2015. Flood risk management plans mainly include proposals on how to reduce the losses of lives, property and environment through flood
prevention, protection of vulnerable areas and increased flood
preparedness in each river basin. The processing of these flood risk management plans on IT platforms changes the information stream flow.
Romania is one of the countries most exposed to natural catastrophes, especially to floods, which have caused substantial damage during recent years. What currently occurs on the territory of Romania, and we are referring here to the two categories of hydrological phenomena, floods and droughts, is, on the one hand, the consequence of global climatic changes at the regional and planetary level and, on the other, of human intervention on the specific landscape.
During the last years, despite the progress made in the field of scientific research and risk forecasting, we cannot but notice a worldwide increase in the frequency of occurrence of catastrophic hydrological events, resulting in serious material damage and the loss of human lives. These undesired phenomena are often seen as a consequence of natural events (magnetic storms, solar eruptions, El Niño phenomena), undoubtedly coupled with reckless human interventions, which have led to changes of balance within the elements of the natural environment (massive land/woods clearing without reforestation; the expansion of urban agglomerations, having as a result the waterproofing of an increasing number of areas; the building of dams, watercourse regulation and the building of dykes; drainage works, the excessive exploitation of water resources and, last but not least, the burning of fuels and the occurrence of the greenhouse effect) [2].
II. THE NEED TO USE HYDROINFORMATIC TOOLS
FOR FLOOD RISK MAP ACHIEVEMENT
In order to prevent these catastrophic hydrological phenomena, experts had to develop complex simulation models based on advanced mathematical models of the triggering mechanisms of dangerous hydrological phenomena and of their effects on the environment.
Considering the prevailing frequency of occurrence of floods, for areas of high flooding risk it is necessary to carry out research and studies that allow us to know the causes, the evolution and the effects of these phenomena on the environment. This research requires the preparation or updating of topographical, geomorphological, climatic and pedological studies regarding soil erosion, overland flow and other degradation processes, as well as hydrological, hydrogeological and geotechnical studies, vegetation-related and social/economic studies. On the basis of these studies and of the developed complex simulation models, flood risk maps can be elaborated for all watercourses with high precision.
Producing flood risk maps requires engineers skilled in the development of water management infrastructure as well as in system operation, who are also familiar with the newest technological achievements and capable of developing area-adjusted solutions by understanding the specific environmental processes.
III. CLASSICAL MUSKINGUM METHOD
The classical Muskingum method is a hydrological method for channel routing which uses the continuity equation to solve the mass balance of inflow, outflow and the volume of storage. Such routing methods require a storage-stage-discharge relation to determine the outflow for each time step. Hydrological methods involve numerical techniques that introduce translation or attenuation to an inflow hydrograph.
In general, irregular non-permanent (unsteady) water movement is described by the Saint-Venant partial differential equations. This system of equations is difficult to solve; the integration is numerical and various simplifying assumptions have to be made in the computation. Solving the problem requires knowledge of the boundary conditions at the upstream and downstream ends of the studied river sector and its definition in a number of cross sections. Because in many cases the cross-section profiles are not available and the upstream and downstream boundary conditions are unknown, a series of methods has been devised to solve the problem of flood wave propagation based on the continuity equation alone, needing only the flood wave hydrograph at the entrance of the considered river sector [3].
Based on the Muskingum model equations, a group of
teachers from Politechnica University of Timisoara,
Romania, developed a simulation program for flood wave
propagation in natural river channels.
The Muskingum Method is a simple, approximate method
to calculate the outflow hydrograph at the downstream end of
the channel reach given the inflow hydrograph at the upstream
end. No lateral inflow into the channel reach is considered.
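As a rough illustration of this routing scheme (not the simulation program developed at the Politechnica University of Timisoara), the sketch below implements the standard Muskingum recurrence O2 = C0*I2 + C1*I1 + C2*O1 with the usual coefficient definitions; the parameter values and the inflow hydrograph are hypothetical.

    # Minimal sketch of classical Muskingum channel routing (illustrative only;
    # the parameters K, X and the inflow hydrograph below are hypothetical).

    def muskingum_route(inflow, K, X, dt, O0=None):
        """Route an inflow hydrograph [m3/s] through one reach.
        K: storage constant [h], X: weighting factor, dt: time step [h]."""
        denom = 2.0 * K * (1.0 - X) + dt
        c0 = (dt - 2.0 * K * X) / denom
        c1 = (dt + 2.0 * K * X) / denom
        c2 = (2.0 * K * (1.0 - X) - dt) / denom   # c0 + c1 + c2 = 1
        outflow = [inflow[0] if O0 is None else O0]
        for i in range(1, len(inflow)):
            outflow.append(c0 * inflow[i] + c1 * inflow[i - 1] + c2 * outflow[-1])
        return outflow

    if __name__ == "__main__":
        # hypothetical triangular flood wave entering the reach
        inflow = [10, 20, 50, 80, 60, 40, 25, 15, 10, 10]
        print([round(q, 1) for q in muskingum_route(inflow, K=2.0, X=0.2, dt=1.0)])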
The classical Muskingum method introduces into the calculation of flood propagation in a river bed a rather rigid relationship between displacement and attenuation of flood waves, especially because of the stability conditions imposed by the integration method used. In the study of flood wave propagation and attenuation in natural river beds, the non-uniqueness of the rating curve in the sector has to be admitted, due to secondary phenomena accompanying the propagation of flood waves, such as the change of the cross sections of the riverbed during the flood through sediment deposition or river bed erosion; the absorption by dry soil of a volume of water that cannot be neglected when the flood covers large areas; and the change of the water free-surface slope for the same discharge during the rising and falling of the flood wave.
The Muskingum method assumes a single stage-discharge
relationship. This assumption causes an effect known as
hysteresis, which may introduce errors into the storage
calculation. The hysteresis effect between reach storage and
discharge is due to the different flood wave speeds during the
rising and falling limb of the hydrograph. For the same river
stage, the flood wave moves faster during the rising limb of the
hydrograph. In spite of its simplicity and its wide applicability,
the Muskingum method has the shortcoming of producing a
negative initial outflow which is commonly referred to as dip
or reduced flow at the beginning of the routed hydrograph.
The method is restricted to moderate to slow rising
hydrographs being routed through mild to steep sloping
channels. This constraint restricts the Muskingum method even
more by making the method not well suited for very mild
sloping waterways where a looped stage-discharge rating may
exist. The Muskingum method also ignores variable backwater
effects such as downstream dams, constrictions, bridges and
tidal influences. In small catchments, where measured inflow and outflow hydrographs are not available, or where significant uncertainty and errors are reported for the outflow data, modeling the flow using this method is quite a source of errors, and the Muskingum method fails to simulate the flow hydrograph using this type of data [3].
IV. DUFLOW MODEL
DUFLOW is a one-dimensional program for quantitative and qualitative modeling of unsteady flow in open channels. It was developed by the International Institute for Hydraulic and Environmental Engineering (IHE) Delft, Rijkswaterstaat (Public Works Department, Tidal Water Division, The Hague) and the Delft University of Technology, The Netherlands.
DUFLOW is designed to cover a large range of
applications, such as propagation of tidal waves in estuaries,
flood waves in rivers, operation of irrigation and drainage
systems, etc. Basically, free flow in open channel systems is
simulated, where control structures like weirs, pumps, culverts
and siphons can be included. As in many water management problems the runoff from catchment areas is important, a simple precipitation-runoff relation is part of the model set-up in DUFLOW. The DUFLOW software consists of the
following parts: DUFLOW water quantity (with this program
one can perform unsteady flow computations in networks of
open water courses) and DUFLOW water quality (this program
is useful in simulating the transportation of substances in free
surface flow and can simulate more complex water quality
processes).
DUFLOW is based on the one-dimensional partial differential equations that describe non-stationary flow in open channels.
The application of this model requires a plan of the study area for the division of the river network and the hydrographical basin. The river network is divided into sectors of different lengths by nodes, in such a way that the linear sectors between two consecutive nodes follow the axis of the river bed. For each node, the bed level and the width of the water surface at different levels must be given. The area of the hydrographical basin is delimited by the line of highest slope and, subsequently, the associated flow areas are successively connected in the nodes [4].
The results of the simulation with DUFLOW are the variation of water levels and water discharges in each node of the network.
Unlike the Muskingum method, which is based on a relatively simple equation, the DUFLOW model is a numerical model with a more complex theoretical base. Numerical methods offer multiple possibilities related to the most complex and difficult research problems developed in the physics of hydraulic phenomena. Numerical calculus permits the knowledge of physical phenomena with sufficient accuracy, so that in most cases checking on laboratory models is no longer necessary. The application of this model requires a plan of the studied area for the division of the hydrographical network and the basin.
The Muskingum method is applicable only to sectors of watercourses without lateral inflows such as tributaries, while DUFLOW can be applied to a whole river system. The model takes into account the existing hydraulic structures on the watercourses. The hydrographical network is divided into sectors of different lengths by nodes, so that the linear sectors between two consecutive nodes follow the axis of the river thalweg. For each node, the thalweg level and the width of the water surface at different levels must be given. The area of the hydrographical basin is delimited by the line of highest slope and, subsequently, the associated flow areas are successively connected in the nodes [5].
V. HYDROLOGIC ENGINEERING CENTERS RIVER
ANALYSIS SYSTEM (HEC-RAS) MODEL
HEC-RAS is a computer program that models the
hydraulics of water flow through natural rivers and other
channels. The program is one-dimensional, meaning that there
is no direct modeling of the hydraulic effect of cross section
shape changes, bends, and other two- and three-dimensional
aspects of flow. The program was developed by the US
Department of Defense, Army Corps of Engineers in order to
manage the rivers, harbors, and other public works under their
jurisdiction; it has found wide acceptance by many others since
its public release in 1995.
The basic computational procedure of HEC-RAS for steady
flow is based on the solution of the one-dimensional energy
equation. Energy losses are evaluated by friction and
contraction / expansion. The momentum equation may be used
in situations where the water surface profile is rapidly varied.
These situations include hydraulic jumps, hydraulics of
bridges, and evaluating profiles at river confluences.
For unsteady flow, HEC-RAS solves the full, dynamic, 1-D Saint-Venant equations using an implicit finite difference method. The unsteady flow equation solver was adapted from Dr. Robert L. Barkau's UNET package.
HEC-RAS is equipped to model a network of channels, a
dendritic system or a single river reach. Certain simplifications
must be made in order to model some complex flow situations
using the HEC-RAS one-dimensional approach. It is capable of
modeling subcritical, supercritical, and mixed flow regime
flow along with the effects of bridges, culverts, weirs, and
structures.
HEC-RAS is a computer program for modeling water
flowing through systems of open channels and computing
water surface profiles. HEC-RAS finds particular commercial
application in floodplain management and flood insurance
studies to evaluate floodway encroachments. Some of the
additional uses are: bridge and culvert design and analysis,
levee studies, and channel modification studies. It can be used
for dam breach analysis, though other modeling methods are
presently more widely accepted for this purpose.
HEC-RAS has merits, notably its support by the US Army
Corps of Engineers, the future enhancements in progress, and
its acceptance by many government agencies and private firms.
It is in the public domain and peer-reviewed. The use of HEC-
RAS includes extensive documentation, and scientists and
engineers versed in hydraulic analysis should have little
difficulty utilizing the software.
Users may find numerical instability problems during
unsteady analyses, especially in steep and/or highly dynamic
rivers and streams. It is often possible to use HEC-RAS to
overcome instability issues on river problems. HEC-RAS is a
1-dimensional hydrodynamic model and will therefore not
work well in environments that require multi-dimensional
modeling. However, there are built-in features that can be used
to approximate multi-dimensional hydraulics [6].
VI. MIKE 11 MODEL
MIKE 11 is a professional engineering software package
for simulation of one-dimensional flows in estuaries, rivers,
irrigation systems, channels and other water bodies. MIKE 11
is a 1-dimensional river model. It was developed by DHI Water
Environment Health, Denmark.
The Hydrodynamic Module (HD), which is the core component of the model, contains an implicit finite-difference 6-point Abbott-Ionescu scheme for solving the Saint-Venant equations. The formulation can be applied to branched and
looped networks and flood plains. HD module provides fully
dynamic solution to the complete nonlinear 1-D Saint Venant
equations, diffusive wave approximation and kinematic wave
approximation, Muskingum method and Muskingum-Cunge
method for simplified channel routing. It can automatically
adapt to subcritical flow and supercritical flow. It has ability to
simulate standard hydraulic structures such as weirs, culverts,
bridges, pumps, energy loss and sluice gates.
The MIKE 11 is an implicit finite difference model for one
dimensional unsteady flow computation and can be applied to
looped networks and quasi-two dimensional flow simulation on
floodplains. The model has been designed to perform detailed
modeling of rivers, including special treatment of floodplains,
road overtopping, culverts, gate openings and weirs. MIKE 11
is capable of using kinematic, diffusive or fully dynamic,
vertically integrated mass and momentum equations. Boundary
types include Q-h relation, water level, discharge, wind field,
dambreak, and resistance factor. The water level boundary
must be applied to either the upstream or downstream
boundary condition in the model. The discharge boundary can
be applied to either the upstream or downstream boundary
condition, and can also be applied to the side tributary flow
(lateral inflow). The lateral inflow is used to depict runoff. The
Q-h relation boundary can only be applied to the downstream
boundary. MIKE 11 is a modeling package for the simulation
of surface runoff, flow, sediment transport, and water quality in
rivers, channels, estuaries, and floodplains.
MIKE 11 has long been known as a software tool with advanced interface facilities. Since the beginning, MIKE 11 has been operated through an efficient interactive menu system with systematic layouts and sequencing of menus. It is within that framework that the latest Classic version of MIKE 11, version 3.20, was developed.
The new generation of MIKE 11 combines the features and
experiences from the MIKE 11 Classic period, with the
powerful Windows based user interface including graphical
editing facilities and improved computational speed gained by
the full utilization of 32-bit technology.
The computational core of MIKE 11 is hydrodynamic
simulation engine, and this is complemented by a wide range
of additional modules and extensions covering almost all
conceivable aspects of river modeling.
MIKE 11 has been used in hundreds of applications around the world. Its main application areas are flood analysis and alleviation design, real-time flood forecasting, dam break analysis, optimisation of reservoir and canal gate/structure operations, ecological and water quality assessments in rivers and wetlands, sediment transport and river morphology studies, and salinity intrusion in rivers and estuaries [7].
VII. EXAMPLE OF FLOOD RISK MAP ACHIEVEMENT
USING MIKE 11
To exemplify flood risk map achievement with the MIKE 11 hydroinformatic tool, a sector of the Crasna River, located in northwestern Romania, was considered. The considered sector has a length of 64 km; representative cross sections are taken near the localities of Supuru de Jos, Craidorolt, Domanesti and Berveni, at the border with Hungary (Fig. 1). The cross sections have been surveyed by Romanian Waters, Somes-Tisa Water Basin Administration.
The input data are: area plan with location of cross sections
(Fig. 2); cross sections topographical data and roughness of
river bed (Fig. 3); flood discharge hydrograph in section
Supuru de Jos (Fig. 4).

Fig. 1. Area plan.

Fig. 2. Area plan with location of cross sections.

Fig. 3. Cross sections topographical data.


Fig. 4. Flood discharge hydrograph in section Supuru de Jos.
The simulation with MIKE 11 yields the water level in each cross section (Fig. 5); Fig. 6 shows the Domanesti cross section, where the water level exceeds the level of the dike and floods the village. Based on contour maps, the flooded area can be established (Fig. 7), i.e. the flood risk map for the maximum discharge of 88.4 m3/s. For the comparison of simulated and measured values, Student's t-test and the chi-square test were used for the three sections corresponding to the localities of Craidorolt, Domanesti and Berveni. The test results are shown in Fig. 8.
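As a rough illustration of this simulated-versus-measured comparison (not the authors' actual data or test procedure), the sketch below applies a paired Student's t-test and a chi-square goodness-of-fit test to hypothetical water levels at the three sections.

    # Hedged sketch: comparing simulated and measured water levels with
    # Student's t-test and the chi-square test. All values are hypothetical.
    import numpy as np
    from scipy import stats

    measured  = np.array([112.4, 111.8, 109.9])   # hypothetical levels [m] at 3 sections
    simulated = np.array([112.1, 112.0, 110.0])   # hypothetical MIKE 11 results [m]

    # paired t-test: are the mean simulated and measured levels significantly different?
    t_stat, t_p = stats.ttest_rel(measured, simulated)

    # chi-square goodness of fit of simulated values against the measured ones
    chi2_stat, chi2_p = stats.chisquare(f_obs=simulated, f_exp=measured)

    print(f"t = {t_stat:.3f}, p = {t_p:.3f}")
    print(f"chi2 = {chi2_stat:.4f}, p = {chi2_p:.3f}")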

Fig. 5. Water level in each cross section.

Fig. 6. Maximum water level in Domanesti cross section.

Fig. 7. The flood risk map for the maximum discharge 88,4 m3/s.

Fig. 8. T-test and X-square test results
VIII. CONCLUSIONS
Besides the models mentioned above, other models applied in the preparation of flood risk maps have been developed over the years. In Romania, most flood risk maps were prepared using the HEC-RAS and MIKE 11 models, which show a high degree of confidence.
Throughout the Community Countries different types of
floods occur, such as river floods, flash floods, urban floods
and floods from the sea in coastal areas. The damage caused by
flood events may also vary across the countries and regions of
the Community. Hence, objectives regarding the management
of flood risks should be determined by the Member States
themselves and should be based on local and regional
circumstances. In each river basin district or unit of
management the flood risks and need for further action should
be assessed. In order to have available an effective tool for
information, as well as a valuable basis for priority setting and
further technical, financial and political decisions regarding
flood risk management, it is necessary to provide for the
establishing of flood hazard maps and flood risk maps showing
the potential adverse consequences associated with different
flood scenarios, including information on potential sources of
environmental pollution as a consequence of floods.
Member States should assess activities that have the effect
of increasing flood risks. Flood risk management plans should
therefore take into account the particular characteristics of the
areas they cover and provide for tailored solutions according to
the needs and priorities of those areas, whilst ensuring relevant
coordination within river basin districts and promoting the
achievement of environmental objectives laid down in
Community legislation. Member States should base their
assessments, maps and plans on appropriate best practice and
best available technologies not entailing excessive costs in
the field of flood risk management [1].
ACKNOWLEDGMENT

This project has been funded with support from the
European Commission. This publication [communication]
reflects the views only of the author, and the Commission
cannot be held responsible for any use which may be made of
the information contained therein.
REFERENCES
[1] Directive 2007/60/EC of the European Parliament and of the Council of
23 October 2007 on the assessment and management of flood risks.
[2] I. David, Zs. Nagy, E. Beilicci, T. Kramer, A. Szilagyi, Development
of knowledge centers for life-long learning by involving of specialists
and decision makers in flood risk management using advanced
hydroinformatic tools, Lifelong Learning Programme Leonardo da
Vinci, Submission ID 260243, Submission local date (Brussels), 2011-
02-28, Hash code 5BA440F7658CABFE, Form id. 5BA440F7.
[3] S. Elbashir, Flood Routing in Natural Channels Using Muskingum
Methods, Dublin Institute of Technology, Ireland, pp. 18-26, 2011.
[4] DUFLOW Reference Manual, Version 3.5, Stowa / MX System,
Holland, 2002.
[5] E. Beilicci, R. Beilicci, A comparative analysis of two flood wave
propagation models, International Multidisciplinary 12th Scientific
GeoConference SGSM, Albena, Bulgaria, 2012, vol. III, pp. 523-530.
[6] HEC-RAS Reference manual, US Army Corps of Engineers, Hydrologic
Engineering Centers, 2010.
[7] MIKE 11 - A modelling system for rivers and channels, Short
introduction and tutorial, DHI, Horsholm, Denmark, 2011.



Ecological Aspects in Control of Small-scale Biomass
Fired Boilers


Bohumil Šulc
Department of Instrumentation and Control Engineering
Mechanical Engineering Faculty of the CTU in Prague
Prague, Czech Republic
bohumil.sulc@fs.cvut.cz
Cyril Oswald
Department of Instrumentation and Control Engineering
Mechanical Engineering Faculty of the CTU in Prague
Prague, Czech Republic
cyril.oswald@fs.cvut.cz


Abstract: Control of combustion in small-scale boilers has stood outside the focus of interest for a long time. More attention has been paid to the design of the boilers than to the economic and ecological aspects of their operation. Limited attention has been paid to the question of the operating conditions under which a boiler controller carries out its activities. The quickly developing use of biomass fired boilers, the local shortage of quality biomass, legal restrictions on emissions and the availability of cheaper instrumentation have caused a turn towards replacing older simple control solutions with more sophisticated technology. The final goal is evident: to enable small-scale boilers to operate automatically while fulfilling ecological limits comparable with those usual in full-scale boilers, even when firing biomass of lower quality and when no skilled service can be expected. Some experience obtained from experiments aimed at this goal and improvements carried out on pilot boilers are reported in this paper.
Keywords: biomass boilers; control; emissions; operating condition optimization
I. INTRODUCTION
Small-scale biomass boilers were traditionally operated uncontrolled or with a very simple control mechanism. The ecological improvement of such uncontrolled boilers in comparison with fossil fuels can be very small, if any. A typical issue is the improperly controlled air-to-fuel ratio during transition states (e.g. boiler lighting, burn-out phase, sudden increase of power demand etc.).
The increasing number of small-scale biomass boiler installations is another important circumstance. Biomass resources are becoming limited, especially those of better quality such as pellets, which make very simple and effective on-off control of combustion possible. Due to this fact, more sophisticated control has become an important issue even for small-scale boilers. One of the crucial aspects of the economy and ecology of biomass combustion is its local source of fuel. Any transportation of fuels spoils the advantages of using biomass as a cheaper and more ecological energy source. Local biomass sources have limited capacity, and thus further growth in large combustion systems is not expected.
Nowadays, there are no emission regulations required for the operation of small-scale boilers. For example, in the Czech Republic operational emission limits are established only for boilers with a power output above 200 kW. Only for new boilers introduced to the market has the standard EN 303-5 been adopted. However, current developments in national legislation indicate that in the near future emission limits will be established for small-scale boilers too.
If we compare conditions for performing control in large
boilers with those in small-scale boilers, the main differences
are represented by the following points:
- faster fluctuation of the combustion process due to the smaller inertia of the combustion chamber walls,
- higher sensitivity of the combustion process to external influences, load changes, etc.,
- small-scale boilers are usually used by users not having sufficient qualification, therefore periodical maintenance is poor, sensors provide uncertain measurements and manual control is unskilled,
- expenditures for the automation of a small-scale boiler have to be kept low because they cannot make up the bigger part of the price, etc.
From the control viewpoint, combustion of biomass fuels is not a simple matter. If the boiler is required to be capable of combusting various kinds of biomass in an optimal way, i.e. with maximum efficiency and the lowest production of harmful emissions, there are not many means available for influencing the combustion process.
First of all, there is the transport of fuel, where not only the delivered quantity is important for the boiler's power, but also the way the motion of the transporter is carried out, because it is usually linked with grating. In the quest of producing the minimal achievable gaseous emissions and maintaining steady fuel combustion, it is necessary to control the air factor (air excess) λ at its desired value. The air factor is expressed by the ratio

    λ = Qa / Qamin > 1 [-]                                    (1)

where Qa is the actual combustion air flow rate and Qamin is the necessary (stoichiometric) combustion air flow rate. The current value of the air factor in the running combustion process is acquired via oxygen concentration measurement in the flue gases at the end part of the boiler.
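As a rough illustration of how the air factor can be evaluated from such an oxygen measurement, the sketch below uses the common dry-flue-gas approximation λ ≈ 21 / (21 - O2[%]); this approximation and the numbers are illustrative assumptions, not the relation used by the authors.

    # Hedged sketch: estimating the air factor (excess air ratio) from the O2
    # concentration measured in dry flue gas, using the common approximation
    # lambda ~ 21 / (21 - O2_percent). Values below are hypothetical.

    def air_factor_from_o2(o2_percent):
        """Approximate excess air ratio from dry flue-gas O2 concentration [%]."""
        if not 0.0 <= o2_percent < 21.0:
            raise ValueError("O2 concentration must lie in the range [0, 21) %")
        return 21.0 / (21.0 - o2_percent)

    if __name__ == "__main__":
        for o2 in (4.0, 8.0, 12.0):
            print(f"O2 = {o2:4.1f} %  ->  lambda = {air_factor_from_o2(o2):.2f}")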
Fig. 1 demonstrates an important circumstance from the control viewpoint: the optimal working range. If the values of the excess air ratio lie within the optimal working range, then the burning produces the minimum of carbon monoxide (CO) and simultaneously the maximum of efficiency is achieved. On the contrary, NOx emissions are then close to their maximum, and therefore higher values of the air excess are accepted as a certain compromise in the production of emissions. In other words, if we are able to design a control algorithm which stabilizes the temperature of the heated water, then by setting the flow rate of the delivered combustion air to values corresponding to the range of optimal air excess we can guarantee optimal operating conditions characterized by minimal fuel consumption and by a compromise in the lowest harmful emissions. This is the task that has been investigated on the experimental setups.
II. EXPERIMENTAL EQUIPMENT
Most of the experiments were performed on a commercially available 25 kW boiler. The boiler is designed for the combustion of wooden and alternative (e.g. grain, hay) pellets of diameter 6-8 mm. The nominal power output of the boiler is 25 kW for wooden pellets and 18 kW for alternative pellets. For all experiments, 6 mm softwood pellets have been used. The boiler consists of a lined combustion chamber located in the bottom section and a system of heat exchangers in the upper section. In the combustion chamber there is a steel grate with a primary air inlet from beneath and secondary air inlet holes located in the side walls of the chamber. The primary/secondary air ratio is adjustable by a flap. The pellets are fed from a storage container by a screw feeder to the rear side of the grate, where they start to burn. Using the original control unit of the boiler, feeding of the pellets takes place periodically with preset periods of screw movement and idle state. During the operation, the grate is also periodically moved (swept) in such a way that the pellets are moved towards the front side of the combustion chamber. In the optimal case, the pellets are completely burned out when they reach the end of the chamber. The ash is afterwards collected in the ash container. Both time intervals, of fuel feeding and of grate movement, can be manually adjusted. The combustion air is fed into the boiler by an air fan. The air fan is originally impulse-controlled and allows manual adjustment of four different air flow rates. The boiler has been equipped with several measurement and indication points, as also shown in Fig. 2. In
the original setting, none of these additional sensors is
available. There are originally only two sensors detecting
temperature of outlet water and flue gas. Responses from these
sensors are used in the pre-programmed electronics for simple
control purposes. The experimental setup has involved
measurement of temperatures at the front of the combustion
chamber (T1), temperature after first convective section (T2),
flue gas temperature (T3), temperature of inlet (T4) and outlet
water (T5) and oxygen and CO concentrations in the flue gas
and flow rate of the combustion air. Additionally, the periods of grate sweeping and fuel feeding have been detected. A scheme of the boiler with the instrumentation is shown in Fig. 2.
The boiler was originally equipped with the manufacturer's control unit that allowed only very limited intervention into the process. However, the original control unit still needs to be
used, mainly for educational purposes. Therefore a new
switchboard was designed and produced. The switchboard
contains circuits assuring protection against overload, short-
circuit and forbidden combination of inputs, power sourcing,
noise shielding, central earth and emergency stop function. The
switchboard is also equipped with RexWinLab-8000 control
and data acquisition unit. The control of combustion process
can be assured either with pre-programmed electronics or
switched to RexWinLab 8000.

Fig. 1. Dependence of CO, NOx and TOC emissions on the excess air ([1],
fig 1)

Fig. 2. Scheme of the experimental boiler with additional instrumentation
([2] , fig 1)

The RexWinLab-8000 is a control and data acquisition unit
developed along with the switchboard at Department of
Instrumentation and Control Engineering at the Czech
Technical University in Prague. The RexWinLab 8000 is based
on PAC (Programmable Automation Controller) Wincon 8000
series, but the control firmware of PAC is replaced by software
named REX Control developed at Institute of Cybernetics at
University of West Bohemia in Pilsen. REX Control allows
using advanced control algorithms commonly unavailable in
standard controllers. It has also its own graphical user interface
based on similar principles as Simulink produced by
MathWorks. Another useful feature of the software is its ability
to communicate with Matlab/Simulink software bundle in real-
time and exchange the data with it.
The described configuration allowed the preparation of experiments in advance on a standard personal computer. Any control algorithm synthesis can be easily realized in a graphical development environment very similar to the well-known Simulink. Data from all sensors placed in the boiler are time-synchronized and centralized in the PAC. During the experiment, the PAC sends measured data to Simulink running on a remote computer and shows the courses of the measured quantities in real time. This method allows the operator to monitor and potentially interfere with the combustion process during the experiment. Data are periodically saved in Simulink for follow-up off-line analysis. The described development of the switchboard with RexWinLab-8000 changed a standard factory boiler into an experimental base that allowed us to prepare and monitor experiments, interfere with them, and acquire and analyze data in real time.
For these reasons, the same concept has been used in the new prototype boiler "Fiedler 100 kW". This boiler has been delivered recently, which is why we cannot present experimental results yet. Otherwise, the boiler is designed as a prototype using new technological principles and rich instrumentation capable of obtaining the data necessary for creating a simulation model. The simulation model is needed for tuning control algorithms, because real-time testing of the intended automated search for optimal operating conditions costs much time, money and effort.
III. MAIN EXPERIMENTALLY VERIFIED IMPROVEMENTS
When we defined the research project using the new 100 kW prototype boiler, we could build on the following findings obtained and experimentally verified on the experimental Verner boiler:
A. Grate Movement
In small-scale boilers, the grate sweeping movement influences the combustion process depending on the way it is carried out. In the described pilot boiler, if the fixed pre-programmed control unit was used, disturbing periodical strong peaks in the combustion process occurred during operation for a short time after the grate sweeping. The peaks depicted in Fig. 2 on the left could be removed (as shown in the right part of the figure) if another timing of this movement is programmed and generated by a programmable controller [3].
B. Replacement of On-Off control by PI controller
In most small-scale pellet boilers, switching between fuel supply rates is used for temperature control of the heated water. Turning off the fans ensuring air flow through the boiler can stop the boiler heating function for quite a long time without any problems with re-ignition. Our experiments showed that the use of a standard PI controller may reduce fuel consumption, but the boiler must be equipped with the necessary electronics for carrying out this control algorithm [3].
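As a rough illustration of the kind of PI temperature control mentioned here, the sketch below shows a discrete PI controller acting on the fuel feed rate; the gains, limits, sampling period and signal values are hypothetical, not the settings used on the pilot boiler.

    # Minimal sketch of a discrete PI controller for outlet-water temperature,
    # acting on the fuel feed rate. Gains, limits and the sampling period are
    # hypothetical, not the settings used on the pilot boiler.

    class PIController:
        def __init__(self, kp, ki, dt, u_min, u_max):
            self.kp, self.ki, self.dt = kp, ki, dt
            self.u_min, self.u_max = u_min, u_max
            self.integral = 0.0

        def step(self, setpoint, measurement):
            error = setpoint - measurement
            # simple anti-windup: integrate only while the output is not saturated
            u_trial = self.kp * error + self.ki * (self.integral + error * self.dt)
            if self.u_min < u_trial < self.u_max:
                self.integral += error * self.dt
            u = self.kp * error + self.ki * self.integral
            return min(max(u, self.u_min), self.u_max)

    if __name__ == "__main__":
        pi = PIController(kp=0.8, ki=0.05, dt=5.0, u_min=0.0, u_max=10.0)  # fuel feed in kg/h
        print(pi.step(setpoint=75.0, measurement=68.0))  # hypothetical temperatures in deg C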
C. Optimization of operating conditions
All boilers are characterized by the fact that heat transfer from the boiler to the heated medium depends on the temperature of the heated medium. Usually, this temperature is controlled. If such a steady state is achieved in which the temperature of the heated medium is at its desired value while the fuel supply has reached its minimum, then this state indicates an efficiency maximum. Very important is the fact that the combustion process then runs under optimal conditions not only from the economic viewpoint (fuel consumption) but also from the viewpoint of ecological aspects, because CO emissions are in this case simultaneously at their lowest level (see Fig. 3) [4].
The most promising ecological effect for further development was shown by direction C, the operational optimization based on the extremum seeking approach. That is why we decided to investigate its properties and alternatives before implementing the designed algorithm in the control system of the new prototype boiler.

Fig. 3. CO concentration peaks elimination by means of a better algorithm of the grate movement ([2], fig 2)

IV. ARRANGEMENT OF CIRCUITS FOR HEATING TEMPERATURE CONTROL IN THE PROTOTYPE BOILER

Control solutions for operating condition optimization have some common features. There is always a standard control circuit whose control variable (in the case of the investigated prototype boiler, the temperature of the heating water measured at the heat exchanger outlet, Fig. 4) corresponds to the desired value in steady states. It is possible to keep one and the same desired value of the control variable while the values of the manipulated variable mutually differ in dependence on another changeable variable influencing the efficiency of the whole controlled process.
In the case of combustion processes, such a variable is mostly the combustion air flow, which defines the air surplus. In the boiler prototype scheme in Fig. 4 the air flow rate is not measured; it can only be changed by means of a frequency changer, which makes it possible to change the revolutions of the fan drive. The extreme (in the case of efficiency a maximum, of course) is achieved when the fuel flow rate is at its minimum, which can be detected as the lowest steady-state value of the manipulated variable sufficient for keeping the control variable at the desired value. This value varies according to changing operating conditions, which usually have a (dynamic) disturbance influence on the function of the (main) control circuit [4].
The traditionally preferred main task of system control design is to find a control algorithm and to set its parameters so as to guarantee the best responses of the controlled variable. Less attention is paid to the operating conditions under which the controlled device works. These conditions strongly influence the overall efficiency of the controlled device and very often they determine the environmental impacts of the controlled process, as in the case of biomass combustion. The optimization of biomass-fired boiler operating conditions is therefore appropriate not only in order to increase the economic efficiency of the controlled system run, but also to fulfill tightening emission standards.
Suppose that the main system control is carried out by some main controller. The operating conditions optimization may be applied as a supplement to the traditional system control provided that, firstly, it is possible to manipulate at least one of the controlled system inputs different from the main controller's control variable and, secondly, there are some identifiable optimal operating conditions. The first condition implies that the controlled system is a multi-input process. The second condition assumes that we can determine a so-called cost function defining the operating conditions optimality based on defined criteria.
The biomass combustion in a hot-water boiler is a complex
non-linear process with two main inputs: the fuel supply and
the combustion air supply. When the fuel supply is widely used
for the boiler output modulation, i.e. the primary boiler control,
then the combustion air supply may be used for the operating
conditions control. Control to a constant air-fuel equivalence
ratio (λ) is usually used in large and medium-scale boilers for
the operating conditions control by means of the
combustion air supply. The control to a constant λ requires a
special wide-band lambda sensor whose increasing availability
allows the use of this kind of control even for small-scale
boilers. The optimization of the operating conditions controlled
by the constant λ control lies in measuring the optimal
value of λ for a specific device in advance, i.e. so-called off-
line optimization [5].
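As a concrete illustration of this kind of operating-conditions control, the following minimal Python sketch (not taken from the cited boilers) adjusts the fan drive frequency with a simple PI law so that a measured lambda value follows a constant setpoint. The gains, the lambda setpoint and the 30-40 Hz frequency limits are illustrative assumptions.

def pi_lambda_controller(lambda_setpoint, kp=2.0, ki=0.1, f_min=30.0, f_max=40.0):
    # Returns a step function that maps a measured lambda to a fan drive frequency [Hz].
    integral = 0.0
    def step(lambda_measured, dt=1.0):
        nonlocal integral
        error = lambda_setpoint - lambda_measured     # positive -> too little air
        integral += error * dt
        # PI correction around the lower frequency limit, clamped to the permitted range
        return min(f_max, max(f_min, f_min + kp * error + ki * integral))
    return step

controller = pi_lambda_controller(lambda_setpoint=1.6)
print(controller(lambda_measured=1.4))                # fan speeds up to add air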
The main disadvantage of the off-line optimization is the
necessity to know the optimal value of the monitored
parameter in advance. In biomass combustion the optimal
value of λ is not constant. It depends on many variable


Fig. 4. Functional scheme of main components in the temperature control of the prototype biomass boiler equipped with the automated search of optimal
operating conditions

parameters, mostly parameters of the fuel. Moreover, these fuel
parameters, such as fuel moisture or fuel calorific value, to name
those depicted as examples in Fig. 4, are practically
immeasurable on-line, and thus any pre-setting of the optimal
values of λ for different states of the combustion process is
unfeasible. A real-time operating conditions optimization does
not suffer from this disadvantage. However, the real-time
optimization is based on on-line process monitoring and an
evaluation of the current optimal value of the controlled
secondary input from on-line measured data. The real-time
optimization therefore represents a much more complex problem
than the off-line optimization [6].
V. REAL-TIME OPTIMIZATION BY EXTREMUM-SEEKING
CONTROL
An extremum seeking control, or self-optimizing control, is a
set of control problems dealing with tracking a varying
extremum (maximum or minimum) of a cost (performance,
objective, output) function. It is common that a reliable
mathematical model of the non-linear controlled plant is not
available to predict the variation of the cost function with time.
A strength of the extremum seeking control is that the only
necessary knowledge is that the cost function attains an extremum
(maximum or minimum) and that this extremum is reachable by
setting an input u. Explicit knowledge about the plant or
its steady-state input-output map is not necessary. The
extremum seeking control is therefore a model-free, gradient-based
on-line optimization method that dynamically seeks the
optimal point of the input-output steady-state mapping of the plant
[6].
A. Continuous Extremum-seeking Control Methods
The continuous approach utilizes a perturbation signal added
to the controlled plant input to excite the controlled plant and
explore the controlled plant steady-state map. The current
gradient of the cost function is then evaluated from a continuous
observation of the controlled plant output. The controlled plant
input is then continuously changed on the basis of the evaluated
cost function gradient [7, 8].
B. Numerical Extremum-seeking Control Methods
Iterative extremum-seeking methods use a discrete
sequence of probing inputs and then utilize methods of
numerical optimization to reach the cost function extremum. In
contrast to the continuous methods, the numerical
extremum-seeking algorithms do not require continuous
observation of the controlled output, and the optimization
algorithm may have time to collect enough information for the
cost function gradient estimation [8].
VI. SIMULATION MODEL OF BIOMASS-FIRED BOILER FOR
TESTING OF MINIMUM-SEEKING ALGORITHMS
The functionality of the proposed biomass-fired boiler
operation optimization based on a minimum-seeking algorithm
was proved on a small-scale biomass-fired boiler, as mentioned
above. A minimal iterative algorithm based on the relay
principle was used during the functionality-proving experiments.
A set of promising minimum-seeking algorithms which
can be used for the biomass-fired boiler operation optimization
includes a large variety of different algorithms. Moreover, each
algorithm includes more or fewer parameters that need to be pre-
tuned for a specific application. It would be very time-
consuming to test and tune all possible algorithms, or to develop
new algorithms, on a real-world biomass-fired boiler. This
resulted in the need to carry out the first investigation in a
simulated way. A model for this purpose has been proposed
with the assumption that it is neither necessary (nor possible) to
design a precise physical-mathematical model of the biomass
combustion process and heat transfer in a biomass-fired hot-
water boiler. For testing and design purposes it is enough if
the optimization algorithms are developed on the basis of a
simulation model which represents the real object in an
approximate way.
The behavior of such a simulation model must include an
estimate of the static sensitivity, the essential dynamics and the
non-linear character of responses, at least for the input used in
the optimization.
The block scheme of the proposed simulation model is
depicted in Fig. 5. Combustion and heat transfer processes are
modeled by means of two transfer functions reflecting empirical
knowledge of the real boiler dynamics. By these transfer
functions, the modeled dynamics is captured in differences of
variables from their steady-state absolute values, which in the
model are added to the differences as outputs from the blocks
with dotted borders. These blocks serve as sources of constant
values readable inside such blocks. The transfer blocks in
the lower part of the complex block bearing the label BOILER
(the controlled system) have three inputs: the fuel feeding flow
obtained as converted values of the manipulated variable u
generated by the PI controller; the efficiency impact of the
current operating conditions generated as a dynamically
delayed output from a block generating current values of
efficiency defined as a nonlinear tabulated function; and a
third input representing noise and the other disturbances that are
not depicted in the scheme. The output of the whole block
BOILER, considered as the controlled variable y, is the heating
water temperature. Values of the controlled variable y are the input
for the PI controller block, where they are compared with the
desired value of the heating water temperature. The PI controller evaluates
the control error e and, according to its algorithm, sets the values
of fuel feeding necessary to reach and keep the heating water

Fig. 5. Block scheme of the simulation model roughly
approximating the biomass-fired boiler behavior, used for testing and
evaluating different minimum-seeking algorithms

temperature y at the desired value w. The looking-up of
efficiency values is based on evaluating the air-fuel ratio. The
air flow rate changes are derived from the frequency of the
electric current supplying the air fan drive. The frequency is
calculated in the block OPERATING CONDITIONS
OPTIMIZER. In the ratio calculation, not only is knowledge of
the current value of the fuel delivery important, but it is also
necessary to detect the controller steady states as suitable instants for
performing frequency changes. Therefore the control error e is
the next input for the optimizer, making it possible to speed up
the steady-state recognition.
The simulation model designed in this way was realized in Matlab
Simulink. Parameters of the blocks used in the simulation program
are based on experiments and empirical experience obtained on
laboratory installations of small-size boilers. The looked-up
combustion efficiency value undergoes a dynamic transfer
which is realized by a non-minimum-phase block of the second
order. This reflects the observed behavior where the response of
the output water temperature to a step change in the air intake
value starts in the opposite direction before it changes its course
towards a new steady-state value.
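To give an impression of this inverse-response behavior, the short Python sketch below simulates the step response of an assumed second-order non-minimum-phase transfer function. The time constants and the right-half-plane zero are invented for illustration and are not the identified boiler parameters.

import numpy as np
from scipy import signal

# G(s) = (-5s + 1) / ((60s + 1)(30s + 1)): the right-half-plane zero makes the
# step response first move in the "wrong" direction before settling.
num = [-5.0, 1.0]
den = np.polymul([60.0, 1.0], [30.0, 1.0])
t, y = signal.step(signal.TransferFunction(num, den), T=np.linspace(0, 600, 600))
print(y[1] < 0, abs(y[-1] - 1.0) < 0.05)   # initial undershoot, unit final value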
VII. MINIMUM-SEEKING ALGORITHM APPLICABILITY TESTS
The extremum-seeking based optimization methods are
divided into two groups, the continuous methods and the iterative
methods, as mentioned before. Our goal now is to evaluate the
advantages and shortcomings we can expect when implementing
algorithms from the first or the second group. We carried out a set
of simulation experiments using the simulation model of a
biomass-fired boiler described in the previous
section VI. These experiments had to assess the applicability of
both main optimization approaches. In these tests we aimed at a
comparison of speed, accuracy, precision and robustness.
Schemes representing both main extremum-seeking based
optimization approaches are depicted in Fig. 6 and Fig. 7.
The scheme in Fig. 6 represents the group of
numerical-based extremum-seeking methods. The gradient
estimator evaluates the impact of an increment of the fan drive
frequency on the change of the fuel feed. Both differences are
evaluated after achieving the new steady-state. The steady-state
is evaluated from trends of the control error. A numerical
optimization algorithm then computes the new fan drive
frequency by the steepest descent formula:

u_fan(k+1) = u_fan(k) + c · (du_fuel/du_fan)(k)    (2)
where u_fan(k+1) is the new value of the fan drive frequency,
u_fan(k) is the current value of the fan drive frequency, c is the
update constant and du_fuel/du_fan(k) is the estimation of the
gradient in the current step.
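A minimal Python sketch of the iterative update in Eq. (2) is given below; the steady-state fuel-feed map, the update constant c and the probing step are invented placeholders standing in for the real boiler, and the fan frequency is clamped to the 30-40 Hz range used in the tests.

def steady_state_fuel(u_fan):
    # Hypothetical convex surrogate of the boiler: fuel-feed minimum at 34 Hz.
    return 10.0 + 0.02 * (u_fan - 34.0) ** 2

def numeric_extremum_seeking(u0=30.0, du=0.5, c=20.0, steps=25,
                             u_min=30.0, u_max=40.0):
    u_prev, u = u0, u0 + du
    for _ in range(steps):
        # Gradient estimated from the last two steady-state operating points
        grad = (steady_state_fuel(u) - steady_state_fuel(u_prev)) / (u - u_prev)
        # Eq. (2), with c applied so the step descends toward the fuel-feed minimum
        u_prev, u = u, min(u_max, max(u_min, u - c * grad))
        if abs(u - u_prev) < 1e-3:
            break
    return u

print(round(numeric_extremum_seeking(), 2))   # converges near the optimal 34 Hz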
The scheme in Fig. 7 presents a representative of the continuous
methods. This algorithm is based on the general continuous
extremum-seeking scheme presented in [6]. It uses a harmonic
sinusoidal perturbation excitation added to the electric current
frequency driving the air fan motor. When the frequency of the
harmonic perturbation is slow enough, the dynamics of the
heating water control loop does not interfere with the
extremum-seeking loop and this control loop can be described
as a static input-output map. The detailed arrangement of the
extremum-seeking scheme in Fig. 7 shows that the excitation
signal a·sin(ωt) is fed to the Fan Drive input and excites a
sinusoidal response of the output Heating Water Temperature.
This response, after processing in the controller, is filtered by a
high-pass filter with transfer function

F_high-pass(s) = s / (s + ω_h).    (3)
Both the output of the high-pass filter and the excitation signal
are nearly two sinusoids that are in phase or out of phase. If the
static input-output map is convex (the extremum is a minimum),
then out of phase means that the current Fan Drive Frequency
is lower than the target optimal frequency, while in phase means
that the current Fan Drive Frequency is higher than the
target optimal frequency. If the shape of the static input-output map is
concave, the reverse holds true. The product of both sinusoids
contains a DC component. This DC component is
extracted by the low-pass filter

F_low-pass(s) = ω_l / (s + ω_l).    (4)
Fig. 6. Scheme of the optimizer using a numerical-based extremum-seeking
method

Fig. 7. Scheme of the optimizer using a continue extremum-seeking method

The integrator then acts as an update law which tunes the current
input value to the target input value, driven by the sensitivity
function. A general form of the optimization algorithm formula is:

u_fan(t) = u_fan(0) + a·sin(ωt) + (k/s) · [ω_l/(s + ω_l)] · [ (s/(s + ω_h)) · u_fuel(t) · a·sin(ωt) ]    (5)
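The following minimal Python sketch mimics the perturbation-based loop of Eqs. (3)-(5): a slow sinusoidal dither is added to the fan frequency, the resulting fuel-feed signal is high-pass filtered, demodulated by the dither, low-pass filtered and integrated. The static map, filter corners, dither parameters and gain are illustrative assumptions, not the values used on the prototype boiler.

import numpy as np

def static_fuel_map(u_fan):
    return 10.0 + 0.02 * (u_fan - 34.0) ** 2        # hypothetical minimum at 34 Hz

dt, t_end = 0.1, 2000.0
a, omega = 0.5, 0.05                 # dither amplitude [Hz] and frequency [rad/s]
omega_h, omega_l, k = 0.01, 0.01, 0.5                # filter corners and integrator gain
u_hat = 30.0
hp_state = static_fuel_map(u_hat)                    # wash-out filter started at steady state
lp_state = 0.0

for t in np.arange(0.0, t_end, dt):
    dither = a * np.sin(omega * t)
    y = static_fuel_map(u_hat + dither)              # control loop treated as a static map
    hp_state += dt * omega_h * (y - hp_state)
    y_hp = y - hp_state                              # high-pass filtered output, cf. Eq. (3)
    lp_state += dt * omega_l * (y_hp * dither - lp_state)   # demodulation + low-pass, cf. Eq. (4)
    u_hat = np.clip(u_hat - dt * k * lp_state, 30.0, 40.0)  # integrator update law, cf. Eq. (5)

print(round(u_hat, 1))               # settles close to the optimal 34 Hz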
The simulation experiments were aimed mainly at assessing
the precision, the accuracy and the robustness of both methods
for the optimization of the described system. A representative result of
the tests is depicted in Fig. 8. The figure contains three graphs. The
permitted range of the fan drive frequency values was limited
to the range from 30 to 40 Hz.
The first graph in Fig. 8 shows the course of the electric
current frequency driving the air fan motor, which is the output
of the optimization algorithms and one of the inputs of the boiler
behavior model. The course of the fuel feed needed to maintain
the desired heating water temperature is captured in the middle
graph. The bottom graph shows the course of the heating water
temperature that is to be maintained at the desired value. The
solid line belongs to the continuous optimization method and
the dotted line belongs to the numerical-based method. The
black solid lines show the optimal values of the variables in the
first and second graph. The desired temperature of the heating
water is plotted using a solid line labeled as "Set-point".
All three graphs are divided into three sections. The first
section shows the start of optimization. The starting conditions
of the boiler behavior model are different from those
representing the desired optimal operating state. At the beginning
of the simulation experiment, the fan drive frequency is set to
the lowest permissible limit value of 30 Hz and the heating
water control circuit is in a steady state. This section of the
experiment was aimed to show the speed, accuracy and
precision of both methods. It confirmed that the used

Fig. 8. Simulation experiment results comparing two different extremum-seeking optimization methods show
- in Section I, a good feasibility of both methods to reach a desired optimal steady state when they were started from non-optimal operating conditions;
- in Section II, better responses to a change (from 70 °C to 73 °C) of the heating water temperature desired value in the case of the continuous algorithm;
- in Section III, the influence of changes in the air supply on the boiler behaviour if the air supply conditions have changed

continuous method was much faster than the used
numerical-based method. Whereas the numerical-based method
needs to wait until the heating water control loop reaches a
steady state after each step, the used continuous method utilizes
continuous gradient estimation and reaches the optimal value in
a time which is close to the control loop dynamics. On the other
hand, the numerical-based method can be assessed as both
precise and accurate. The continuous method based on the
sinusoidal excitation cannot be accurate and precise at once. It
is necessary to find a compromise between precision and
accuracy when tuning the used continuous optimizing algorithm,
because the simulation experiments showed that with this method
the relationship between the accuracy and the precision is inversely
proportional.
The desired heating water temperature was changed from
70 °C to 73 °C at the beginning of Section II. Immediately
after this change, the controller starts to change the fuel feed
in order to reach the new desired value of the heating water
temperature. The need of the numerical-based method to wait
for a steady state is an advantage in this situation. The
continuous method reacts to changes in fuel consumption
forced by the change of the desired heating water temperature
as if these changes were the reaction of the control loop to the
optimizing algorithm excitations. Only the limitation of the fan
drive frequencies to the range from 30 to 40 Hz kept the system
stable. Moreover, because the change of the control loop
operating point changes its dynamics and sensitivity, the used
continuous method is less accurate (but more precise). The
accuracy of the numerical-based method remained the same.
The third section characterizes a change in the air supply
conditions carried out at its beginning. This change altered the behavior of
the boiler so much that the used continuous optimization method
collapsed and needed retuning of its parameters for the new operating
conditions. The numerical-based method is slightly imprecise
but still able to achieve and safely maintain the new optimal
value of the fan drive frequency under the new operating
conditions.
CONCLUSIONS
The aim of the paper is to show some of the issues solved
as ecological improvements suitable for small-scale boilers.
The considered solutions had to be low-cost,
because no expensive instrumentation is acceptable with
respect to the total price of the boiler. Therefore we focused on the
design of algorithms for PLCs. Their transfer and modification
are tested when applied to a medium-size prototype boiler
newly at our disposal for experiments in our labs.
Of all the algorithms tested on the small-scale boiler, the
most suitable for transfer are the algorithms optimizing
combustion. Due to the different construction of the two boilers it is
necessary to define the steps of the transfer based on partial
experiments.
One of such transfer steps is the investigation of algorithms
performing the search for optimal operating conditions. This has
been done with the use of a simplified model of the boilers. Control of
one or more variables, linked with the requirement to
keep optimal operating conditions by sliding manipulation of
another manipulated variable, is a task which can be found in
other devices as well. Therefore its reliable solution has enough
opportunities to be applied in other control tasks.
The simulation experiments show that the continuous-based
algorithms are faster than the numerical-based algorithms but are
much more sensitive to disturbances and to changes in the
control loop parameters. Because both disturbances and
control loop parameter changes often occur in
the process of biomass combustion, the numerical-based
approach seems to be more suitable for the proposed
optimization.
Further experiments and algorithm design will be aimed
especially at improving the numerical-based optimization approach,
including the current operation state estimation and its future
state prediction.
ACKNOWLEDGMENT
This research was supported within Project MSM 68400
770035 "Development of environmentally friendly
decentralized power systems".
The latest results have been achieved within the project
TA02020836 "Intelligent control methods of residual-biomass
fired boiler ecological control" with financial support from the
Technology Agency of the Czech Republic.
The work of Ph.D. students has been supported by the
Doctoral Grant Support of the Czech Technical University in
Prague, grant No. SGS13/179/OHK2/3T/12.
REFERENCES
[1] J. Hrdlicka and B. Sulc, "On-line operating adjustment of small biomass
fired boilers optimizing CO and NOx emissions", in Proceedings of the
6th IASME/WSEAS International Conference on Energy & Environment,
Stevens Point, Wisconsin, USA, 2011, pp. 35-40.
[2] V. Placek and J. Hrdlicka, "Influence of control on environmental and
economical aspects of small-scale biomass boiler", in 2011 International
Conference on Power Engineering, Energy and Electrical Drives
(POWERENG), 2011, pp. 1-3.
[3] Šulc, B., Vrána, S., Hrdlička, J., Lepold, M. (2009). Control for
Ecological Improvement of Small Biomass Boilers, IFAC Symposium
Power Plants & Power Systems, 5-8 July 2009, Tampere, Finland.
[4] Oswald, C., Šulc, B. (2011). Achieving Optimal Operating Conditions in
PI Controlled Biomass-fired Boilers: Undemanding Way for
Improvement of Small-scale Boiler Effectiveness. In: Proceedings of the
2011 12th International Carpathian Control Conference. Velke
Karlovice, 25-28 May 2011, pp. 280-285. ISBN: 978-1-61284-359-9
[5] Van Loo, S., Koppejan, J., editors (2008). The Handbook of Biomass
Combustion and Co-firing. London: Earthscan, 2008.
[6] C. Oswald, V. Plaček, B. Šulc, A. Hošovský: Transfer Issues of
Control Optimizing Combustion from Small-scale to Medium-scale
Biomass-fired Boilers. 8th IFAC Symposium on Power Plant and Power
System Control. Toulouse 2012. ISBN: 978-3-902823-24-3. ISSN:
1474-6670
[7] Ariyur, K. B., Krstić, M. (2003). Real-Time Optimization by Extremum-
Seeking Control. John Wiley & Sons, Inc., Hoboken, New Jersey, 2003.
ISBN 0-471-46859-2
[8] Zhang, C., Ordóñez, R. (2012). Extremum-Seeking Control and
Applications: A Numerical Optimization-Based Approach.
Advances in Industrial Control, Springer London, 2012. ISBN 978-1-
4471-2223-4


The streamers dynamics study by an intelligent
system based on Neural Networks

Fouad KHODJA
Department of engineering
University Djillali Liabes
Sidi Belabes , ALGERIA
khodjafouad@gmail.com
Younes MIMOUN /Riad Lakhdar KHERFANE
Department of engineering
University Djillali Liabes
Sidi Belabes , ALGERIA
younesmi@yahoo.fr/rilakh@yahoo.fr


Abstract—The formation and propagation of streamers is an
important precursor for determining the characteristics of electrical
breakdown of many HV electrode configurations. Understanding
the interaction between the polymer surface and
the development process of the streamer is of major importance
when we want to improve the internal and external performance of
insulation systems. In this context, a numerical tool using neural
networks is developed. This model allows evaluating the speed of
streamers as a function of the amplitude of the initiation voltage and
the nature of the insulating materials. For this, a database was
created to train the neural model from a laboratory model. This
investigation builds a database for predicting the propagation of
streamers on the polymer surface by different neuronal methods
and thus presents an interesting tool for estimating the
propagation phenomena as functions of very important
parameters.
Keywords—Organic Insulators; Pre-disruptive phenomena;
Streamers; Artificial Neural Networks; Learning process;
Feedforward Neural Networks; Radial Basis Function.
I. INTRODUCTION
Formation of a streamer is due to photo-ionization
mechanisms occurring within the primary avalanche. The
electrons accelerated by the electric field excite neutral molecules
by collisions; these molecules return to their ground state with the
emission of a photon. The head of the avalanche is the seat of a
significant release of photons that are absorbed by the
surrounding gas.
If the electron produced is located in the vicinity of the
primary avalanche, it will create a new, so-called secondary
avalanche with the same mechanism of electron
multiplication, but this avalanche is now growing in a field that
is enhanced by the presence of the positive space charge.
Indeed, in an electric field sufficient to initiate the discharge, the
electron velocity is about 100 times higher than that of positive
ions, so that the avalanche develops as a cloud of electrons
leaving behind nearly stationary positive ions; the avalanche then
leads to the formation of a dipole structure as shown in Fig. 1:
- a region (towards the anode) of high electron density,
- a region (towards the cathode) of a high density of positive
ions.

Fig. 1. Electron Avalanche
Therefore the separation of electrons and ions generates a
significant space charge that produces an electric field of
dipolar structure opposing the separation, which is
vectorially added to the external field (Fig. 1).
II. MEASUREMENT TECHNIQUES
A. Optical Measurement
The luminous phenomena occurring within the gap can
be recorded by cameras, streak cameras (also called ultra-fast image
converters), photomultipliers, spectroscopy and strioscopy.
Cameras whose optical axes are placed at 90° from one another
make it possible to reconstruct the actual length of the discharge in all
three dimensions.
The image converter restores both the axial development of
the discharge and its temporal development.
Photomultipliers can be used to measure streamers in
relatively small gaps [4] as well as over large gaps, as in Fig. 2.

Fig. 2. Schematic diagram of the arrangement of electrodes with
photomultipliers

B. Insulating Materials Used
The insulating surfaces involved in Fig. 2 are polymers. In the electrical
field, the scope of application of insulating organic solids
(polymers) has expanded: power transmission lines,
telecommunication cables, capacitors, alternators, electric
motors, electronic systems, terrestrial power components
and components on board satellites.
The use of these materials in electrical insulation has
several advantages such as excellent electrical properties
(resistivity, stiffness and permittivity), good mechanical
strength, easy implementation, low weight and, for some, the
possibility of recycling [5]. These materials have excellent
electrical insulating properties because of their low relative
permittivity, low dissipation factor, good stability over a wide
frequency range, and high dielectric breakdown strength [6].
The polymeric materials have a complex structure which
leads to different properties within the same material.
Knowledge of the structure of an individual macromolecule,
but also the arrangement of the macromolecules relative to
each other, is essential to understand the complexity of these
systems. The microstructure of a polymer insulator dictates the
physical, mechanical and electrical properties that are expected
of this material [5].
Insulating materials used in the experiments [4] are:
1. Polytetrafluoroethylene (PTFE).
2. Carbonized PTFE (CPTFE).
3. Molybdenum disulfide PTFE (MPTFE).
4. Nylon.
5. Ceramic coating (CERG).
III. NEURAL NETWORKS
A. Learning Process
Among the desirable properties for a neural network,
probably the most fundamental is the ability to learn from its
environment, to improve its performance through a learning
process [7].
Learning is a dynamic and iterative process for changing
the parameters of a network in response to the stimuli it
receives from its environment. The type of learning is
determined by how the parameter changes occur. Thus, the
network may improve over time [7].
Consider the weight w_i,j connecting the neuron i to its
input j. At time t, a change Δw_i,j(t) of the weight can be simply
expressed as follows:

Δw_i,j(t) = w_i,j(t+1) − w_i,j(t)    (1)

and, therefore,

w_i,j(t+1) = w_i,j(t) + Δw_i,j(t)    (2)

with w_i,j(t+1) and w_i,j(t) representing respectively the
values of the new and the old weight w_i,j.
A set of clear rules for carrying out such a process of
adaptation of the weights is called the learning algorithm of the
network [7].
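A tiny numeric illustration of the bookkeeping in Eqs. (1) and (2), with an invented gradient-style correction standing in for the actual learning rule, could look as follows in Python:

w_old = 0.40                  # w_ij(t), the old weight
error, x_j, eta = 0.25, 0.8, 0.1   # hypothetical error, input and learning rate
delta_w = eta * error * x_j   # the change delta_w_ij(t) computed by some learning rule
w_new = w_old + delta_w       # Eq. (2): w_ij(t+1) = w_ij(t) + delta_w_ij(t)
print(round(w_new, 2))        # 0.42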
B. Multilayer Perceptron
These are the best known neural networks. A perceptron is an
artificial neural network of the feedforward type, i.e., with direct
propagation.
Consider a three-layer perceptron. The first layer is the input layer (it is
not considered a neural layer by some authors because it is linear
and only distributes the input variables). The second is called the
hidden layer (or intermediate layer) and is the heart of the
neural network; its activation functions are of sigmoid type. The
third, consisting here of a single neuron, is the output layer; its
activation function is a bounded linear function [8].
Its learning is of the supervised type, by correcting errors. In this
case, the error signal is "fed back" to the inputs to
update the weights of the neurons [7]. This is the error
backpropagation method.
The multilayer perceptron is a neural network used for most
problems of approximation, classification and prediction. It
usually consists of two or three layers of fully connected
neurons [7].
One problem of using neural networks is the
choice of topology. For example, there is no general rule that
gives the number of neurons to retain for the intermediate
layer. This choice is application-specific and, in general, these
are just arbitrary choices whose validity we verify later [8].
C. Radial Basis Function Networks
Feedforward neural networks (NNF) and neural networks
based on radial basis functions (RBFN) are a class of models
widely used in nonlinear system identification [9], [10]. The
justification for this is that these networks with one hidden
layer can approximate any continuous function having a finite
number of discontinuities [11], [12].
A net boost for RBFN neural networks has been observed
in recent years because they offer major advantages over the
commonly used NNF. These benefits include a lower model
complexity and a lighter load during learning [13].
RBFN (Radial Basis Function Network) neural networks
have been developed by Moody and Darken [14]. They have
proven successful in several areas since they can approximate
several types of functions [15].
The RBFN network is a feedforward network composed of
three layers: an input layer, a hidden layer and an output layer.
The activation function in the hidden layer is a radial function;
the most commonly used activation function is the Gaussian
function [16].
The input layer is used as a distributor of the inputs to the
hidden layer. Unlike the NNF, the values of the entries in the input
layer are routed directly to the hidden layer without
being multiplied by weight values.
The unit of the hidden layer measures the distance between
the input vector and the center of the radial function, and
produces an output value depending on this distance. The center
of the radial function is called the reference vector [17].
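As an illustration of such a hidden unit, the following minimal Python sketch evaluates a Gaussian radial function of the distance between an input vector and its reference vector; the centre and width values are arbitrary.

import numpy as np

def rbf_unit(x, centre, width):
    # Gaussian of the squared Euclidean distance to the reference vector (centre)
    return np.exp(-np.sum((x - centre) ** 2) / (2.0 * width ** 2))

x = np.array([3.5, 2.0])                 # e.g. (voltage, material code), illustrative
centre = np.array([3.0, 2.0])
print(round(rbf_unit(x, centre, width=1.0), 3))   # 0.882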
IV. PROBLEM FORMULATION
The algorithms of artificial neural networks (ANN) have
been applied successfully in many applications in many fields.
In the field of high voltage, ANN has also been applied
effectively, first of all to partial discharges [18].
The major field of application of ANN is the estimation of
functions, because useful properties such as adaptability
and nonlinearity are in agreement with the estimation of the
equation describing a function when the function is unknown
and the only requirement is to have a representative sample of
the behaviour of the function. In this work, the learning data
have been taken from experimental studies on the
propagation of streamers on the surface of insulators [4]. More
detailed studies and tests were conducted to determine the
parameters of the ANN in order to give better results and to obtain
a quality model. An approach using the ANN as an estimator of
the function was used to effectively model the propagation
velocity of streamers V depending on several parameters:
- the nature of the polymer, represented by T;
- the initiation voltage U.
The relationship is as follows:
V = f(U, T)    (3)
It was found that, when learning is complete, the ANN is
able to estimate the velocities for different functions efficiently and
effectively. This study attempts to show the effectiveness of
ANN as a function estimator in studies of the propagation of
streamers [4]. The propagation velocity of the streamer is
modelled as a function of the applied voltage U and the type of
material T by neural networks, the function being estimated with
the aid of experimental data.
Each learning pattern includes two input parameters,
U and T, and an output parameter which is the corresponding
value of V.
The neural network model has two input nodes and one
output node [4].
Once the neural network has been trained with the training data, the
network is tested with the test data.
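A hedged sketch of such a two-input, one-output model is shown below in Python with scikit-learn, instead of the Matlab toolbox used by the authors; the 2-2-11 hidden-layer arrangement and the logistic ("logsig") activation follow Table II, while the training points are invented placeholders rather than the experimental data of [4].

import numpy as np
from sklearn.neural_network import MLPRegressor

# Inputs: (initiation voltage U [kV], material code T); output: velocity V [1e5 m/s].
X = np.array([[2.0, 1], [3.0, 1], [4.0, 1], [2.0, 2], [3.0, 2], [4.0, 2]])
V = np.array([1.5, 1.9, 2.3, 1.6, 2.0, 2.5])      # made-up target velocities

model = MLPRegressor(hidden_layer_sizes=(2, 2, 11), activation='logistic',
                     solver='lbfgs', max_iter=1000, random_state=0)
model.fit(X, V)
print(model.predict([[3.5, 1]]))                  # estimated velocity for a test point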
The collection of experimental data was obtained from the
experimental curve from article [4]. The shape of the curve of
the measured velocities as a function of applied voltage is
given as follows:
[Figure: measured streamer velocity (×10^5 m/s) versus applied voltage U (kV) for Nylon, CPTFE, MPTFE, PTFE, Air and CERG.]

Fig. 3. Velocities measured as a function of the applied voltage
A. The Learning Algorithm
The root mean square error of learning, RMSE, is given by:
RMSE = sqrt[ (1 / (NP · N_k)) · Σ_{p=1..NP} Σ_{k=1..N_k} (t_pk − O_pk)² ]    (4)
The accuracy of learning is measured by the RMSE whose
expression was given by equation (4), and test accuracy is
measured by the percentage of the mean absolute error (MAE
%), given by:
MAE% = (1/n) · Σ_k ( |t_k − O_k| / t_k ) · 100    (5)
where t_k is the experimental result corresponding to the given
test input at the output neuron k, O_k is the output determined for
the output neuron k corresponding to the test input data, and n is
the number of input test data.
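The two measures can be computed directly from the target and output vectors, as in the short Python sketch below (the numbers are illustrative only):

import numpy as np

t = np.array([1.5, 1.9, 2.3, 2.6])        # experimental (target) velocities, made up
o = np.array([1.52, 1.88, 2.35, 2.55])    # corresponding network outputs, made up

rmse = np.sqrt(np.mean((t - o) ** 2))                 # Eq. (4) for a single output neuron
mae_percent = np.mean(np.abs(t - o) / t) * 100.0      # Eq. (5)
print(round(rmse, 4), round(mae_percent, 2))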
V. RESULTS AND DISCUSSION
A. Choice of the Arrangement and the Number of Neurons
We begin with a single neuron in the first layer; all
calculations are performed for the selected arrangement, and the
numbers of neurons in the 1st, 2nd and 3rd layers are applied to the
other arrangements. We do the same thing with two neurons in
the first layer, then with three and four neurons; the following
summary table is obtained.






TABLE I. CHOICE OF THE ARRANGEMENT AND THE NUMBER OF NEURONS FOR LOGSIG FUNCTION

Logsig Function, Arrangement 8, Number of layers: 3

Layers (1st / 2nd / 3rd)      Number of epochs   RMSE          MAE
1 / 1 / 7 neurons             2000               2.8577e-004   0.0213
2 / 2 / 11 neurons            1000               3.1362e-004   0.0204
3 / 9 / 12 neurons            1000               2.5530e-004   0.0208
4 / 3 / 9 neurons             2000               2.9046e-004   0.0208

The best result was obtained for 02 neurons in the first
hidden layer, 02 neurons in the second hidden layer and 11
neurons in the third hidden layer. The number of iterations is
now 1000 iterations.
We change the number of iterations from 500 iterations to
10000 iterations.
TABLE II. EFFECT OF THE NUMBER OF ITERATIONS FOR LOGSIG FUNCTION
Logsig Function, Arrangement 8, Number of layers: 3
1st layer: 2 neurons, 2nd layer: 2 neurons, 3rd layer: 11 neurons
Number of epochs RMSE MAE
500 3.6131e-004 0.0211
1000 3.1362e-004 0.0204
2000 3.0301e-004 0.0205
3000 2.9838e-004 0.0206
4000 2.9530e-004 0.0208
5000 2.9324e-004 0.0210
10000 2.4590e-004 0.0217

The best result is obtained for 1000 iterations, for the case
of 02 neurons in the first hidden layer.
The learning of the neural network is represented by the
following Figure:

Fig. 4. Learning of the neural network

The testing of the neural network is shown in Fig. 5.


Fig. 5. Test of the neural network
B. Effect of changing activation function on RMSE and MAE
TABLE III. CHOICE OF THE ARRANGEMENT AND THE NUMBER OF NEURONS FOR TANSIG FUNCTION

Tansig Function, Arrangement 6, Number of layers: 3

Layers (1st / 2nd / 3rd)      Number of epochs   RMSE          MAE
1 / 1 / 7 neurons             2000               0.0103        0.2717
2 / 2 / 11 neurons            1000               0.0505        1.5603
3 / 9 / 12 neurons            1000               1.8818e-004   0.0343
4 / 3 / 9 neurons             2000               0.0030        0.1053

The best learning error is obtained with the tansig function for 1000
iterations, while the lowest test error MAE is obtained with the
logsig function, also for 1000 iterations.
TABLE IV. EFFECT OF THE NUMBER OF ITERATIONS FOR TANSIG FUNCTION
Tansig Function, Arrangement 6, Number of layers: 3
1st layer: 3 neurons, 2nd layer: 9 neurons, 3rd layer: 12 neurons
Number of epochs RMSE MAE
500 2.0609e-004 0.0259
1000 1.8818e-004 0.0343
2000 1.4680e-004 0.0477
3000 1.4124e-004 0.0526
4000 1.3299e-004 0.1147
5000 1.0685e-004 0.3703
10000 1.2829e-005 0.5559

The increase in the number of iterations in this case has the
effect of reducing the learning error for the tansig function at
10,000 iterations. The test error remains the lowest for the
logsig function with only 1000 iterations.
C. Comparison between Feedforward and Radial Basis
Function Networks
TABLE V. SUMMARY TABLE BETWEEN FEEDFORWARD AND RBF NETWORKS

                Feedforward Network (trainlm)         Radial Basis Function
                RMSE (Logsig)    RMSE (Tansig)        Network (newrb) RMSE
01 layer        2.3910e-004      2.2327e-004          2.9883e-004
02 layers       2.3292e-004      2.1608e-004
03 layers       2.4590e-004      1.2829e-005

The best RMSE was obtained for the Feedforward network
for the learning function Trainlm and the activation function
tansig.
The comparative curves of the experimental (measured) and
simulated velocities as a function of the voltages applied to the
streamer, for the different insulations and for air, are given in the
following figure:

[Figure: comparative curves of measured and simulated streamer velocities (×10^5 m/s) versus applied voltage U (kV) for Air, CERG, PTFE, Nylon, MPTFE and CPTFE.]

Fig. 6. Comparative curves of measured and simulated speeds depending
on applied voltages
VI. CONCLUSIONS
The best result concerning the learning error (RMSE) is
obtained with 03 hidden layers.
The number of iterations that gave the best result is 10000.
The best arrangement is arrangement No. 06.
The activation function used in the hidden layers is the tansig
function.
The function used in learning is trainlm.
Concerning the learning error for the RBF network, the
number of iterations is small (100 iterations), which increases
the speed of learning.
REFERENCES
[1] G. Le Roy, C. Gary, B. Hutzler, J. Lalot, Ch. Dubanton, "Les
propriétés diélectriques de l'air et les très hautes tensions", Edition
Eyrolles, Paris, 1984.
[2] "Recherches sur l'amorçage des grands intervalles d'air aux Renardières",
Electra, no. 35, juillet 1974.
[3] T. Suzuki, "Breakdown process in rod-to-plane gaps with negative
switching impulses", IEEE Trans. on Power Apparatus and Systems,
Vol. PAS-94, no. 4, juillet/août 1977.
[4] N. L. Allen and P. N. Mikropoulos, "Streamer propagation along
insulating surfaces", IEEE Transactions on Dielectrics and Electrical
Insulation, vol. 6, no. 3, June 1999.
[5] Nadine Lahoud, "Modélisation du vieillissement des isolants
organiques sous contrainte électrique. Application à la fiabilité des
matériaux", Thèse de Doctorat, 25 mars 2009, Université de
Toulouse.
[6] L. Li, N. Bowler, M. R. Kessler and S. H. Yoon, "Dielectric Response
of PTFE and ETFE Wiring Insulation to Thermal Exposure", IEEE
Transactions on Dielectrics and Electrical Insulation, Vol. 17, No. 4,
August 2010.
[7] Marc Parizeau, "Réseaux de neurones", GIF-21140 et GIF-64326,
Université Laval, automne 2004.
[8] Lotfi Baghli, "Contribution à la commande de la machine asynchrone,
utilisation de la logique floue, des réseaux de neurones et des
algorithmes génétiques", janvier 1999, Université Henri Poincaré,
Nancy-I.
[9] K. S. Narendra and K. Parthasarathy, "Identification and control of
dynamical systems using neural networks", IEEE Transactions on
Neural Networks, Vol. 1, pp. 4-27, 1990.
[10] S. Chen and S. A. Billings, "Neural networks for non-linear system
modeling and identification", International Journal of Control, Vol. 2,
pp. 319-346, 1992.
[11] K. Hornik, M. Stinchcombe and H. White, "Multilayer feedforward
networks are universal approximators", Neural Networks, vol. 2, pp.
359-366, 1989.
[12] J. Park and I. W. Sandberg, "Universal approximation using radial-
basis function networks", Neural Computation, Vol. 3, pp. 246-257,
1991.
[13] S. Lee and R. M. Kil, "A Gaussian potential function network with
hierarchically self-organizing learning", Neural Networks, Vol. 4, pp.
207-224, 1991.
[14] S. Haykin, Neural Networks: A Comprehensive Foundation, IEEE
Press, 1994.
[15] J. Park, I. W. Sandberg, "Approximation and radial basis function
networks", Neural Computation, 5, 1993, pp. 305-316.
[16] A. Idri, S. Mbarki, A. Abran, "L'interprétation d'un réseau de neurones
en estimation du coût de logiciels", Actes du 6ème Colloque Africain
sur la Recherche en Informatique (CARI'02), 14-17 octobre 2002, pp.
221-228.
[17] I. Yilmaz, N. Y. Erik, and O. Kaynar, "Different types of learning
algorithms of artificial neural network (ANN) models for prediction of
gross calorific value (GCV) of coals", Scientific Research and Essays,
Vol. 5(16), pp. 2242-2249, 18 August, 2010.
[18] A. S. Farag, "Estimation of Polluted Insulators Flashover Time Using
Artificial Neural Networks", IEEE, 1997.



Phenol Sulfonic Acid Oxidation in Aqueous Solution
by UV, UV/H2O2 and Photo-Fenton Processes


N. Jamshidi*, HSE Training Manager,
National Petrochemical Company, Tehran, I.R. Iran.,
Naserjam@yahoo.com
M.T. Jafarzadeh*,Manager of Environment,
National Petrochemical Company, Tehran, I.R. Iran.,
Jafarzaadeh@yahoo.com
A. Khoshgard, Chemical Engineering Dep. Islamic
Azad University South Tehran Branch, Tehran, Iran.,
A_Khoshgard@Azad.ac.ir
L. Talebiazar, Senior Expert of Environment Lab.,
AmirKabir University of Technology, Tehran, I.R. Iran.,
ltalebiazar@yahoo.com
R.Aslaniavali, Senior Expert of Environment,
AFA Company, Tehran, I.R.Iran.
R_Aslani@gmail.com


Abstract — In this study, advanced oxidation processes (UV,
UV/H2O2, UV/H2O2/Fe(II) and UV/H2O2/Fe(III)) were
investigated in lab-scale experiments for the degradation of phenol
sulfonic acid (PSA) in aqueous solution. The study showed that
the UV/H2O2 process has removal percentages of 90.9, 93.0 and
94.4 for neutral, basic and acidic conditions, respectively, in 20 minutes.
The experimental results showed that the optimum
conditions were obtained at a pH value of 3, with 4 mmol/l
H2O2 and 0.25 mmol/l Fe(II) for the UV/H2O2/Fe(II) system,
and 6 mmol/l H2O2 and 0.4 mmol/l Fe(III) for the
UV/H2O2/Fe(III) system.
The reaction was influenced by the pH, the input
concentration of H2O2, the amount of the iron catalyst and
the type of iron salt. As for the UV processes, UV/H2O2 showed
the highest degradation rate under acidic conditions.

Keywords — Photochemical Oxidation; phenol sulfonic acid;
Photo-Fenton; UV radiation; Hydrogen peroxide; Degradation.

I. INTRODUCTION
Many industrial processes, such as oil refineries,
petrochemical industries (olefin plants), steel factories,
plastic plants, paper plants, synthetic chemicals, pesticides and
coal conversion, generate flow streams that contain small
concentrations of phenols and phenolic compounds. The
removal of these pollutants from wastewater is one of the
most critical topics in environmental research and is required
prior to discharge or reuse of the waste flow.
Phenolic compounds are one of the major classes of organic
pollutants generated through various industrial activities. For
example, more than 97,000 tonnes of phenolic wastes were
generated by the industries in the United States in 2000 [1].
Electrolytic tin plating on steel substrate has been widely
used in the food and beverage industries
due to its non-toxic nature [2]. Recently, it has also been
applied in the semiconductor industry because of its strong
resistance to corrosion and tarnishing of component leads, its
solderability and ductility. Phenol Sulfonic Acid (PSA) and
its isomers work as electrolytes in electroplating baths for
tin-plating applications, and also as a catalyst in the production of
phenolic floral foam and in the paint, textile and carpeting
industries, tanneries, pharmaceutics, glue production, etc.
The acute toxicological effects of phenol and its derivatives
are largely on the central nervous system. Acute poisoning
can lead to severe gastrointestinal disturbances, kidney
malfunction, circulatory system failure, lung edema and
convulsions. Fatal doses can be absorbed through the skin.
Key organs damaged by chronic phenol exposure include the
spleen, pancreas and kidneys.
The toxic effect of phenol sulfonic acid (PSA) resembles
those of phenol [3]. Various treatment technologies are
available for the reduction of all levels of initial phenol
concentration in phenolic wastes. These are classified as
solvent extraction for high levels of phenols (above 500
ppm), physico-chemical and biological treatments for
intermediate levels of phenols (5-500 ppm), ozonation and
carbon adsorption for low levels of phenols [4].
The Photo-Fenton process, the combination of homogeneous
systems of UV/H2O2/Fe compounds, produced the highest
photochemical elimination rate for phenol (up to 100 ppm)
[5, 6].
In this study, removal of PSA using advanced oxidation
processes (UV, UV/H2O2, UV/H2O2/Fe(II) and
UV/H2O2/Fe(III)) has been studied and its removal
efficiency is compared.
II. MATERIALS AND METHODS
Phenol sulfonic acid (4-hydroxybenzenesulfonic acid), 65%
solution in stable form, was provided by Merck. For PSA
concentration measurement, a colorimetric method with a
spectrophotometer was used. In this stage, solutions with
concentrations of 0.1, 0.5, 1, 5, 10, 50, 100 and 400 mg/L
were prepared and their light absorption in UV mode at
two wavelengths, 235 and 259 nm, was tested. Results
showed that the 235 nm wavelength is sensitive to
concentrations of less than 10 mg/L PSA and the 259 nm
wavelength is sensitive to concentrations of more than 10
mg/L PSA. Using these data, standard curves for the
solutions were prepared and used for subsequent
measurements.
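A minimal Python sketch of such a calibration is given below: a linear fit of absorbance against PSA concentration at one wavelength, inverted to read the concentration of a new sample. The absorbance values are invented placeholders, not the measured data.

import numpy as np

conc = np.array([10.0, 50.0, 100.0, 400.0])      # PSA standards [mg/L], illustrative
absorbance = np.array([0.08, 0.41, 0.79, 3.20])  # readings at 259 nm, made-up values

slope, intercept = np.polyfit(conc, absorbance, 1)   # linear (Beer-Lambert) calibration
sample_abs = 1.10
print((sample_abs - intercept) / slope)          # estimated PSA concentration [mg/L]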
Ferrous (FeSO4·7H2O) and ferric [Fe2(SO4)3·7H2O] sulphate
heptahydrate, used as sources of Fe(II) and Fe(III), were of
analytical grade and purchased from Merck. Hydrogen
peroxide solution (35% w/w) in stable form was provided by
Riedel-deHaen Company. All reagents employed were not
subjected to any further treatment. Water was of double-
distilled quality.
Samples were taken at appropriate time intervals from the
reaction vessel and pipetted into 5 ml glass vials. The vials
were filled so as to leave no headspace and sealed with
teflon-lined silicon septa and screw caps. The samples were
immediately analyzed to avoid further reaction.
Concentration changes of phenol sulfonic acid were
determined by a spectrophotometer (CARY 100 Scan,
VARIAN) according to the standard methods [7]. The initial
and treated solutions of phenol sulfonic acid were
determined by the standard methods procedure [7]. The pH
measurements were carried out with a Metrohm model 691
pH meter, calibrated with two buffer solutions of pH 3 and 7.
A. Experimental setup
All experiments were performed in a batch reactor with a
cooling jacket. The schematic diagram of the experimental
set-up used in the study is shown in Fig. 1.

Fig. 1. Schematic diagram of photochemical oxidation system experi-
mental set-up.
The reactor was cylindrical with a volume of 1.5 L; the
internal part was made of quartz glass, which allowed
the transfer of the radiation, and the outer part was made of
Pyrex glass. Irradiation was achieved by using a UV lamp
(medium pressure mercury lamp UVOX 300 of 300 W, 245-
265 nm, from ARDA Company in France) which was
immersed in the glass tube.
The reactor was equipped with a cooling water jacket system
(with recycle water thermostat model OPTIMA 740 , Japan).
The reactor was filled with the reaction mixture. Mixing was
accomplished by the use of a magnetic stirrer.

B. Photodegradation procedures
For each experiment, a synthetic aqueous solution of phenol
sulfonic acid (to simulate a highly loaded phenol sulfonic acid
containing industrial wastewater) was prepared in double-
distilled water as solvent. The laboratory unit was filled with
1.5 L of the phenol sulfonic acid solution. For runs using the
UV/H2O2 system, hydrogen peroxide at different amounts
was injected into the reactor before the beginning of each run.
For runs using the photo-Fenton process, the pH value of the
solution was set at the desired value by the addition of an
H2SO4 solution before startup, then a given weight of iron
salt was added. The iron salt was mixed very well with the
phenol sulfonic acid before the addition of a given volume of
hydrogen peroxide. The time at which the ultraviolet lamp
was turned on was considered time zero, i.e. the beginning of
the experiment, which took place simultaneously with
the addition of hydrogen peroxide.

III. RESULTS AND DISCUSSION
A. The effect of the amount of H2O2
Although hydrogen peroxide did not oxidize phenol at all, as
observed in this work, when it was combined with UV
irradiation the rate of phenol degradation increased
significantly compared to that of direct photolysis. Fig. 2
illustrates the percent degradation of phenol as a function of
the irradiation time at different doses of H2O2 input. The
photolysis of phenol in the absence of H2O2 gave rather
moderate results and resulted in a slow degradation of
phenol. With the addition of H2O2, the degradation rate of phenol
increased when the hydrogen peroxide concentration increased.
As can be seen from Fig. 2, the percent degradation of
phenol sulfonic acid at 4 mmol/L hydrogen peroxide dosage
was 67.5 and it was 67.9 at 6 mmol/L hydrogen peroxide
dosage. In this process, hydroxyl radicals generated from the
direct photolysis of hydrogen peroxide were the main species
responsible for phenol elimination. However,
hydrogen peroxide also reacts with these radicals and hence
acts as an inhibiting agent of phenol sulfonic acid
degradation [8].
Fig. 2. Degradation of phenol sulfonic acid with the UV/H2O2 process.
The effect of hydrogen peroxide concentration (irradiation time= 5 min.).
B. Photo-Fenton process
The formation of the hydroxyl radicals by using the
photo-Fenton process under application of Fe(II) occurs
according to the following Eq. (1) [9]:

Fe2+ + H2O2 → Fe3+ + OH− + OH•    (1)

Reaction (1), already known as the Fenton reaction,
possesses a high oxidation potential, but its revival in the
application to wastewater treatment began only recently [10].
UV irradiation leads not only to the formation of additional
hydroxyl radicals but also to a recycling of the ferrous
catalyst by reduction of Fe(III). By this, the concentration of
Fe(II) increases and therefore the gross reaction is
accelerated [11]. The reaction time needed for the photo-
Fenton reaction is extremely low and depends on the
operating pH value and the concentrations of H2O2 and iron
added. Within 5 min, above 80% destruction of phenol
sulfonic acid could be observed using the photo-Fenton
processes.
C. The effect of the pH value
The pH value affects the oxidation of organic substances
both directly and indirectly. The photo-Fenton reaction is
strongly pH-dependent. The pH value
influences the generation of OH radicals and thus the
oxidation efficiency. Fig. 3 (a, b and c) shows the effect of the
pH value during the use of the photo-Fenton process. A
maximum degradation of 94.4% was obtained with the
UV/H2O2 system at pH = 3, a degradation of 93.0% with the
same system at pH = 8.5, and a degradation of 90.9% at
pH = 7.
D. The influence of initial hydrogen peroxide concentration
Fig. 2 shows the effect of the initial hydrogen peroxide on
the degradation of phenol with the use of the photo-Fenton
processes. As expected, the degradation of phenol was
increased by increasing the concentration of H2O2 added.
This can be explained by the effect of the additionally
produced OH• radicals. Addition of H2O2 exceeding 20
mmol/L for the UV/H2O2 system did not improve the respective
maximum degradation; this may be due to the auto-
decomposition of H2O2 to oxygen and water and the
recombination of OH• radicals. Since OH• radicals react with
H2O2, H2O2 itself contributes to the OH scavenging
capacity [8].

Fig. 3. Phenol sulfonic acid degradation as a function of the pH value by
using the UV/H2O2 process: (H2O2)0 = 4 mmol/l [pH=3 (a), pH=7 (b) and
pH=8.9 (c)].
Therefore, H2O2 should be added at an optimal
concentration to achieve the best degradation.
E. The effect of the amount of iron salt
Iron in its ferrous and ferric form acts as a photo-catalyst
and requires a working pH below 4. To obtain the optimal
Fe(II) or Fe(III) amounts, the investigation was carried out
with various amounts of the iron salt. Fig. 4 and Fig. 5 show
the percent degradation of phenol as a function of the added
Fe(II) and Fe(III). The figures show that the addition of
either Fe2+ or Fe3+ enhanced the efficiency of UV/H2O2 for
phenol degradation. The degradation rate of phenol sulfonic
acid distinctly increased with increasing amounts of iron salt.
Addition of the iron salt above 0.25 mmol/L Fe(II) or 0.40
mmol/L Fe(III) did not affect the degradation, even when the
concentration of the iron was doubled. A higher addition of
iron salt resulted in brown turbidity that hindered the
absorption of the UV light required for photolysis and caused
the recombination of OH radicals. In this case, Fe2+ reacted
with OH radicals as a scavenger [12].
It is desirable that the ratio of H2O2 to Fe(II) should be as
small as possible, so that the recombination can be avoided
and the sludge production from the iron complex is also reduced.
Fig. 4. Phenol sulfonic acid degradation as a function of iron catalyst
(Fe(II)) addition: (H2O2)0 = 4 mmol/l, pH = 3.
Fig. 5. Phenol sulfonic acid degradation as a function of iron catalyst
(Fe(III)) addition: (H2O2)0 = 4 mmol/l, pH = 3.
IV. COMPARISON BETWEEN THE UV/H2O2 SYSTEM
AND THE PHOTO-FENTON PROCESS
A. Degradation rate
The photodegradation of phenol was investigated in both the
UV/H2O2 system and the photo-Fenton process
[UV/H2O2/Fe(II) and UV/H2O2/Fe(III)]. The loss of phenol
sulfonic acid was observed as a function of irradiation time
and the data were fitted to a first-order rate model

ln(C1/C0) = −K0·t    (2)

where C0 and C1 are the concentrations of phenol sulfonic
acid at irradiation times 0 and t, K0 is the first-order rate
constant (in min⁻¹) and t is the irradiation time (in min). The
rate constants were determined using the first-order rate model
[Eq. (2)]. The results are listed in Table 1.
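For illustration, the rate constant K0 can be extracted by a linear fit of ln(C/C0) against time, as in the following Python sketch; the concentration values are invented placeholders, not the data behind Table 1.

import numpy as np

t = np.array([0.0, 2.0, 5.0, 10.0, 20.0])        # irradiation time [min]
c = np.array([100.0, 20.0, 2.0, 0.05, 1e-4])     # PSA concentration [mg/L], made up

k0 = -np.polyfit(t, np.log(c / c[0]), 1)[0]      # minus the slope of ln(C/C0) vs t, Eq. (2)
print(round(k0, 3))                              # first-order rate constant [1/min]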
The experimental data in Table 1 show that the UV/H2O2
process had a significant accelerating effect on the rate of
oxidation of phenol sulfonic acid. The data in Table 1 also show
that adding Fe(II) or Fe(III) to the UV/H2O2 system
decreased the rate of phenol oxidation by maximum factors
of 0.86 and 0.82 for Fe(II) and Fe(III), respectively, compared to the
UV/H2O2 system, depending on both the H2O2 and Fe doses.

Table 1: Values of reaction rate constants of the degradation of phenol
sulfonic acid by different types of AOP.

Type of advanced oxidation process    K0 (min⁻¹)
UV                                    0.379
UV/H2O2                               0.792
UV/H2O2/Fe(II)                        0.684
UV/H2O2/Fe(III)                       0.652

V. CONCLUSIONS
The results show that the degradation rate of phenol
sulfonic acid is strongly accelerated by the photochemical
oxidation processes. The UV/H2O2 process produced the
highest photochemical elimination rate for phenol sulfonic
acid. The oxidation rate was influenced by many factors,
such as the pH value, the amount of hydrogen peroxide and
iron salt and the type of iron added. The optimum conditions
obtained for the best degradation were a pH = 3 and a H2O2
concentration of 4 mmol/l for the UV/H2O2 system.
The advantages of the UV/H2O2 process as an oxidative
pre-treatment step over other photochemical oxidation
processes are its economics, its efficiency especially if aromatic
compounds are to be destroyed, the easy handling of the method
because no specific technical equipment is necessary, low
investment, less energy demand and harmless process
products. The acidic pH (<4) is the major problem currently
under examination.
Combination of an AOP with biological treatment is a
promising alternative because one can take advantage of
both methods and develop as a result a potent wastewater
purification method.
Considering the UV/H2O2 method as a preliminary step
prior to a biological wastewater treatment, one has to adjust
the pH twice, first to an acidic pH below 4 to perform the
reaction and then back to a neutral pH.

VI. ACKNOWLEDGMENT
The authors wish to thank the National Petrochemical
Company (NPC) for the support of this study.


REFERENCES
[1] US EPA, 2000 Toxics Release Inventory (TRI)
Public Data Release Report. EPA-260-R-02-003 United
States Environmental Protection Agency, Washington, DC.
(2002).
[2] E. Morgan, Tinplate and Modern Can making.
Pergamon Press Ltd. (1995).
[3] E.M. Stanly, Environmental Chemistry. Lewis Pub.,
7th ed. (2000).
[4] J.W. Patterson, Industrial Wastewater Treatment
Technology. 2nd ed., Butterworth Publisher Inc., Boston,
371-393 (1985).
[5] N. Jamshidi, A. Torabian, A.A. Azimi and A.A.
Ghadimkhani, Degradation of phenol in aqueous solution by
AOP. Asian Journal of Chemistry, 21 (1), 673-681 (2009).
[6] A. Torabian, N. Jamshidi, A.A. Azimi, G.R. Nabi
Bidhendi and M.T. Jafarzadeh, Asian Journal of
Chemistry, 21 (7), 5310 (2009).
[7] Standard Methods for the Examination of Water and
Wastewater, 22nd Edition. American Public Health
association (APHA), American Water Works Association
(AWWA) and Water Environment Federation (WEF).
Washington DC, USA, 2012.
[8] A. Mokrini, D. Oussi, S. Esplugas, Water Sci.
Technol., 35(4), 95 (1997).
[9] R.J. Bigda, Chem. Eng. Prog., 91(12), 62 (1995).
[10] S.H. Bossmann, E. Oliveros, S. Gob, S. Siegwart,
E.P. Dahlen, J.L. Payawan, M. Straub, M. Worner, A.M.
Braun, J. Phys. Chem. A, 102(28), 5542 (1998).
[11] J Gimenez, D. Curco, P. Marco, Reactor modeling
in the photocatalytic oxidation of wastewater. Water Sci.
Tech., 35(4), 207-13 (1997).
[12] C. Walling, Acc. Chem. Res., 8, 125-131 (1975).



The power options for transmitting systems using
thermal energy generator

Michal Oplustil
Department of Automation and Control Engineering
Tomas Bata University in Zlin
Zlin, Czech Republic
oplustil@fai.utb.cz
Martin Zalesak
Department of Automation and Control Engineering
Tomas Bata University in Zlin
Zlin, Czech Republic
zalesak@fai.utb.cz


Abstract: This paper is aimed at the design of a system converting thermal energy into electric energy. The electric energy is harvested from human actions or from human contact with the device. The first part of this article describes the principles of thermal-to-electric energy conversion. The second part describes the design of a thermoelectric generator for autonomous broadcast systems.
Keywords: thermal energy; Seebeck effect; Peltier module; thermoelectric generator; thermoelectric cooler; step-up DC/DC converter
I. PROBLEM FORMULATION
Wireless controller applications are currently very popular in a wide field of control technology. Like other electrical devices, wireless transmitting elements need a certain amount of electrical energy to run. The most common way of powering autonomous wireless transmitting systems is by batteries or accumulators. These energy sources are short-lived, require frequent replacement, and in some cases cannot be used at all.
There are other means available at present which enable generation of the necessary electric power by human action - so-called human action energy harvesting systems. The most common principles used for electric energy generation are electromagnetic, piezoelectric or thermoelectric.
This paper deals with a possible way of using a thermoelectric generator (TEG) in wireless control systems in buildings, with energy supplied by human palms. The specific problem in this application is how to generate the required power from a source with a small temperature difference. The aim of this article is to introduce a thermoelectric generator as a power source for wireless transmitting systems.
The design of the transmitter can be divided into the thermoelectric generator (TEG), the electrical transformer and the wireless signal generator. The TEG and the electrical transformer are described in this article.
II. FUNDAMENTALS
A. Starting points
The thermoelectric power source is based on Peltier cells, which can generate electric power from heat. Using Peltier cells alone, without any supporting device, is inadequate, because the output voltage is only a few tens to hundreds of millivolts, depending on the temperature difference and the density of thermoelectric junctions. The temperature difference depends on human physiology (palm temperature) and the surrounding air temperature; it amounts to roughly a few degrees Celsius, while the density of thermoelectric junctions is purely a matter of technology. This means that the solution of the task is mainly a technological one. Even the technological solution, however, is limited by the acceptable size of the transmitter. The limits for the design and construction of a transmitter powered by the heat of human palms are therefore the temperature of the palm (warmer junction), the temperature of the surrounding air (colder junction), the parameters required by the RF transmitter and the acceptable size of the transmitter.
B. Thermoelectric effect analyses

The thermoelectric effect of the Peltier module can be described by (1) and (2):

i = σ_T·(E - α·∇T)      (1)

q = α·T·i - λ·∇T        (2)

Where:
i is the density of electric current, in A/m²;
E electric field, in V/m;
q heat flow density, in W/m²;
∇T temperature gradient, in K/m;
σ_T electric conductivity, in S/m;
α Seebeck coefficient, in V/K;
λ thermal conductivity of the thermocouple wire measured at E = 0, in W/(m·K);
ΔT temperature difference of the junctions, in K;
The output voltage of the thermoelectric generator at no load, U_0, can be described as follows:

U_0 = α·ΔT      (3)

Where:
U_0 is the output voltage from the generator, in V;
α average Seebeck coefficient, in V/K;
ΔT temperature difference across the junctions, in K, where

ΔT = T_h - T_c      (4)

T_h temperature of the warmer junctions, in K;
T_c temperature of the colder junctions, in K.
If a load is connected to the output of the thermoelectric generator, voltage drops as a result of the internal generator resistance and of the load. The current in the load circuit, I, could be expressed as follows:

I = U_0 / (R_c + R_l)      (5)

Where:
I is the output current of the generator in the load circuit, in A;
U_0 see (3);
R_c internal resistance of the generator, in Ω;
R_l the load circuit resistance, in Ω.

The total heat input to the thermoelectric junction, P_h, is defined as follows from (1) and (2):

P_h = α·T_h·I - 0.5·I²·R_c + (λ·S/l)·ΔT      (6)

Where:
P_h is the thermal power entering the generator, in W;
S area of the junction, in m²;
l distance between the thermocouple junctions, in m;
ΔT, R_c, I, T_h, λ, α - for explanation see above.
The efficiency of the thermal generator, E_g, could be expressed as:

E_g = (U_0·I) / P_h      (7)
The thermoelectric module consists of a number of junctions:

U_0 = α_M·ΔT·R_l / (R_M + R_l)      (8)

Where:
U_0 is the generator voltage output, in V;
α_M module's average Seebeck coefficient, in V/K;
R_M module's average resistance, in Ω;
R_l load circuit resistance, in Ω.

The Seebeck coefficient, α_M, resistance, R_M, and thermal conductivity, λ, are temperature dependent and their values must be calculated case by case; otherwise, the values can be selected based on the average temperature of the module, T_avg, where:

T_avg = (T_h + T_c) / 2      (9)
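To make these relations concrete, the following Python sketch evaluates Eqs. (3)-(5) and (9) for a module feeding a resistive load. The Seebeck coefficient, module resistance, load resistance and temperatures used are illustrative assumptions, not measured values from this work:

# Minimal sketch of the TEG relations above; parameter values are illustrative.
def teg_operating_point(alpha_M, R_M, R_l, T_h, T_c):
    """Open-circuit voltage, load current, load power and average temperature."""
    dT = T_h - T_c                      # Eq. (4): junction temperature difference, K
    U0 = alpha_M * dT                   # Eq. (3): open-circuit voltage, V
    I = U0 / (R_M + R_l)                # Eq. (5): load current, A
    P_load = I**2 * R_l                 # electrical power delivered to the load, W
    T_avg = (T_h + T_c) / 2             # Eq. (9): average module temperature, K
    return U0, I, P_load, T_avg

# Example: hypothetical module with alpha_M = 0.0126 V/K and R_M = 0.36 ohm,
# palm at 35 degC and wall at 24 degC (dT = 11 K), matched load R_l = R_M.
U0, I, P_load, T_avg = teg_operating_point(0.0126, 0.36, 0.36, 308.15, 297.15)
print(f"U0 = {U0*1e3:.1f} mV, I = {I*1e3:.1f} mA, P_load = {P_load*1e3:.2f} mW")

With a matched load (R_l = R_M) this corresponds to the maximum-power condition that also underlies Eq. (10) below.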

III. PROBLEM SOLUTION
A. Device description
The thermoelectric generator (TEG) is composed of a high density of P-N metallic junctions (bismuth telluride) between two ceramic plates. One of the plates represents the warm side of the TEG and the other forms the cold one. The junctions are formed from several P-N metallic couples arranged orthogonally. Fig. 1 shows a schematic of the TEG.

Fig. 1. Section of a Peltier cell.

The output power from the module, which contains NT junctions, P_0, can be expressed as follows:

P_0 = U_0·I = NT·(α·ΔT)² / (4·R_M)      (10)


The problem is that the heat flowing from the warm side to the cold side of the TEG increases the temperature of the cold side, thus decreasing the temperature difference, ΔT, and further decreasing the output power of the thermocouples, P_0. That is why it is necessary to transfer as much as possible of the heat arriving at the cold side of the TEG into the surrounding area; this means decreasing the thermal resistance of the part of the TEG between the cold plate and the outside colder surface of the TEG, and further from the outside surface to the surrounding environment (air, in the case of a hand-held transmitter, or wall, in a wall-mounted transmitter). The heat flow transferred to the cold side of the TEG by heat conduction via the thermocouples from the warm side, P_c, is given by:

P_c = 0.5·I²·R_c + (λ·S/l)·ΔT      (11)
And the heat flow density, q_c, is defined as:

q_c = P_c / A_c      (12)

In relations (11) and (12):
P_c is the heat flow, in W;
A_c area of the TEG cold side surface, in m².

The maximum allowable thermal resistance, R_T, of the TEG between the cold plate and the surrounding air is:

R_T = T_rise / q_c      (13)

Where:
R_T is the allowed thermal resistance, in m²·K/W;
T_rise maximum allowed cold junction temperature increment, in °C;
q_c heat flow density, see (12).
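A small numerical sketch of Eqs. (11)-(13) is given below; all parameter values are illustrative assumptions chosen only to show the order of magnitude of the required cold-side thermal resistance:

# Sketch of Eqs. (11)-(13): heat arriving at the cold plate and the maximum
# allowable thermal resistance towards the surroundings. The numbers are
# illustrative assumptions, not values reported in this paper.
def allowable_thermal_resistance(I, R_c, lam, S, l, dT, A_c, T_rise):
    P_c = 0.5 * I**2 * R_c + lam * S / l * dT   # Eq. (11): Joule + conduction heat, W
    q_c = P_c / A_c                             # Eq. (12): heat flow density, W/m^2
    R_T = T_rise / q_c                          # Eq. (13): allowed resistance, m^2*K/W
    return P_c, q_c, R_T

# Example: 0.2 A load current, 0.36 ohm internal resistance, 1.5 W/(m*K)
# effective conductivity, 1 cm^2 junction area, 3 mm element length, dT = 11 K,
# 15 x 15 mm cold plate and 2 K allowed cold-side temperature rise.
P_c, q_c, R_T = allowable_thermal_resistance(
    I=0.2, R_c=0.36, lam=1.5, S=1e-4, l=3e-3, dT=11.0, A_c=0.015 * 0.015, T_rise=2.0)
print(f"P_c = {P_c:.2f} W, q_c = {q_c:.0f} W/m^2, R_T = {R_T:.4f} m^2*K/W")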

According to analyses of the measured data, we have found that the output voltages of the relevant TEGs are in the range of tens (TEC1) to at most hundreds of millivolts (TEC2). This voltage level is too low to power other devices. It follows that the TEG cannot be used directly but requires some means of raising the output voltage to a higher level. There are several possibilities for achieving higher output voltage levels. The simplest way is to connect the output of the TEG to the input of a conversion transformer with a suitably chosen transformation ratio and an appropriate resistance of the primary winding; however, this would load the TEG disproportionately. Another way to increase the output voltage level of the generator is to use a DC/DC converter, whereby it is possible to raise the output voltage to units of volts. This voltage level is already sufficient to power the connected equipment. Fig. 2 describes the proposed method of adjusting the output voltage.


Fig. 2. Block diagram of the generator
The mechanical construction of the device is simple. The input part of the device is formed by a Peltier module (TEC1, TEC2). The voltage output from the Peltier module is connected to the primary winding of a transformer, in this case with a transformation ratio of 1:100. The secondary winding of this transformer is connected to the first part of a step-up DC-DC converter. The first part of this block is a synchronous rectifier; the second part is composed of a shunt voltage regulator. The output voltage of this regulator is used to supply the other blocks of the device. The next step of voltage modification is a power manager, which consists of a reference generator, a low-dropout regulator, a Vout controller, charge control and a storage capacitor. For voltage level control, an ultra-low-voltage step-up converter and power manager consisting of an LTC 3108-1 from Linear Technology and five supporting components is used. This device operates from inputs as low as 20 mV, which is suitable for our application. The converter powers a wireless transmission element normally used in building control systems. Based on the above stated requirements, the device is larger than a usual wall switch. The whole device is situated on a single-sided PCB with dimensions of 55 x 28 mm. Table I describes the parameters of the Peltier modules.
TABLE I. TEG PARAMETERS

Type   Imax [A]   ΔTmax [K]   U [V]   Pmax [W]   L [mm]   B [mm]   H [mm]
TEC1   8.5        68          2.06    9.2        15       15       3.3
TEC2   8.5        68          15.4    81         50       50       4.5

With the cold side of the Peltier module mounted on a wall, it is possible to obtain a temperature difference between the human palm and the wall surface of approx. ΔT = 11 °C. This temperature difference produces an output voltage of 35 mV with the TEC1 and 150 mV with the TEC2 Peltier module.

Fig. 3 shows the schematic diagram of the DC/DC converter used.















Fig. 3. DC/DC converter schematic
Theoretical maximum power output from the used Peltier
modules TEC1 and TEC2 could be calculated from (10):

P_0max,1 = (α_M·ΔT)² / (4·R_M) = (0.01257 × 11)² / (4 × 0.364) = 0.013 W      (14)

P_0max,2 = (α_M·ΔT)² / (4·R_M) = (0.05150 × 11)² / (4 × 3.350) = 0.023 W      (15)

This power output could meet the requirements of low-power broadcasting systems.
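As a cross-check, the values in Eqs. (14) and (15) follow directly from Eq. (10) with the module-level coefficients quoted above (the number of junctions NT is assumed here to be absorbed into the module Seebeck coefficient). A short Python sketch:

# Recompute the theoretical maximum output power of Eqs. (14)-(15).
# alpha_M (V/K) and R_M (ohm) are the module values used in those equations.
def p_max(alpha_M, R_M, dT):
    return (alpha_M * dT) ** 2 / (4.0 * R_M)

dT = 11.0  # palm-to-wall temperature difference, K
print(f"TEC1: {p_max(0.01257, 0.364, dT)*1e3:.1f} mW")  # approx. 13 mW, cf. Eq. (14)
print(f"TEC2: {p_max(0.05150, 3.350, dT)*1e3:.1f} mW")  # approx. 24 mW, cf. Eq. (15)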
Fig. 4 shows the step response of the voltage output of the module. Fig. 5 shows the dimensions of the unit.
The step-up converter used also provides a regulated output voltage selectable among 2.5 V, 3 V, 3.7 V and 4.5 V. The level of the output voltage can be changed by connecting pins VS1 and VS2 to V_AUX or GND. The device was tested with all output voltage options. In the application as a transmitter for building control systems, the voltage levels of 3 V, 3.7 V and 4.5 V could be used. The voltage level depends on the required supply voltage of the transmitter.











Fig. 4. PGD and Vout sequencing



Fig.5. Test device

CONCLUSION
This paper describes the construction of an electric generator for wireless transmission systems. The main obstacle to the use of such generators is that only a relatively small temperature difference, ΔT, is available, which depends on the human palm temperature and the wall surface temperature. The measured data show that even with this small temperature difference it is possible to obtain enough electric energy for wireless transmission in building control systems. The thermal generator described above has been designed to be able to supply a current of 40 mA for 1 ms. This time is sufficient to transmit a simple telegram. Future work will deal with the design of a LonWorks telegram wireless transmission system using the thermoelectric generator described in this paper.
ACKNOWLEDGMENT
First of all, I would like to thank my colleagues from the office for a quiet working environment. This paper has arisen within the framework of the Internal Grant Agency of Tomas Bata University in Zlin, Faculty of Applied Informatics, IGA/FAI2013/026, and under the project CEBIA-TECH No. CZ.1.05/2.1.00/03.

REFERENCES

[1] D. Guyomar, Y. Jayet, L. Petit, E. Lefeuvre, T. Monnier, C. Richard, M.
Lallard, Synchronized switch harvesting applied to self- powered smart
systems: piezoactive microgenerators for autonomous wireless
transmitters, Sensors and Actuators A:Phys, Vol.138, No.1, 2007, pp.
151-160.
[2] E. Lefeuvre, A. Badel, C. Richard, D. Guyomar, A comparison
between several vibration powered piezoelectric generators for stand
alone systems, Sensors and Actuators A, Vol.126, No.2, 2006 , pp. 406-
416.
[3] Y. Deng, W. Zhu, Y. Wang, Y. Shi, Enhanced performance of solar
driven photovoltaic- thermoelectric hybrid system in an integrated
design, Solar energy, Vol. 88, 2013, pp. 182-191.
[4] D.T. Crane, J.W. Lagrandeur, F. Harris, L.E. Bell, Performance results
of high-power-density thermoelectric generator: Beyond the couple,
Journal of Electronic Materials, Vol.38, No.7, 2009, pp. 1376-1381.
[5] A.Z. Sahin, B.S. Yilbas, Thermodynamic irreversibility and
performance characteristic of thermoelectric power generator, Energy,
Vol 55, 2013, pp 899-904.
[6] X. Gou, S. Yang, H. Xiao, Q. Ou, A dynamic model for a
thermoelectric generator applied in waste heat recovery, Energy, Vol.
52, 2013, pp 201-209.
[7] J.C. Moreno, E. Rodrigues, J. Frias, G. Esnal, Z. Liznam, K. Vouros,
Energy sources manager in buildings: Control and monitoring,
Proceeding of the 3rd international conference on Circuits, Systems,
Control, Signals (CSCS 2012), pp. 15-21.
[8] Linear Technology, LTC 3108 datasheet, available online:
http://cds.linear.com/docs/en/datasheet/3108fb.pdf.
[9] ZT Service Inc. C.B.Vining, The thermoelectric Process, available
online: http://www.poweredbythermolife.com/research.htm
[10] Ferro Tec, Thermoelectric Technical Reference, available online:
http://thermal.ferrotec.com/technology/thermoelectric/thermalRef01/




The impact of climate change on farm business
performance in Western Australia
Understanding farmers' adaptation responses and their key characteristics in response to a changing and variable climate

Anderton, L.¹, Kingwell, R.², Feldman, D.¹, Speijers, J.³, Islam, N.¹, Xayavong, V.¹ and Wardell-Johnson, A.⁴

¹Department of Agriculture and Food Western Australia (DAFWA), ²Australian Export Grains Innovation Centre (AEGIC) and University of Western Australia, ³Private consultant, ⁴University of Sunshine Coast

Australia


Abstract: This study examines ten years of financial and
production data of 249 farm businesses operating in south-
western Australia. It also identifies the behavioural
characteristics of the farm operators through a
comprehensive socio-managerial survey of each farm
business.
The study area has a Mediterranean climate, where three quarters of the rainfall is received during the growing season from April to October. Growers have learned to produce 2 tonnes per hectare of wheat on less than 200 mm of growing season rainfall.
Australia is the driest continent in the world and is
renowned for its climate variability. In addition, evidence is
emerging that its southern parts, like south-western
Australia, are experiencing a warming, drying trend in their
climate. Average annual rainfall over the last thirty years in
the study area has declined and average minimum and
maximum temperatures have risen. Moreover, in the last ten
years a number of droughts have occurred.
This multidisciplinary study examines the business
performance of 249 farms from 2002 to 2011 and identifies the
strategies farm managers have adopted to adapt to a drying,
warming environment. Farms are categorised according to
their performance. Their characteristics are compared and
contrasted. We find many significant differences between
farm performance categories and the adaptation strategies
used by the farmers in each category. There are also
different socio-managerial and behavioural characteristics
between the groups of farmers identified.

Keywords: climate change; farm performance; behavioural characteristics.
I. INTRODUCTION
Australian farmers face two major climate risks: climate
variability and climate change. Climate variability refers to
the short-term fluctuations in temperature, rainfall and other
climatic conditions over a season or across years. In contrast
climate change describes the longer term trends (decadal or
longer) in the underlying climate [1]. Australia's climate is recognized as one of the most variable in the world [2 & 3] and as a result it is one of the greatest sources of risk for Australian agriculture [1 & 4]. Long-term climate change for southern Australia is projected to involve an increase in temperatures and a decrease in rainfall. This projected warming and drying trend has already begun to be observed in southern Australia [5, 6, 7 & 8] and is complemented by increasing atmospheric concentrations of carbon dioxide (CO2). Contemporaneous with this unfolding change in climate, farmers have experienced a period of marked volatility in farm product prices since the late 1990s [9 & 10]. Against this backdrop of price volatility and climate challenge, farm businesses in Australia have also needed to cope with the business pressures arising from a strong Australian dollar, scarce farm labor and an ageing farm workforce; all factors adding to the challenge and complexity of broad acre farming [9].

This study examines the impact of the above issues on the
financial performance of 249 broad acre farms in Western
Australia located in the shaded area shown in Figure 1. Their
financial data is complemented with data from a socio-
managerial survey completed by farm consultants who have
worked with the family farm businesses in the same period.
The aim of the study is to understand and identify the different
characteristics of farms and their capacity to adapt to climate
change.
A. The study region

A Mediterranean climate prevails, characterized by long,
hot and dry summers and cool, wet winters. In the northern
and central parts three-quarters of the average annual rainfall
is received between April and October. Summer rainfall is
highly variable, and is more common along the south coastal
parts of the region. (Figure 1)
The farming systems are mixed grain and livestock, predominantly sheep enterprises; only 23 properties in the sample data had cattle at the end of the study period. Wheat,
barley, canola, and lupins are often grown in crop sequences rather than strict rotations. The area of lupins has decreased substantially in recent years due to their poor profitability. By contrast, the area of canola has increased due to improved varieties, better agronomic practices and good profitability.

Fig. 1. Southern Agriculture region of Western Australia

This work was carried out with financial support from the Australian Government (Department of Climate Change and Energy Efficiency) and the National Climate Change Adaptation Research Facility (NCCARF).
Department of Agriculture and Food Western Australia and Australian Export Grains Innovation Centre.

Sheep are run on annual pastures during winter and spring.
In summer months, livestock feed on pasture residues and
crop stubbles. In late summer through to early winter there is
often a feed gap and grain supplements of lupins or barley are
fed to maintain animal welfare. The quantity and quality of
pasture produced is mainly influenced by the timing of the first winter rains, known as the "break of the season", soil type and management. When the break of the season is early, pasture production is greater.
B. The climate of the study region

Figure 2 shows the annual mean temperature anomaly in
south-western Australia from 1910 to 2012, indicating a
warming trend.



Fig. 2. Annual mean temperature for South-Western Australia (1910-2012)
Accompanying the warming trend has been a drying trend
as illustrated in figure 3.


Fig. 3. Annual percentage area in decile 10 (1900 -2011)
Most parts of the south-western region have not
experienced extremely wet years since the 1970s (i.e. decile
10 rainfall years). The absence of wet years makes runoff into
farm dams problematic and lessens soil moisture reserves,
making plant growth very dependent on growing season
rainfall, and making crop yields more vulnerable to spring
conditions. Moreover, the overall trend in annual rainfall is
downwards. The region's expected annual rainfall at the start of the 1900s was around 750 mm. Currently, the trend value for annual rainfall is around 620 mm. This drying trend is observed throughout the south-west region, from inland to coastal parts.
Besides weather-year variation and its underlying
warming, drying trend, farms in the study region also have
faced pronounced price volatility, especially for grains. This
volatility has been a global phenomenon [9].
Since the mid-2000s large changes in grain prices have
been observed. For example, in the early months of 2008 the
cash price for wheat peaked at A$430 per tonne yet towards
the end of 2008 the price was as low as A$285 per tonne, a
one third drop in price. Such volatility in price has greatly
affected the profitability of grain enterprises and highlighted
the very important role that grain marketing and price risk
management now plays in grain production since Australia
deregulated the wheat market in 2008.
II. METHODS

Farm business records of 249 farms were obtained from
three farm consulting firms for the period 2002 to 2011. These
longitudinal datasets describe the farm production and
financial records of each farm over the decade. The data was
carefully synchronized to ensure consistent variables were
used across the three sources of data.
The sample sizes in the main zones represent around 15
percent of the farm population in those zones. However, it
may not necessarily be truly representative of the wider
farming community in each zone since the data is supplied
from farms sufficiently viable to afford agricultural
consultants. The data may be upwardly biased if only above
average farmers use consulting firms.
Complementing the physical and financial datasets of farm
businesses were socio-economic and managerial data. These
are client questionnaire assessments provided by the
consultants. Because the farmers have been clients of the
particular consultancy firms for at least the period 2002 to
2011, and because the farmers tend to retain the same
consultant, often a close professional relationship between the
consultant and their client exists. Accordingly the consultant is
often well-informed about the socio-managerial environment
that underpins the operation of the farm business and
consequently they are well-placed to provide independent
assessments.
The questionnaire was pilot-tested and revised before
sending out to the consultants. A rich dataset of the socio-
managerial characteristics of each business was acquired. The
information collected includes the demographics of each
business, training history throughout the period, cropping and
livestock innovations implemented, technical innovations used
and business and time management skills evaluated.
Often, farm performance is assessed and reported on an annual basis. Rarely are metrics used that consider longer-term performance. Benchmarking and farm survey reports are usually based on annual samples that can change in size, and rarely is the same set of farms compared through time. In this paper we examine the same 249 farms over a decade.
We employ five categories of farm business performance, adapted from Blackburn and Ashby [11]. The
categories of farm performance are described as growing,
strong, secure, less secure and non-viable. The derivation of
these categories is shown in Table 1. The operating
surplus/deficit is calculated as gross farm income minus
variable costs and fixed costs. Profit for each year is calculated
by subtracting the cost of finance (interest), personal expenses
of the business and depreciation (calculated as 10% of total
machinery value for the year), from the operating surplus.
TABLE I. FARM PERFORMANCE DEFINED

                            Growing    Strong     Secure                Less Secure   Non-viable
Operating surplus
  MINUS Finance (interest)
  MINUS Personal expenses
  MINUS Depreciation
  EQUALS Profit             +ve        +ve        -ve                   -ve           -ve
EQUITY                      Increase   Maintain   Maintain or Decline   Decline       Decline

The change in equity was calculated as the difference
between net assets in 2002 versus their value in 2011, using
constant land values based on the values in the first year,
2002. A business which achieved a profit at least seven years
in ten and showed a real increase in equity from 2002 to 2011
was classified as a growing business. The distinction between
a growing and strong business was that the strong business
only maintained equity and achieved a profit in six of the ten
years. Secure businesses could pay for their personal
expenses, finance costs and depreciation but they made
minimal profit and their equity was either maintained at a
constant level or decreased over the period. Less secure
businesses failed to achieve a profit after allowing for their
finance cost, depreciation and personal expenses. Their equity
declined as a consequence.
If an operating surplus is not achieved consistently over a
period of time, the viability of the farm is eventually
questionable. However it is possible to have a bad year or a
number of bad years where an operating surplus is negative
and equity declines, but the business can eventually recover if
sufficient profit is subsequently achieved.
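A minimal sketch of the classification logic described above is given below; it assumes hypothetical yearly records per farm, uses simplified decision rules for the profit and equity tests, and all field names are illustrative rather than taken from the study's dataset:

# Simplified sketch of the farm performance classification described above.
# The yearly records, field names and decision rules are illustrative only.
def yearly_profit(rec):
    operating_surplus = (rec["gross_income"] - rec["variable_costs"]
                         - rec["fixed_costs"])
    depreciation = 0.10 * rec["machinery_value"]   # 10% of machinery value per year
    return (operating_surplus - rec["interest"]
            - rec["personal_expenses"] - depreciation)

def classify_farm(yearly_records, equity_2002, equity_2011):
    profitable_years = sum(1 for rec in yearly_records if yearly_profit(rec) > 0)
    equity_change = equity_2011 - equity_2002       # constant (2002) land values
    if profitable_years >= 7 and equity_change > 0:
        return "growing"
    if profitable_years >= 6 and equity_change >= 0:
        return "strong"
    if equity_change >= 0:
        return "secure"
    return "less secure"   # the "non-viable" category did not occur in this sample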
When the farms are categorized using the five categories outlined in Table 1, 64% of the sample farms are classed as growing (40%) or strong (24%). A further 23% are classified as secure and only 13% are in the less secure category. Although some farms experienced bad years during the period where they did not achieve an operating surplus, none of the farms in the sample is categorized as non-viable. This last result may be an artefact of the source of data: requiring a decade's worth of observations on each farm business necessarily excluded businesses that were unviable and left farming during the study period.
The farms were also categorized by farm type: crop specialists, livestock specialists and mixed, based on their dominant land use. The majority (72%) of the farms were mixed, cropping between 40% and 80% of their land. Twenty per cent of the farms were crop specialists, some with no livestock, and 6% were livestock specialists, cropping 40% or less of their farm land area.
When farms are categorized on the basis of farm type and performance, the findings show that all three types of farms include growing businesses. However, while a higher proportion of crop specialists are growing, a higher proportion are also less secure, implying there is additional risk in specializing in a crop dominant production system. The livestock specialists are more likely to be in the secure group (38%) and fewer are less secure (8%). Although they are less likely to have financial difficulty, they are also less likely to be in the growing or strong groups. This reflects the lower profitability but also the lower volatility of a livestock dominant production system.
The mixed farms are the largest group (72%) of the sample and have a higher proportion of less secure businesses (16%) than either the crop specialists (10%) or the livestock specialists (8%), see Figure 4. However, the majority of these businesses (64%) are either growing or strong and 36% are secure or less secure. Only 8% of livestock specialists are classified as less secure, but a large proportion are only managing to maintain their equity levels and are considered secure.



Fig. 4. Farm performance by enterprise mix
Crop specialists are out-performing the other groups, with 45% growing their businesses, but a cautionary note is required around this observation because 10% of crop specialists are also less secure (see Figure 4).
When farms are categorized by region (see Figure 5), the observations to note are as follows. The northern agricultural region, particularly zone M1, is performing very well: more than 50% of farm businesses are growing, 71% of these businesses are crop specialists and mostly large farms, and the remainder are mixed farmers. Less secure businesses in this region are not present in this sample. The central wheat belt zones M2, M3 and M4 (see Figure 1) all have less than 20% of businesses growing, and in zone M4 it is less than 10%. M2, however, has a large proportion of strong businesses, with 50% either strong or growing. More than 50% of farm businesses in the M5, H4 and H5 zones are either growing or strong.

Fig. 5. Regional farm performance
On average, the farms in all four performance groups increased their crop area from 2002 to 2012. The growing group of farmers has the highest cropping area as a percentage of farm area. They started with the highest area in 2002, with an average of 65% of their land allocated to crops; by 2011 this average had increased to 85% of their land. They have also increased their cropping areas the most, compared to the other three groups.
However, the downside to increased cropping areas is an increase in revenue volatility. For growing crop specialists, the standard deviation of average profit increases, which shows that greater profitability is associated with increased crop area but also with increased volatility in profit, and therefore increased risk.

There is a reward for increasing risk, but there is also a penalty of increased volatility; this is also observed in the data as a higher proportion of less secure crop specialists compared to livestock specialists.
There is a strong correlation between growing season rainfall and yield potential [12]; however, a straight-line relationship does not exist, and factors like the distribution of rainfall across the growing season, the size and distribution of rainfall events, water holding capacity and soil type all affect yield outcomes [13].

Applying the farm performance classification criteria
outlined in Table 1 to the dataset generates the results in Table
2. The mean values of each main characteristic of farm
businesses in each of the four categories of farm performance
are listed.
























TABLE II. MEAN FARM PERFORMANCE FOR THE GROUPS




                                   Unit    Growing      Strong       Secure       Less secure
Gross farm income $ 1,577,486 1,204,430 1,070,855 791,490
Operating costs $ 996,072 808,160 730,798 594,360
Operating surplus $ 581,414 396,270 340,057 197,130
Profit $ 273,090 138,128 114,573 - 43,983
Personal Expenses $ 111,752 105,847 83,202 84,701
Interest payments $ 81,477 52,699 58,261 81,524
Machinery replacement $ 115,259 99,596 84,021 74,439
Debt to income ratio no. 0.99 1.05 1.35 1.64
Operating expenses as a % of gross farm income % 69.5 73.1 79.3 91.9
Land owned ha 3,875 3,422 3,093 2,739
Land operated ha 3,935 3,502 3,269 2,660
Land value $ 4,685,816 4,496,043 3,557,352 3,276,747
Farm assets $ 6,987,197 6,202,225 4,864,321 4,608,275
Business assets $ 7,717,971 7,048,667 5,356,378 4,985,611
Liability $ 1,417,091 1,193,862 1,389,985 1,213,838
Equity $ 6,431,107 5,743,213 3,963,110 3,749,779
Equity as a % % 82.4 82.2 75.6 76.7
Crop area ha 2,826 2,313 2,188 1,770
Pasture area ha 1,110 1,190 1,081 890
Crop Income as % of farm income % 80 77 76 74
Crop income per ha $/ha 464 427 403 379
Livestock income per ha $/ha 250 201 295 255
Farm asset value per ha $/ha 1,853 1,963 1,646 2,040
Business asset value per ha $/ha 2,054 2,194 1,815 2,200
Debt per ha $/ha 375 393 429 515
Equity per ha $/ha 1,709 1,768 1,376 1,677
Return on capital % 5% 3% 4% -1%
Return on equity % 11% 8% 10% 6%

The seasonal impact on profit outcomes is significant and is shown by a gross margin analysis of the data.
The growing farms achieve a crop gross margin of more than $300 per hectare in four years out of ten. Although the ten-year average crop gross margin ($160/ha) is less than the average livestock gross margin ($190/ha), business growth occurs in the years when more than $300/ha is achieved. These farms were able to capitalize on high prices in reasonable seasons to achieve high gross margins. The years this occurred were 2003, 2007, 2008 and 2011, which all coincide with good seasonal conditions, favorable terms of trade and high grain prices. The strong businesses achieve two years with gross margins above $300/ha, secure businesses only one year and the less secure businesses none.
Identifying the differences between these farm categories provides insight into the drivers of farm business success. It is interesting to note that in five of the ten years the growing businesses achieved the same result from livestock and cropping; it did not matter which enterprise they chose.
The enterprise mix and allocation of resources on a farm is a choice made by farm business managers based on a number of factors such as prices, capabilities, land suitability and personal preferences (i.e. likes and dislikes of types of work).
These decisions are mostly based around the farmers' risk preferences in the context of their individual circumstances and the resources available to them. There is a clear impact of the enterprise mix on farm business performance. Gross margin analysis of the data reveals that achieving gross margins above $300/ha in four years out of ten allowed a business to grow. A few livestock specialists were able to achieve gross margins of $200-250/ha and generate growth.
Although not discussed here in detail, significant productivity differences were found between the farm performance groups and between farm types. Crop specialists experienced productivity improvement from technical efficiency change, that is, the adoption and adaptation of existing technology. The livestock specialists experienced no productivity improvement during the study period [14 & 15].
A comparison of the farm performance groups using the socio-managerial information collected about each family farm provides some useful and unique insights into the characteristics of farm businesses that are adapting to variable and changing climates. A strong correlation was found between the farm performance groups and their application of technology and implementation of innovations. The growing farms tend to look after their machinery better and are more organized. Their machinery is more likely to be ready for seeding in a timely manner, they have introduced more cropping innovations for a longer duration of time, and they lease more land. The growing farms that are livestock specialists also implement more livestock innovations than the other groups. Growing farms, regardless of farm type, are also more involved with their communities. Table 3 shows the results from the statistical analysis of the organizational and time management skills.
TABLE III. RESULTS OF ANALYSIS FOR ORGANISATION AND TIME MANAGEMENT SKILLS

                                                        Growing  Strong  Secure  Less secure  P-value
Seeding equipment ready to go                           91       89      80      59           0.000
Header ready at harvest                                 90       89      80      75           0.139
Header cleaned and put away after harvest is finished?  84       77      67      69           0.108
Do they regularly service their tractors?               88       87      85      81           0.825
How do you rate their plant and machinery care?         88       81      81      70           0.147
Do they take annual holidays and/or regular breaks?     69       52      60      67           0.221
Labor management                                        76       51      48      39           0.000
Work life balance                                       58       48      43      55           0.313
Office away from home                                   36       17      19      27           0.044
Is your client involved in the rural community?         72       65      60      45           0.052
Do they play sport locally?                             51       42      34      30           0.124

There is a significant difference between the farm performance groups in the way they look after their machinery. Growing farms are timely in their management practices, ensuring their equipment is ready for operational jobs like seeding, which suggests they have a high level of organizational skill. They also achieve a work/life balance. The growing group of farmers implement strategies which allow them to grow; they are more organized.

III. DISCUSSION
When compared to the less secure farms, the growing farms tend to have the following key differences. Growing farms are larger, generate a higher rate of return on capital and equity, carry less debt per hectare, are slightly more crop dominant, have higher personal and machinery replacement expenses, have a much lower debt to income ratio, have slightly higher equity in percentage terms, generate similar livestock income per hectare but much higher crop income per hectare, and overall generate much higher profits.
The practical implication of this finding is that it has not generally been possible for farm businesses to achieve a high mean operating surplus per hectare whilst simultaneously achieving little variance in the operating surplus per hectare. Hence, a farm business strategy of lifting the farm's mean operating surplus per hectare has necessarily involved an increase in the variance of the operating surplus per hectare.
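The mean-variance pattern described above can be summarized directly from longitudinal records of operating surplus per hectare. The sketch below uses hypothetical figures for three illustrative farms (not values from the study) simply to show the calculation:

import statistics

# Hypothetical operating surplus per hectare ($/ha) over ten years for three
# illustrative farms; these are not values from the study's dataset.
surplus_per_ha = {
    "crop_dominant_farm":  [310, -40, 280, 520, -120, 450, 90, 380, 60, 410],
    "mixed_farm":          [180, 60, 150, 260, 20, 230, 110, 200, 90, 210],
    "livestock_dominant":  [120, 90, 110, 140, 80, 130, 100, 120, 95, 125],
}

# The risk-reward tradeoff: a higher mean operating surplus per hectare tends
# to come with a higher standard deviation (more year-to-year volatility).
for farm, values in surplus_per_ha.items():
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    print(f"{farm:22s} mean = {mean:6.1f} $/ha   sd = {sd:6.1f} $/ha")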
The results show farmers who have a positive attitude
towards taking risks by increasing their cropping areas will
benefit from the upside associated with cropping and their
businesses will grow. A risk-reward response is evident from
the data; the businesses which have succeeded in growing
their business during the ten year period have consistently
increased their percentage of area cropped, but at the same
time have experienced an increase in variability of profit.
A sequence of favorable production years will allow crop
dominant farmers to produce their way towards business
growth. However, there are some important caveats to the
findings of this study: the converse is also applicable. An increased frequency of very poor production years will eventually lead crop dominant farm businesses towards insolvency. Hence, the crucial issue for climate change is not just the trend in environmental change but, more importantly, the nature of the variation about that trend. An increased frequency of very dry years, for example, will undermine farm profitability. It is the farmers' response to these years and their ability to manage their business which underpins their ability to grow their business.
Each business is measured against itself at the start of the period; a comparison between businesses is not made. This unique study, using longitudinal data, does not make the cross-sectional comparisons of businesses that are more commonplace in other studies, so inappropriate comparisons between firms with different resource bases are avoided in the methodology employed and the data used. However, it is not possible in this study to provide a gradation within each group or distribution; the growing group therefore includes different types and sizes of farms which are growing at different rates. Nonetheless, they are growing because they take risks, use more innovations, are more organized (enabling them to have a better work-life balance) and make management decisions which enable their business to grow.
ACKNOWLEDGMENT
We would like to thank the three farm management
consultancy firms that enabled this research project to proceed:
Farmanco, PlanFarm and Evans&Grieve and Associates. In
particular Greg Kirk, Cameron Weeks, Rob Sands, Rod Grieve
and Ian Evans for their valuable assistance and cooperation.
REFERENCES

[1] Loch A, Hatt, M., Mamum, E., Xu, J., Bruce, S., Heyhoe, E., Nicholson,
M., Ritman, K., (2012) Farm risk management in a changing climate.
ABARES conference paper 12.5, Canberra, March 2012.
[2] CSIRO and Bureau of Meteorology (2007) Climate change in Australia.
Technical Report, CSIRO publishing , Melbourne.
[3] Hennessy, K., Fitzharris, B., Bates, B., Harvey, N., Howden, S., Hughes,
L., Salinger, J. and Warrick, R. (2007) Australia and New Zealand.
[4] Kimura, S. and Antn, J. (2011) Risk Management in Agriculture in
Australia. OECD Food, Agriculture and Fisheries Working Papers, No.
39, OECD publishing.
[5] Frederiksen, J.S., Frederiksen, C.S., Osbrough, S.L. and Sisson, J.M.
(2011) Changes in Southern Hemisphere rainfall, circulation and
weather systems. 19th International Congress on Modelling and
Simulation.
[6] Cai, W., Cowan, T. and Thatcher, M. (2012) Rainfall reductions over
Southern Hemisphere semi-arid regions: the role of subtropical dry zone
expansion. Nature (Scientific Reports) 2, Article number: 702,
doi:10.1038/srep00702
[7] Asseng, S. and Pannell, D. (2012) Adapting dryland agriculture to
climate change: Farming implications and research and development
needs in Western Australia. Climatic Change DOI 10.1007/s10584-012-
0623-1
[8] Addai, D. (2013) The economics of technological innovation for
adaptation to climate change by broadacre farmers in Western Australia,
Unpublished PhD thesis, School of Agricultural and Resource
Economics, University of Western Australia.
[9] Kingwell, R (2011) Revenue volatility faced by Australian wheat
farmers. Australian Agricultural and Resource Economics Society 2011
Conference (55th). http://purl.umn.edu/100572
[10] Martin, W. (2013) Managing high and volatile food prices, Invited paper
to presented at the Australian Agricultural & Resource Economics
Societys annual conference, Sydney, Feb 5-8, 2013.
[11] Blackburn, A. and Ashby, R. (1995) Financing Your Farm - 3rd Edition,
Australian Bankers Association, Melbourne.
[12] French, R.J. and Schultz, J.E. (1984) Water use efficiency of wheat in a Mediterranean-type environment. I. The relation between yield, water use and climate. Australian Journal of Agricultural Research 35, 743-764.
[13] Ludwig, F., Milroy, S. and Asseng, S. (2009) Impacts of recent climate
change on wheat production systems in Western Australia. Climate
Change 92 (3), 495 -115.
[14] Islam, N., Kingwell, R., Xayavong, V., Anderton, L., Feldman, D. and
Speijers, J. (2013) Broadacre farm productivity trajectories and farm
characteristics. A contributed paper to the 57th Annual Conference,
Australian Agricultural and Resource Economics Society, Sydney,
February 5-8, 2013.
[15] Kingwell, R, Anderton, L, Islam, N, Xayavong, V, Wardell-Johnson, A,
Feldman, D & Speijers, J 2013, Broadacre farmers adapting to a
changing climate, National Climate Change Adaptation Research
Facility, Gold Coast, 171 pp.
.


Evolution of Smart Buildings
GABRIEL IULIAN FANTANA
Department of System Theory, Polytechnic University Bucharest
Bucharest, ROMANIA
gabi_fantana@yahoo.com

STEFAN ADRIAN OAE
Department of Engineering and Technological Systems Management, Polytechnic University Bucharest
Bucharest, ROMANIA
oae_stefan@yahoo.com

Abstract: Today, in times of economic crisis, the need for optimization is more present than ever. Since buildings are responsible for almost 50% of total environmental pollution, which implies significant resource spending, changes for improvement must be made.
Keywords: smart building; sustainability; integration; energy; management.
I. INTRODUCTION
In every country in the world, the built environment normally constitutes more than half of the total national capital investment, while construction represents only 10% of GNP [1]. If this value is translated into the energy consumption which defines the quality of life, two things emerge: the enormous pollution and the amount of resources spent.
Clean air is a mixture of gases - 78% nitrogen and 21%
oxygen - with traces of water vapor, carbon dioxide, argon,
and various other components.
In Europe alone, air pollution from the 10,000 largest polluting facilities cost citizens between €102 and €169 billion in 2009. Half of the total damage cost (between €51 and €85 billion) was caused by just 191 facilities [2].
The industrial facilities considered include large power plants, refineries, manufacturing combustion and industrial processes, waste and certain agricultural activities. Emissions from power plants contributed the largest share of the damage costs (estimated at €66-112 billion). Other significant contributions to the overall damage costs came from production processes (€23-28 billion) and manufacturing combustion (€8-21 billion). Sectors excluded from this analysis are transport, households and most agricultural activities; if these were included, the cost of pollution would be even higher.
Tall buildings were meant to exploit the land but have negative effects on the environment and create new problems, including increasing population congestion and pollution, and reduced citizen access to fresh air and sunlight.
Given the current trends of population increase and land shortage, tall buildings cannot be replaced.
In the US, according to the Department of Energy, buildings consume approximately 37% of the energy and 68% of the electricity produced annually. Energy-management practices and energy-efficient equipment can reduce energy costs by at least 20% - a net savings opportunity worth more than $11 billion by 2010 [3].
In Eastern Europe, Romania is the first country that will fulfill the Kyoto objectives by reducing its greenhouse gas emissions by 8%. The CO2 emission is also 58% less than the reference level, and this was possible because of the reduction of burned fossil fuel [4].
Pollution prevention is the "gateway" to sustainability; pollution is also a waste. Understanding how it is generated and how it can be minimized is the first step to eliminating it, increasing efficiency and developing sustainable production methods.
Sustainable design is not just about cost; it offers economic, environmental and social benefits. Solar panels, for instance, can reduce the amount of natural resources spent and lower the overall cost of the energy used. This also helps by reducing the load on local energy distribution stations, so black-out phenomena can be avoided more easily.
II. SMART BUILDING EVOLUTION
Born in the early '80s, the smart building concept involved especially an extensive use of elaborate centralized electronic systems to make possible the control of building support and communication systems for voice and data. The initial stage promoted communication networks in order to centralize word processing services and limited interaction between occupants and the BAS through an easy user interface such as touch switches.
Intelligent buildings use technological solutions to improve the building environment and functionality for occupants/tenants while monitoring costs. Improving end-user security, comfort and accessibility enhances users' productivity and comfort levels. The owner/operator wants to provide optimal functionality while reducing individual costs. An effective energy management system, for example, provides the lowest cost energy, avoids waste of energy by managing occupied space, and makes efficient use of staff through centralized control and by integrating information from different sources.
Many definitions have been developed, but the most accepted one was formulated by the Cerdà institute in Barcelona [5]: a "system that supports the flow of information throughout the building, offering advanced services of business automation and telecommunications, allowing furthermore automatic control, monitoring, management and maintenance of the different subsystems or services of the building in an optimum and integrated way, local and/or remote, and designed with sufficient flexibility to make possible in a simple and economical way the implementation of future systems."
An Intelligent Building is one which provides an efficient and cost-effective environment through optimization of its four basic elements [6]:

Fig 1 Smart building's main four elements

In its early stages, the concept referred more to automation than to intelligence. The main difference between the two is that the first only uses the component systems and provides their optimal behavior, while the second deals especially with mathematical and computer sciences which, by using extended algorithms, can "think" and dynamically change parameters in order to anticipate and optimize occupants' expectations.
The history of IB can be divided into several stages. During the '60s, the smart or intelligent building was equipped with a building automation system whose capabilities acted as a work- and time-saving device. Then the first energy crisis, in 1973, led people to re-examine the ways of energy use and generation and raised awareness of energy conservation, so smart features became associated with energy conservation and crime prevention systems. These were followed by integrated single-function/dedicated systems ('80s), integrated multifunction and building level systems ('90s), the computer integrated building (2000) and enterprise network integrated systems (today).

Fig 2 Smart building evolution through time
In integrated single-function/dedicated systems, all the BA subsystems (including security control; access control; heating, ventilation and air conditioning [HVAC] control; lighting control; lift control; other electrical systems; fire automation; etc.) and CA subsystems (including electronic data processing and data communication; telefax and text communication; voice communication; TV and image communication; etc.) were integrated at the level of a single or individual function subsystem. Integration and communication between the automation systems of different subsystems was impossible.
Integrated multifunction systems provided integrated security, access control, and automation and/or services systems. Networks for text, data, voice and multimedia communications were also unified.
Building level integrated systems had both BA and
communication systems integrated at building level as
building automation system (BAS) and integrated
communication system (ICS). At this stage, a BA system
could be accessed remotely using a modem, while the cellular
phone for voice and data communication was introduced.
In the computer integrated building, various types of networks became available and were used in practice progressively, through the use of Internet Protocol (IP) network technologies and increased network capacity. Remote monitoring and control could be achieved via the Internet.
The enterprise network integrated system promotes intelligent systems that can be integrated and managed at domestic, enterprise or city level. Intelligent building systems are no longer enclosed within buildings but are connected with IB systems in separate buildings via the global Internet infrastructure. Combining BAS and IT through the backbone of an Internet Protocol network allows multiple services to be delivered to occupants.
Integration and management at this level become possible due to the application of modern IT technologies such as Web Services, XML, remote portfolio management and helpdesk management, among others. In terms of communication, multimedia communication via cellular phone has been brought into practical use.
III. CONCLUSIONS
The integration of all components and subsystems has been the main focus of IB technology development. Integration is essential for most functions, such as automatic monitoring and management, building performance optimization and diagnosis. Function integration increases the flexibility and possibilities of intelligent management of buildings. Digital technology is a key component of the integration, because traditional technologies still have many constraints in terms of information exchange and integration. The microprocessor provides remarkable power in computing, transmitting and processing information, being the key element of digital systems and of IB and BA systems.
Modern building systems have become very large and complex in terms of system scale, hardware and software system configurations, while their functions and capacities have been increasing progressively [7]. As can be seen in Fig. 3, plug loads have increased since 1995 and the prognosis for their evolution until 2035 is, at the least, frightening [8].

Fig 3 Plug loads evolution between 1995 and 2035
Plug loads are a major contributor to building energy consumption, especially in offices. In commercial buildings, plug loads are one of the fastest-growing end uses in terms of energy consumption and typically account for 30-35 percent of the total electricity used [9].
In order to optimize costs, many commercial building owners and managers have installed smart building energy management systems to reduce energy use and operating expenses.
Today, the automated, or smart, building energy management systems market is expected to almost quadruple in size and be worth over $1 billion by 2020 [10].


Fig 4 Smart Building Managed Services Spending [10]
Pike puts the market potential for all buildings in the U.S. at more than $37 billion in 2010. By 2020, the market is predicted to be worth more than $45 billion [10].
System reliability is also an important issue. Developing a decentralized network or a local area network (LAN) is the key that will solve the system reliability issues and simplify IB networks. Distributed intelligence is a major solution to ensure connection through complex IB and BA systems. "Integrated but independent" is one of the most essential concerns in the development and configuration of these systems.


Fig 5 Total Potential Market for Energy Management Systems by Building
Usage [10]
Intelligent buildings need to be sustainable (i.e. sustain
their performance for future generations), healthy and
technologically up to date; meet regulatory demands; meet the
needs of the occupants; and be flexible and adaptable enough
to deal with change. Buildings will contain a variety of
systems devised by many people, and yet the relationship
between buildings and people can only work satisfactorily if
there is integration between the supply and demand-side
stakeholders as well as between the occupants, the systems
and the building. To achieve this, systems thinking is
essential in planning, design and management, together with
the ability to create and innovate while remaining practical.
The ultimate objective should be simplicity rather than
complexity. This requires not only technical ability but also
the powers of interpretation, imagination and even intuition.
Building Regulations can diminish creativity but are necessary
to set a minimum level of expectation and obey health and
safety requirements. However, we should aim at designing
well above these conditions. After all, buildings form our
architectural landscape and the environment they generate,
should uplift the soul and the spirit of those people within
them as well as those who pass by them.
REFERENCES

[1] CICA (2002). Construction. UK, Beacon Press.
[2] European Environment Agency, Kongens Nytorv 6, 1050 Copenhagen K, Denmark, Revealing the costs of air pollution from industrial facilities in Europe - a summary for policymakers, EEA Technical report No 15/2011.
[3] McKinsey, Reducing US GHG Emissions: How Much at What Cost? US GHG Abatement Mapping Initiative.
[4] Gaman (Pasvantu) Irina Cristina, Triunghiul Dezvoltarii Durabile (Economic, Mediu, Social) [The Triangle of Sustainable Development (Economy, Environment, Social)].
[5] Lafontaine, J. (1999). Intelligent building concept. Ontario: EMCS Engineering Inc.
[6] Wang Shengwei, Intelligent Building and Building Automation, Spon Press, USA, 2010.
[7] Wong, J. K. W., Li, H. and Wang, S. W. (2005) Intelligent building research: a review, Automation in Construction, 14(1): 143-159.
[8] Barnaby Chambers, graphic based on data from the Energy Information Administration's Annual Energy Outlook 2012.
[9] Tolga Tutar, How to defeat the massive plug-load monster, greenbiz.com, May 16, 2012.
[10] Navigantresearch.com, Energy Management Systems for Commercial Buildings, November 18, 2009.

Municipal waste water toxicity evaluation with Vibrio fischeri

Helena Raclavska, Jarmila Drozdova, Silvie Hartmann
Energy Units for Utilization of non Traditional Energy Sources (ENET)
VŠB - Technical University of Ostrava
Ostrava, Czech Republic
jarmila.drozdova@vsb.cz


Abstract—Toxicity of municipal waste water observed by means of Vibrio fischeri proved a primary dependence between the content of organics and inhibition. A linear dependence between COD and inhibition determined by means of Vibrio fischeri was established. In 30 days following sampling organic matter was degraded and the value of inhibition fell from 80 % to 27.67 %. This value corresponds to the real impact of micropollutants (risk elements, organic micropollutants) on inhibition. The content of risk elements in municipal waste water implies that the values are significantly higher than the EC50 defined for Daphnia magna. Risk element ecotoxicity is affected by the form of occurrence and it is thus probable that there are risk elements predominantly in the form of complexes in sewage water.

Keywords—Vibrio fischeri; ecotoxicity; COD (chemical oxygen demand); BOD (biochemical oxygen demand); risk elements
I. INTRODUCTION
The requirements for the quality of discharged water from
municipal waste water treatment plants (WWTPs) have been
considerably tightened within the European Water Framework
Directive (Directive 2000/60/EC), which, apart from chemical
parameters, defines an environmental quality aspect [1].
Ecotoxicity monitoring may significantly contribute to the
evaluation of discharged water quality [2,3,4]. Waste water
ecotoxicity may be greatly influenced by risk elements, organic
micropollutants and other factors (turbidity, increased content
of nutrients, etc.). Knowledge of the species of metals is
important in removing elements in the framework of WWTPs
technology but also from potential ecotoxicity of sludges and
water discharge from WWTPs [5]. Some elements (B, Mn, Co,
Ni and Mo) are difficult to treat in WWTPs; 60-80 % are
discharged with treated water [6]. Pb, V, Cu, Ag, Cd, Sb and
Ba are concentrated mainly in sewage sludge and (As and Zn)
can occur in treated water or sewage sludge. The removal of
Cr, Cu, Pb and Fe is likely to be strongly linked to the removal
of suspended solids. Cd and Ni compounds are most often
dissolved (86 % and 78 %, respectively) [7]. Detailed
information regarding the speciation of heavy metals in urban
sewers is lacking [8].
The objective of the paper is to identify variability in
municipal waste water ecotoxicity in the individual districts of
the City of Ostrava and to assess the impact of meteorological
conditions.
II. METHODS
The waste water toxicity was determined in accordance
with the EN ISO 11 348 Standard, which defines the inhibition
effects of water samples on light emission by Vibrio fischeri
(luminescent bacteria test). The use of the method is limited by
interfering phenomena: loss of luminescence caused by absorption or diffusion of light in heavily coloured or turbid samples, or the presence of organic, readily biodegradable nutrients (urea, peptone, yeast extract > 100 mg/L), which may cause a reduction in bioluminescence independently of the pollutants.
The observation of inhibition of Vibrio fischeri was
conducted with 60 samples of municipal waste water drawn
from 13 sampling points of the sewerage system of the City of
Ostrava (OVAK, a.s.). Once a month samples were drawn in
the sampling points stated in Fig. 1. Significant hydrochemical
markers (BOD, COD, TOC, major anions and cations, risk
elements and toxicity) were determined in the waste water
samples. The determination of toxicity was repeated 30 days after sampling, the samples having been kept in a refrigerator at a temperature of 4 °C (60 samples).
III. ORGANIC POLLUTION IMPACT ON THE INHIBITION OF
VIBRIO FISCHERI
Studying the relations between inhibition and chemical
parameters observed in municipal water, we identified a
statistically important dependence between inhibition and
BOD, COD, TOC, phosphates, total nitrogen and turbidity
(Fig. 2 and 3). The dependence between inhibition of Vibrio
fischeri and BOD, COD and the content of undissolved
substances is also reported by [9] and [10].
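A simple way to quantify the linear dependence reported above is an ordinary least-squares fit of inhibition against COD. The Python sketch below uses made-up (COD, inhibition) pairs whose magnitudes only loosely echo the median values quoted in this paper; the slope, intercept and correlation coefficient of the real 60-sample data set would of course differ.

```python
import numpy as np

# Hypothetical (COD, inhibition) pairs standing in for the measured data behind Fig. 2;
# only the general magnitudes echo values quoted in the text (COD medians 170-357 mg/L).
cod = np.array([170.0, 186.0, 250.0, 357.0, 420.0, 560.0])      # COD, mg/L
inhibition = np.array([35.0, 42.0, 55.0, 78.0, 84.0, 92.0])     # inhibition of V. fischeri, %

slope, intercept = np.polyfit(cod, inhibition, 1)   # least-squares straight line
r = np.corrcoef(cod, inhibition)[0, 1]              # Pearson correlation coefficient

print(f"inhibition = {slope:.3f} * COD + {intercept:.1f}   (r = {r:.2f})")
```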
Fig. 4 shows the median values for the inhibition of Vibrio
fischeri. The position of the sampling points in the Fig. 4
corresponds to their location in the sewerage system with
regard to their potential mutual influence. The lowest
inhibition was identified in the Polanka, Svinovsky and
Vratimovsky collectors; the inhibition determined in the three
collectors may be considered as background values for
immediate sampling.
Fig. 1. Catchments of individual collectors and locations of sampling points
The Polanka collector shows the lowest median value for
COD (170 mg/L) and in Svinovsky collector it is 185.7 mg/L,
while the complete set median is 357 mg/L. Fig. 1 reveals a
mutual influence of some sampling points: Vratimovsky
collector (beginning of collector) and Novinarska collector
(end of collector), Bohuminska collector and Sokolska
collector.
It is apparent from the dependence of the organic pollution
impact on inhibition (Fig. 2 and 3) that the inhibition will
decrease in waste water along with the removal of organics, all
the way to the value expressing the inhibition caused by the
presence of pollutants (risk elements, organic micropollutants).
Fig. 4 documents a drop in inhibition values. The inhibition median for the complete set of waste water on the day of sampling is 79.58 %; the inhibition median after 30 days is 27.67 %.

Fig. 2. Dependence between COD and inhibition
The decrease in inhibition after 30 days was identified in all
the samples, except for the Martinovsky collector (Fig. 5 and
6). Apart from sewage, waste water from the food-processing
industry and metallization is discharged into the Martinovsky
collector.
In this case the risk elements, which had formed complexes with organics at the instant of sampling, probably transformed into ionic form after the degradation of the organics, which causes toxicity. The Martinovsky collector shows the highest content of Zn, with a median value of 450 mg/L; in the other collectors the Zn contents ranged from 120 to 210 mg/L.









Fig. 3. Dependence between phosphates and inhibition









Fig. 4. Median of inhibition values (120 samples) in the sampling day; median of inhibition values after 30 days (60 samples)







Fig. 5. Inhibition determined in the sampling day and after 30 days -
Martinovsky collector








Fig. 6. Inhibition determined in the sampling day and after 30 days -
Bohuminska collector
Fig. 7 reveals that in all collectors the lowest inhibition was measured in May and November. Considering that the inhibition value is influenced by the presence of organics (BOD, COD), it may be presumed that the reduction in inhibition is caused by dilution of the organic pollution in the collectors due to higher rainfall. Fig. 8 states the volume of water flowing into the WWTP on the days of sampling. The quantity of inflowing water in May and November was the highest during the observed period.













Fig. 7. Inhibition values in dependence on the sampling times
It is apparent from the results that organic pollution
participates in the high value of inhibition of Vibrio fischeri.
After degradation of organics during 30 days the inhibition
drops to a value which corresponds to the impact of
micropollutants on ecotoxicity.















Fig. 8. Inhibition values - median compared with the quantity of inflowing
water into a WWTP














Table I states the contents of the observed risk elements for the Polanka collector, which showed the lowest inhibition in the samples analyzed on the day of sampling, and the median of the complete set. Compared with the ecotoxicity limits according to the TRV (Toxicity Reference Value), which were observed for Daphnia magna, the values of the element contents in the municipal water are significantly higher. Also the values of LOEC (Lowest Observable Effect Concentration) for Cu and Cr reported by [11] are lower by as much as three orders of magnitude. The toxic contribution of the individual elements was defined by [12]. EC20 values for Vibrio fischeri, which represent measurable thresholds of toxicity, were found to form the following series: Pb(II) > Ag(I) > Hg(II) ≈ Cu(II) > Zn(II) > As(V) > Cd(II) ≈ Co(II) > As(III) > Cr(VI). The presence of humic
acids in the municipal waste water may significantly influence
the toxicity of risk elements through the formation of
complexes, where the toxicity is predominantly caused by the
abundance of free ions. The toxicity of copper decreased with
the addition of humic acid, while the toxicity of zinc remained
almost constant. On the other hand, the toxicity of lead
increased, depending on the concentration of humic acid. The
interactive effects between copper and zinc and between lead
and zinc were synergistic, while the interactive effect between
copper and lead on the bioluminescence of Vibrio fischeri was
additive [13].
IV. CONCLUSIONS
The study of inhibition of Vibrio fischeri in the municipal
waste water implies that the inhibition is primarily affected by
the presence of organics and nutrients in water. After
degradation of organics (30 days), the value of measured
inhibition (27.67 %) corresponds to the real impact of the
pollutants. During the 30 days the organics degraded and, consequently, risk elements bound in organic complexes must have been released. Organic micropollutants (PAH, hydrocarbons C10-C40 and tensides, or substances contained in cosmetics) must have been released in a similar manner [14]. The median value for COD of the complete set was 362 mg/L, the median value of COD after 30 days was 174 mg/L, and the degradation efficiency of organics was 52 %. The impact of the degradation of organics on the increase in ecotoxicity was evident in the Martinovsky collector, where the inhibition of Vibrio fischeri after 30 days increased.
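For clarity, the degradation efficiency quoted above follows directly from the two COD medians:

(COD_0 - COD_30) / COD_0 × 100 % = (362 - 174) / 362 × 100 % ≈ 52 %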










ACKNOWLEDGMENT
This paper was supported by the research projects of the
Ministry of Education, Youth and Sport of the Czech Republic:
OP VaVpI ENET CZ.1.05/2.1.00/03.0069.
REFERENCES
[1] M. Farré, L. Kantiani, S. Pérez, and D. Barceló, Sensors and biosensors in support of EU Directives, TrAC Trends Anal. Chem., vol. 28, pp. 170-185, February 2009.
[2] E. Mendonça, A. Picado, S.M. Paixão, L. Silva, M.A. Cunha, S. Leitão, I. Moura, C. Cortez, and F. Brito, Ecotoxicity tests in the environmental analysis of wastewater treatment plants: Case study in Portugal, J. Hazard. Mater., vol. 163, pp. 665-670, April 2009.
[3] A.M. Christensen, F. Nakajima, and A. Baun, Toxicity of water and
sediment in a small urban river (Store Vejle, Denmark), Environ.
Pollut., vol. 144, pp. 621-625, November 2006.
[4] CH.J. Kelly, N. Tumsaroj, and C.A. Lajoie, Assessing wastewater
metal toxicity with bacterial bioluminescence in a bench-scale
wastewater treatment system, Water Res., vol. 38, pp. 423-431, January
2004.
[5] H. Raclavská, . Dokov, and H. Škrobánková, Ecotoxicity of sewage sludge from waste water treatment plant, Inżynieria Mineralna, vol. 27, pp. 39-50, January 2011.
[6] M. Ozaki, M. Suwa, and Y. Suzuki, Study on risk management of
heavy metals for reuse of biosolids, Water Sci. Technol., vol. 53, pp.
189-195, May 2006.
[7] R. Buzier, M.-H. Tusseau-Vuillemin, C.M. dit Meriadec, O. Rousselot,
and J.-M. Mouchel, Trace metal speciation and fluxes within a major
French wastewater treatment plant: Impact of the successive treatments
stages, Chemosphere, vol. 65, pp. 2419-2426, December 2006.
[8] J. Houhou, B.S. Lartiges, E. Montarges-Pelletier, J. Sieliechi, J.
Ghanbaja, and A. Kohler, Sources, nature, and fate of heavy metal-
bearing particles in the sewer system, Sci. Total Environ., vol. 407, pp.
6052-6062, November 2009.
[9] A. Katsoyiannis and C. Samara, Ecotoxicological evaluation of the
wastewater treatment process of the sewage treatment plant of
Thessaloniki, Greece, J. Hazard. Mater., vol. 141, pp. 614-621, March
2007.
[10] M. Nohava, R.W. Vogel, and H. Gaugitsch, Evaluation of the bacteria
bioassay for the estimation of the toxicological potential of effluent
water samples - Comparison with data from chemical analyses,
Environ. Int., vol. 21, pp. 33-37, 1995.
[11] Ch.Y. Hsieh, M.H. Tsai, D.K. Ryan, and O.C. Pancorbo, Toxicity of
the 13 priority pollutant metals to Vibrio fisheri in the Microtox
chronic toxicity test,Sci. Total Environ., vol. 320, pp. 37-50, March
2004.
[12] E. Fulladosa, J.C. Murat, M. Martínez, and I. Villaescusa, Patterns of
metals and arsenic poisoning in Vibrio fischeri bacteria, Chemosphere,
vol. 60, pp. 43-48, June 2005.
TABLE I. MEDIAN OF THE OBSERVED ELEMENT CONTENTS (mg/L) IN THE COLLECTORS COMPARED WITH THE POLANKA COLLECTOR

Element         As     Cd     Cu     Cr     Fe    Hg     Mn    Pb    V      Zn
Polanka         3.10   0.14   27.2   11.9   320   0.30   190   47    9.45   98
Complete set    3.33   0.24   34.16  11.6   452   0.33   189   66    5.16   210
TRV (a):        0.19, 0.06, 0.12, 1.0, 0.00001, 0.0013, 0.020, 0.120
LOEC (b) (µg/L):  6.78-13.6 (Cu), 62.6-1251 (Cr)
EC50 (c) (mg/L):  0.11, 0.117

(a) Toxicity reference value [15].   (b) [11].   (c) [12].
[13] V. Tsiridis, M. Petala, P. Samaras, S. Hadjispyrou, G. Sakellaropoulos,
and A. Kungolos, Interactive toxic effects of heavy metals and humic
acids on Vibrio fischeri, Ecotox. Environ. Safe., vol. 63, pp. 158-167,
January 2006.
[14] S. Girotti, E.N. Ferri, M.G. Fumo, and E. Maiolini, Monitoring of
environmental pollutants by bioluminescent bacteria, Anal. Chim. Acta,
vol. 608, pp. 2-29, February 2008.
[15] Environmental Restoration Division, Aquatic toxicity reference values
(TRVs), U.S.EPA Region 6, Office of solid Wastes e1-e100, ERD-AG-
003, August 1999.






Inhibition of activated sludge respiration by heavy
metals

Silvie Hartmann, Hana Skrobankova, Jarmila Drozdova
Energy Units for Utilization of non Traditional Energy Sources (ENET)
VŠB - Technical University of Ostrava
Ostrava, Czech Republic
silvie.hartmann@vsb.cz


Abstract—Inhibition of the microbial respiration activity in activated sludge caused by heavy metal concentrations (Cr, Cd, Cu and Ni) was studied by means of a respirometric method using the Strathtox respirometer (Strathkelvin, Glasgow). The studied sludge samples were obtained from two waste water treatment plants with different types of pollution (municipal waste water and domestic waste water).

Keywords—heavy metals; toxicity; respirometry; sludge; waste water treatment plant
I. INTRODUCTION
At the beginning of the twentieth century, the method of
biological treatment was established, and now it is the basis of
waste water treatment worldwide. It uses naturally occurring
bacteria at very much higher concentrations in tanks. These bacteria, together with other microbes and protozoa contained in the sludge, are collectively referred to as activated sludge. The bacteria remove small molecules of organic carbon and, consequently, the bacteria grow and the waste water is purified. The effluent (treated waste water) can then be
released to water streams or the sea. The control of the
treatment process is very complicated due to the large number
of parameters that can affect it. Waste water treatment plants
have to deal with recalcitrant chemicals that can only be
degraded by the bacteria very slowly, and with toxic chemicals
that inhibit the performance of the activated sludge bacteria.
Excessive concentrations of toxic chemicals can generate a
toxic shock that kills the bacteria [1; 2]. The purpose of this
work was to determine the toxicity of heavy metals (Cd, Cr,
Cu, Ni) on activated sludge microorganisms with the help of
the respiration inhibition tests using the Strathtox unit.
II. RESPIRATION INHIBITION
A. Heavy metals in waste waters
The potential influence of heavy metals on the water
environment has been a major interest over the last decades.
Heavy metals such as copper, zinc, nickel and cadmium are
commonly present in untreated waste waters coming from
households and from the industry, particularly mining,
smelting, metallurgy, electroplating, coking and chemical
production, and metal-finishing industry. The toxicity of the
metal is directly related to its solubility in the presence of the
sludge [3]. The concentrations of heavy metals in waste waters
vary significantly depending on the industrial activities and on
the chemical form of occurrence. Levels of heavy metal
concentrations in municipal wastewaters are expected to be
considerably lower because industrial effluents will be diluted
by domestic waste waters [4].
Several studies mention that heavy metals can change the
microbial structure of activated sludge by modifying both cell
density and species richness, even at moderate concentrations,
thus having a noxious effect on the growth and survival of
microorganisms [5; 6; 7; 8]. Heavy metals tend to affect the
metabolic functions of microorganisms in activated sludge and
lower the effectiveness of the biological processes in waste
water treatment plants [9].
Trace amounts (µg/l) of some metal ions such as Cu, Zn, Pb, Ni and Co are required by some organisms as cofactors for enzymatic activities. However, for most organisms, heavy metal ion concentrations at the level of mg/l are known to be toxic because of the irreversible inhibition of some enzymes by the
heavy metal ions. Toxicity of heavy metal ions on activated
sludge bacteria varies depending on the type and
concentrations of heavy metal ions and the microorganisms as
well as the environmental conditions such as pH, temperature,
dissolved oxygen (DO), presence of other metal ions, ionic
strength and also the operating parameters such as, sludge age
(Sludge Residence Time - SRT) and hydraulic residence time
[10].
Former studies have shown that adapted sludge maintains a
high removal efficiency of dissolved organic matter although
exposed to constant input of heavy metals [11]. This suggests
that adaptation can reduce the negative effects of toxic
substances on biological reactions, and then some microbial
groups can become predominant. Another study found that
shock loads of toxicants produce remarkable effects on
activated sludge whether it is adapted or not [12]. Obviously,
as the characteristics of microorganisms in complex activated
sludge system for nutrient removal have not been known yet,
the study of the effects of toxic metals on activated sludge
becomes important, especially the issue which kinds of adapted
microorganisms can resist the toxicity of heavy metals and
consequently what levels they can tolerate [13].
B. Toxicity
Even though there are several thousands of chemical
reactions engaged in the metabolism of bacteria, we can
identify three major processes that are applicable to the
biological treatment of sewage. These are Ingestion;
Respiration; Growth and division. These three processes are
highly integrated.
Ingested organic carbon is processed in two ways. Some of
it goes along the pathway of catabolism or respiration and ends
up as carbon dioxide. This carbon is lost to the system. The
remaining organic carbon follows the anabolism or growth
pathway and ends up in newly formed biomass. This carbon is
therefore kept in the system. The purpose of respiration is to
provide the energy that is necessary for the growth and for the
maintenance of the bacteria [14].
Toxic chemicals in the waste water can enter the bacteria
and inhibit one or more enzymes of the pathways involved in
either anabolism or catabolism. If the catabolic reactions of
respiration are affected, the rate of respiration and energy
production is reduced and the rate of growth is therefore
reduced. On the other hand, if the anabolic pathways of
biosynthesis are inhibited, the rate of growth is reduced, and
this is accompanied by a decrease in the rate of respiration, as
the requirement for energy is reduced. It shows that wherever
the toxicity takes effect, there is inhibition of both respiration
rate and rate of biodegradation [15].
III. MATERIALS AND METHODS
A. Respirometry method
Respiration is an essential activity of aerobic bacteria. For
this reason respiration inhibition is a significant decisive factor
for assessing the ecotoxicological risk of chemical substances
in wastewater. Respirometry is used to assess waste water
toxicity to heterotrophic and nitrifying bacteria in activated
sludge. In contrast to bioluminescence, activated sludge
respirometry is a more direct method for measuring sludge
activity and thus toxicity to sludge [16]. The basis for
respirometric tests is that the respiration rate of activated
sludge or sludge microorganisms can be reduced in the
presence of toxic substances. The most common way of
measuring the bacterial respiration rate is the oxygen uptake
rate [17].
B. Experimental apparatus
The respirometer Strathtox (Strathkelvin Instruments Ltd., Glasgow) is a laboratory instrument which is used to conduct a
range of temperature controlled tests on activated sludge
samples, so it can simulate the conditions of the treatment
process. This equipment is based on the respirometry
applications in the biomedical field and uses 6 oxygen
electrodes simultaneously. Sludge sample volumes have been
reduced to 20 ml, the rates measured on these 20 ml samples
are more or less equal to those measured in a 1 liter sample.
The use of 6 oxygen electrodes allows the respiration rate of a
control sample of sludge to be measured at the same time as
that of samples of the same sludge mixed with 5 different
concentrations of waste water. Respiration inhibition tests
involve the measurement of the concentration of waste water
causing a 50% (or other selected percentage) inhibition of the
respiration rate [18].
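As a rough illustration of how such an effect concentration can be estimated from a dilution series, the Python sketch below interpolates log-linearly between the two test concentrations that bracket the 50 % level, using the cadmium inhibition values for WWTP Hermanice II reported later in Table II only as an example. The Strathtox software's own fitting procedure may differ, so the number obtained here is merely indicative, not a result of this study.

```python
import numpy as np

# Test concentrations (mg/l) and measured respiration inhibition (%) --
# the cadmium / WWTP Hermanice II values from Table II, used here only as an example.
conc = np.array([10.0, 50.0, 500.0, 4000.0])
inhibition = np.array([6.0, 19.0, 54.0, 75.0])

def ec_estimate(conc, inhibition, effect=50.0):
    """Log-linear interpolation between the two points bracketing `effect` % inhibition."""
    i = int(np.argmax(inhibition >= effect))          # first point at or above the effect level
    frac = (effect - inhibition[i - 1]) / (inhibition[i] - inhibition[i - 1])
    return 10 ** (np.log10(conc[i - 1]) + frac * (np.log10(conc[i]) - np.log10(conc[i - 1])))

print(f"EC50 estimate ~ {ec_estimate(conc, inhibition):.0f} mg/l")
```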
C. Sludge source
The Waste Water Treatment Plant (WWTP) Ostrava
Privoz was designed for a population equivalent of 638 000.
This facility treats an average flow of 110 994 m3/day with
211 mg/l BOD, 426 mg/l COD and 325 mg/l of Total
Suspended Solids (average annual concentrations in waste
water input to WWTP). The input contains maximum yearly concentrations of 4.72 µg/l Cd, 375 µg/l Cr, 65 µg/l Cu and 115 µg/l Ni. The WWTP Hermanice II was designed for a
population equivalent of 3600 with no industrial inflow.
The activated sludge was collected from the secondary
treatment tanks of a WWTP Ostrava Privoz and WWTP
Hermanice II. It was kept aerated during transit to the
laboratory, using a portable 12V aeration device. In the
laboratory, 600 ml of the activated sludge was placed into the
stock flask. In the stock flask the sludge was maintained at a
constant temperature selected for the test and kept fully
aerated. The synthetic sewage feed for the unit calibration was
prepared according to ISO 8192.
D. Heavy metals
The studied heavy metals are listed in Table I. All solutions of different concentrations were made up in distilled water. Cr, Cu, Cd and Ni were chosen as representative metals commonly found in the municipal waste water of Ostrava. Stock solutions of the metals were prepared prior to testing and each was diluted to 20 concentrations relevant for pollutant levels causing respiratory inhibition of activated sludge.
E. Respiration inhibition test
The respiration inhibition is one of the tests preferred in the
case of heterogeneous cultures of microorganisms in an
aqueous medium. The standardized method for testing
inhibitory effects of substances on the respiratory activity of
microorganisms in activated sludge is described in the
documents ISO 8192:1986 (revised by ISO 8192:2007), OECD 209 and Regulation EC 440/2008. The respiration
inhibition test measures the respiration inhibition caused by 5
different concentrations of waste water compared to the
respiration of a control sample of activated sludge.
Tests were carried out in six 20 ml glass tubes. Synthetic
sewage (2 ml) and test mixture (depending on concentration
diluted with distilled water) were added to the tubes. The tubes
were kept stirred with a magnetic stir-bar in the waterbath of the Strathtox unit. After reaching the constant temperature of 20 °C, 8 ml of activated sludge were quickly added to the tubes
and oxygen electrodes were inserted into the tubes for
recording the respiration rate values. As soon as the oxygen
TABLE I. METAL TOXICANTS USED FOR RESPIRATORY INHIBITION TESTING

Metal      Chemical form   Producer   Purity (minimum assay)
Cadmium    CdCl2·2.5H2O    Lachema    Analar grade, 99.0%
Chromium   CrO3            Lachema    Analar grade, 98.0%
Copper     CuCl2·2H2O      Lachema    Analar grade, 99.0%
Nickel     NiCl2·6H2O      Lachema    Analar grade, 98.0%
content of the tube with the fastest respiration rate had fallen to near zero, the test was stopped. Percentage inhibition was calculated as Equation (1):

Inhibition (%) = (1 - Rs/Rc) × 100    (1)

where Rs = sample oxygen uptake rate and Rc = control oxygen uptake rate.
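A minimal Python sketch of Equation (1), applied to one control reading and the five test tubes of a Strathtox run; the oxygen uptake rates below are invented for illustration only.

```python
import numpy as np

# Hypothetical oxygen uptake rates (mg O2 per litre per hour):
# one control tube and five tubes with increasing toxicant concentration.
rc = 28.0                                        # control oxygen uptake rate, Rc
rs = np.array([26.5, 24.1, 19.8, 12.3, 4.7])     # sample oxygen uptake rates, Rs

inhibition_percent = (1.0 - rs / rc) * 100.0     # Equation (1)
print(np.round(inhibition_percent, 1))           # e.g. [ 5.4 13.9 29.3 56.1 83.2]
```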
IV. RESULTS AND DISCUSSION
The four metals were tested in different concentration
intervals, taking into account their solubility limit and their
average concentration values detected in industrial waste water.
Figs. 1-4 show the characteristic inhibition profiles observed in the respirometric tests for the four tested metals. An evident inhibitory effect of the metals on the biomass activity can be observed. As shown in Figs. 1-4, there are differences in the
behavior of activated sludge from two different types of
WWTPs. As described above, WWTP Ostrava Privoz is a large
plant treating also industrial waste water specific for the region
of Ostrava with heavy industry (steel industry, coking plants
etc.) unlike the Hermanice II WWTP, a small plant treating no
industrial inflow.















Fig. 1. Characteristic inhibition profile of cadmium.















Fig. 2. Characteristic inhibition profile of copper.















Fig. 3. Characteristic inhibition profile of nickel.















Fig. 4. Characteristic inhibition profile of chromium.
Table II shows the comparison of respiration inhibition (in %) of the four selected metals for various concentrations, from a very high 4000 mg/l to a low 10 mg/l. Large differences between the two types of WWTPs can be observed, which shows the variance in the microbiology of the activated sludge and in the behaviour of the microorganisms.
Generally, heavy metals tend to have a bacteriostatic effect; increasing metal concentrations lead to mortality.
The results suggest that some types of microorganisms in
different activated sludge have developed mechanisms to deal
with elevated concentrations of heavy metals in their
environment [19].
The relative degree of maximum respiration inhibition has been found to be:

Very high concentrations:
  Ostrava:       Cr6+ > Cu2+ > Cd2+ > Ni2+
  Hermanice II:  Cr6+ > Cd2+ > Cu2+ > Ni2+

Low concentrations:
  Ostrava:       Cr6+ > Ni2+ > Cd2+ > Cu2+
  Hermanice II:  Cr6+ > Cd2+ > Ni2+ > Cu2+






TABLE II. COMPARISON OF THE RESPIRATION INHIBITION EFFECTS OF DIFFERENT METALS ON ACTIVATED SLUDGE BIOMASS

Toxicant   Dosage of metals causing inhibition [%]
           4000 mg/l              500 mg/l               50 mg/l                10 mg/l
           Ostrava  Hermanice II  Ostrava  Hermanice II  Ostrava  Hermanice II  Ostrava  Hermanice II
Cu         89       73            48       55            1        7             0        2
Cd         76       75            45       54            5        19            2        6
Ni         47       60            31       39            13       17            5        3
Cr         93       97            91       91            79       55            52       17

TABLE III. AVERAGE AMOUNT OF METALS PER YEAR IN WASTE WATER FLOWING INTO WWTP OSTRAVA AND IN WASTE WATER FROM EXTERNAL IMPORTERS

         Input channel A of WWTP Ostrava [µg/l]   Input channel D of WWTP Ostrava [µg/l]   Importers of WW to WWTP Ostrava [mg/l]
Metals   Avg.    Min.    Max.                     Avg.    Min.    Max.                     Avg.      Min.        Max.
Cr       19.1    10.7    29.3                     46.2    5.9     375.6                    7.2       0.0015      320
Cd       0.8     0.2     1.9                      1.0     0.2     4.7                      0.0226    0.0000145   0.374
Cu       42.7    13.0    65.0                     40.4    16.0    58.0                     10.9      0.004       483
Ni       28.4    5.0     86.0                     34.0    5.0     115.0                    5.44      0.0025      109

The toxicity at very high concentrations shows differences between Cd and Cu for the different types of activated sludge, and between Ni and Cd at low concentrations. It can also be concluded that Ni is much more toxic at low concentrations [20]. In all cases Cr6+ has been found to be the most toxic, causing the highest respiration inhibition. In comparison with the relative toxicity of metals common to the UK, EU and the USA, with the order Hg > Cd > Ni/Pb > Cu > Zn, and with the order for anaerobic inhibition in a municipal sludge, Ni > Cu > Cd > Cr > Pb [3], it can be assumed that the aerobic or anaerobic nature of the process plays a significant role in respiration inhibition. In most municipal waste water
treatment plants, the occurrence of heavy metals can
significantly affect the efficiency of the plant, reducing the
chemical oxygen demand adsorption capacity and the
settling characteristics of the sludge. In the literature [21] it was reported that the order of inhibitory effect in a nitrification system is as follows: Ag > Hg > Cd > Cr3+ = Cr6+. The nitrifying micro-organisms are more susceptible to heavy metal inhibition than the micro-organisms responsible for the oxidation of carbonaceous material [11]. Our study shows that Cr6+ is the most toxic heavy metal for aerobic activated sludge, with a 52 % respiration inhibition already at a dose of 10 mg/l. Vaiopoulou and Gikas [22] summarize that clear
conclusions about the critical chromium concentrations that
affect activated sludge growth cannot be derived. Literature
data on Cr6+ effects on activated sludge are controversial.
Some of them mention that activated sludge growth is
stimulated at Cr6+ concentrations up to 5 mg/l, above which
it is inhibited, while others report growth stimulation at
concentrations up to 25 mg/l. However, all reports agree that
Cr6+ is definitely an activated sludge growth inhibitor at
higher concentrations. A number of factors have been
identified to influence chromium toxicity on activated
sludge, such as pH, biomass concentration, presence of
organic substances or other heavy metals, adaptation process,
exposure time, etc. [23; 24].
The obtained respiration inhibition data were compared with the average yearly amounts of metals flowing into WWTP Ostrava and also with the waste water from external importers treated at WWTP Ostrava; these data are reported in Table III. The average concentrations of metals in the Ostrava WWTP input are in µg/l units. Compared with the respiration inhibition caused by metals at mg/l levels, it can be concluded that the regular inflow cannot influence the activated sludge respiration. On the other hand, exceptionally high concentrations in waste water from external importers treated at WWTP Ostrava can influence the activity of the biomass.
V. CONCLUSIONS
Respiration inhibition tests of four metals Cr, Cd, Cu
and Ni, representative heavy metals occurring in many
industrial waste waters, have been carried out by a
respirometric method based on ISO 8192:2007, OECD 209 and Regulation EC 440/2008, using the Strathtox device.
All investigated heavy metals inhibit activated sludge
growth at relatively low concentrations. However, the critical
concentrations are only achievable by shock loads, above
which they significantly affect activated sludge respiration.
As with many heavy metals, adaptation can significantly
increase microbial tolerance to heavy metals.
The need for monitoring of influent waste water toxicity at municipal WWTPs has been demonstrated. It can be concluded that, for the needs of process control, early detection of metals allows the quick application of appropriate control strategies to reduce the biomass-toxicant contact time and thus the respiration inhibition.
ACKNOWLEDGMENT
This paper was supported by the research projects of the
Ministry of Education, Youth and Sport of the Czech
Republic: OP VaVpI ENET CZ.1.05/2.1.00/03.0069.
REFERENCES
[1] P.S. Davies, The biological basis of wastewater treatment. Glasgow:
Strathkelvin Instruments Ltd, 2005, pp. 3-11.
[2] H. Raclavská, . Dokov, and H. Škrobánková, Ecotoxicity of sewage sludge from waste water treatment plant, Inżynieria Mineralna, vol. 27, pp. 39-50, January 2011.
[3] J. Binkley and J.A. Simpson, 35 - Heavy metals in wastewater
treatment processes, in Handbook of Water and Wastewater
Microbiology, D. Mara and N. Horan, London: Academic Press,
2003, pp. 597-610.
[4] V. Ochoa-Herrera, G. León, Q. Banihani, J.A. Field, and R. Sierra-
Alvarez, Toxicity of copper(II) ions to microorganisms in biological
wastewater treatment systems, Sci. Total Environ., vol. 412-413, pp.
380-385, December 2011.
[5] P. Gikas, Kinetic responses of activated sludge to individual and
joint nickel (Ni(II)) and cobalt (Co(II)): An isobolographic
approach, J. Hazard. Mater., vol. 143, pp. 246-256, May 2007.
[6] P. Gikas, Single and combined effects of nickel (Ni(II)) and cobalt
(Co(II)) ions on activated sludge and on other aerobic
microorganisms: A review, J. Hazard. Mater., vol. 159, pp. 187-203,
November 2008.
[7] Y.-P. Tsai, S.-J. You, T.-Y. Pai, and K.-W. Chen, Effect of cadmium
on composition and diversity of bacterial communities in activated
sludges, Int. Biodeterior. Biodegrad., vol. 55, pp. 285-291, June
2005.
[8] I. Kamika and M.N.B. Momba, Comparing the tolerance limits of
selected bacterial and protozoan species to nickel in wastewater
systems, Sci. Total Environ., vol. 410411, pp. 172-181, December
2011.
[9] P. Madoni, D. Davoli, and L. Guglielmi, Response of sOUR and
AUR to heavy metal contamination in activated sludge, Water Res.,
vol. 33, pp. 2459-2464, July 1999.
[10] Y. Pamukoglu and F. Kargi, Biosorption of copper(II) ions onto
powdered waste sludge in a completely mixed fed-batch reactor:
Estimation of design parameters, Bioresour. Technol., vol. 98, pp.
1155-1162, April 2007.
[11] S.R. Juliastuti, J. Baeyens, C. Creemers, D. Bixio, and E. Lodewyckx,
The inhibitory effects of heavy metals and organic compounds on
the net maximum specific growth rate of the autotrophic biomass in
activated sludge, J. Hazard. Mater., vol. 100, pp. 271-283, June
2003.
[12] P. Battistoni, G. Fava, and M.L. Ruello, Heavy metal shock load in
activated sludge uptake and toxic effects, Water Res., vol. 27, pp.
821-827, May 1993.
[13] Ch.B. Bott and N.G. Love, The immunochemical detection of stress
proteins in activated sludge exposed to toxic chemicals, Water Res.,
vol 35, pp. 91-100, January 2001.
[14] P.S. Davies and F. Murdoch. The role of respirometry in maximising
aerobic treatment plant efficiency. Glasgow: Strathkelvin Instruments
Ltd, 2002.
[15] B. Cai, L. Xie, D. Yang, and J.-P. Arcangeli, Toxicity evaluation and
prediction of toxic chemicals on activated sludge system, J. Hazard.
Mater., vol. 177, pp. 414-419, May 2010.
[16] S. Ren, Assessing wastewater toxicity to activated sludge: recent
research and developments, Environ. Int., vol. 30, pp. 1151-1164,
October 2004.
[17] C. Gendig, G. Domogala, F. Agnoli, U. Pagga, and U.J. Strotmann,
Evaluation and further development of the activated sludge
respiration inhibition test, Chemosphere, vol. 52, pp. 143-149, July
2003.
[18] V. Bodington, A. Langford, M. Dooley, and K. Diamond, Cardiff
WWTW Aeration Optimisation Through Scientific Control, 3rd
European Water and Wastewater Management Conference,
Strathkelvin Instruments Ltd, pp. 1-9, September 2009.
[19] T.A. Özbelge, H.Ö. Özbelge, and P. Altınten, Effect of
acclimatization of microorganisms to heavy metals on the
performance of activated sludge process, J. Hazard. Mater., vol. 142,
pp. 332-339, April 2007.
[20] S.-A. Ong, E. Toorisaka, M. Hirata, and T. Hano, Effects of
nickel(II) addition on the activity of activated sludge microorganisms
and activated sludge process, J. Hazard. Mater., vol. 113, pp. 111-
121, September 2004.
[21] F. Çeçen, N. Semerci, and A.G. Geyik, Inhibition of respiration and
distribution of Cd, Pb, Hg, Ag and Cr species in a nitrifying sludge,
J. Hazard. Mater., vol. 178, pp. 619-627, June 2010.
[22] E. Vaiopoulou and P. Gikas, Effects of chromium on activated
sludge and on the performance of wastewater treatment plants: A
review, Water Res., vol. 46, pp. 549-570, March 2012.
[23] L. Cheng, X. Li, R. Jiang, Ch. Wang, H.-B. Yin, Effects of Cr(VI)
on the performance and kinetics of the activated sludge process,
Bioresour. Technol., vol. 102, pp. 797-804, January 2011.
[24] D.J.B. Dalzell, S. Alte, E. Aspichueta, A. de la Sota, J. Etxebarria, M.
Gutierrez, C.C. Hoffmann, D. Sales, U. Obst, and N. Christofi, A
comparison of five rapid direct toxicity assessment methods to
determine toxicity of pollutants to activated sludge, Chemosphere,
vol. 47, pp. 535-545, May 2002.










Energy Simulation of Marine Currents through Wind Tunnel with use of the Haar Wavelet for Electromagnetic Brake Systems

Aldo A. Belardi and Antônio H. Piccinini

Abstract - There is a demand for clean and renewable energy, which can be met through the use of submerged turbines. Using this new source of energy we can increase the production of electrical energy in a sustainable way. This paper presents the simulation of marine currents using a wind tunnel, which allows the comparison of the speed variations of water with those of air. It also features a brake system that uses a magnetic sensor and processes the signal in real time using wavelets. As an example, we consider the feedback control system applied to a WEG motor of 100 hp with 2 poles and 3500 rpm rotation. Using software tools, the acquired data are post-processed.

Key words: Tidal Energy, Marine Currents, Wavelets
I. INTRODUCTION

Energy is a major constituent of modern society. It is necessary to create goods based on natural resources and to provide many of the services from which we benefit. Energy is a basic concept in all disciplines of science and engineering. Over the decades, many different forms of generation have been researched and used. Global demand for energy has tripled in the past 50 years and may triple again in the next 30 years. With this growth in consumption, along with the scarcity of current, non-renewable resources, society has been forced to seek new alternatives for the future of energy supply. Renewable, alternative energies are new prospects for energy resources. We are learning to use different sources of energy: photovoltaic, hydro, wind, thermal and marine. Marine current turbines are energy conversion mechanisms that exploit the movement of water caused by the tides. From the interaction with the fluid, the turbine blades are set into rotational motion, transmitting power to the shaft to which they are coupled. In turn, the shaft, which may or may not be connected to a gearbox, drives a generator and produces energy [1].
The blades of a marine turbine act as lifting surfaces capable of producing lift forces considerably larger than the drag forces. Among the various processes for renewable energy production, two are very similar with regard to the conversion technology used: wind energy and marine current energy. Both processes use turbines, turbomachines which convert the kinetic energy of the fluid in which they are immersed into mechanical energy and, being coupled to a generator, into electrical power. The technology applied to marine current systems follows the same basic principles as wind turbines, the main difference being the density of the fluid passing through the turbine, since water is about 800 times denser than air. This factor gives marine turbines several potential advantages over wind turbines and other renewable energy technologies, including production of greater power for rotors of similar size to a wind turbine, owing to the higher loads exerted by the water. The current velocity does not depend on climatic factors, since the tides are caused mainly by the rise and fall of water bodies resulting from the gravitational interaction between the earth, moon and sun, and it is predictable throughout the year, as is the amount of energy that can be extracted. Therefore, it is possible to build simpler systems without the complex braking mechanisms that prevent destruction of the unit in case of a sudden increase in fluid velocity [2].

II. FORMULATION

A. Tidal Power

Tidal energy is derived from the gravitational forces of
attraction that operate between a molecule on the earth and
moon, and between a molecule on the earth and sun. As the
earth rotates, the distance between the molecule and the moon
will vary. When the molecule is on the dayside of the earth
relative to the moon or sun, the distance between the molecule
and the attracting body is less than when the molecule is on
the horizon, and the molecule will have a tendency to move
away from the earth. Conversely, when the molecule is on the
night side of the earth, the distance is greater and the molecule
will again have a tendency to move away from the earth. The
separating force thereby experiences two maxima each day
due to the attracting body. It is also necessary to take into account the beating effect caused firstly by the difference in the
fundamental periods of the moon- and sun-related
gravitational effects, which creates the so called spring and
neap tides, and secondly the different types of oscillatory
response affecting different seas. If the sea surface were in
static equilibrium with no oscillatory effects, lunar forces,
which are stronger than solar forces, would produce a tidal range of only approximately 5.34 cm [3].



B. Stream Turbines

Stream turbines make use of the kinetic energy of the tidal
stream or ocean stream (river stream can also be exploited
with the same technology) by using both propeller type
horizontal axis turbines and vertical axis ones. Therefore they
are different from conventional hydraulic turbines used for
dams, which mainly use the potential energy due to high
hydraulic heads and pressure by employing high solidity, i.e.
their turbine blades cover most of the water flow passages.
Stream turbines cannot do so because they operate in free
stream conditions, thus they have to have relatively low
solidity and, at the same time, as large rotor areas as possible
to capture the energy in the flow with low pressure and low
velocity.

C. Marine Current Turbines

Marine Current Turbines is developing twin horizontal axis two-bladed turbines. The technology currently limits the feasible installation sites to those with a spring peak current velocity of > 2 m/s and a depth of 20 m to 40 m. A 300 kW prototype was successfully installed off the coast of Lynmouth and has been in operation since 16/6/03, dumping power into a load bank. Following successful installation and operation in Strangford, the next stage will be a semi-commercial venture with the installation of about 10 turbines at a site which has not yet been announced, according to figure 1.









Fig. 1. The current system design for a 1.2 MW
system to be located in Strangford Lough.

The prototype off the coast of Lynmouth was found to
produce better energy conversion efficiency than expected.
The model used to predict energy output was based on a wind
turbine model which has a maximum theoretical efficiency of
0.59 known as the Betz limit. The Betz limit is dependent on
the velocity difference between the front and rear of the
turbine. The energy contained in the flow (wind or tidal) is [4]:

E_area = (1/2) ρ V²,   E_k = (1/2) m V²    (1)

where E_area = energy per unit area, E_k = kinetic energy per unit volume, m = mass, V = velocity before the turbine and ρ = density.
By contrast with wind turbines of similar output the high
power densities achieved with streams of flowing water at the
velocities encountered mean that large horizontal thrust forces
are applied to marine turbines. The power available per square
meter of sea surface is

P = (1/2) ρ h k_s k_n k_ef V³    (2)

where h = water depth, k_s = daily availability factor (0.424), k_n = neap/spring availability factor (0.57), k_ef = efficiency and V = maximum current velocity.


D. Determination of Power Output

According to the literature, the general power available per square meter can be determined according to Eq. (2). Including k_s and k_n gives the average power output over both the spring-neap cycle and the daily cycle. The efficiency of the TED (tidal energy device) can also be included to appreciate the total amount of energy available at any given location. However, the TEDs cannot be installed infinitely close behind and in front of each other, because there is a need to enable the tidal current both to recover in velocity behind them and to return to a laminar state. The area seen by each turbine row is:

A_row = (π/4) (h - c) Δx    (3)

where h = water depth, c = blade clearance depth and Δx = width of the grid square, assumed equal to the length of the row.
of the grid square, assumed length of the row
The number of rows in each grid square depends on the
spacing between each row. However an estimate was made to
use 15 blade diameters between each row. The number of
turbine rows per square:




(4)

where "y = breadth of the grid squares, r = the turbine radius, s
= spacing in blade diameters.
It is therefore possible to determine the number of rows of turbines in each grid square, and hence the area seen by the turbines in each grid square:

A = (π/4) (h - c) Δx · Δy / (2 r s)    (5)




Power output for a given grid square can be determined as follows:

P = (1/2) ρ A k_s k_n k_ef V³    (6)

where A = area seen by the turbines within the square.
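The following Python sketch chains the reconstructed Eqs. (2)-(6) for a single grid square. All numerical inputs (depth, clearance, grid size, velocity, spacing) are illustrative assumptions rather than values from the resource assessment; only the availability factors 0.424 and 0.57 and the 45 % extraction efficiency quoted later in Section III are taken from the text.

```python
from math import pi

# Illustrative inputs for one grid square (assumed values, not data from the paper).
rho = 1030.0              # sea water density, kg/m^3
h, c = 30.0, 5.0          # water depth and blade clearance depth, m
dx, dy = 1000.0, 1000.0   # width and breadth of the grid square, m
s = 15.0                  # row spacing, in blade diameters
r = (h - c) / 2.0         # turbine radius limited by the usable depth, m
ks, kn, kef = 0.424, 0.57, 0.45   # daily factor, neap/spring factor, extraction efficiency
v = 2.5                   # maximum current velocity, m/s

area_per_row = (pi / 4.0) * (h - c) * dx               # Eq. (3)
n_rows = dy / (2.0 * r * s)                            # Eq. (4)
area_total = area_per_row * n_rows                     # Eq. (5)
power = 0.5 * rho * area_total * ks * kn * kef * v**3  # Eq. (6), watts

print(f"{n_rows:.1f} rows, swept area {area_total:,.0f} m2, power {power / 1e6:.1f} MW")
```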

However, the values of k_s and k_n were tested against values calculated from the verified model output and were found to vary according to location. Therefore, when determining the power output from each grid square for the resource assessment, these values were dropped and an approximation of the velocity was used to calculate the power at 15-minute time steps throughout a spring-neap cycle. The tidal velocity also varies according to the spring-neap cycle over a period of 14.75 days. The method of determining an approximate tidal velocity for each square at any given time is given by Eqs. (7) and (8):




V_sn = V_s - V_n sin(2π (24 + T) / 14.75)    (7)

V_t = V_sn |sin(2π (T + φ) / 12.4224)|    (8)

where V_sn = velocity according to the spring-neap cycle, V_t = velocity according to the spring-neap cycle and the semi-diurnal cycle, V_s = maximum velocity during a spring tide, V_n = maximum velocity during a neap tide, φ = the tidal phase difference and T = time (hours from the start time of the model data). The total tidal generation at any given time can be established, taking into account the different tidal phases between the squares, with the use of the array given above. Although the average tidal phase is given for the group when selecting which generators to remove, the individual tidal phase is used to calculate the total tidal generation at a given time [5].

III. APPLICATION

We transfer the methodology for obtaining energy from wind power to the energy drawn from sea currents, taking into account that the difference between the two means of obtaining energy equals the ratio between the relative densities of the media under consideration. To estimate the values that could be obtained in marine turbine systems, considering proportional dimensions and a turbine equivalent to a wind turbine, we base this work on estimating the potential energy.
Typical values for the components when the turbine is operating at its nominal conditions are as follows. The efficiency with which the turbine extracts the kinetic energy of the incoming stream is approximately 45%. For water flowing through a turbine, the maximum extraction occurs when the flow velocity at the rotor face is reduced to 1/3 of the free stream velocity, which gives a theoretical extraction efficiency of 16/27 (= 59%), the so-called "Lanchester-Betz limit". The efficiency with which the energy extracted from the stream is delivered to the generator is 96%. Losses at this stage include friction within the gearbox usually used to step up from the (slower) rotational speed of the turbine rotor to the (faster) rotational speed of the generator. The losses, approximately 5%, are due to friction and to mechanical energy dissipated as heat [6]. Taking these efficiency criteria, we have an approximate representation of the real energy of a turbine generator. To generate an air flow and provide measurements equivalent to those of a turbine, the wind tunnel prepared in the laboratories of our mechanical engineering centre was used. The equipment generates an air flow in a delimited, concentrated area, enabling calculations that approximate natural conditions.
The tunnel is based on a cylindrical tube driven by a 5 HP (3.728 kW) motor. The dimensions of the tunnel are approximately 60 cm in diameter with a length of 5 m; the fluid (air) is forced by the motor through the tube and passes the test box described later in this article. The air flow at the outlet has a maximum speed of approximately 32 m/s at 1280 rpm. Figure 2 shows the equipment used.













Fig.2. The wind tunnel


Using the method of energy transfer between the fluids (air and water) through their relative densities, the force exerted by the fluid under study depends only on its density, its speed and the area swept by the generator. We can therefore use this theoretical foundation to generate signals and to implement safety features, namely a magnetic brake system for the generation of energy from marine currents. For the proposed brake system we use wavelet analysis: the signals are analysed with this method and a form of magnetic braking can thus be implemented. The stator current evaluation is carried out using the signal acquired by a current sensor. It should be mentioned that, for this purpose, the related magnetic field can also be evaluated. The signal of the current sensor is converted to digital form at regular intervals of time. Using Matlab, Simulink and an analog-to-digital conversion board, the variation of the coefficients of the Haar wavelet transform of the signal is computed in real time, identifying the instant at which some type of disturbance occurs in the process, for example the engagement of an additional tool or a coupling on the shaft.
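The paper performs this step with the Simulink DWT block; the short Python sketch below shows the same idea outside Simulink: a single-level Haar transform of the sampled stator current, with large detail coefficients flagging the instants of disturbance. The threshold value and the example signal are arbitrary assumptions.

```python
import numpy as np

def haar_dwt_level1(signal):
    """Single-level Haar DWT: returns approximation (a1) and detail (d1) coefficients."""
    x = np.asarray(signal, dtype=float)
    if x.size % 2:                 # the Haar transform pairs samples, so drop an odd tail sample
        x = x[:-1]
    pairs = x.reshape(-1, 2)
    a1 = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)    # local averages (approximation)
    d1 = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)    # local differences (detail)
    return a1, d1

def disturbance_instants(current_samples, threshold=0.5):
    """Indices (in the original sample stream) where a detail coefficient exceeds the threshold."""
    _, d1 = haar_dwt_level1(current_samples)
    return np.flatnonzero(np.abs(d1) > threshold) * 2

# Example: a steady current with a step increase (e.g. a load applied) halfway through.
t = np.arange(1024)
current = np.sin(2 * np.pi * t / 64) + np.where(t > 512, 1.5, 0.0)
print(disturbance_instants(current))   # reports the sample pair around index 512
```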

In figure 3 we present, in the form of a block diagram, the acquisition method with which the developed system converts and processes the signal [7].



















Fig.3. System of data acquisition


Normally, electric motors are composed of two components, the stator and the rotor, separated by a small interval called the "gap". The phase voltages are applied to the stator, producing currents. These currents create a rotating magnetic field that sets the rotor in motion in a given direction [8][9]. The speed of rotation of the magnetic field is called the synchronous speed ns (rpm) and is normally given by the relation between the stator frequency f (Hz) and the number of poles per phase of the motor (ns = 120 f / p, where p is the number of poles). In milling machines, the breakage of cutting tools can cause deformations in the rest of the machine. Normally, this phenomenon is caused by an extreme increase of the force on the sides of the cutting pieces. The effect is an increase in the current and, as the torque increases, in the power dissipated in the motor. The current of the motor can be obtained by multiplying the output signal by the value of the shunt resistance (10 Ω / 25 W) for each one of the phases, in accordance with figure 4.













Fig.4. Measuring the current of the motor



The current in the electric motor can be calculated as follows:

I = (V_R / R) · (60 / 5) · (1 / N)    (9)

The output voltage signal is then sent to a board installed in the computer, which converts the analog signal to digital. The interface between the signal and the board is made through a high-performance analog-to-digital converter installed in the computer, which allows communication with Simulink in real time [10].
The figure 5 shows the Simulink equivalent block diagram
that was developed.















Fig.5. Example of Simulink block diagram

The block "Adapter" is used for the acquisition of the data, not
intervening with the signal of the model. It also serves to help
in the functioning of block "RT" that has as out, the signal
received from the data acquisition board. In this block, it is
possible also to configure the sampling time of the signal and
the number of used input. The block "DWT" carries through
transformed discrete of the wavelet input signal, that will be a
size vector in the input, so respecting dyadic intervals. It also
allows selecting the family of wavelets and the used level
number, and the analysis of vectors and the coefficients,
grouped in form of matrices. Finally the block "Scope", that
allows graphically showing the signals of the entrance and
outputting [11].

IV. RESULTS

With the data generated through the wind tunnel, we have the following propositions. The variation of the engine power is transferred proportionally to its rotation; from this structure we have the following data obtained in the wind tunnel. The marked difference in the power density of water compared to that of wind may be seen from Table I for various velocities, assuming a density for salt water of 1030 kg/m3 and an air density of 1.2473 kg/m3, corresponding to air at 10 °C.




TABLE I
Relative power densities of marine currents and air at different velocities.

Velocity (m/s)   Power Density Marine (kW/m2)   Power Density Wind (kW/m2)
1                0.52                           -
2                4.12                           -
3                13.91                          0.2
10               -                              0.62
15               -                              2.10
20               -                              4.99
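The marine and wind columns of Table I follow directly from the kinetic power density p = (1/2) ρ V³ with the two densities quoted above; the short Python check below reproduces the marine figures and the wind figures for 10-20 m/s (a sketch for verification only).

```python
RHO_SEA = 1030.0    # kg/m^3, salt water (value used in the paper)
RHO_AIR = 1.2473    # kg/m^3, air at 10 degrees C (value used in the paper)

def power_density_kw_per_m2(rho, v):
    """Kinetic power density of a fluid stream, 0.5 * rho * V^3, in kW/m^2."""
    return 0.5 * rho * v**3 / 1000.0

for v in (1, 2, 3):
    print(f"marine, {v:2d} m/s: {power_density_kw_per_m2(RHO_SEA, v):6.2f} kW/m2")
for v in (10, 15, 20):
    print(f"wind,   {v:2d} m/s: {power_density_kw_per_m2(RHO_AIR, v):6.2f} kW/m2")
```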

The pressure at the site remained at 1.1 kg/m2, a variation of approximately 10% from the pressure used in the studies presented, and the temperature stayed at 19.1 °C. These characteristics are due to climatic factors and to the altitude at which the experiment was conducted. Thus, if we calculate the proportion of energy generated from these data, transferring them to a maritime environment, we have to evaluate the relative density: the same speed of 2 m/s in water can generate about 4 kW/m2, whereas wind turbines would require a wind flow of approximately 20 m/s. This is caused by the much lower density of air (1.225 kg/m3 at sea level) relative to water (1024 kg/m3 for sea water); wind speeds of 9.3 to 11.8 m/s are needed to achieve the same energy density in wind turbines. The area used was the area of the wind tunnel test section, considering a small turbine with approximately A = 0.450 m (width) x 0.425 m (height), totalling an area of approximately A = 0.1912 m2. Due to the reduced size of the test area, the values obtained were lower than for the actual size, according to Table II.

TABLE II
Power generated through the mathematical model.

Rotation (rpm)   Relative velocity (m/s)   Power provided (kW)   Power density, marine (kW)   Power density, wind (kW)
     100                 2.5                      0.28                    0.87                        0.00
     200                 5                        0.57                    6.97                        0.01
     300                 7.5                      0.86                   23.52                        0.02
     400                10                        1.15                   55.74                        0.06
     500                12.5                      1.44                  108.87                        0.11
     600                15                        1.73                  188.13                        0.19
     700                17.5                      2.02                  298.74                        0.30
     800                20                        2.31                  445.93                        0.45
     900                22.5                      2.60                  634.93                        0.64
    1000                25                        2.89                  870.96                        0.88

Because the density of water is much larger than that of air, speeds greater than 7.5 m/s are
not considered for marine currents, given the high kinetic energy that would be required to
produce them. Using these data it is possible to see how close the model comes to the actual
values acquired, and also to notice the difference between the measured values and those
obtained mathematically. There is a remarkable difference between the proportions of power
generated from wind and from water: even the lowest values for marine currents are approximately
ten times the corresponding values for wind, owing to the higher density, and therefore the
larger kinetic mass, of the water passing through the generators. The difference in power
density between water and wind can also be seen in Chart I for various velocities, where the
marked difference in output power between the two fluids is clearly visible.
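As a rough check of this equivalence: since the kinetic power density scales with ρ·v³, equal
power densities require v_wind = v_water·(ρ_water/ρ_air)^(1/3) ≈ 9.4·v_water for the densities
above, so a 1 m/s marine current corresponds to roughly 9.4 m/s of wind and a 2 m/s current to
roughly 19 m/s, in line with the figures quoted above.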

Chart I
Relative Power output of marine and wind at different
velocities.


Thus the full-scale values are proportionally larger, the only difference being the size of the
prototypes. This means that studies can be carried out on small prototypes using air as the
working medium, and the results can then be transformed to water. So far, the mathematically
generated signals have been used to study the magnetic brake. For the brake system, the results
were acquired because of the ease of obtaining the data in real time and the ease of coupling
with the computerized system.
Figures 6 and 7 present the stator current signal of the motor under no-load and loaded
conditions and the corresponding coefficients of its discrete wavelet transform.
















Fig. 6. Signal of the current in the stator















Fig. 7. Detail coefficients using the Haar wavelet, level 1

Figure 8 presents the stator current signal of the rotating machine, identified in the graph as
"s". The current peaks were obtained with the electrical rotating machine under no-load and
loaded conditions. Applying the discrete wavelet transform to the signal s, using the Haar
wavelet with resolution level 1, we obtained the approximation coefficients, represented in the
graph as a1, and the detail coefficients, represented as d1.

Fig. 8. Stator current under no-load and loaded conditions

These approximation (a1) and detail (d1) coefficients were obtained by applying the level-one
Haar wavelet transform to the signal s [12][13][14].
Figure 9 shows, from top to bottom, the peak current signal, the coefficients of the first
resolution level of the Haar wavelet, and the detail coefficients with the discontinuities
related to the increase in the stator current, obtained in real time.




















Fig. 9. Stator current under no-load and loaded conditions

Those coefficients were obtained considering the unloaded and loaded motor conditions.

One aspect to be emphasized concerns the signal-conditioning requirements imposed by the noise
inherent in the machine process. The amplitude of the output signal can vary, and a proper
approach should be applied to take this into account.
At the instant when the stator current increases quickly with the torque, the additional load
imposed on the system causes a discontinuity that is detected by the approximation coefficients.
The stator current signal was analyzed considering unloaded and loaded operation and the break
at 0.55 s.
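Purely as an illustration (not the authors' implementation), the break instant could be located
automatically by thresholding the level-1 Haar detail coefficients; in the MATLAB sketch below
the stator current is replaced by a synthetic signal with an artificial step at 0.55 s, and the
sampling rate and threshold are hypothetical:

    fs = 5000;                              % hypothetical sampling rate, Hz
    t  = (0:1/fs:1-1/fs);                   % 1 s of samples
    s  = sin(2*pi*60*t);                    % synthetic baseline current
    s(t >= 0.5501) = s(t >= 0.5501) + 1;    % artificial step standing in for the break
    [a1, d1] = dwt(s, 'haar');              % level-1 Haar decomposition
    k = find(abs(d1) > 5*std(d1), 1);       % first detail coefficient above an ad hoc threshold
    t_break = 2*k/fs                        % approximate break time, s (close to 0.55)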
V. CONCLUSION

This study presents a methodology capable of recognizing and designing a model that represents
marine turbines in an alternative way, built from physically based equations drawn from
mathematical models and fluid-mechanics theory. It was possible to carry out the calculations in
a theoretical environment, structuring future applied work on building a prototype to make this
technology commercially viable and to minimize the environmental impacts of present-day power
generation. As for the brake system, the proposed methodology can be used to detect faults in
other electricity-generating systems in which kinetic force is employed in the process. The
great advantage of using the magnetic sensor is that, besides consuming no power, the response
time obtained with Haar wavelets is extremely fast (< 0.55 s).
ACKNOWLEDGMENT


The authors are grateful to the University Center of FEI for the availability of materials and
laboratories with adequate infrastructure for academic research, and for professional assistance
at all levels.
REFERENCES

[1] Hinrichs, R. A., Kleinbach, M., Reis, L. B., Energia e Meio Ambiente, Cengage Learning, São Paulo, 2010.
[2] Bryans, A. G., Impacts of Tidal Stream Devices on Electrical Power Systems, Belfast, 2006.
[3] Hammons, T. J., Tidal Energy, Vol. 81, No. 3, March 1993.
[4] Fraenkel, P. L., Power from Marine Currents, Proceedings of the Institution of Mechanical Engineers, Part A, Journal of Power and Energy, Vol. 216, pp. 1-14, ISSN: 09576509, 2002.
[5] Tidal Power, A Factfile provided by The Institution of Engineering and Technology, Ireland, 2007.
[6] Fraenkel, P. L., Power from Marine Currents, Journal of Power and Energy, Vol. 216, pp. 1-14, 2002.
[7] Wright, M., Sea Flow Tidal Current Turbine, Watts Conference, London, March 2004.
[8] Li, X., Detection of tool flute breakage in end milling using feed-motor current signatures, IEEE/ASME Transactions on Mechatronics, Vol. 6, No. 4, pp. 491-498, 2001.
[9] Skogestad, S., Postlethwaite, I., Multivariable Feedback Control: Analysis and Design, New York, John Wiley, 1996.
[10] Kim, G. D., Chu, C. N., In-Process Tool Fracture Monitoring in Face Milling Using Spindle Motor Current and Tool Fracture Index, The International Journal of Advanced Manufacturing Technology, Vol. 18, No. 6, pp. 383-389, 2001.
[11] Palm, W., Introduction to Matlab 7 and Simulink for Engineers, McGraw-Hill, pp. 55-97, 2003.
[12] Belardi, A. A., Cardoso, J. R., Sartori, C. F., Wavelets Application in Electrostatic and their Computing Aspects, Electric and Magnetic Fields, EMF 2009, Italy, pp. 43-46, 2009.
[13] Newland, D. E., Random Vibrations, Spectral and Wavelet Analysis, Addison Wesley Longman, pp. 315-333, 1993.
[14] Aboufadel, E., Schlicker, S., Discovering Wavelets, John Wiley & Sons, pp. 1-42, 1999.


Authors Index
Amin-Nasseri, M. R. 141
Anderton, L. 216
Arroja, L. 27
Asgarpour Khansary, M. A. 34
Aslaniavali, R. 99, 207
Baek, D. G. 113
Barisa, A. 70, 147
Becirovic, E. 62
Beilicci, E. 187
Beilicci, R. 187
Belardi, A. A. 236
Blumberga, A. 70, 147
Blumberga, D. 70, 147
Braga, E. A. 165
Capela, I. 27
Carrasco, A. 182
Cho, J. H. 53
Cimdina, G. 147
Coban, H. 152
Correia dos Santos, A. C. 165
Couras, C. 27
David, I. 187
De Araújo, R. 165
Drozdova, J. 226, 231
Dumitrescu, G. S. 129
Dutta, B. K. 48
Escobar, B. 182
Fntn, G. I. 223
Feldman, D. 216
Gowharifar, S. 91
Hartmann, S. 226, 231
Hernández, J. J. C. 83
Hosseini, A. 34
Ioana, A. 21
Islam, N. 216
Jafarzadeh, M. T. 99, 207
Jamshidi, N. 99, 207
Joogh, F. K. Q. 34
Jung, J. H. 113
Jung, J.-W. 78
Kaare, K. K. 158
Kait, C. F. 48
Kang, D. D. 78
Kherfene, R. L. 119, 170, 202
Khodja, F. 119, 170, 202
Khoshgard, A. 91, 207
Kim, K. 78
Kingwell, R. 216
Kiperstok, A. 165
Koppel, O. 158
Kusljugic, M. 62
Larbes, C. 43
Lee, D. I. 78
Lee, T. 127
Lee, Y. H. 127
Leppiman, A. 158
Lopez, J. T. 107
Malek, A. 43
Mimoun, Y. 202
Miras, M. M. 182
Misi, S. E. E. 177
Moatti, A. 141
Mohamad, M. F. 177
Momenifar, M. 91
Nadais, H. 27
Nasiri, G. 91
Nurlaela, E. 48
Oae, S. A. 223
Oplustil, M. 212
Osmic, J. 62
Oswald, C. 194
Panaitescu, F.-V. 135
Panaitescu, M. 129, 135
Park, S.-U. 53
Peric, N. 62
Piccinini, A. H. 236
Preda, C. F. 21
Raclavska, H. 226
Ramli, A. 177
Rochas, C. 70
Rodrigues, I. L. S. 165
Rošā, M. 70
Sagau, M. 135
Sánchez, L. A. 83
Sánchez, L. P. 83
Sani, A. H. 34
Sauhats, A. 152
Scupi, A. A. 129, 135
Semenescu, A. 21
Sepehrian, B. 91
Silva, A. 27
Skrobankova, H. 231
Speijers, J. 216
Šulc, B. 194
Talebiazar, L. 99, 207
Umbrasko, I. 152
Varfolomejeva, R. 152
Veidenbergs, I. 147
Wardell-Johnson, A. 216
Xayavong, V. 216
Yoon, H. S. 113
Younes, M. 119, 170
Yusup, S. 177
Zadeh, N. S. 34
Zafarani, H. 141
Zalesak, M. 212
Zapata, S. 107
Zeraia, H. 43
Žogla, G. 70
