
SHORT-TERM ELECTRIC POWER FORECAST IN THE

NIGERIAN POWER SYSTEM USING ARTIFICIAL

NEURAL NETWORK

BY

NWEKE EGEDE F.
PG/M.ENG/08/49172

DEPARTMENT OF ELECTRICAL ENGINEERING

UNIVERSITY OF NIGERIA,

NSUKKA.

SUPERVISOR: VEN. PROF. T. C. MADUEME

JULY, 2012

SHORT-TERM ELECTRIC POWER FORECAST IN THE NIGERIAN
POWER SYSTEM USING ARTIFICIAL NEURAL NETWORK

BY

NWEKE EGEDE F.
PG/M.ENG/08/49172

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE AWARD OF THE M.ENG. DEGREE IN ELECTRICAL ENGINEERING

UNIVERSITY OF NIGERIA, NSUKKA.

SUPERVISOR: Ven. Prof. T. C. Madueme

July, 2012

APPROVAL PAGE

This is to certify that this research work was approved on behalf of the Department of Electrical Engineering, University of Nigeria, Nsukka

BY

-----------------------------------------------
Student
Nweke Egede F.

------------------------------------------------
Supervisor
Ven. Engr. Prof. T. C. Madueme

-----------------------------------------------------
H.O.D.
Engr. Dr. B. O. Anyaka

----------------------------------------------------
External Examiner

CERTIFICATION

This is to certify that this thesis was written by Nweke Egede Friday, an M.Eng student of

Electrical Engineering Department, University of Nigeria, Nsukka, with registration

number PG/M.Eng/2008/49172 in partial fulfillment of the requirements for the award of the M.Eng. degree.

The work embodied in this thesis is original and has not been submitted in part or full for

any other diploma or degree in this or any other university.

………………………………… Date………………………….
Student
Nweke Egede F.

………………………………... Date………………………….
Ven. Engr. Prof. T. C. Madueme
(Supervisor)

…………………………… Date…………………………
Engr. Dr. Anyaka B. O.
Head of Department

Solely dedicated to

Prof. Chinedu Ositadinma Nebo

ACKNOWLEDGEMENT

I would like to thank my supervisor, Prof. T. C. Madueme, most sincerely for painstakingly reading this work line by line and effecting all the necessary corrections, and for guiding me properly throughout this work. I am also

deeply indebted to Prof. Obe Simon of Electrical Engineering Department

University of Nigeria, Nsukka, for introducing me to MATLAB.

My thanks are also due to all the lecturers in the Department of Electrical

Engineering, University of Nigeria, especially Dr. Ani, for the warm treatment they

all gave me throughout my studentship in the department.

I also deeply appreciate the authorities of the University of Nigeria, Nsukka, for giving

me the privilege to go for further studies.

My thanks equally go to Miss Eberechukwu of Binary Concept Computing Centre,

University of Nigeria, Nsukka, for painstakingly typesetting the manuscript.

Finally my gratitude goes to my wife, Goodness, for being a wife that makes

keeping the marriage commitment easy.

TABLE OF CONTENTS

Title page
Approval page
Certification
Dedication
Acknowledgement
Table of contents
List of Figures
List of Tables
List of Abbreviations
Abstract

CHAPTER ONE: INTRODUCTION
1.1 Background to the Study
1.2 Statement of the Problem
1.3 Objectives of the Study
1.4 Delimitation of the Study
1.5 Significance of the Study

CHAPTER TWO: LITERATURE REVIEW
2.1 Definition of Load Forecasting
2.2 Importance of Load Forecasting
2.3 Problems of Load Forecasting
2.4 Techniques for Load Forecasting
2.4.1 Extrapolation Technique
2.4.2 End-Use Method
2.4.3 Scheer's Method
2.4.4 Multiple Regression
2.4.5 Exponential Smoothing
2.4.6 Iteratively Reweighted Least-Squares
2.4.7 Adaptive Load Forecasting
2.4.8 Stochastic Time Series
2.4.9 Fuzzy Logic
2.4.10 Expert Systems
2.4.11 Support Vector Machines
2.4.12 Neural Networks

CHAPTER THREE: RESEARCH DESIGN/METHODOLOGY
3.0 Introduction
3.1 Research Data
3.2 Data Pre-processing
3.3 Choice of Neural Network Paradigm
3.4 Construction of Network Architecture
3.5 Requirement of Minimum Number of Patterns
3.6 Selection of Input Variables
3.7 Network Training
3.7.1 Training Algorithms
3.7.2 Back-propagation Implementation Strategy
3.8 Improving Generalization

CHAPTER FOUR: EXPERIMENTAL RESULTS AND DISCUSSIONS
4.0 Introduction
4.1 Selection of Network Architecture and Parametric Values
4.2 Choice of Training Algorithm

CHAPTER FIVE: CONCLUSION AND SUGGESTIONS FOR FURTHER RESEARCH
5.1 Conclusion
5.2 Suggestions for Further Research

REFERENCES

APPENDIX

LIST OF FIGURES

2.1 Comparison of binary logic values and fuzzy logic values for a 3-input function
2.2a Op-amp equivalent of neuron
2.2b Neural network processing element
2.2c Mathematical representation of processing element
2.2d Simple three-layer neural network
3.1 Structure of a three-layered feed-forward type of ANN
4.1 Test result for Friday, 25th March 2011
4.2 Test result for Saturday, 26th March 2011
4.3 Test result for Sunday, 27th March 2011
4.4 Test result for Monday, 28th March 2011
4.5 Test result for Tuesday, 29th March 2011
4.6 Test result for Wednesday, 30th March 2011
4.7 Test result for Thursday, 31st March 2011
4.8 Test result for 7 days (Friday 25th to Thursday 31st March 2011)

LIST OF TABLES

2.1 Values of constant Y as required for calculation of load factor
3.1 Importance of recent daily loads in daily load prediction based on correlation analysis
4.1 Different network architectures and their performances with the proposed model
4.2 Various training algorithms and their performances with the proposed model
4.3 Effect of time delay vector on model accuracy and training time
4.4 Variation in performance ratio parameter with model accuracy
4.5 Optimal values of the model parameters
4.6 MAPE for the model on the test set

LIST OF ABBREVIATIONS

AIM – Artificial Intelligence Means

ANN – Artificial Neural Network

ARIMA – Autoregressive Integrated Moving Average

ARMA – Autoregressive Moving Average

ARMAX – Autoregressive Moving Average with Exogenous Variable

EUNITE – European Network of Excellence on Intelligent Technologies for Smart Adaptive Systems

FARMAX – Fuzzy Autoregressive Moving Average with Exogenous Variable

IRLS – Iteratively Reweighted Least-Squares

LTLF – Long-Term Load Forecasting

MAPE – Mean Absolute Percentage Error

MMI – Man-Machine Interface

MTLF – Medium-Term Load Forecasting

NewFFTD – Feed-forward Neural Network with Time Delay

NLRE – Nonlinear Load Research Estimator

PAR – Periodical Autoregressive

RES – Renewable Energy Sources

STLF – Short-Term Load Forecasting

SVMs – Support Vector Machines

WRLS – Weighted Recursive Least Squares

ABSTRACT

This thesis is a study of short-term electric power forecasting in the Nigerian power system using an artificial neural network model. The model is created in the form of a simulation program written in MATLAB. The model, a multilayer time-delayed feed-forward artificial neural network trained with the error back-propagation algorithm, was made to study the historical load pattern of a typical Nigerian power system in a supervised training manner. After the model was presented with a reasonable number of training samples, it could correctly forecast electric power supply in the Nigerian power system 24 hours in advance. A mean absolute percentage error of 4.27% was obtained when the trained neural network model was tested on one week of daily hourly load data from a typical Nigerian power station. This result demonstrates that the ANN is a powerful tool for load forecasting.

CHAPTER ONE

INTRODUCTION

1.1 Background to the Study

A great deal of effort is required to maintain an electric power supply within the

requirements of the various types of customers served. Some of the requirements for

power supply are readily recognized by most consumers, such as proper voltage,

availability of power on demand, reliability and reasonable cost. By availability of power

on demand, we mean that power must be available to the consumer in any amount that he may require from time to time. Stated another way, motors may be started

or shut down, fans and lights may be turned on or off, without giving any advance

warning or notice to the electric power supply company.

It is this random behavior of consumers, coupled with nature-controlled demographic and weather factors alongside econometric factors, that poses the greatest challenges to the power utility company: how much energy to generate, and which loads (circuits) to switch on or off at any point in time. Hence, a power system

must be well planned so as to ensure adequate and reliable power supply to meet the

estimated load demand in both near and distant future.

The primary pre-requisite for system planning is to arrive at realistic estimates for

future demands of power. The foregoing concept is a part of load forecasting. Basically,

load forecast is no more than an intelligent projection of past and present demand patterns

to determine future ones with sufficient reliability [1]. The Nigerian power system today

is known for its epileptic, inadequate and unreliable nature [2]. Its performance will
improve if a system for accurate load forecasting is designed to aid its operation and

planning. Accurate load forecasting holds a great saving potential for electric utility

corporations. According to Bunn and Farmer, [3] these savings are realized when load

forecasting is used to control operations and decisions such as economic load dispatch,

unit commitment, fuel allocation and off-line network analysis. The accuracy of load

forecasts has a significant effect on power system operations, as economy of operations

and control of power systems may be quite sensitive to forecasting errors [4]. Haida et al,

[5] observed that both positive and negative forecasting errors resulted in increased

operating costs.

Load forecasting may be applied in the long, medium, short, and very short-term

time scale. Srinivasan and Lee, [6] classified load forecasting in terms of the planning

horizon’s duration: up to 1 day for short-term load forecasting (STLF), 1 day to 1 year for

medium-term load forecasting (MTLF), and 1-10 years for long-term load forecasting

(LTLF). Short-term load forecasting (STLF) aims at predicting electric loads for a period

of minutes, hours, days, or weeks [7]. STLF plays an important role in the real-time

control and the security functions of an energy management system. STLF applied to the

system security assessment problem, especially in the case of increased renewable energy

sources (RES) penetration in isolated power grids, can provide, in advance, valuable

information on the detection of vulnerable situations. Long- and medium- term forecasts

are used to determine the capacity of generation, transmission, or distribution system

additions, along with the type of facilities required in transmission expansion planning,

annual hydro and thermal maintenance scheduling, etc. [7]. Kalaitzakis et al, [7] noted
that short-term load forecast for a period of 1-24 h ahead is important for the daily

operations of a power utility since it is used for unit commitment, energy transfer

scheduling and load dispatch.

Achieving accurate electric load forecasting is by no means a simple task. This is

because electric load is determined largely by variables that involve “uncertainty” and

whose relation with the final load is not deduced directly [8]. Some of these variables or

factors include economic factors, time, day, season, weather and random effects.

Electricity usage may, therefore, be predicted using historical data on load, temperature, humidity, luminosity, and wind speed, among other factors. However,

accurate models of load forecasting that use all these factors increase modeling

complexity. Several methods, therefore, have been used to perform load forecasting, each

with its inherent shortfalls. Time series analysis is a very effective method to create

mathematical models for solving a broad variety of complex problems [9]. These models

are used to identify or predict the behavior of a phenomenon represented by a sequence

of observations. However, creating an accurate model for a time series that represents

non-linear processes or processes that have a wide variance is very difficult [9]. The trend

today, however, is to solve most human problems using Artificial Intelligence Means

(AIM). Artificial intelligence methods for forecasting give better performance in

modeling of time series problems [10]. Artificial Neural Networks (ANNs), being one of the artificial intelligence means, have been successfully used to model a broad variety of systems entailing linear and non-linear processes [9]. The application of ANNs in time

series prediction is presented in [11] and in [12]. The success in the application of ANNs
lies in the fact that when these networks are properly trained and configured, they are

capable of accurately approximating any measurable function. The neurons learn the

patterns hidden in data and make generalizations of these patterns even in the presence of

noise or missing information. Predictions are performed by the ANN based on the

observed data. Electricity load forecasting is clearly a time series problem, and is therefore a natural candidate for solution with ANNs.

Artificial intelligence or AI is the general term used to describe computers or

computer programs which solve problems with “intuitive” or “best-guess” methods often

used by humans instead of the strictly quantitative methods usually used by computers

[13]. Expert systems, neural networks, fuzzy logic and support vector machines are some of the AI techniques in use today. Programs for some problems such as image recognition,

speech recognition, weather forecasting, electric load forecasting, and three dimensional

modeling are not easily or accurately implemented on fixed-instruction-set computers such as 386/i486-based systems [13]. For applications such as these, a new computer architecture modeled after the human brain, known as the Artificial Neural Network, shows considerable promise.

Hence, in this study a novel attempt is made to solve the problem of electric power

forecasting in the Nigerian power system by means of an artificial neural network.
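The approach just described, a time-delayed feed-forward network trained by back-propagation on lagged hourly loads, can be sketched as follows. This is a minimal illustration in Python on synthetic data: the load series, network size and learning rate are invented for the sketch, and the thesis model itself was built in MATLAB.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hourly load (MW) with a daily cycle -- a stand-in for real
# station data, which this sketch does not have access to.
t = np.arange(24 * 60)
load = 300.0 + 50.0 * np.sin(2 * np.pi * t / 24) + rng.normal(0.0, 5.0, t.size)

# Time-delay inputs: the previous 24 hourly loads predict the next hour.
LAGS = 24
X = np.array([load[i:i + LAGS] for i in range(load.size - LAGS)])
y = load[LAGS:]

# Scale inputs and targets to [0, 1], a common ANN pre-processing step.
lo, hi = load.min(), load.max()
Xs, ys = (X - lo) / (hi - lo), (y - lo) / (hi - lo)

# One sigmoid hidden layer, linear output, trained by batch back-propagation.
H, lr = 10, 0.5
W1 = rng.normal(0.0, 0.1, (LAGS, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, H); b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(300):
    h = sigmoid(Xs @ W1 + b1)             # forward pass: hidden activations
    pred = h @ W2 + b2                    # forward pass: output neuron
    err = pred - ys                       # output error
    dh = np.outer(err, W2) * h * (1 - h)  # error propagated back to hidden layer
    # Gradient-descent weight updates (mean squared error loss).
    W2 -= lr * h.T @ err / len(ys); b2 -= lr * err.mean()
    W1 -= lr * Xs.T @ dh / len(ys); b1 -= lr * dh.mean(axis=0)

# Forecast, un-scale back to MW, and score with MAPE as in Chapter 4.
pred_mw = (sigmoid(Xs @ W1 + b1) @ W2 + b2) * (hi - lo) + lo
mape = float(np.mean(np.abs((pred_mw - y) / y)) * 100)
```

The same scaffolding carries over to real station data by replacing the synthetic series with recorded hourly loads.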

1.2 Statement of the Problem

It is well established that the Nigerian power utility company has not lived up to the demands of the energy business. The utility company, PHCN, as it is called today, is yet to meet the

people’s demand for electric energy satisfactorily for any known period of time. It is
evident that the generated power is inadequate and so, the utility company considers load

shedding and restricted demand as a way out just as the government of the federation is

insisting on privatization of the energy sector as the last resort. Worse still, even under

these conditions of load shedding and restricted demand, the integrity of the supplied

power has always been questioned. The irony of this development is that it is happening

when Nigeria is striving to attain Vision 20:2020.

The problem, therefore, is this: in spite of this inadequacy in generation, is there any way we can manage what we have to satisfy our needs? Since economics is all about using limited

resources to address the endless human needs, there are ways. The issue now is, what are

these ways forward?

Before we examine one way forward, we need to ask: can prompt and proper decisions on unit

commitment, fuel allocation, energy transfer scheduling, and load dispatch be of any

help? Certainly, YES. Since short term load forecasting is necessary for such prompt and

proper decisions on unit commitment, fuel allocation, power wheeling arrangement, load

dispatch etc., knowledge of load forecasting in the Nigerian power system is one such

way forward. Better still, what if this load forecasting is performed by artificial intelligence means, namely an Artificial Neural Network (ANN)? In other words, a Man-Machine Interface (MMI) can be guaranteed. The problem is more than half-way solved since

management decision can now be automated.

So, this research is aimed at suggesting a solution to the ailing Nigerian power system by

proposing a model which can perform 24-hours-ahead load forecasting in the Nigerian

power system by means of artificial neural network.


1.3 Objectives of the Study

Although the objectives of the study can be inferred from the background to the study

outlined in the previous section, it can still be clearly and concisely stated that the

objectives of the study are:

(a) To model artificial neural network which can forecast electric power supply for

one day in advance (Short Term Load Forecasting);

(b) To train the model (using back propagation algorithm) with pre-historical load

data obtained from a sample of the Nigerian power company so that each input

produces a desired output;

(c) To test the model to obtain the values of future power supplies in the Nigerian power system; and

(d) In the light of the above, make necessary recommendations and suggestions for

further research.

1.4 Delimitation of the Study

It will be clear from our objectives that even though the impetus for this study was

generated by the ‘sorry’ state of the Nigerian power system, the scope of this study has

been restricted to the New Haven Enugu 132/33 kV transmission station. In addition, a short-term load forecasting model is being proposed in this work. This restriction has been

dictated by the need to attempt a reasonable depth of treatment of data collected within

the time available and on the other part due to some financial limitations.

Although demographic, econometric and weather conditions need to be considered

during load forecasting, the ANN modeled for the purpose of this research will be trained
without such factors as inputs. This became necessary so as to avoid the unnecessary

model complexity that is usually associated with a model encompassing all or some of

such factors. The forecasting model being proposed here does not take into account

temperature, although in general it might have significant impact on model accuracy.

Temperature data have been omitted simply because the prediction is concerned with

data corresponding to the territory of the whole country and since temperature changes a

lot in different regions of Nigeria, it would be difficult to adjust the proper value of

temperature for a particular day. Though load data from a sampled Nigerian power

station will be used to test the model, the model remains for the entire Nigerian utility.

Moreover, available literature indicates that it is primarily the behavior of low voltage consumers or

residential consumers that is directly affected by weather variables [14], [15].

Furthermore, inaccuracy of weather forecasts, difficulties in weather-load relationship

modeling and implementation problems limit the use of load forecasting models requiring

weather data, thus several works have appeared recently omitting weather data [7], [16],

[17], [18], [19], [20]. To support this stance further, we make the following case:

EUNITE (European Network of Excellence on Intelligent Technologies for Smart

Adaptive Systems) had in the year 2001 organized a world-wide competition on methods

to accurately predict electricity load [9]. In the contest, the average temperature and load

data on half hourly basis for years 1997 and 1998 were provided. The objective of the

contest was to predict daily peak demands of electricity for January 1999 based on the

data from these previous years. The model proposed by Chang et al., [21] which in terms

of mean absolute percentage error (MAPE) obtained the first place in the competition

actually discarded the temperature data.

This omission is also due to the fact that information on such factors is not readily available at the point of need.

The above restrictions notwithstanding, there are strong indications from the available literature that any findings and conclusions will be generalizable to the Nigerian power system at large, with a high degree of accuracy in the model results.

1.5 Significance of the Study

The proposed study is expected:

(i) To guide the operation and planning of the Nigerian Power System

(ii) To aid power system engineers who may wish to design a new power system

(iii) To serve as a research or academic material

(iv) To help validate the results obtained by other leading researchers who might

have worked on the same or a similar area.

CHAPTER TWO

LITERATURE REVIEW

2.1: Definition of Load Forecasting

Forecasting according to Sarangi et al, [22] is a phenomenon of knowing what

may happen to a system in coming time periods. Chakrabarti and Halder, [23]

also defined load forecasting as a method to estimate the load for a future time point from

the available past data.

Vadhera, [1] seems, however, to have given a more comprehensive and acceptable

definition of load forecasting when he notes that load forecast is no more than an

intelligent projection of past and present demand patterns to determine future ones with

sufficient reliability. The term load according to Vadhera, [1] is a device or

conglomeration of devices that taps energy from the power system network. Load is a

general term meaning either demand or energy, where demand is the time rate of change of

energy.

2.2: Importance of Load Forecasting

To underscore the importance of load forecasting, Vadhera, [1] noted that the value of good forecasting of future requirements could not be emphasized enough.

Still on the justification of the need for load forecasting, Alfares and Nazeerudin, [4]

contended that load forecasting is a central and integral process in the planning and

operation of electric utilities. They further noted that load forecasting involves the accurate prediction of both the magnitude and geographical locations of electric load over the different periods (usually hours) of the planning horizon, and that accurate load forecasting holds a great saving

potential for electric utility corporations. According to Bunn and Farmer, [3], these

savings are realized when load forecasting is used to control operations and decisions

such as dispatch, unit commitment, fuel allocation and off-line network analysis. Adepoju

et al, [24] shared the same view as Bunn and Farmer, [3] when they noted that load

forecasting, being very essential to the operation of electricity companies, enhances the

energy-efficient and reliable operation of a power system.

According to Adepoju et al, [24], the operation and planning of a power utility

company requires an adequate model for electric power load forecasting. Load

forecasting plays a key role in helping an electric utility to make important decisions on

power, load switching, voltage control, network reconfiguration, and infrastructure

development [24]. Feinberg et al, [25] were of the view that accurate models for electric

power load forecasting are essential to the operation and planning of a utility company.

Load forecasting helps an electric utility company to make important decisions including

decisions on purchasing and generating electric power, load switching, and infrastructure

development [25]. Those who can benefit from the knowledge of load forecasts include energy suppliers, ISOs, financial institutions, and other participants in electric energy

generation, transmission, distribution, and markets [25].

Load forecasts can be divided into three categories: short-term forecasts which are

usually from one hour to one week, medium-term forecasts which are usually from a

week to a year, and long-term forecasts which are longer than a year [25], [4], [26], and

[24].
2.3: Problems of Load Forecasting

Load forecasting, however, is not an easy task. This is because load is affected by

many physical factors such as weather, national economic health, popular TV programs,

public holidays, etc. [27]. This actually makes load forecasting a complex process

demanding experience and high analytical ability, and the use of probabilistic techniques including neural networks.

2.4: Techniques for Load Forecasting

Several techniques are, however, available for forecasting. Matthewman and

Nicholson, [28] conducted an early survey of electric load forecasting techniques. Load

demand modeling and forecasting was also reviewed in the works of [3], [29], and [30].

Moghram and Rahman, [31] surveyed electric load forecasting techniques and in recent

times Alfares and Nazeerudin, [4] conducted a literature survey and classification of

methods of electric load forecasting.

In the remaining segment of this chapter, an attempt is made to give an in-depth

review of newer papers on load forecasting techniques in general and load forecasting

using ANN in particular. An up-to-date brief verbal and mathematical description of each

category of load forecasting techniques will also be given where necessary.

Vadhera, [1] listed the methods normally used for load forecasting as:

• Extrapolation technique;

• Scheer’s method;

• End-use method, and

• Probabilistic extrapolation correlation method.


Pabla, [32] noted that forecasting methods as applied to the electrical industry fall into

two broad categories namely:

• Estimates based on existing trends and

• Econometric models.

Chakrabarti and Halder, [23] insisted that load forecasting might be done following any

of the methods given below:

i. Extrapolation;

ii. Correlation;

iii. A combination of both (i) and (ii).

Alfares and Nazeerudin, [4], however, classified methods/techniques of load forecasting

into nine categories, thus:

i. Multiple regression;

ii. Exponential smoothing;

iii. Iterative reweighted least-squares;

iv. Adaptive load forecasting;

v. Stochastic time series;

vi. Autoregressive moving Average with Exogenous variable (ARMAX) models

based on genetic algorithms;

vii. Fuzzy logic;

viii. Neural networks; and

ix. Knowledge-based expert system

In addition to the above techniques, Feinberg et al, [25] gave three more: (i) support vector machines (SVMs), (ii) the similar-day approach, and (iii) statistical model-based learning.

In summary, therefore, we have the following distinguishable techniques of

forecasting electric load:

• Extrapolation Technique

• End-use method

• Scheer’s method

• Correlation method

• Iterative reweighted least-squares method

• Adaptive load forecasting method

• Exponential smoothing method

• Autoregressive moving average with exogenous variable (ARMAX) models based

on genetic algorithms

• Fuzzy logic technique

• Knowledge-based expert system

• Support vector machines

• Neural networks.

We now give brief explanations and mathematical descriptions of each category

where applicable.

2.4.1: Extrapolation Technique

The extrapolation technique is based on fitting a curve to the available previous load data. With the trend curve so obtained, the load can be forecast at any future point (t = δ + j) by evaluating the trend curve function at that point. This

method is very simple and it is found to be very reliable in some cases. The errors in data

available and errors in curve fitting are not accounted for, therefore, it is called a

deterministic extrapolation. Standard analytical functions used in trend curve fitting are:

• Straight line: d = α + βδ

• Parabola: d = α + βδ²

• S-curve: d = α + βδ + λδ² + ηδ³

• Exponential: d = αe^(βδ)

• Gompertz: d = ln⁻¹(α + βe^(λδ))

The method of least squares is generally adopted for curve-fitting. If the accuracy of

the forecast available is tested using statistical measures such as mean and variance, the

basic technique becomes a probabilistic extrapolation. The best trend curve may be

obtained using regression analysis and the best estimate (to forecast the load at any future

time point) may be obtained using the equation of that best trend curve. The main drawback of the extrapolation technique is that it has no flexibility. Future load prediction depends

entirely upon the past trend and sometimes, this may give erroneous results [1], [23].
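As a quick sketch of deterministic extrapolation, the snippet below fits one of the standard trend curves listed above (here generalized to a full second-degree polynomial) to made-up yearly peak demands by least squares, then evaluates the fitted curve at a future year. All figures are illustrative, not real utility data.

```python
import numpy as np

# Illustrative (made-up) yearly peak demands in MW for six past years.
years = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
peak = np.array([120.0, 131.0, 144.5, 159.0, 175.0, 193.0])

# Least-squares fit of a second-degree polynomial trend curve to the data.
coeffs = np.polyfit(years, peak, deg=2)

# Deterministic extrapolation: evaluate the fitted trend curve at year 7.
forecast_y7 = float(np.polyval(coeffs, 7.0))
```

Testing the fit residuals against mean and variance thresholds, as the text describes, would turn this deterministic sketch into a probabilistic extrapolation.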

2.4.2: End-Use Method

The end-use approach directly estimates energy consumption by using extensive

information on end use and end users, such as appliances, their use by customers, their ages,

sizes of houses, and so on [25]. Statistical information about customers along with

dynamics of change is the basis for the forecast. End-use method is a modified form of

extrapolation [1]. In the extrapolation method, only the system's yearly peak demands of past years are plotted, and future years' demands are obtained from the trend curve of the past.

In the End-use method, the demands of different categories of loads are projected

separately.

End-use models focus on the various uses of electricity in the residential,

commercial, and industrial sectors. These models are based on the principle that electricity demand is derived from customers' demand for light, cooling, heating, refrigeration, etc.

Thus end-use models explain energy demand as a function of the number of appliances in

the market [33].

Ideally, this approach is very accurate. However, it is sensitive to the amount and

quality of end-use data. For example, in this method the distribution of equipment age is

important for particular types of appliances. End-use forecast requires less historical data

but more information about customers and their equipment [25].
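A minimal sketch of the end-use idea follows, with a hypothetical appliance inventory (all counts, ratings and usage hours are invented): each load category is projected separately and the projections are then summed, in contrast to extrapolating a single aggregate peak.

```python
# Hypothetical appliance inventory for a service area: each entry is
# (number installed, kW rating, average hours of use per day).
appliances = {
    "lighting":      (50_000, 0.06, 6.0),
    "refrigeration": (12_000, 0.15, 24.0),
    "cooling_fans":  (30_000, 0.07, 8.0),
    "water_pumps":   (2_000,  1.50, 4.0),
}

# End-use estimate: project each category separately, then sum.
# Daily energy per category = count * rating * daily hours (kWh/day).
daily_kwh = {name: n * kw * h for name, (n, kw, h) in appliances.items()}
total_kwh_per_day = sum(daily_kwh.values())
```

Growth in any one category (say, more pump sets) is then modeled by updating that category's count alone, which is what makes the method sensitive to the quality of the per-category data.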

2.4.3: Scheer’s Method

Scheer studied the load growth pattern of the developing countries reporting to the United Nations [34], [35], [1]. His approach to the problem of load forecasting was
through the per-capita consumption of energy. The per capita consumption of energy in

all developing countries follows a unique pattern. The economic condition of the people, government policy, rainfall, mineral resources, etc. in an area also affect the load growth pattern of the area. Scheer's method of load forecasting

takes into consideration the above factors.

On the basis of statistical data collected, Scheer formulated the relation:

log G = 1.28 + C1 + C2 + C3 + … − 0.15(K1K2…Kn) log U …………………(1)

where G = annual percentage growth in per capita consumption; U = annual kWh usage

per person. C1, C2, C3, etc. are constants for population growth, growth of agriculture

pump sets, growth of industrial loads etc.

K1, K2, …, Kn are constants, assumed unity unless a very high or very low growth rate is

expected. If the rate of growth is higher than normal then their value is less than unity and

vice-versa.

The value of C1 is normally obtained from the equation

C1= 0.05 x (percentage rate of growth of population)

C2 = 0.01 x (percentage rate of growth of agricultural and LT industrial load as obtained

in the past).

The annual percentage growth rate of per capita consumption, G, is thus obtained.

Using the value of G the total energy consumption in the area can be calculated for a year

knowing the per capita consumption of the previous year. If the load factor is known,

then from total energy value, the peak load demand can be calculated.

The load factor does not remain constant at all times. According to Scheer, the load

factor approaches its ultimate value of 65% in an asymptotic fashion cutting down the

difference by halves in every sixteen years. The load factor up to the 16th year from the

base year can be calculated from the relation

L.F. = 65.0 − Y(65.0 − z) …………………………………(2)

Where z is the base year load factor, Y is a multiplier whose value for different years

counting from the base year is given in table 2.1 below.

Table 2.1: Values of constant Y as required for calculation of load factor

Year from start:  0     1     2     3     4     5     6     7     8

Values of Y:      1.00  0.96  0.92  0.88  0.84  0.80  0.77  0.74  0.71

Year from start:  9     10    11    12    13    14    15    16

Values of Y:      0.68  0.65  0.62  0.60  0.57  0.55  0.52  0.50

One problem with Scheer’s model is the selection of the base year. For this

purpose, the best fit curve is drawn through the past growth of yearly peak load as well as

yearly energy consumption. Any point towards the end of the best fit curve may be taken

as the base year [1].
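Scheer’s relations (1) and (2) can be sketched in a few lines of code. All input figures below (growth rates, population, base-year load factor) are invented purely for illustration, not Nigerian system data:

```python
import math

# Sketch of Scheer's method: relation (1) gives the per-capita growth rate G,
# relation (2) gives the load factor; all input figures here are illustrative.

# Y multipliers of Table 2.1, indexed by years from the base year (0..16)
Y = [1.00, 0.96, 0.92, 0.88, 0.84, 0.80, 0.77, 0.74, 0.71, 0.68,
     0.65, 0.62, 0.60, 0.57, 0.55, 0.52, 0.50]

def growth_rate(C, K, U):
    """Relation (1): Log G = 1.28 + sum(C) - 0.15 * (K1*K2*...*Kn) * log U."""
    prod_K = 1.0
    for k in K:
        prod_K *= k
    log_G = 1.28 + sum(C) - 0.15 * prod_K * math.log10(U)
    return 10 ** log_G          # annual % growth in per-capita consumption

def load_factor(years_from_base, z):
    """Relation (2): L.F. = 65.0 - Y*(65.0 - z), valid up to the 16th year."""
    return 65.0 - Y[years_from_base] * (65.0 - z)

# Illustrative constants: 2% population growth -> C1 = 0.05 * 2, and so on
C = [0.05 * 2.0, 0.01 * 5.0]                 # C1 and C2 from the rules above
G = growth_rate(C, K=[1.0], U=500.0)         # U = 500 kWh per person per year

per_capita = 500.0 * (1 + G / 100.0)         # next year's per-capita use, kWh
energy = per_capita * 1_000_000              # total kWh for 1,000,000 people
lf = load_factor(1, z=40.0)                  # load factor, base-year z = 40%
peak_kw = energy / (8760 * lf / 100.0)       # peak demand from energy and L.F.
```

The last line applies the standard identity peak demand = annual energy / (8760 × load factor), as described in the text.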

2.4.4: Multiple Regression

Multiple regression analysis for load forecasting uses the technique of weighted

least-squares estimation. Based on this analysis, the statistical relationship between total

load and weather conditions as well as the day type influences can be calculated. The
regression coefficients are computed by an equally or exponentially weighted least-

squares estimation using the defined amount of historical data. Mbamalu and El-Hawary, [36]

used the following load model for applying this analysis:

Yt = Vtat + εt

Where t is sampling time, Yt measured system load, Vt vector of adapted variables such

as time, temperature, light intensity, wind speed, humidity, day type (workday, weekend,

holiday), etc., at the transposed vector of regression coefficients, and εt the model error at time t.
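As a sketch of this weighted least-squares fit (the synthetic data and forgetting factor below are assumptions, not the model of [36]):

```python
import numpy as np

# Exponentially weighted least-squares fit of Yt = Vt*a + et on synthetic
# hourly data; the regressors, coefficients and weights are all illustrative.
rng = np.random.default_rng(0)
T = 200
temp = 20 + 10 * np.sin(np.arange(T) * 2 * np.pi / 24)    # hourly temperature
day_type = (np.arange(T) // 24 % 7 < 5).astype(float)     # 1 = workday
V = np.column_stack([np.ones(T), temp, day_type])         # regressor matrix
a_true = np.array([50.0, 2.0, 15.0])
y = V @ a_true + rng.normal(0.0, 1.0, T)                  # measured load

lam = 0.99                                   # forgetting factor
w = lam ** np.arange(T - 1, -1, -1)          # newest sample gets weight 1
a_hat = np.linalg.solve(V.T @ (w[:, None] * V), V.T @ (w * y))
```

The weighted normal equations recover the coefficient vector while discounting older observations geometrically, which is what "exponentially weighted" means here.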

The data analysis program allows the selection of the polynomial degree of

influence of the variables from 1 to 5. In most cases, linear dependency gives the best

results. Moghram and Rahman, [31] evaluated this model and compared it with other

models for a 24-h-ahead load forecast. Barakat et al, [37] used the regression model to fit

data and check seasonal variations. The model developed by Papalexopoulos and

Hesterberg, [38] produces an initial daily peak forecast and then uses this initial peak

forecast to produce initial hourly forecasts. In the next step, it uses the maximum of the

initial hourly forecast, the most recent initial peak forecast error, and exponentially

smoothed errors as variables in a regression model to produce an adjusted peak forecast.

Haida and Muto, [5] presented a regression-based daily peak load forecasting

method with a transformation technique. Their method uses a regression model to predict

the nominal load and a learning method to predict the residual load. Haida et al, [39]

expanded this model by introducing two trend processing techniques designed to reduce

errors in transitional seasons. Trend cancellation removes annual growth by subtraction or

division, while trend estimation evaluates growth by the variable transformation


technique. Varadan and Makran, [40] used a least squares approach to identify and

quantify the different types of load at power lines and substations.

Hyde and Hodnett, [41] presented a weather-load model to predict load demand for

the Irish electricity supply system. To include the effect of weather, the model was

developed using regression analysis of historical load and weather data. Hyde and Hodnett, [42]

later developed an adaptable regression model for 1-day-ahead forecasts, which identifies

weather-insensitive and weather-sensitive load components. Linear regression of past data is used

to estimate the parameters of the two components. Nazarko, [43] used a new

regression-based method, the Nonlinear Load Research Estimator (NLRE), to forecast load for

four substations in Arkansas by customer class, month and type of day.

Al-Garni, [44] developed a regression model of electric energy consumption in

Eastern Saudi Arabia as a function of weather data, solar radiation, population and per

capita gross domestic product. Variable selection is carried out using the stepping

regression method, while model adequacy is evaluated by residual analysis.

The non-parametric regression model of Charytonuk et al, [45] constructs a

probability density function of the load and load-affecting factors. The model produces

the forecast as a conditional expectation of the load given the time, weather and other

explanatory variables, such as the average of past actual loads and the size of the

neighbourhood.

Alfares and Nazeerudin, [4] presented a regression based daily peak load

forecasting method for a whole year including holidays. To forecast load precisely

throughout a year, different seasonal factors that affect load differently in different
seasons are considered. In the winter season, average wind chill factor is added as an

explanatory variable in addition to the explanatory variables used in the summer model.

In transitional seasons such as spring and fall, the transformation technique is used.

Finally, for holidays, a holiday-effect load is deducted from the normal load to better

estimate the actual holiday load.

2.4.5: Exponential Smoothing

Exponential smoothing is one of the classical methods used for load forecasting.

The approach is first to model the load based on previous data, then to use this model to

predict the future load, [4].

In exponential smoothing models used by [31], the load, y (t) at time, t is modeled

using a fitting function and is expressed in the form:

y(t) = β(t)^T f(t) + ε(t),

where:

f(t) is the fitting function vector of the process, β(t) the coefficient vector, ε(t) white noise,

and T the transpose operator.

The Winters method is one of several exponential smoothing methods that can analyze

seasonal time series directly. This method is based on three smoothing constants for

stationary, trend and seasonality. Results of the analysis by [37] showed that the unique

pattern of energy and demand pertaining to fast growing areas was difficult to analyze

and predict by direct application of the Winters method. El-keib et al, [46] presented a

hybrid approach in which exponential smoothing was augmented with power spectrum

analysis and adaptive autoregressive modeling. A new trend removal technique by [47]

was based on optimal smoothing. This technique has been shown to compare favourably

with conventional methods of load forecasting.
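A minimal additive version of the Winters method, with the three smoothing constants for the stationary (level), trend and seasonal components mentioned above; the constants and the synthetic series are illustrative assumptions:

```python
import math

def winters_fit(y, period, alpha=0.3, beta=0.1, gamma=0.2):
    # initialize level, trend and seasonal terms from the first two periods
    level = sum(y[:period]) / period
    trend = (sum(y[period:2 * period]) - sum(y[:period])) / period ** 2
    season = [y[i] - level for i in range(period)]
    for t in range(len(y)):
        s = season[t % period]
        last_level = level
        level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % period] = gamma * (y[t] - level) + (1 - gamma) * s
    return level, trend, season

def winters_forecast(level, trend, season, period, h):
    # h-step-ahead forecast: extrapolated trend plus the seasonal term
    return level + h * trend + season[(h - 1) % period]

# Synthetic hourly load: base + slow growth + 24-hour seasonal swing
y = [100 + 0.05 * t + 20 * math.sin(2 * math.pi * t / 24)
     for t in range(24 * 14)]
level, trend, season = winters_fit(y, period=24)
f1 = winters_forecast(level, trend, season, period=24, h=1)
```

On this noiseless series the one-step forecast lands close to the true next value, illustrating why the method works well for strongly seasonal load.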

2.4.6: Iterative Reweighted Least-Squares

Mbamalu and El-Hawary, [48] used a procedure referred to as the iteratively

reweighted least-squares to identify the model order and parameters. The method uses an

operator that controls one variable at a time. An optimal starting point is determined

using the operator. This method utilizes the autocorrelation function of the resulting

differenced past load data in identifying a sub-optimal model of the load dynamics.

The weighting function, the tuning constants and the weighted sum of the squared

residuals form a three way decision variable in identifying an optimal model and the

subsequent parameter estimates. Consider the parameter estimation problem involving

the linear measurement equation:

Y = Xβ + ε,

Where Y is an n × 1 vector of observations, X is an n × p matrix of known coefficients

(based on previous load data), β is a p × 1 vector of the unknown parameters and ε is an n

× 1 vector of random errors. Results are more accurate when the errors are not Gaussian.

β can be obtained by iterative methods [48]. Given an initial β, one can use the

Beaton-Tukey iterative reweighted least-squares algorithm (IRLS). In a similar work,

[36] proposed an interactive approach employing least-squares and the IRLS procedure

for estimating the parameters of a seasonal multiplicative autoregressive model. The

method was applied to predict load at the Nova Scotia Power Corporation.
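A compact sketch of the IRLS idea with a Beaton-Tukey (bisquare) weighting function; the tuning constant, data and outliers are illustrative assumptions, not the model of [48]:

```python
import numpy as np

def irls(X, y, c=4.685, iters=20):
    """Iteratively reweighted LS with Beaton-Tukey bisquare weights."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]       # ordinary LS start
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12     # robust residual scale
        u = np.clip(r / (c * s), -1.0, 1.0)
        w = (1.0 - u ** 2) ** 2                       # bisquare weights
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta

rng = np.random.default_rng(1)
t = np.arange(100.0)
X = np.column_stack([np.ones_like(t), t])
y = 10.0 + 0.5 * t + rng.normal(0.0, 0.5, t.size)
y[::10] += 30.0                    # gross outliers every tenth observation
beta = irls(X, y)                  # robust estimate of intercept and slope
```

Because the bisquare weights drive the influence of large residuals to zero, the gross errors barely affect the final estimate, which is the behaviour that makes IRLS attractive when the errors are not Gaussian.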

2.4.7: Adaptive tool forecasting

In this context, forecasting is adaptive in the sense that the model parameters are

automatically corrected to keep track of the changing load conditions. Adaptive load

forecasting can be used as an on-line software package in the utilities control system.

Regression analysis based on the Kalman filter theory is used. The Kalman filter

normally uses the current prediction error and the current weather data acquisition

programs to estimate the next state vector. The total historical data set is analysed to

determine the state vector, not only most recent measured load and weather data. This

mode of operation allows switching between multiple and adaptive regression analysis.

The model used is the same as the one given in the multiple regression section, namely:

Yt = Vt at + ε t

Lu et al, [49] developed an adaptive Hammerstein model with an orthogonal

escalator structure as well as a lattice structure for joint processes. Their method used a

front Hammerstein non-linear time-varying functional relationship between load and

temperature. Their algorithm performed better than the commonly used RLS (Recursive

Least-Square) algorithm. Grady et al, [50] enhanced and applied the algorithm developed
by [49]. An improvement was obtained in the ability to forecast total system hourly load

as far as 5 days. McDonald et al, [51] presented an adaptive-time-series model, and

simulated the effects of a direct load-control strategy.

Park et al, [52] developed a composite model for load prediction, composed of

three components: nominal load, type load and residual load. The nominal load is

modeled such that the Kalman filter can be used and the parameters of the model are

adapted by the exponentially weighted recursive least-squares method. Fan and

McDonald, [53] presented a real time implementation of weather adaptive short term load

forecast (STLF). Implementation is performed by means of an Autoregressive Moving

Average (ARMA) model, whose parameters are estimated and updated on-line, using the

weighted recursive least-squares (WRLS) algorithm.
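The WRLS update that several of these adaptive schemes share can be sketched generically; the forgetting factor and synthetic load below are assumptions for illustration:

```python
import numpy as np

def wrls_update(theta, P, v, y, lam=0.98):
    """One weighted recursive least-squares step: v = regressors, y = load."""
    Pv = P @ v
    gain = Pv / (lam + v @ Pv)
    theta = theta + gain * (y - v @ theta)       # correct by prediction error
    P = (P - np.outer(gain, Pv)) / lam           # discount old information
    return theta, P

theta = np.zeros(2)
P = np.eye(2) * 1000.0          # large initial P = little prior confidence
rng = np.random.default_rng(2)
for t in range(300):
    v = np.array([1.0, np.sin(2 * np.pi * t / 24)])    # bias + daily cycle
    y = 80.0 + 12.0 * v[1] + rng.normal(0.0, 0.5)      # measured load
    theta, P = wrls_update(theta, P, v, y)
```

The forgetting factor lam < 1 is what makes the estimator adaptive: it keeps inflating the covariance so the parameters can track changing load conditions.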

Paarman and Najar’s, [54] adaptive online load forecasting approach automatically

adjusts model parameters according to changing conditions based on time series analysis.

This approach has two unique features: autocorrelation optimization is used for handling

cyclic patterns and, in addition to updating model parameters, the structure and order of

time series is adaptable to new conditions. An important feature of the regression model

of [42] is adaptability to changing operational conditions. The load-forecasting software

system is fully automated with a built-in procedure for updating the model. Zheng et al.,

[55] applied wavelet transform-Kalman filter method for load forecasting. Two models

are formed (weather sensitive and insensitive) in which the wavelet coefficients are

modeled and solved by the recursive Kalman filter algorithm.

2.4.8: Stochastic time series

It has been observed that unique patterns of energy and demand pertaining to fast-

growing areas are difficult to analyze and predict by direct application of time-series

methods. However, these methods appear to be among the most popular approaches that

have been applied and are still being applied to STLF. Using the time-series approach, a

model is first developed based on the previous data then future load is predicted based on

this model. In other words, as stated by Aggarwal, [56], the aim is to operate on

random phenomena to generate new random phenomena by means of mathematical

operations. Some of the time series models used for load forecasting will be discussed

here.

• Autoregressive (AR) Model

If the load is assumed to be a linear combination of previous loads, then the

autoregressive (AR) model can be used to model the load profile, which is given by [57]

as:

L̂k = −Σ(i=1 to m) σi Lk−i + wk ………………… (3)

where L̂k is the predicted load at time k (min), wk is a random load disturbance, and

σi, i = 1, 2, …, m are unknown coefficients; (3) is the AR model of order m. The

unknown coefficients in (3) can be tuned on-line using the well-known least mean square

(LMS) algorithm of [36]. The algorithm presented by [46] includes an adaptive

autoregressive modeling technique enhanced with partial autocorrelation analysis. Huang,

[58] proposed an autoregressive model with an optimum threshold stratification

algorithm. This algorithm determines the minimum number of parameters required to


represent the random component, removing subjective judgment, and improving forecast

accuracy. Zhao et al., [59] developed two periodical autoregressive (PAR) models for

hourly load forecasting.
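A minimal on-line AR sketch in the spirit of equation (3), with the coefficients adapted by a normalized form of the LMS rule; the sign convention, step size and synthetic load series are illustrative assumptions:

```python
import math

m = 3                       # AR model order
a = [0.0] * m               # coefficients, adapted on-line
mu = 0.5                    # normalized-LMS step size
load = [100 + 10 * math.sin(2 * math.pi * t / 24) for t in range(500)]

errors = []
for k in range(m, len(load)):
    past = load[k - m:k][::-1]                    # L(k-1), ..., L(k-m)
    pred = sum(ai * li for ai, li in zip(a, past))
    e = load[k] - pred                            # prediction error
    norm = sum(x * x for x in past) + 1e-9
    a = [ai + mu * e * li / norm for ai, li in zip(a, past)]
    errors.append(abs(e))
```

Because a constant plus one sinusoid satisfies an exact third-order recurrence, the adapted AR(3) prediction error shrinks steadily as the coefficients converge.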

• Autoregressive Moving-Average (ARMA) Model

In the ARMA model the current value of the time series y(t) is expressed linearly

in terms of its values at previous periods [y(t−1), y(t−2), …] and in terms of previous

values of white noise [a(t-1), a(t-2),…]. For an ARMA of order (p, q), the model is

written as:

y(t) = φ1y(t−1) + … + φp y(t−p) + a(t) − θ1a(t−1) − … − θq a(t−q). ---------------------(4)

The parameter identification for a general ARMA model can be done by a

recursive scheme, or using a maximum-likelihood approach, which is basically a non-

linear regression algorithm. Barakat, [60] presented a new time-temperature methodology

for load forecasting. In this method, the original time series of monthly peak demands are

decomposed into deterministic and stochastic load components, the latter determined by

ARMA model. Fan and McDonald, [53] used the weighted Recursive least squares

(WRLS) algorithm to update the parameters of their adaptive ARMA model. Chen, [61]

used an adaptive ARMA model for load forecasting, in which the available forecast

errors are used to update the model. Using minimum mean square error to derive error

learning coefficients, the adaptive scheme outperformed conventional ARMA models.

• Autoregressive integrated moving average (ARIMA) model

If the process is non-stationary, then transformation of the series to the stationary

form has to be done first. This transformation can be performed by the differencing

process. By introducing the ∇ operator, the series ∇y(t) = (1 − B)y(t) is formed. For a

series that needs to be differenced d times and has orders p and q for the AR and MA

components, i.e. ARIMA (p, d, q), the model is written as:

φ(B)∇^d y(t) = θ(B)a(t) --------------------------------------(5)

The procedure proposed by [62] used the trend component to forecast the growth

in the system load, the weather parameters to forecast the weather sensitive load

component, and the ARIMA model to produce the non-weather cyclic component of the

weekly peak load. Barakat, [37] used a seasonal ARIMA model on historical data to

predict the load with seasonal variations. Juberias, [63] developed a real time load

forecasting ARIMA model that includes the meteorological influence as an explanatory

variable.
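The differencing step ∇y(t) = (1 − B)y(t), applied d times, is easy to illustrate; the series below is an invented quadratic trend:

```python
def difference(y, d=1):
    """Apply the gradient operator (1 - B) d times to remove trend."""
    for _ in range(d):
        y = [y[t] - y[t - 1] for t in range(1, len(y))]
    return y

y = [0.5 * t * t for t in range(10)]   # quadratic trend: non-stationary
d1 = difference(y, 1)                  # still trending linearly
d2 = difference(y, 2)                  # constant: the trend is removed
```

A quadratic trend needs d = 2 before the ARMA part of ARIMA(p, d, q) can be fitted to the now-stationary remainder.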

• Autoregressive Moving Average with Exogenous variable (ARMAX) model

based on genetic algorithms

The genetic algorithm (GA) or evolutionary programming (EP) approach is used

to identify the autoregressive moving average with exogenous variable (ARMAX) model

for load demand forecasts. By simulating the natural evolutionary process, the algorithm

offers the capability of converging towards the global extremum of a complex error

surface. It is a global search technique that simulates the natural evolution process and

constitutes a stochastic optimization algorithm. Since the GA simultaneously evaluates

many points in the search space and need not assume the search space is differentiable or

unimodal, it is capable of asymptotically converging towards the global optimal solution,

and thus can improve the fitting accuracy of the model.

The general scheme of the GA process is briefly described here. The integer or real

valued variables to be determined in the genetic algorithm are represented as a D-

dimensional vector p for which a fitness f(p) is assigned. The initial population of k

parent vectors pi, i = 1, 2, …, k, is generated from a randomly generated range in each

dimension. Each parent vector then generates an offspring by merging (crossover) or

modifying (mutation) individuals in the current population. Consequently, 2k new

individuals are obtained. Of these, k individuals are selected randomly, with higher

probability of choosing those with the best fitness values, to become the new parents for

the next generation. This process is repeated until f no longer improves or the maximum

number of generations is reached.
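The loop described above is sketched below on a toy fitness function (recovering an invented three-parameter vector); the population size, mutation scale and the simplified elitist selection are assumptions:

```python
import random

random.seed(0)
TARGET = [0.8, -0.3, 0.5]          # invented parameter vector to recover

def fitness(p):                    # higher is better (negative squared error)
    return -sum((a - b) ** 2 for a, b in zip(p, TARGET))

def crossover(p1, p2):             # merge two parents gene by gene
    return [random.choice(pair) for pair in zip(p1, p2)]

def mutate(p, sigma=0.1):          # small Gaussian perturbation
    return [x + random.gauss(0.0, sigma) for x in p]

k = 20
parents = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(k)]
for gen in range(60):
    children = [mutate(crossover(p, random.choice(parents))) for p in parents]
    pool = parents + children                    # 2k candidate individuals
    pool.sort(key=fitness, reverse=True)
    # keep the best half and fill the rest with a random draw: a simple
    # stand-in for the fitness-biased selection described in the text
    parents = pool[:k // 2] + random.choices(pool, k=k - k // 2)

best = max(parents, key=fitness)
```

After a few dozen generations the best individual sits close to the target vector, illustrating the convergence property claimed for the GA.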

Yang et al, [64] described the system load model in the following ARMAX form:

A(q)y(t) = B(q)u(t) + C(q)e(t), -------------------------------(6)

where

y(t) load at time t

u(t) exogenous temperature input at time t,

e(t) white noise at time t, and

q^−1 back-shift operator,

and A(q), B(q), and C(q) are polynomials of the autoregressive (AR), exogenous (X), and

moving average (MA) parts, respectively. Yang et al, [64] chose the solution(s) with the

best fitness as the tentative model(s) that should further pass diagnostic checking for

future load forecasting. Yang and Huang, [65] presented a fuzzy autoregressive moving

average with exogenous variable (FARMAX) model for load demand forecasts. The

model is formulated as a combinatorial optimization problem and then solved by a

combination of heuristics and evolutionary programming. Ma et al., [66] used a genetic

algorithm with a newly developed knowledge augmented mutation-like operator called

the forced mutation. Lee et al., [67] used genetic algorithms for long-term load

forecasting, assuming different functional forms and comparing results with regression.

2.4.9: FUZZY LOGIC

Consumer products such as video camcorders, cameras, refrigerators, washing

machines, and automobiles are increasingly using fuzzy logic control circuits. Linking

the term fuzzy, which here means “not precisely defined,” with the term logic may seem

to create an oxymoron like “work party”, but the concept is very real. The original work

on fuzzy logic was done by Professor Lofti A. Zadeh at U. C. Berkeley in the mid-1960s,

but Japanese companies have been the main ones to patent the technology and implement

it in products.

A fuzzy logic controller is programmed with rules as is an expert system, but the

rules are very flexible. Figure 1 below shows the graphic method Professor Bart Kosko of

the University of Southern California uses to illustrate the difference between traditional
fixed value logic and fuzzy logic. Each corner of the cube represents one of the eight

possibilities for a three-variable digital logic function.


Figure 1: Comparison of binary logic values and fuzzy logic values for a 3-input

function.

For this example, let’s assume that the function is true for the 010, 001 and 100

combinations shown. In a traditional digital logic system the variables can only have

values of 0 or 1, so the only values that will produce a true output are these three. In a

fuzzy logic system the variables can have values other than 1 or 0, so the set of all the

possible values that will produce a true output is represented by the triangular plane

formed by the three points. One way to look at this is that traditional digital logic is just a

special case of fuzzy logic.

One advantage of fuzzy logic systems is that they can work with imprecise terms

such as cold, warm, hot, or near boiling that humans commonly use. In hardware terms

this means that a fuzzy logic system often doesn’t need precise A/D converters. The
Sanyo Fisher Corp. Model FVC-880 camcorder, for example, uses fuzzy logic to directly

process the outputs from six sensors and set the camera lens for best possible focusing

and exposure.

Fuzzy logic can provide very smooth control of mechanical systems. The fuzzy

logic-controlled subway in Sendai, Japan, is reportedly so smooth in operation that

standing riders do not use the hand straps during starts and stops.

In the United States, Togai InfraLogic, Inc. of Irvine, California, has developed a

Digital Fuzzy Processor chip. They have also developed a Fuzzy-C compiler which can

be used to write a program containing the rules and knowledge base for the processor.

It is well known that a fuzzy logic system with centroid defuzzification can

identify and approximate any unknown dynamic system (here load) on the compact set to

arbitrary accuracy. Liu et al., [57] observed that a fuzzy logic system has great capability

in drawing similarities from huge data. The similarities in input data (Li − L0) can be

identified by different first order differences (Vk) and second-order differences (Ak),

which are defined as:

Vk = (Lk − Lk−1)/T,  Ak = (Vk − Vk−1)/T ………………………. (7)

The fuzzy logic-based forecaster works in two stages: training and on-line

forecasting. In the training stage, the metered historical load data are used to train a 2m-

input, 2n-output fuzzy-logic based forecaster to generate a pattern database and a fuzzy

rule base by using first and second-order differences of the data. After enough training, it

will be linked with the controller to predict the load change on-line. If a matching

pattern with the highest possibility is found, then an output pattern will be

generated through a centroid defuzzifier.
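The first- and second-order differences of equation (7) are straightforward to compute; the load values and sampling interval below are illustrative:

```python
def differences(L, T=1.0):
    """Vk = (Lk - Lk-1)/T and Ak = (Vk - Vk-1)/T from equation (7)."""
    V = [(L[k] - L[k - 1]) / T for k in range(1, len(L))]
    A = [(V[k] - V[k - 1]) / T for k in range(1, len(V))]
    return V, A

L = [50.0, 52.0, 55.0, 59.0, 58.0]    # a short illustrative load record
V, A = differences(L)
```

These difference sequences, rather than the raw load levels, are what the fuzzy forecaster matches against its pattern database.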

Several techniques have been developed to represent load models by fuzzy

conditional statements. Hsu, [68] presented an expert system using fuzzy set theory for

STLF. The expert system was used to do the updating function. Short-term forecasting

was performed and evaluated on the Taiwan power system. Later, Liang and Hsu, [69]

formulated a fuzzy linear programming model of the electric generation scheduling

problem, representing uncertainties in forecast and input data using fuzzy set notation.

Al-Anbuky, [70] discussed the implementation of a fuzzy-logic approach to provide a

structural framework for the representation, manipulation and utilization of data and

information concerning the prediction of power commitments. Neural networks are used

to accommodate and manipulate the large amount of sensor data.

Srinivasan et al., [71] used the hybrid fuzzy-neural technique to forecast load. This

technique combines the neural network modeling and techniques from fuzzy logic and

fuzzy set theory. The models were later enhanced by [72], [73]. This hybrid approach can

accurately forecast on weekdays, public holidays, and days before and after public

holidays. Based on the work of [71], [72] presented two fuzzy neural network (NN)

models capable of fuzzy classification of patterns. The first network uses the membership

values of the linguistic properties of the past load and weather parameters, where the

output of the network is defined as the fuzzy class membership values of the forecasted

load. The second network is based on the fact that any expert system can be represented

as a feed forward NN.


Mori and Koboyashi, [74] used fuzzy inference methods to develop a non-linear

optimization model of STLF, whose objective is to minimize model errors. The search

for the optimum solution is performed by simulated annealing and the steepest descent

method. Dash et al., [75] used a hybrid scheme combining fuzzy logic with both neural

networks and expert systems for load forecasting. Fuzzy load values are inputs to the

neural network, and the output is corrected by a fuzzy rule inference mechanism.

Ramirez-Rosado and Dominguez-Navaro, [76] formulated a fuzzy model of the optimal

planning problem of electric energy. Computer tests indicated that this approach

outperforms classical deterministic models because it is able to represent the intrinsic

uncertainty of the process.

Chow and Tram, [77] presented a fuzzy logic methodology for combining

information used in spatial load forecasting, which predicts both the magnitudes and

locations of future electric loads. The load growth in different locations depends on

multiple, conflicting factors, such as distance to highway, distance to electric poles, and

costs. Therefore, Chow et al., [78] applied a fuzzy, multi-objective model to spatial load

forecasting. The fuzzy logic approach proposed by [79] for next-day load forecasting

offers three advantages, namely the ability to (1) handle non-linear curves, (2) forecast

irrespective of day type and (3) provide accurate forecasts in hard-to-model situations.

Mori et al., [80] presented a fuzzy inference model for STLF in power systems.

Their method uses tabu search with supervised learning to optimize the inference

structure (i.e., number and location of fuzzy membership functions) to minimize forecast

errors. Wu and Lu, [81] proposed an alternative to the traditional trial and error method
for determining fuzzy membership functions. An automatic model identification method is

used that utilizes analysis of variance, cluster estimation, and recursive least-squares.

Mastorocostas, [82] applied a two-phase STLF methodology that also uses orthogonal

least-squares (OLS) in fuzzy model identification. Padmakumari et al., [83] combined

fuzzy logic with neural networks in a technique that reduces both errors and

computational time. Srinivasan et al., [84] combined three techniques, namely fuzzy logic, neural

networks and expert systems in a highly automated hybrid STLF approach with

unsupervised learning.

2.4.10: EXPERT SYSTEMS

Probably the most developed area of AI at present is the area of expert systems.

An expert-system program consists of a large data base and a set of rules for searching

the data base to find the best solution for a particular type of problem. The data base and

rules are developed by questioning “experts” in that particular problem area. The data

base for a medical diagnosis expert system, for example, is built up by extensive

questioning of experts in each medical specialty.

Unlike most computer programs, which require complete information to make a

decision, expert system programs are designed to make a best guess, based on the

available data, just as a human expert would do. A medical diagnosis expert system, for

example, will indicate the illness that most likely corresponds to a given set of symptoms

and test data. To enable it to make a better guess, the system may suggest additional tests

to perform.
One advantage of a system such as this is that it can make the knowledge of many

experts readily available to a physician anywhere in the world via a modem connection.

Another advantage is that the data base and set of rules can be easily updated as new

research results and drugs become available. Other expert system programs are those

used to lay out PC boards and those used to lay out ICs.

Expert systems are new techniques that have emerged as a result of advances in

the field of artificial intelligence. An expert system is a computer program that has the

ability to reason, explain, and have its knowledge base expanded as new information

becomes available to it. To build the model, the ‘knowledge engineer’ extracts load

forecasting knowledge from an expert in the field by what is called the knowledge base

component of the expert system. This knowledge is represented as facts and IF-THEN

rules, and consists of the set of relationships between the changes in the system load and

changes in natural and forced condition factors that affect the use of electricity. This rule

base is used daily to generate the forecasts. Some of the rules do not change over time

while others have to be updated continually.
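A toy rule base of this IF-THEN kind might look as follows; the conditions and adjustment factors are invented for illustration and are far simpler than a production knowledge base:

```python
# Each rule: (IF-part as a predicate on the day's facts, THEN-part as a
# multiplicative adjustment to the base forecast). All numbers are invented.
RULES = [
    (lambda f: f["day"] in ("Sat", "Sun"), 0.85),   # weekend load drop
    (lambda f: f["temp_c"] > 30, 1.10),             # cooling load rise
    (lambda f: f["holiday"], 0.80),                 # public-holiday drop
]

def forecast(base_load_mw, facts):
    load = base_load_mw
    for condition, factor in RULES:
        if condition(facts):        # IF the rule's condition matches ...
            load *= factor          # ... THEN apply its adjustment
    return load

weekday = forecast(1000.0, {"day": "Tue", "temp_c": 25, "holiday": False})
hot_sunday = forecast(1000.0, {"day": "Sun", "temp_c": 33, "holiday": False})
```

Keeping the rules in a data structure separate from the inference loop is what makes the knowledge base easy to update as conditions change, as the text notes.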

The logical and syntactical relationships between weather, load and the prevailing

daily load shapes have been widely examined to develop different rules for different

approaches. The typical variables in the process are the season under consideration, day

of the week, the temperature and the change in this temperature. Illustrations of this

method can be found in [85], [86], and [87]. The algorithms of [88] and [89] combine

features from knowledge-based and statistical techniques, using the pair-wise comparison

technique to prioritize categorical variables. Rahman and Hazim, [90] developed a site-

independent expert system for STLF. This system was tested using data from several sites

around the USA, and the errors were negligible. Brown et al., [91] used a knowledge

based load-forecasting approach that combines existing system knowledge, load growth

patterns, and horizon year data to develop multiple load growth scenarios.

Several hybrid methods combine expert systems with other load-forecasting

approaches. Dash et al., [92], Dash et al., [75] combined fuzzy logic with expert systems.

Kim et al., [93] used a two-step approach in forecasting load for Korea Electric Power

Corporation. First, an ANN is trained to obtain an initial load prediction, then a fuzzy

expert system modifies the forecast to accommodate temperature changes and holidays.

Mohamad et al., [94] applied a combination of expert systems and NN for hourly load

forecasting in Egypt. Bataineh et al., [95] used neural networks and fuzzy logic for data

representation and manipulation to construct the expert system’s rule base. Chiu et al.,

[96] determined that a combined expert system-NN approach is faster and more accurate

than either one of the two methods alone. Chandrashekara et al., [98] applied a combined

expert system-NN procedure divided into three modules: location planning, forecasting

and expansion planning.

2.4.11: Support Vector Machines

Support Vector Machines (SVMs) are a more recent powerful technique for

solving classification and regression problems. This approach originated from

Vapnik’s, [99] statistical learning theory. Unlike neural networks which try to define

complex functions of the input feature space, support vector machines perform a

nonlinear mapping (by using the so-called kernel functions) of the data into a high
dimensional (feature) space. Then support vector machines use simple linear functions to

create linear decision boundaries in the new space. The problem of choosing architecture

for a neural network is replaced here by the problem of choosing a suitable kernel for the

support vector machine [100].

Mohandes, [101] applied the method of support vector machines for short-term electrical

load forecasting. The author compared its performance with the autoregressive method. The

results indicate that SVMs compare favourably against the autoregressive method. Chen

et al., [102] proposed a SVM model to predict daily load demand of a month. Their

program was the winning entry of the competition organized by the EUNITE network. Li and

Fang, [103] also used a SVM for short-term load forecasting.

2.4.12: NEURAL NETWORKS

Programs for some problems such as image recognition, speech recognition,

weather forecasting, and three-dimensional modeling are not easily or accurately

implemented on fixed-instruction-set computers such as 386/i486-based systems. For

applications such as these, a new computer architecture modeled after the human brain

shows considerable promise.

As you may remember from a general science class, the brain is composed of

billions of neurons. The output of each neuron is connected to the inputs of several

thousand other neurons by synapses. If the sum of the signals on the inputs of a neuron is

greater than a certain threshold value, the neuron “fires” and sends a signal to other

neurons. The simple op-amp circuit in Figure 2.2a may help you see how a neuron works.
Let’s assume the output of the comparator is initially low. If the sum of the input signals

to the adder produces an output voltage more negative than the comparator threshold

voltage, the output of the comparator will go high. This is analogous to the neuron firing.

The weight or relative influence of an input is determined by the value of the resistor on

that input. Figure 2.2b shows a symbol commonly used to represent a neuron in neural

network literature and Figure 2.2c shows a simple mathematical model of a neuron.

As with the neurons in the human brain, the neurons in a neural network are connected to

many other neurons. Figure 2.2d shows a simple three-layer neural network. This

network configuration is referred to as “feed forward”, because none of the output signals

are connected back to the inputs. In a “feedback” or “resonance” configured network,

some intermediate or final output signals are connected back to network inputs.

Researchers are currently experimenting with many different network configurations to

determine the one that works best for each type of application.

Figure 2.2a: Op-amp model of a neuron (a summing amplifier feeding a comparator with threshold Vref; input resistors R1–R4 set the weights of the inputs).
Figure 2.2b: Neural network processing element (inputs 1 to n are summed and passed through a transfer function to produce the output).

Neurodynamics:

Summation function: n = b*w0 + p1*w1 + p2*w2 + … + pn*wn

Transfer function: f(x) = (1 + e^(−x))^(−1)

Output: y = f(n)

Figure 2.2c: Mathematical representation of a processing element (inputs p1…pn with weights w1…wn, bias b with weight w0, summation, and transfer function f).
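The processing element of Figure 2.2c translates directly into a few lines of code. The sketch below is in Python rather than the MATLAB used later in this thesis, and all numeric values are illustrative, not drawn from the research data:

```python
import math

def neuron(inputs, weights, bias, bias_weight):
    # Summation function: n = b*w0 + p1*w1 + ... + pn*wn
    n = bias * bias_weight + sum(p * w for p, w in zip(inputs, weights))
    # Sigmoid transfer function: f(x) = (1 + e^(-x))^(-1)
    return 1.0 / (1.0 + math.exp(-n))

# Example with two illustrative inputs and weights
y = neuron([0.5, -0.2], [0.8, 0.4], bias=1.0, bias_weight=0.1)
```

The sigmoid keeps the output in the open interval (0, 1), which is why it is a common choice for the transfer function f.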

Figure 2.2d: Simple three-layer feed-forward neural network, with the input, hidden, and output layers between the data input and the network output. (Courtesy NeuralWare, Inc.)

Neural-network-based computing can be implemented in several ways. One way is to

use a dedicated processor for each neuron. The large number of neurons usually makes

this impractical, and most applications don’t need the speed capability. An alternative

approach is to use a single processor and simulate neurons with lookup tables. The

lookup table for each neuron contains the connections, input weight values, and output

equation constants. Hecht-Nielsen Neurocomputers markets a PC/AT compatible

coprocessor board which uses this approach.

A neural network can also be implemented totally in software. Neuralware, Inc.

markets neural net simulation programs for both PC and Macintosh type computers.

These packages can be used to learn about neural nets or develop actual applications
which do not have to operate in real time. Another interesting neural network program is

BrainMaker from California Scientific Software.

Neural network computers are not programmed in the way that digital computers

are, rather, they are trained. Instead of being programmed with a set of rules the way a

classic expert system is, a neural network computer learns the desired behavior. The

learning process may be supervised, unsupervised, or self-supervised.

In the supervised method a set of input conditions and the expected output

conditions are applied to the network. The network learns by adjusting or “adapting” the

weights of the interconnections until the output is correct. Another input-output is then

applied, and the network is allowed to learn this set. After a few sets the network will

have learned or generalized its response so that it can give the correct response to most

applied input data.

The scheme used to adapt the network is called the learning rule. As an example,

one of the simplest learning rules that can be used is the Hebbian learning law. This law

decrees that each time the input of a neuron contributes to the firing, its weight should be

increased, and each time an input does not contribute, its weight should be decreased.

This is somewhat analogous to a positive-negative reinforcement scheme often used in

human behavior modification. In the case of the network, the result is that these

successive “nudges” adapt the network output to the desired result.
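The Hebbian rule described above can be sketched as a one-step weight update. This Python sketch is purely illustrative of the verbal rule (increase the weight of a contributing input, decrease it otherwise); the learning rate and all values are assumed, not taken from any cited work:

```python
def hebbian_update(weights, inputs, fired, rate=0.1):
    # Each input that contributed to the firing gets its weight increased;
    # each input that did not contribute gets its weight decreased.
    updated = []
    for w, x in zip(weights, inputs):
        if fired and x > 0:              # this input helped the neuron fire
            updated.append(w + rate * x)
        else:                            # this input did not contribute
            updated.append(w - rate * abs(x))
    return updated

w = hebbian_update([0.5, 0.5], [1.0, -0.3], fired=True)
# first weight is nudged up, second is nudged down
```

Repeated over many presentations, these small nudges are what adapt the network output toward the desired result.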

The major advantages of neural networks are these:

1: They do not need to be programmed: they can simply be taught the desired response.

This eliminates most of the cost of programming.

2. They can improve their response by learning. A neural network designed to evaluate

loan applications, for example, can automatically adapt its criteria based on loan-failure

feedback data.

3. Input data does not have to be precise, because the network works with the sum of the

inputs. A neural network image-recognition system, for example, can recognize a person

even though he or she has a somewhat different hairstyle than when the “learning” image

was taken. Likewise, a neural network-based speech-recognition system can recognize

words spoken by different people. Traditional digital techniques have a very hard time

with these tasks.

4. Information is not stored in a specific memory location the way it is in a normal digital

computer: it is stored associatively as a network of interconnections and weightings. The

result of this is that the “death” of a few neurons will usually not seriously degrade the

operation of the system. This characteristic is also fortunate for us humans!

Software-based neural networks can be used for non-real-time applications such as

forecasting the weather or the stock market. For real time applications such as image

recognition and speech recognition, the software methods are obviously not fast enough.

University researchers and companies such as TRW and Texas Instruments are working

on ICs which implement neural networks in hardware. In the not-too-distant future these

ICs should allow you to talk to your computer instead of using a mouse, allow your
computer to read typed messages to you, and allow your car to drive itself down the

freeway. Indeed, as at the time of compiling this report, the first two of these innovations have already been realized.

Artificial neural networks (ANNs) have very wide applications because of their ability to learn. According to Damborg et al., [104], neural networks offer the potential to overcome the reliance on a functional form of a forecasting model. There are many types of neural networks: the multilayer perceptron network, the self-organizing network, etc. A multilayer network may contain several hidden layers, each with many neurons.

Inputs are multiplied by weights and are added to a threshold θ to form an inner product

number called the net function. The net function NET used by Ho et al., [105], for

example, is put through the activation function y, to produce the unit’s final output,

y(NET).

The main advantage here is that most of the forecasting methods seen in the

literature do not require a load model. However, training usually takes a lot of time. Here

we describe the method discussed by Liu et al., [57], using fully connected feed-forward

type neural networks. The network outputs are linear functions of the weights that

connect inputs and hidden units to output units. Therefore, linear equations can be solved

for these output weights. In each iteration through the training data (epoch), the output

weight optimization training method uses conventional back propagation to improve

hidden unit weights, then solves linear equations for the output weights using the

conjugate gradient approach.
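Because the network outputs are linear in the output-layer weights, that part of the hybrid scheme reduces to an ordinary linear least-squares problem. The sketch below is a hedged Python illustration of that linear step only, solving the normal equations directly (rather than by conjugate gradient as in [57]); the hidden-unit activations H and targets T are placeholder values:

```python
def solve_output_weights(H, T):
    # Least-squares fit of output weights w in H*w ≈ T via the normal
    # equations (H'H)w = H'T, written out for two hidden units.
    a11 = sum(h[0] * h[0] for h in H)
    a12 = sum(h[0] * h[1] for h in H)
    a22 = sum(h[1] * h[1] for h in H)
    b1 = sum(h[0] * t for h, t in zip(H, T))
    b2 = sum(h[1] * t for h, t in zip(H, T))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Placeholder hidden-unit activations and targets (one row per training pattern)
H = [(0.2, 0.9), (0.7, 0.1), (0.4, 0.5)]
T = [1.0, 0.0, 0.5]
w = solve_output_weights(H, T)
```

In the full scheme, back-propagation still adjusts the hidden-unit weights each epoch; only the output weights are obtained by this closed-form solve.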

Srinivasan and Lee, [6] surveyed hybrid fuzzy neural approaches to load

forecasting. Park and Osama, [106] used an NN approach for forecasting which, compared to regression methods, gave more flexible relations between temperature and load

patterns. Extending this work, [107] presented a NN algorithm that combines time series

and regression approaches. Park et al., [107] proposed an improved training procedure for

training the ANN. Atlas et al., [108] earlier compared a similar technique with other

regression methods. Hsu and Yang, [109] estimated the load pattern of the day under

study by averaging the load patterns of several past days, which are of the same day type

(ANN being used for the classification). To predict the daily peak load, a feed-forward

multilayer neural network was designed.

Peng et al., [110] used a minimum distance measurement to identify the

appropriate historical pattern of load and temperature weights to be used to find the

network weights. They also proposed an improved algorithm that combined linear and

non-linear terms to map past load and temperature inputs to the load forecast output. This

work was an extension of a strategy by Peng et al., [111] which was applied to daily load.

The major difference lies in the alternate method for the selection of the training cases.

Later, Peng et al., [112] applied a neural network approach to one-week ahead load

forecasting based on an adaptive linear combiner called the adaline.

Ho and Hsu, [113] designed a multilayer ANN with a new adaptive learning

algorithm for short term load forecasting. In this algorithm the momentum is

automatically adapted in the training process. Lee and Park, [114] proposed a non-linear

load model and several structures of ANNs were tested. Inputs to the ANN include past
load values, and the output is the forecast for a given day. Lee and Park, [114]

demonstrated that the ANN could be successfully used in STLF with accepted accuracy.

Chen et al., [115] presented an ANN, which is not fully connected, to forecast weather

sensitive loads for a week. Their model could differentiate between the weekday loads

and the weekend loads. Lu et al., [116] conducted a computational investigation to

evaluate the performance of the ANN methodology.

Djukanovic et al., [117] proposed an algorithm using an unsupervised/supervised

learning concept and historical relationship between the load and temperature for a given

season, day type and hour of the day. They used this algorithm to forecast hourly electric

load with a lead time of 24 h. Papalexopoulos et al., [118] developed and implemented the

ANN based model for the energy control centre of the Pacific Gas and Electric Company.

Attention was paid to accurately model special events, such as holidays, heat waves, cold

snaps and other conditions that disturb the normal pattern of the load. Ho et al., [105]

extended the three-layered feed-forward adaptive neural networks to multi-layers. Dillon

et al., [119] proposed a multilayer feed-forward neural network, using a learning

algorithm for adaptive training of neural networks.

Srinivasan et al., [120] used an ANN based on Back propagation for forecasting,

and showed its superiority to traditional methods. Liu et al., [121] compared an

econometric model and a neural network model, through a case study on electricity

consumption forecasting in Singapore. Their results show that a fully trained NN model

with a good fitting performance for the past may not give a good forecasting performance

for the future. Kalra et al., [122] demonstrated how present methods for solving such

problems could be converted to NN approaches.

Azzam-Ul-assar and McDonald, [123] trained a family of ANNs and then used

them in line with a supervisory expert system to form an expert network. They also

investigated the effectiveness of the ANN approach to short term load forecasting, where

the networks were trained on actual load data using back-propagation. Al Anbuky et al.,

[70] presented fuzzy logic based neural networks for load forecasting. Dash et al., [72],

Dash et al., [73], and Dash et al., [75] also used fuzzy logic in combination with neural

networks for load forecasting. Their work has been discussed in the previous section.

Chen et al., [124] applied a supervisory functional ANN technique to forecast load

for three substations in Taiwan. To enhance forecasting accuracy, the load was correlated

with temperature as well as the type of customers served, which is classified as

residential, commercial or industrial. Al-Fuhaid et al., [125] incorporated temperature and

humidity effects in an ANN approach for STLF in Kuwait. Vermaak and Botha, [126]

proposed a recurrent NN to model the STLF of the South African utility. They utilized

the inherent non-linear dynamic nature of NN to represent the load as the output of some

dynamic system, influenced by weather, time and environmental variables.

McMenamin and Monforte, [127] used an econometric and statistical approach to

NN-based load forecasting. Considering NN models as flexible non-linear equations, they

used non-linear least-squares to estimate parameters, and simple statistics such as MAPE

to determine the number of nodes. Papadakis et al., [128] developed a three-step fuzzy

ANN approach, involving the prediction of load curve peaks and valleys and mapping
them to forecasted peak values. Dash et al., [129] presented a fuzzy NN load forecasting

system that accounts for seasonal and daily changes, as well as holidays and special

situation. An adaptive mechanism is used to train the system on line, providing accurate

results when tested with actual data of the Virginia Utility. Another adaptive NN

technique, employing genetic algorithms in the design and training phase, was used by

Kung et al., [130] on the Taiwan power system.

ANNs have been integrated with several other techniques to improve their

accuracy. Chow and Leung, [131] for example, combined ANN with stochastic time-

series methods, in the form of non-linear autoregressive integrated (NARI) Model. They

implemented an ANN capable of weather compensation, based on NARI, to forecast

electric load in Hong Kong. Choueiki et al., [132] used weighted least-squares procedure

in the training phase of developing an ANN for load forecasting. Several other hybrid

methods involving ANNs in combination with fuzzy logic and expert systems are also on

record. It is very hard to keep track of all publications on load forecasting using NN,

which is currently a very active area of research. Neibur, [133], and Dorizzi and

Germond, [134] surveyed methods and applications of electrical load forecasting with

ANNs.

Oonsivilan and El-Hawary, [135] presented an approach for predicting electric

power system commercial load using a wavelet neural network. Their results showed that

wavelet NNs may outperform traditional architectures in approximation. Drenza et al.,

[136] presented a new ANN-based technique for STLF. The technique implemented

active selection of training data employing K-nearest neighbours concept. Excellent


results were reported using this technique. Yoo and Pimmel, [137] developed a self-

supervised adaptive NN to perform STLF for a large power system. They used the self-

supervised network to extract correlation feature from temperature and load data. Their

results showed low forecasting errors. Kandil et al., [138] used a multilayer perceptron (MLP) type ANN for STLF using real load and weather data. Leyan and Chen, [139] used a variable learning rate method combined with a quasi-Newton method to expedite the learning process of the ANN for STLF.

Nazarko and Styczynski, [140] presented load-modeling methods useful for long-term planning of power distribution systems using statistical clustering and an NN approach. Ijumba and Hunsley, [141] applied an ANN model to predict hourly peak demands of loads in a newly electrified area. Sinha and Mandal, [142] presented an ANN-based model for

bus-load prediction and dynamic state estimation in power systems. Drezga and Rahman,

[143] used phase-space concepts to embed electric load parameters, including

temperature and cycle variables, into ANN-based STLF. Drezga and Rahman, [144]

applied another ANN-based technique that features the following characteristics: (1)

selection of training data by the K-nearest neighbours concept, (2) pilot simulation to

determine the number of ANN units and (3) iterative forecasting by simple moving

average to combine local ANN predictions.

Bakirtzis et al., [145] developed an ANN based short-term load forecasting model

for the Energy Control Centre of the Greek Public Power Corporation. In the

development they used a fully connected three-layer feed-forward ANN and back

propagation algorithm was used for the training. Input variables included historical
hourly load data, temperature, and the day of the week. The model could forecast load

profiles from one to seven days. Kalaitzakis et al., [7] implemented a 24-hr-ahead load

prediction using an ANN model capable of parallel processing of data. In their

approach, n-neural blocks with a single output were implemented and trained separately

to provide the n hourly ahead load forecasts. According to this procedure, the requested

load for each specific hour is forecasted, not only using the load time-series for this

specific hour from the previous days, but also using the forecasted load data of the closer

previous time steps for the same day. Bassi and Olivares, [8] performed medium term

load forecasting using a time lagged feed-forward neural network (TLFN). Adepoju et

al., [24] implemented neural network based short-term (one hour ahead) electric load

forecasting and applied the model to the Nigerian power system for one hour in advance

load prediction. The load of the previous hour, the load of the previous day, the load of

the previous week, the day of the week, and the hour of the day all formed the inputs to

the models proposed by [24]. Sarangi et al., [22] performed short term load forecasting

using artificial neural network and compared the result with one obtained using genetic

algorithm based neural network implementing the same problem. Their model was tested with the daily load demand of Delhi state and performed very well.

Arroyo et al., [9] performed electricity load forecasting using a feed-forward artificial

neural network. Their model was trained and tested with the load data used during

European Network of Excellence on Intelligent Technologies for Smart Adaptive

System (EUNITE) 2001 competition. In terms of mean absolute percentage error (MAPE),

[9]’s model would have ranked second in the competition. Outside the EUNITE
competition, [146] proposed a method to forecast electrical load using weather ensemble

predictions. In the experiments they employed a feed-forward neural network with 10

nodes in the input layer, 10 nodes in the single hidden layer, and 1 node in the output

layer. The input layer nodes were the 7 different days of a week and 3 weather variables.

From the 7 nodes, 6 were used to represent different days in the week, and the last one

was used for the second week of the industrial closure in the summer. The 3 weather

variables employed were the effective temperature, cooling power of the wind, and

effective illumination. Four different methods were modeled and tested to determine what

influence the weather had on forecasting accuracy. The three neural-network-based methods that used weather data showed better prediction results than the one that did not use weather data. Ringwood et al., [147] modeled electricity load

forecasting using neural networks at three different time scales: hourly, weekly and

yearly. Using data from the national electricity demand in Ireland, ANN-based models

were supplied with parameters obtained from previous experiences with linear modeling

techniques and from manual forecasting methods. The last two approaches described in

the works of [146] and [147] show that including data from other sources may improve

prediction accuracy considerably. However, accuracy is obtained at the cost of making

the models more complex [9]. Yi et al., [10] implemented a neural network based

electricity load forecasting model. In their model, only pre-historical load data were used

as inputs to the model which employed wavelet transformation techniques to preprocess

as well as post process the load data. Their model showed a high forecasting accuracy.

Mosalman, [148] proposed an artificial neural network based model for one-day-ahead load
forecasting. Their model was trained using hourly historical load data, and daily historical

max/min temperature and humidity data. The results of testing the system on data from the Yazd utility were reported to be reasonably accurate.

From the above literature, it can be seen that much has been done on electric load forecasting, with the current trend being to use artificial intelligence methods to predict electric load. It can also be said that the artificial neural network model predominates in electric load forecasting today. This notwithstanding, this research remains a novel attempt to treat electric load forecasting using an artificial neural network in the context of the Nigerian power system. Although the work of [24] was also on electric load forecasting by means of an artificial neural network in the Nigerian power system, this particular model is a clear variant in the following senses: input variable homogeneity and model simplicity. Neural networks with non-homogeneous input sets, mixing analogue and digital variables with very different ranges of values and meanings, face weight adaptation problems over the training process [15]. The network architecture and training methods of the two models are also different.

CHAPTER THREE

RESEARCH DESIGN/METHODOLOGY

3.0. Introduction

This chapter is focused on the simulation design which includes (a) Research Data

(b) Data pre-processing (c) Construction of Network Architecture (d) Requirement of

minimum number of patterns, (e) Selection of input variables (f) model training and

simulation, and (g) generalization problem.

3.1. Research data

The data that was used in training and testing of the model proposed in this

research are the daily electric power supplied to Enugu State of Nigeria, as contained in the National Electric Power Authority New Haven, Enugu 132/33 kV transmission station daily hourly load reading sheets for the months of February and March 2011. A month's data suffices for the purpose of short-term electric load forecasting [24, 22], and hence for this research the load profile for the month of March 2011 was actually used. The reason for collecting two months' data is stated in the following subsection.

3.2. Data Pre – Processing

Although in theory all ANNs have arbitrary mapping capacities between sets of

variables, it is convenient to normalize the data before carrying out the training to

compensate for the inevitable scaling and variability differences between the variables

[8]. Neural network training can be made more efficient if certain preprocessing steps are

performed on the network inputs and targets [151].

Data pre-processing was in two stages. The first action on the load data was to find replacements for missing load values. Missing load values arose due to system collapse, earth faults, transformers being on soak, feeders opened for maintenance operations, etc. In such cases, load information for the same hour, weekday, and week of the preceding month was used to fill the gap. The calendar showed that the months of February and March 2011 were the most compatible for this data reconstruction, because the 1st days of both months fell coincidentally on the same day of the week, namely Tuesday.

The second operation on the data was performed to put the input values in the same scale.

The approach here was to normalize the mean and standard deviation of the training set.

This was implemented using the MATLAB function prestd, which normalizes the network inputs and targets so that they have zero mean and unit standard deviation.
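The effect of prestd can be reproduced in a few lines. The following Python sketch performs the same zero-mean, unit-standard-deviation scaling (the hourly load values are illustrative, not from the research data):

```python
def normalize(values):
    # Scale values to zero mean and unit standard deviation, as
    # MATLAB's prestd does for network inputs and targets.
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5  # sample std
    return [(v - mean) / std for v in values], mean, std

loads = [42.0, 55.0, 61.0, 48.0]  # illustrative hourly loads in MW
scaled, mean, std = normalize(loads)
```

The mean and standard deviation are returned as well, since they are needed later to map the network's normalized outputs back to megawatts.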

3.3. Choice of neural network paradigm

Several neural network paradigms have been implemented such as radial basis

function network, multi-layer perceptron network, Jordan recurrent network, cascade-

forward back propagation network, an Elman back propagation network, feed-forward

back propagation network, etc., but none has shown better forecasting accuracy than a

feed-forward input-delay back propagation network which was used for this application.

One special feature of the input-delayed feed-forward network is that it combines

conventional network topology (multi-layer perceptron) with good handling of time

dependencies by means of a gamma memory. This is a versatile mechanism that

generalizes the short term structures of memory, based on delays and recurrences. This

scheme allows smaller adjustments without requiring changes in the general network

structure.

3.4. Construction of network architecture

Structure of the network affects the accuracy of the forecast. Network

configuration mainly depends on the number of hidden layers, number of neurons in each

hidden layer, and the selection of activation function. Although there is no clear-cut guideline on how to select the architecture of an ANN, [149] suggest that for a three-layer ANN the number of hidden neurons can be selected by one of the following thumb rules:

a) (i -1) hidden neurons, where ‘i’ is the number of input neurons;

b) (i+1) hidden neurons, where ‘i’ is the number of input neurons;

c) For every 5 input neurons, 8 hidden neurons can be taken. This is developed

seeing the performance of a network with 5 inputs, 8 hidden and 1 output neuron;

d) Number of input neurons/number of output neurons;

e) Half the sum of input and output neurons; and

f) P/i neurons where ‘i’ is the input neurons and ‘p’ represents number of training

samples.
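For illustration, the thumb rules (a)–(f) can be tabulated for a small network. The Python helper below is a hedged sketch; the values i = 2 inputs, o = 1 output, and p = 17 training patterns are chosen only as an example:

```python
def hidden_neuron_candidates(i, o, p):
    # Candidate hidden-layer sizes from thumb rules (a)-(f), for
    # i input neurons, o output neurons, and p training samples.
    return {
        "a: i - 1": i - 1,
        "b: i + 1": i + 1,
        "c: 8 per 5 inputs": round(8 * i / 5),
        "d: i / o": i // o,
        "e: (i + o) / 2": (i + o) // 2,
        "f: p / i": p // i,
    }

candidates = hidden_neuron_candidates(i=2, o=1, p=17)
```

The spread of the resulting candidates shows why such rules can only serve as starting points for experimentation.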

Determining the number of hidden nodes in the ANN, and the number of epochs

necessary to obtain optimal prediction results is difficult in practice [9]. So, contrary to the heuristic rule-of-thumb approach of Gowri and Reedy, [149], this work employed an experimental approach to determine the network architecture and parameter settings that would give better prediction results. In each

case of choice of network architecture, the network performance shall always be

evaluated using the mean absolute percentage error, MAPE, defined as:

MAPE (%) = (1/N) × Σ (h = 1 to N) [ |Lf(h) − La(h)| / La(h) ] × 100        (3.1)

where Lf(h), La(h), and N denote the forecast load at the hth hour, the actual load for the same hour h, and the total number of hours, respectively.
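The MAPE criterion of equation 3.1 translates directly into code. This Python sketch uses illustrative forecast and actual load vectors:

```python
def mape(forecast, actual):
    # Mean absolute percentage error per equation 3.1
    n = len(actual)
    return 100.0 / n * sum(abs(f - a) / a for f, a in zip(forecast, actual))

# Illustrative two-hour example: 10 MW error against 110 MW and 200 MW actuals
err = mape([100.0, 210.0], [110.0, 200.0])
```

Because the error at each hour is scaled by that hour's actual load, MAPE weights a 10 MW miss more heavily during low-load hours than during peak hours.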

3.5. Requirement of minimum number of patterns

Gowri and Reedy, [149] suggest that the minimum numbers of patterns required

are half the number of input neurons and the maximum is equal to the product of number

of input neurons and number of hidden neurons or five times the number of input

neurons, whichever is less.

However, no justification can be given for such thumb rules. In this work, the performance of configurations is studied with eight training patterns and three test patterns.

In this research the idea is to use a total of 31 days' load data grouped into 24 input data sets, which in turn are divided into two subsets: the training and test subsets. The training set consists of the first 17 data sets and the test set consists of the last 7 data sets.

3.6. Selection of input Variables

Papalexopoulos et al., [118] stated that there is no general rule for the selection of

the number of input variables. It largely depends on engineering judgment and experience, and is carried out almost entirely on a trial and error basis. The importance of the factors

may vary for different consumers. According to Wang and Tsoukalas, [150] for most

consumers, historical data (such as weather data, weekend load and previous load data) is

most important for predicting demand in short term load forecasting (STLF). In practice,

it is neither necessary nor useful to include all the historical data as input. Autocorrelation

helps to identify the most important historical data. The importance of recent daily loads

in daily load prediction based on correlation analysis is described below [150]:

Table 3.1: Importance of recent daily loads in daily load prediction based on correlation

analysis.

Factor       X1   X2   X3   X4   X5   X6   X7

Importance    1    2    3    5    7    6    4

The first row of the table shows the recent daily load history.

X represents the load consumed and its subscripts show the time index, i.e., 1

means yesterday (one day ago), 2 means the day before yesterday (2 days ago), 7 means

the same day of last week (7 days ago). In the second row, the rank or importance of the

load is described. 1 means the most important and 7 means the least important.

This information guided the choice of input factors in this work. According to Table 3.1, when daily load prediction is performed, one should include X1, X2, X3, and X7 as the inputs. In this work, X1 and X7 were used as the input variables (i.e., the previous day's and the same day last week's load data).
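Building the (X1, X7) input pairs from a daily load series can be sketched as follows. This Python helper is illustrative only; the load list is made-up data and the target day needs at least seven days of history:

```python
def build_patterns(daily_loads):
    # Pair each target day's load with its X1 (previous day) and
    # X7 (same weekday last week) inputs.
    patterns = []
    for d in range(7, len(daily_loads)):
        x1 = daily_loads[d - 1]   # yesterday's load
        x7 = daily_loads[d - 7]   # same weekday last week
        patterns.append(((x1, x7), daily_loads[d]))
    return patterns

loads = [50, 52, 49, 55, 60, 58, 45, 51, 53]  # illustrative daily MW values
pats = build_patterns(loads)
```

Each entry pairs the two lagged inputs with the load actually observed on the target day, which is the form required for supervised training.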

This choice of input variables left us with a total of 24 data sets for both training and

testing of the model presented in this work.

3.7 Network Training

Once created, the network shall be trained with samples of the research data so it

can recognize the latent pattern in all the utility load data.

Two training styles exist, namely: incremental training, in which the weights and biases of the network are updated each time an input is presented to the network; and batch training, wherein the weights and biases are updated only after all of the inputs are presented to the network [151]. The choice of training style can be primarily influenced

by the network model, which can be either a static or dynamic network, and partly by the

input data structure which in turn can be presented to the network as either a set of

concurrent vectors or as a sequence of cell arrays.

The neural network proposed in this work, namely a feed-forward neural network with tap delay lines (NEWFFTD), is a dynamic network and was trained using the batch mode style of training. This was implemented using the MATLAB code train, because train has access to more efficient MATLAB training algorithms [152]. The input, although by nature a sequence, was presented to the network model as a concurrent vector. Nonetheless, the presence of the tap delay lines on the network input ports makes the model treat the input data as sequential.

3.7.1: Training algorithms

Several training algorithms are known and used in training feed-forward networks

which are basically back-propagation networks. Some of the training algorithms suffer

the problem of slow rate of convergence and will not be considered for the purpose of

this work. Among the faster training algorithms are variable learning rate back-propagation (with or without momentum) and resilient back-propagation; these two, being heuristic, were also not applied in this work.

The second category of fast algorithms uses standard numerical optimization techniques.

Three types of numerical optimization techniques for neural network training are:

conjugate gradient, quasi-Newton, and Levenberg-Marquardt. The Levenberg-Marquardt algorithm, although rated the fastest training algorithm, requires too much computer memory, especially when the number of biases and weights in the network increases, and so was not considered in this experiment [153]. Reduced-memory Levenberg-Marquardt was designed to offset this drawback, but even then the algorithm always computes the approximate Hessian matrix, and if the network is very large an out-of-memory problem may still exist [151].

In spite of the above, during the experiment trials were given to the algorithms listed below, with their speed of convergence and performance on the problem as the criteria for the final choice of algorithm: the scaled conjugate gradient algorithm; the conjugate gradient algorithms of Fletcher-Reeves, Powell-Beale, and Polak-Ribiere; and the one-step secant algorithm, a typical quasi-Newton algorithm. From the available literature, however, it was almost clear that conjugate gradient algorithms would do better in this work.

3.7.2: Back-propagation Implementation Strategy

The conjugate gradient algorithm is a clear variant of the standard back-propagation algorithm, viz. the gradient descent algorithm. Back-propagation is the basis for training a supervised neural network [10]. It was created by generalizing the Widrow-Hoff learning rule to multiple-layer networks and non-linear differentiable transfer functions [151]. The algorithm consists of a forward and a backward pass. The input data are transmitted through the network, layer by layer, up to the output layer, where a set of outputs is obtained. During this first forward pass, the weights of the network are set. The obtained outputs are compared with the desired output values and, in the backward pass, the difference between the desired and calculated outputs (the error) is used to adjust the synaptic weights of the net in order to reduce the level of error [10].

This is an iterative process, which continues until an acceptable level of error is obtained. Each pass of the network through the whole data set (a forward and a backward pass) is called an epoch. The network is trained in this way, the error being reduced with every epoch until an acceptable level is reached, so learning reduces to the minimization of the Euclidean error measure over the entire learning set. This method is called the error back-propagation training algorithm; the most effective learning approaches apply gradient information and use second-order optimization algorithms such as Levenberg-Marquardt or the conjugate gradient ones [154].

The implementation of the back-propagation algorithm and the equations used to calculate the various intermediate values and error terms are explained with the help of figure 3.1 below.
[Figure: input neurons connected through weights Wij to a hidden layer and then to the output layer]

Figure 3.1: Structure of a three-layered feed-forward type of ANN

The back-propagation algorithm can be explained in a simpler form as follows. The output from neuron i, O_i, is connected to the input of neuron j through the interconnection weight W_ij. Unless neuron k is one of the input neurons, the state of neuron k is:

O_k = f( Σ_j W_kj O_j )                                                3.2

where f(x) is the layer's non-linear activation function; the sigmoid function f(x) = 1/(1 + e^(−x)) is commonly used at all layers but the output layer, and the sum is over all neurons in the adjacent layer.

Let the target state of the output neuron be t_k; the error at the output neuron can then be defined as:

E = ½ Σ_k (t_k − O_k)²                                                 3.3

where neuron k is an output neuron.
The gradient descent algorithm adapts the weights according to the gradient of the error, i.e.,

ΔW_ij ∝ −∂E/∂W_ij = −(∂E/∂net_j)(∂net_j/∂W_ij)                         3.4

Specifically, we define the error signal as

δ_j = −∂E/∂net_j                                                       3.5

With some manipulation, we get the following general delta rule:

ΔW_ij = ε δ_j O_i                                                      3.6

where ε is the adaptation gain. δ_j is computed based on whether or not neuron j is in the output layer. If neuron j is one of the output neurons,

δ_j = (t_j − O_j) O_j (1 − O_j)                                        3.7

If neuron j is not in the output layer,

δ_j = O_j (1 − O_j) Σ_k δ_k W_jk                                       3.8

In order to improve the convergence characteristic, we can introduce a momentum term with momentum gain α into equation 3.6 [107]:

ΔW_ij(n + 1) = ε δ_j O_i + α ΔW_ij(n)                                  3.9

where n represents the iteration index.
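As an illustration only (the actual model in this work was built with the MATLAB toolbox), equations 3.2 and 3.6–3.9 can be sketched in Python for a tiny one-hidden-layer network trained on a single input/target pair; the layer sizes, gains, and data below are arbitrary:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(x, t, W1, W2, dW1, dW2, eps=0.5, alpha=0.75):
    """One forward/backward pass with the momentum delta rule (eqs 3.6-3.9)."""
    # Forward pass (eq 3.2): each neuron's state is f(sum of weighted inputs).
    h = [sigmoid(sum(W1[j][i] * x[i] for i in range(len(x)))) for j in range(len(W1))]
    o = [sigmoid(sum(W2[k][j] * h[j] for j in range(len(h)))) for k in range(len(W2))]
    # Output-layer error signal (eq 3.7).
    d_out = [(t[k] - o[k]) * o[k] * (1 - o[k]) for k in range(len(o))]
    # Hidden-layer error signal (eq 3.8), using the pre-update output weights.
    d_hid = [h[j] * (1 - h[j]) * sum(d_out[k] * W2[k][j] for k in range(len(o)))
             for j in range(len(h))]
    # Weight updates with momentum (eq 3.9): dW(n+1) = eps*delta*O + alpha*dW(n).
    for k in range(len(W2)):
        for j in range(len(h)):
            dW2[k][j] = eps * d_out[k] * h[j] + alpha * dW2[k][j]
            W2[k][j] += dW2[k][j]
    for j in range(len(W1)):
        for i in range(len(x)):
            dW1[j][i] = eps * d_hid[j] * x[i] + alpha * dW1[j][i]
            W1[j][i] += dW1[j][i]
    return 0.5 * sum((t[k] - o[k]) ** 2 for k in range(len(o)))  # error, eq 3.3

random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]  # 2 inputs, 3 hidden
W2 = [[random.uniform(-1, 1) for _ in range(3)]]                    # 1 output neuron
dW1 = [[0.0] * 2 for _ in range(3)]
dW2 = [[0.0] * 3]
errors = [train_step([0.3, 0.7], [0.9], W1, W2, dW1, dW2) for _ in range(200)]
print(errors[0], errors[-1])  # the error falls as training proceeds
```

Note how the momentum term reuses the previous weight change, smoothing the descent exactly as in equation 3.9.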

Some other variants of the gradient descent algorithm introduce, in addition to the momentum term, a learning rate parameter, η, as a way of achieving faster convergence, so the weights are adjusted using the following rule [7]:

W_ij(n + 1) = W_ij(n) + η δ_j O_i + α ΔW_ij(n)                         3.10

where η is the learning rate parameter, α is the momentum term, which ranges from 0 to 1, and δ is the negative derivative of the total square error with respect to the neuron's output.

The objective here is the minimization of the following error cost function over time t:

E_total = Σ_{t=1}^{t_final} E(t)                                       3.11

where t_final is the final moment of the neural network run. The steepest descent method is used for the minimization procedure.

All of the conjugate gradient algorithms start out by searching in the steepest descent direction (the negative of the gradient) on the first iteration; mathematically,

p_0 = −g_0                                                             3.12

A line search is then performed to determine the optimal distance to move along the current search direction:

x_{k+1} = x_k + α_k p_k                                                3.13

Then the next search direction is determined so that it is conjugate to the previous search directions. The general procedure for determining the new search direction is to combine the new steepest descent direction with the previous search direction:

p_k = −g_k + β_k p_{k−1}                                               3.14

The various versions of conjugate gradient algorithms are distinguished by the manner in which the constant β_k is computed [151]. For the Fletcher-Reeves update the procedure is:

β_k = (g_k^T g_k) / (g_{k−1}^T g_{k−1})                                3.15

This is the ratio of the norm squared of the current gradient to the norm squared of the previous gradient; T denotes the matrix transposition operation. For the Polak-Ribiere update, β_k is computed by:

β_k = (Δg_{k−1}^T g_k) / (g_{k−1}^T g_{k−1})                           3.16

This is the inner product of the previous change in the gradient with the current gradient divided by the norm squared of the previous gradient.

For all conjugate gradient algorithms, the search direction is periodically reset to the negative of the gradient. The standard reset point occurs when the number of iterations equals the number of network parameters (weights and biases), but there are other reset methods that can improve the efficiency of training. One such method was proposed by Powell, based on an earlier version proposed by Beale. In this technique a restart occurs if there is very little orthogonality left between the current gradient and the previous one, which is tested with the inequality:

|g_{k−1}^T g_k| ≥ 0.2 ‖g_k‖²                                           3.17

If this condition is satisfied, the search direction is reset to the negative of the gradient.
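As a sketch of equations 3.12–3.17 (illustrative Python, not the MATLAB toolbox routines used in this work), the loop below minimizes a small quadratic f(x) = ½xᵀAx − bᵀx with the Polak-Ribiere update and the Powell-Beale restart test; the exact line search used for α_k is valid only because f is quadratic:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def cg_minimize(A, b, x, iters=50):
    """Conjugate gradient on f(x) = 0.5*x'Ax - b'x, whose gradient is Ax - b."""
    g = [gi - bi for gi, bi in zip(matvec(A, x), b)]
    p = [-gi for gi in g]                        # eq 3.12: first step is steepest descent
    for _ in range(iters):
        if dot(g, g) < 1e-18:                    # gradient has vanished: done
            break
        Ap = matvec(A, p)
        alpha = -dot(g, p) / dot(p, Ap)          # exact line search for eq 3.13
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        g_new = [gi - bi for gi, bi in zip(matvec(A, x), b)]
        # Powell-Beale restart test (eq 3.17): little orthogonality left -> reset.
        if abs(dot(g, g_new)) >= 0.2 * dot(g_new, g_new):
            beta = 0.0                           # search direction reset to -gradient
        else:
            # Polak-Ribiere update (eq 3.16).
            beta = dot([gn - go for gn, go in zip(g_new, g)], g_new) / dot(g, g)
        p = [-gn + beta * pi for gn, pi in zip(g_new, p)]  # eq 3.14
        g = g_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = cg_minimize(A, b, [0.0, 0.0])
print(x)  # converges to the solution of Ax = b, i.e. [1/11, 7/11]
```

Replacing the Polak-Ribiere line with the Fletcher-Reeves ratio of equation 3.15 gives the CGF variant of the same loop.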

For any chosen training algorithm, the training approach adopted in this research was to present the training data, which consist of 17 input/target sets, to the model one set at a time until the last set had been presented, after which the entire training data set was presented to the model at once. The model then adapts its weights and biases to suit the trend of all the data sets that participated in the training exercise. The network response to the test data sets, input sets that never participated in the training exercise, was then simulated; during this testing, the expected targets were not submitted to the model. Both the network response and the desired target were plotted in the same figure window for comparison, and the model performance for the particular setting was evaluated.

3.8 Improving Generalization

One of the major problems that occur during neural network training is over-fitting, also called the problem of poor generalization. The error on the training set is driven to a very small value, but when new data are presented to the network the error is large: the network has memorized the training examples, but it has not learned to generalize to new situations [151], [8].

Four techniques for improving network generalization are as follows:

• Use a network that is just large enough to provide an adequate fit. The problem with this approach, however, is that it is difficult to know beforehand how large a network should be for a specific application.

• Collect more data and increase the size of the training set, and so eliminate the problem of over-fitting entirely [153]. This holds because, if the number of parameters in the network is much smaller than the total number of points in the training set, there is little or no chance of over-fitting. This is tedious, especially in the Nigerian power system, where the required data were raw and available only in hard copy form.
• Early stopping technique: in this technique, the available data are divided into three subsets. The first subset is the training set, which is used for computing the gradient and updating the network weights and biases. The second subset is the validation or cross-validation set, and the error on the validation set is monitored during the training process. The validation error will normally decrease during the initial phase of training, as does the training set error; however, when the network begins to over-fit the data, the error on the validation set will typically begin to increase. When the validation error increases for a specified number of iterations, the training is stopped, and the weights and biases at the minimum of the validation error are returned. The problem with this technique is sizing the subsets properly.

• The last technique is called the regularization method. This involves modifying the performance function, which is normally chosen to be the sum of squares of the network errors on the training set. A typical performance function used for training a feed-forward neural network is the mean sum of squares of the network errors:

F = mse = (1/N) Σ_{i=1}^{N} e_i² = (1/N) Σ_{i=1}^{N} (t_i − a_i)²       3.18

It is possible to improve generalization if we modify the performance function by adding a term that consists of the mean of the sum of squares of the network weights and biases [151], thus:

msereg = γ·mse + (1 − γ)·msw                                           3.19

where γ is the performance ratio, and

msw = (1/n) Σ_{j=1}^{n} w_j²                                           3.20

Using this performance function will cause the network to have smaller weights and biases, and this will force the network response to be smoother and less likely to over-fit. The problem with regularization is that it is difficult to determine the optimum value for the performance ratio parameter.
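Equations 3.18–3.20 translate directly into code; a minimal Python sketch with made-up error and weight values:

```python
def msereg(errors, weights, gamma):
    """Regularized performance function: msereg = gamma*mse + (1-gamma)*msw
    (equations 3.18-3.20)."""
    mse = sum(e * e for e in errors) / len(errors)      # eq 3.18
    msw = sum(w * w for w in weights) / len(weights)    # eq 3.20
    return gamma * mse + (1.0 - gamma) * msw            # eq 3.19

errs = [0.2, -0.1, 0.05]       # illustrative network errors t_i - a_i
wts = [0.5, -1.2, 0.8, 0.1]    # illustrative network weights
print(msereg(errs, wts, 0.5))  # 0.5 is the default performance ratio
```

Larger γ weights the data-fit term more heavily; smaller γ penalizes large weights more strongly, which is what drives the smoother, less over-fitted response described above.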

One approach to determining the optimum value of the performance ratio is the Bayesian framework of David MacKay. Bayesian regularization, otherwise called automated regularization, is implemented in MATLAB as the training function trainbr. In this research, the idea is to tackle the problem of generalization, or over-fitting, by the regularization technique. However, rather than use the Bayesian regularization approach to determine the optimal value of the performance ratio parameter, this was done by experimentation. The default value of the performance ratio in the MATLAB toolbox is 0.5, so this value was increased or decreased until the optimum value in terms of the model's forecasting accuracy was found; that optimum value was then used for this experiment.
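For comparison, the early-stopping alternative described above reduces to a simple monitoring loop. The Python sketch below is purely illustrative: the training step and validation-error curve are stand-ins, and `patience` plays the role of the "specified number of iterations":

```python
def train_with_early_stopping(step, val_error, max_epochs=1000, patience=10):
    """Stop when the validation error has not improved for `patience` epochs;
    return the best epoch and its validation error."""
    best_err, best_epoch, waited = float("inf"), 0, 0
    for epoch in range(max_epochs):
        step()                       # one training epoch (updates the weights)
        err = val_error()            # error on the held-out validation set
        if err < best_err:
            best_err, best_epoch, waited = err, epoch, 0
        else:
            waited += 1
            if waited >= patience:   # validation error keeps rising: stop
                break
    return best_epoch, best_err

# Toy validation curve: falls, then rises once over-fitting sets in at epoch 50.
curve = [abs(e - 50) / 50.0 + 1e-4 for e in range(1000)]
state = {"epoch": -1}
def step():
    state["epoch"] += 1
def val_error():
    return curve[state["epoch"]]

best_epoch, best_err = train_with_early_stopping(step, val_error)
print(best_epoch, best_err)  # stops soon after the minimum at epoch 50
```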

CHAPTER FOUR

EXPERIMENTAL RESULTS AND DISCUSSIONS

4.0. Introduction

Forecast results and statistical properties obtained from the application of the

developed Short Term Load Forecasting (STLF) ANN model on the load data of New

Haven Enugu transmission station, a typical Nigerian power system are presented and

discussed in this chapter.

The STLF results for the utility of New Haven Enugu, Nigeria produced by the ANN

structure proposed by this research were analyzed on the basis of the well-known

statistical index, mean absolute percentage error (MAPE) stated in equation 3.1 and

repeated below:

MAPE(%) = (1/N) Σ_{i=1}^{N} ( |L_actual,i − L_forecast,i| / L_actual,i ) × 100       4.1

The errors have been calculated separately for the learning and testing data. Here we present only the testing errors, related to the data not taking part in the learning/training process, since this information is the most important from the practical point of view.
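Equation 4.1 is simple to express in code; a Python sketch with illustrative load values (not the New Haven data):

```python
def mape(actual, forecast):
    """Mean absolute percentage error of equation 4.1."""
    n = len(actual)
    return 100.0 / n * sum(abs(a - f) / a for a, f in zip(actual, forecast))

actual = [60.0, 55.0, 80.0]    # hypothetical actual loads in MW
forecast = [57.0, 58.3, 76.0]  # hypothetical forecast loads in MW
print(round(mape(actual, forecast), 2))  # → 5.33
```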

4.1 Selection of Network Architecture and Parametric Values

Different network architectures were experimented with; their performances in terms of MAPE on the test data set, with the other network parameters fixed, were noted and recorded in table 4.1 below.

Table 4.1: Tested network architectures and their performances in terms of MAPE.

Layer             MAPE (%) on test sets
architecture      1        2        3        4        5        6        7        Average
5-1 19.46 16.98 18.87 17.65 16.44 19.31 18.01 18.10

120-1 5.78 5.85 6.21 3.77 3.39 5.11 5.18 5.04

3-2-1 19.46 16.98 18.87 17.65 16.44 19.31 18.01 18.10

5-7-1 19.46 16.98 18.87 17.65 16.44 19.31 18.01 18.10

5-11-1 19.46 16.98 18.87 17.65 16.44 19.31 18.01 18.10

5-17-1 12.03 10.94 12.50 11.13 9.31 12.39 11.19 11.36

5-35-1 6.74 7.22 7.95 5.47 4.58 6.95 6.55 6.49

15-35-1 5.60 5.98 6.36 3.83 3.57 4.78 4.95 5.01

22-38-1 5.58 5.77 6.12 3.73 3.64 4.64 4.80 4.90

27-45-1 5.57 5.78 6.06 3.66 3.70 4.72 4.75 4.89

35-50-1 5.57 5.89 6.00 3.70 3.76 4.79 4.72 4.92

30-48-1 5.57 5.83 6.04 3.68 3.74 4.75 4.73 4.91

28-38-1 5.58 5.73 6.08 3.69 3.66 4.68 4.77 4.88

35-45-1 5.57 5.86 6.02 3.68 3.74 4.77 4.73 4.91

22-42-1 5.57 5.74 6.11 3.72 3.65 4.65 4.78 4.89

40-60-1 5.58 5.96 5.98 3.74 3.81 4.83 4.70 4.94

DISCUSSION: From table 4.1, it can be seen that the model architecture that gave optimal performance in terms of forecast error is the 28-38-1 architecture, i.e., a three-layered ANN model with 28 neurons in the input/first layer, 38 neurons in the second/hidden layer and a single neuron in the output layer. It can also be observed from table 4.1 that increasing or decreasing the number of neurons in a layer by one or a few neurons does not necessarily affect the model performance significantly, and that different architectures can give the same results, as is the case with the 30-48-1 and 35-45-1 architectures in our experiment, which gave the same forecasting error of 4.91%.

4.2 Choice of Training Algorithm

The time elapsed in training and testing the optimal network architecture, as well as the model performance with the different training algorithms under common settings of the other model parameters, was recorded and is partly presented in table 4.2 below.

DISCUSSION: From table 4.2 it can be seen that the scaled conjugate gradient algorithm was the fastest in this work, but in terms of model performance the conjugate gradient algorithm using the Polak-Ribiere restart technique, or the one using the Fletcher-Reeves restart technique, was the best. Since the difference between the training time with the Polak-Ribiere restart technique and that of the fastest algorithm, namely the scaled conjugate gradient algorithm, is not large, the former was chosen as the training algorithm for this model.

Table 4.2: Various training algorithms and their performances when used with our model

S/No  Algorithm  Description of algorithm                                     Time elapsed (s)  MAPE (%)
1     SCG        Scaled conjugate gradient algorithm                          42.19             4.89
2     CGB        Conjugate gradient using Powell-Beale restart technique      50.51             4.89
3     CGF        Conjugate gradient using Fletcher-Reeves restart technique   160.40            4.88
4     CGP        Conjugate gradient using Polak-Ribiere restart technique     70.12             4.88
5     OSS        One step secant algorithm (a quasi-Newton algorithm)         501.78            4.89

Tap delay line: when the length of the tap delay vector was varied during the experiment, the observed effect was an increased training time without significant improvement in the model performance. Table 4.3 below shows the time elapsed and the model performance for varying lengths of the time delay vector.

Table 4.3: Effect of time delay vector on model accuracy and training time

S/No  Time delay vector  Length of vector  Time elapsed (s)  MAPE (%)
1     1                  1                 15.42             18.10
2     0                  1                 138.72            4.88
3     [0 1]              2                 118.69            4.88
4     [0:4]              5                 137.64            4.89
5     [0:10]             11                138.63            4.90
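The time delay vectors in table 4.3 specify which lagged samples of the load series feed the network; for example, [0 1] pairs each hour's load with the previous hour's. A minimal Python sketch with made-up load values:

```python
def tapped_delay_inputs(series, delays):
    """Build input vectors from a series using a tap delay line."""
    start = max(delays)  # skip the hours that lack enough history
    return [[series[t - d] for d in delays] for t in range(start, len(series))]

hourly_load = [62.0, 58.0, 55.0, 61.0, 70.0]  # illustrative MW values
print(tapped_delay_inputs(hourly_load, [0, 1]))
# → [[58.0, 62.0], [55.0, 58.0], [61.0, 55.0], [70.0, 61.0]]
```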

Epoch: the number of epochs was set to a high initial value, 10,000 in this case, and the network performance was monitored. In particular, the epoch at which training stopped was observed, and with this as a guide the number of epochs was tuned down until an optimal value of 3000 was found.

Problem of generalization: as stated in the experimental procedure of this work, the problem of over-fitting was handled by the regularization technique, with the optimal value of the performance ratio parameter found by trial and error. Table 4.4 below shows the effect of the choice of performance ratio parameter on the model accuracy. From table 4.4, it can be seen that the model forecasting accuracy was best with the performance ratio parameter set to 0.01.

Table 4.4: Variation in performance ratio parameter with model accuracy

γ        MAPE (%)
0.9      7.20
0.6      5.23
0.5      5.24
0.2      5.15
0.1      5.03
0.09     5.03
0.05     5.01
0.01     4.88
0.001    10.49

Results: Table 4.5 below shows the optimal values of the various model parameters used in this research, while the results of testing the trained model on the 24-hour load curves of New Haven Enugu for Friday, 25th March 2011 through Thursday, 31st March 2011 are depicted in figures 4.1 through 4.8. These are the results of testing the trained neural network on new data that never participated in the training exercise, for the 24 hours of each day over a one-week period, presented in graphical form. Each graph shows a plot of both the actual and forecast load values in MW against the hour of the day.

Table 4.5: Optimal values of our model parameters

Parameter                             Optimal value
Architecture/structure                28-38-1
Epochs                                3000
Performance ratio                     0.01
Activation function (hidden layer)    Tan-sigmoid
Activation function (output layer)    Purelin
Error tolerance                       0.0001
Stopping criteria                     No. of iterations >= epochs, or network error <= tolerance
Training algorithm                    Conjugate gradient with Polak-Ribiere restart technique
Learning rate                         0.001
Momentum constant                     0.75

[Figure: plot of actual and predicted load values in MW against time in hours]
Figure 4.1: Test result for Friday, 25th March 2011

[Figure: plot of actual and predicted load values in MW against time in hours]
Figure 4.2: Test result for Saturday, 26th March 2011

[Figure: plot of actual and predicted load values in MW against time in hours]

Figure 4.3: Test result for Sunday, 27th March 2011


plot of actual and predicted load values against time in hours
90
rh- predicted values
80 bd-- actual values

70

60
load in Megawatt

50

40

30

20

10

0
0 5 10 15 20 25
time in hours

Figure 4.4: Test result for Monday, 28th March 2011


[Figure: plot of actual and predicted load values in MW against time in hours]

Figure 4.5: Test result for Tuesday, 29th March 2011

[Figure: plot of actual and predicted load values in MW against time in hours]

Figure 4.6: Test result for Wednesday, 30th March 2011

[Figure: plot of actual and predicted load values in MW against time in hours]

Figure 4.7: Test result for Thursday, 31st March 2011


[Figure: plot of actual and predicted load values in MW against time in hours over the full 168-hour week]

Figure 4.8: Test result for 7 days (Friday, 25th – Thursday, 31st March 2011)
The mean absolute percentage error of this model on the test sets has been calculated and tabulated in table 4.6.

Table 4.6: MAPE for the model on the test set

Test Days MAPE (%)

Fri, 25th March 5.58

Sat, 26th March 5.36

Sun, 27th March 6.13

Mon, 28th March 2.93

Tue, 29th March 2.97

Wed, 30th March 3.25

Thu, 31st March 3.72

Fri, 25th to Thu, 31st March 4.27

DISCUSSION: From table 4.6 above, it can be seen that the neural network showed higher forecasting error on days when people have specific start-up activities, such as Fridays, or variant activities, such as Sundays. This is probably because of the pick-up loads associated with such days.

With the aid of figure 4.8, the average MAPE for this model is obtained as 4.27%. This indicates a high degree of forecasting accuracy for this model in spite of its simplicity in both architecture and input variables. It should be noted that the forecasting errors reported for various ANN load forecasting models in the literature range from 1.18% to 12.8%, so the accuracy of the model presented here is not the overall best so far. There is, however, no basis for direct comparison of the models' results, especially since the data used for the various experiments are neither from the same sources nor the same in values.

In summary, this short-term load forecasting model, a feed-forward artificial neural network with input delay trained using the error back-propagation algorithm, achieved an average forecasting error of 4.27% when trained and tested on one month of hourly load data obtained from the New Haven Enugu 132/33 kV transmission station for March 2011.

CHAPTER FIVE

CONCLUSION AND SUGGESTIONS FOR FURTHER RESEARCH

5.1 Conclusion

This work and its results show that the ANN represents a powerful tool for decision making in electric power utility companies.

The feed-forward time-delay (NewFFTD) network model used for one-day-ahead short-term load forecasting at the New Haven Enugu transmission station, a typical Nigerian power system, performed well, and reasonable prediction accuracy was achieved. Its forecasting reliability was evaluated by computing the mean absolute percentage error between the exact and predicted values. The results suggest that an ANN model with the developed structure can predict well with small error, and that such a neural network can be an important tool for short-term load forecasting. Our experimental results also show that a simple, appropriately tuned ANN-based prediction model can outperform other, more complex models.

We conclude, therefore, that this research is a novel attempt to deal with load forecasting in the Nigerian power system by means of an artificial neural network, with emphasis on model simplicity and input data homogeneity without compromising forecasting accuracy.

5.2 Suggestions for Further Research

In spite of the delimitations of this work and the observations made in the course of the experiment, we make the following suggestions for further studies:

• The neural network typically shows higher error on days when people have specific start-up activities, such as Fridays (for example, day 1 of the test set in table 4.6), or variant activities, such as Sundays, which are like holidays in the eastern part of the country (for example, day 3 of the test set in table 4.6). In order to obtain more accurate results, one may need a more sophisticated topology for the neural network which can discriminate start-up days from other days; in other words, a model with special holiday encoding may perform this task better.

• The determination of values of weights and biases that promote fast convergence of the training algorithm is still an issue in load forecasting by means of neural networks, so a hybrid approach may be necessary. The use of genetic algorithms or swarm optimization techniques to determine the weights of a back-propagation network for short-term load forecasting may help improve neural network performance in load forecasting.

• Due to time and financial constraints, we narrowed our work to the load profile of the New Haven, Enugu, Nigeria transmission station. It would be reasonable to test the model on load data obtained from a larger part of the Nigerian power system.

• Finally, since the effects of exogenous variables on model accuracy are still in contention today, the development of a neural network model that, in addition to historical load data, can take other exogenous variables as input may be necessary for improved forecasting accuracy.

REFERENCES

[1] Vadhera, S.S., Power System Analysis and Stability, Khana Publishers, NaiSarak,
Delhi, 2004.

[2] The Nigerian Dailies, Daily Sun Newspaper, p45, Monday January 10, 2010.

[3] Bunn, D.W, and Farmer, E. D., Review of Short-term Forecasting Methods in the
Electric Power Industry, New York: Wiley, pp13-30, 1985.

[4] Alfares, H.K., and Nazeeruddin, M., "Electric Load Forecasting: Literature Survey and Classification of Methods", International Journal of Systems Science, Vol. 33(1), pp. 23-34, 2002.

[5] Haida, T., and Muto, S., Regression based Peak Load Forecasting Using a Transformation Technique, IEEE Transactions on Power Systems, Vol. 9, pp. 1788-1794, 1994.

[6] Srinivasan, D., and Lee, M. A., Survey of Hybrid Fuzzy Neural Approaches to
Electric Load Forecasting, Proceedings of IEEE International Conference on
Systems, Man and Cybernetics, Part 5, Vancouver, BC, pp. 4004-4008, 1995.

[7] Kalaitzakis, K., Stavrakakis, G. S., Anagnostakis, E. M. “short-term load forecasting


based on artificial neural networks parallel implementation”, Electric Power
Systems Research 63, pp.185-196, 2002.

[8] Bassi, D., and Olivares, O., “Medium Term Electric Load Forecasting Using TLFN
Neural Networks”, International Journal of Computers, Communications and
Control Vol. 1 N0. 2. pp. 23-32., 2006.

[9] Arroyo, D. O., Skov, M.K., and Huynh, Q., “Accurate Electricity Load Forecasting
with Artificial Neural Networks”, Proceedings of the 2005 International
Conference on Computational Intelligence for modeling, control and Automation,
and International conference on Intelligent Agents, Web Technologies and Internet
Commerce, 2005.

[10] Yi, M.M., Linn, K.S., and Kyaw, M., “Implementation of Neural Network Based
Electricity Load Forecasting”, World Academy of Science, Engineering and
Technology, 42, 2008.

[11] Chakraborty, K., Mehrota, K., Mohan, C. K., and Ranka, S., Forecasting the Behaviour of Multivariate Time Series Using Neural Networks, Neural Networks, Vol. 5, pp. 961-970, 1992.
[12] Kolarik, T., and Rudorfer, G., Time Series Forecasting Using Neural Networks,
Proceedings of the International Conference on APL., Antwerp, Belgium, pp.86-
94, 1994.

[13] Douglas V. H., Microprocessor and Interfacing: Programming and Hardware,


(2ndedn.), Tata McGraw-Hill Publishing Company Ltd, New Delhi, 1999.

[14] Siwek, K., Osowski, S., Szupiluk, R., "Ensemble Neural Network Approach for Accurate Load Forecasting in a Power System", International Journal of Applied Mathematics & Computer Science, Vol. 19, No. 2, pp. 303-315, 2009.

[15] Marin, F. J., Garcia-Lagos F., Joya, G., and Sandoval. F., “Global Model for Short
Term Load Forecasting using Artificial Neural Networks”, IEEE proceedings on
Power systems Generation, Transmission and Distribution Vol. (2), March 2002.

[16] Lee, K., Cha, Y., Ku, C., A study on Neural Networks for Short Term Load
Forecasting, Proceedings of ANNPS ’91, Seattle, pp.26-30, July 1991.

[17] Kariniotakis, G. N., Kosmatopoulos, E., and Stavrakakis, G., Load Forecasting using
Dynamic High-order Neural Networks, Proc. IEEE-NTUA, Joint Int. Power Conf.,
Athens, Greece, pp.801-805, 1993.

[18] Drezga, I., and Rahman, S., Short-term Load Forecasting with Local ANN
Predictors, IEEE Transactions on Power Systems, vol.14(3), 1996.

[19] Kandil, M. S., El-Debeiky, S. M., and Hasanien, N. E., Overview and Comparison of
Long-term Forecasting Techniques for a Fast Developing Utility: Part 1, Electric
Power Systems, Res. 58, pp.11-17, 2001.

[20] Zhang, B. –L., and Dong, Z. –Y, An Adaptive Neural-wavelet Model for Short-term
Load Forecasting, Electric Power Systems, Res. 59, pp.121-129,2001.

[21] Chang, M. -Wei, Chen, B. –Juen, and Lin, C. –Jen, “EUNITE Network Competition:
Electricity Load Forecasting”, EUNITE Competition, Available:
http://neuron.tuke.sk/competition/index.php.

[22] Sarangi, P.K., Singh, N., Chanhan, R. K., and Singh, R., “Short Term Load
Forecasting Using Artificial Neural Network: A Comparison with Genetic
Algorithm Implementation”, Asian Research Publishing Network (ARPN) Journal
of Engineering and Applied Sciences Vol. 4 (9), November 2009.

[23] Chakrabarti, A., and Halder, S., Power system Analysis. Operation and Control,
(2ndedu), PHI learning Private Ltd, New Delhi, 2008.
[24] Adepoju, G. A., Ogunjuyigbe, S. O. A., and Alawode, K. O., "Application of Neural Network to Load Forecasting in Nigerian Electrical Power System", Pacific Journal of Science and Technology, 8(1), pp. 68-72, 2007.

[25] Feinberg, E. A., Hajagos, J. T., and Genethliou, D., “Statistical Load Modelling”,
Proceedings of the 7th IASTED International Conference: Power and Energy
Systems, pp.88-99, Palm Springs, CA, 2003.

[26] Gavrilas, M., Neural Network based forecasting for Electricity Markets, Technical
University of Iasi, Romania, 2002.

[27] Weedy, B. M., and Cory. B.J., Electric Power Systems, (4thedn.), John Wiley and
Sons, New York, 1998.

[28] Matthewman, P. D., and Nicholson, H., Techniques for load prediction in electricity
supply industry, Proceedings of the IEEE, 115, 1451-1457, 1968.

[29] Abu El-Magd, M.A., and Sinha, N.K., Short term load demand modeling and
forecasting, IEEE transactions on Systems Man and cybernetics, 12, 370-382,
1982.

[30] Gross, G., and Galiana, F. D., Short term load forecasting, Proceedings of the IEEE,
75, 1558-1573, 1987.

[31] Moghram, I., and Rahman, S., Analysis and evaluation of five short-term load
forecasting techniques, IEEE Transactions on Power Systems, 4, 1484-1491, 1989.

[32] Pabla, A. S., Electric Power Distribution, (5thedn), Tata McGraw-Hill Publishing
Company Ltd, New Delhi, 2004.

[33] Gellings, C. W., “Demand Forecasting for Electric Utilities”, The Fairmont Press,
Lilburn, GA, 1996.

[34] Sullivan, R. L.: “Power System Planning”, McGraw-Hill International Book


Company, New York, 1977.

[35] Desphande, M. V.: “Elements of Electrical Power Station Design”, A. H. Wheeler


and Co. (P) Ltd., Allahabad (India), 1979.

[36] Mbamalu, G. A. N., and El-Hawary, M. E., Load forecasting via suboptimal
autoregressive models and iteratively recursive least squares estimation, IEEE
Transactions on Power Systems, 8, 343-348, 1993.

[37] Barakat, E. H., Qayyum, M. A., Hamed, M.N., and Al-Rashed, S.A., Short-term
peak demand forecasting in fast developing utility with inherent dynamic load
characteristics. IEEE Transactions on Power Systems, 5, 813-824, 1990.

[38] Papalexopoulos, A. D., and Hesterberg, T. C., A regression-based approach to short-


term load forecasting, IEEE Transactions on Power Systems, 5, 1214-1221, 1990.

[39] Haida, T., Muto, S., Takahashi, Y., and Ishi, Y., Peak load forecasting using
multiple-year data with trend data processing techniques, Electrical Engineering in
Japan, 124, 7-16, 1998.

[40] Varadan, S., and Makram, E. B., Harmonic load identification and determination of
load composition using a least squares method, Electric Power Systems Research,
37, 203-208, 1996.

[41] Hyde, O., and Hodnett, P. F., modeling the effect of weather in short-term electricity
load forecasting, Mathematical Engineering in Industry, 6, 155-169, 1997a.

[42] Hyde, O., and Hodnett, P. F., Adaptable automated procedure for short-term
electricity load forecasting, IEEE Transactions on Power Systems, 12, 84-94,
1997b.

[43] Nazarko, J., Estimating substation peaks from research data. IEEE Transactions on
Power Delivery, 12, 451-456, 1997.

[44] Al-Garni, A.Z., Ahmed, Z., Al-Nassar, Y.N., Zubair, S.M., and Al-Shehri, A.,
Model for electric energy consumption in Eastern Saudi Arabia, Energy sources,
19, 325-334, 1997.

[45] Charytonuk, W., Chen, M.S., and Van Olinda, P., Nonparametric regression based
short-term load forecasting, IEEE Transactions on Power Systems, 13, 725-730,
1998.

[46] El-Keib, A.A., Ma, X., and Ma, H., Advancement of statistical based modeling for
short-term load forecasting, Electric Power Systems Research, 35, 51-58, 1995.

[47] Infield, D. G., and Hill, D. C., Optimal smoothing for trend removal in short term
electricity demand forecasting, IEEE Transactions on Power Systems, 13, 1115-
1120, 1998.

[48] Mbamalu, G. A. N., and El-Hawary, M. E., Load forecasting via suboptimal
seasonal autoregressive models and iteratively reweighted least squares
estimation, IEEE Transactions on Power Systems, 8, 343-348, 1992.
[49] Lu, Q. C., Grady, W. M., Crawford, M. M., and Anderson, G. M., An adaptive non-
linear predictor with orthogonal escalator structure for short-term load forecasting,
IEEE Transactions on Power Systems, 4, 158-164, 1989.

[50] Grady, W.M., Groce, L. A., Huebner T.M., Lu, Q. C., and Crawford, M.M.,
Enhancement, implementation and performance of an adaptive load forecasting
technique, IEEE Transactions on Power Systems, 6, 450-456, 1991.

[51] McDonald, J. R., Lo, K. L., and Sherwood, P. M., Application of short-term adaptive
forecasting techniques in energy management for the control of electric load,
Transactions of the Institute of Measurement and Control, 11, 79-91, 1989.

[52] Park, J. H., Park, Y. M., and Lee, K. Y., Composite Modeling for adaptive short-
term load forecasting, IEEE Transactions on Power Systems, 6, 450-456, 1991b.

[53] Fan, J. Y., and McDonald, J. D., A real-time implementation of short-term load
forecasting for distribution power system, IEEE Transactions on Power Systems,
9, 988-993, 1994.

[54] Paarman, L. D., and Najar, M. D., Adaptive online load forecasting via time series
modeling, Electric Power Systems Research, 32, 219-225, 1995.

[55] Zheng, T., Girgis, A. A., and Makram, E. B., A hybrid wavelet-Kalman filter
method for load forecasting, Electric Power Systems Research, 54, 11-17, 2000.

[56] Aggarwal, K. K., Control Systems Analysis and Design, Khanna Publishers, 2B,
Nath Market, NaiSarak, Delhi, 2004.

[57] Liu, K., Subbarayan, S., Shoults, R. R., Manry, M. T., Kwan, C., Lewis, F. L., and
Naccarino, J., Comparison of very short-term load forecasting, IEEE Transactions
on Power Systems, 11, 877-882, 1996.

[58] Huang, S.R., Short-term load forecasting using threshold auto-regressive models,
IEEE Proceedings: Generation, Transmission and Distribution, 144, 477-481,
1997.

[59] Zhao, H., Ren, Z., and Huang W., Short-term load forecasting considering weekly
period based on periodical auto-regression, Proceedings of the Chinese Society of
Electrical Engineers, 17, pp. 211-213, 1997.

[60] Barakat, E. H., Al-Qassim, J. M., and Al-Rashed, S. A., New model for peak
demand forecasting applied to highly complex load characteristics of a fast
developing area, IEE proceedings-C, 139, 136-149, 1992.

[61] Chen, J. -F., Wang, W. -M., and Huang, C. M., Analysis of an adaptive time-series
autoregressive moving average (ARMA) model for short-term load forecasting,
Electric power systems Research, 34, 187-196, 1995.

[62] Elrazaz, Z. S., and Mazi, A. A., Unified weekly peak load forecasting for fast
growing power system, IEE Proceedings-C, 136, 29-41, 1989.

[63] Juberias, G., Yunta, R., Garcia-Morino, J., and Mendivil, C., A new ARIMA model
for hourly load forecasting, IEEE Transmission and Distribution Proceedings, 1,
314-319, 1999.

[64] Yang, H. –T., Huang, C. -M., and Huang, C. –L., Identification of ARMAX model
for short term load forecasting: an evolutionary programming approach, IEEE
Transactions on Power Systems, 11, 403-408, 1996.

[65] Yang, H. –T., and Huang, C. -M., New short-term load-forecasting approach using
self-organizing fuzzy ARMAX models, IEEE Transactions on Power Systems, 13,
217-225, 1998.

[66] Ma, X., El-Keib, A. A., Smith, R. E., and Ma, H., Genetic algorithm based approach
to thermal unit commitment of electric power systems, Electric Power Systems
Research, 34, 29-36, 1995.

[67] Lee, D. G., Lee, B. W., and Chang, S. H., Genetic programming model for long-term
forecasting of electric power demand, Electric Power Systems Research, 40, 17-
22, 1997.

[68] Hsu, Y. Y., Fuzzy expert systems: an application to short-term load forecasting,
IEEE proceedings –C, 139, 471-477, 1992.

[69] Liang, R. H., and Hsu, Y. Y., Fuzzy linear programming: an application to
hydroelectric generation scheduling, IEEE proceedings on Generation,
Transmission and Distribution, 141, 568-574, 1994.

[70] Al-Anbuky, A., Bataineh, S., and Al-Aqtash, S., Power demand prediction using
fuzzy logic, Control Engineering Practice, 3, 1291-1298, 1995.

[71] Srinivasan, D., Chang, C. S., and Liew, A. C., Demand forecasting using fuzzy
neural computation, with special emphasis on weekend and public holiday
forecasting, IEEE Transactions on Power Systems, 8, 343-348, 1992.
[72] Dash, P. K., Liew, A. C., and Rahman, S., Comparison of fuzzy neural networks for
the generation of daily average and peak load profiles, International Journal of
System Science, 26, 2091-2106, 1995a.

[73] Dash, P. K., Liew, A. C., Rahman, S., and Dash, S., Fuzzy and neuro-fuzzy
computing models for electric load forecasting, Engineering Applications of
Artificial Intelligence, 8, 423-433, 1995b.

[74] Mori, H., and Kobayashi, H., Optimal fuzzy inference for short-term load
forecasting, IEEE Transactions on Power Systems, 11, 390-396, 1996.

[75] Dash, P. K., Liew, A. C., and Rahman, S., Fuzzy neural network and fuzzy expert
system for load forecasting, IEE Proceedings: Generation, Transmission and
Distribution, 143, 106-114, 1996.

[76] Ramirez-Rosado, I. J., and Dominguez-Navaro, J. A., Distribution Planning of
Electric Energy using Fuzzy Models, International Journal of Power and Energy
Systems, 16, 49-55, 1996.

[77] Chow, M.H. and Tram, H., Application of fuzzy logic technology for spatial load
forecasting, IEEE Transactions on Power Systems, 12, 1360-1366, 1997.

[78] Chow, M., Zhu, J., and Tram, H., Application of fuzzy multi-objective decision
making in spatial load forecasting, IEEE Transactions on Power Systems, 13,
1185-1190, 1998.

[79] Senjyu, T., Higa, S., and Uezato, K., Future load curves shaping based on similarity
using fuzzy logic approach, IEEE proceedings: Generation, Transmission and
Distribution, 145, 375-380, 1998.

[80] Mori, H., Sone, Y., Moridera, D., and Kondo, T., Fuzzy Inference models for short-
term load forecasting with tabu search, IEEE Systems, Man, Cybernetics
Conference Proceedings, 6, 551-556, 1999.

[81] Wu, H. -C., and Lu, C., Automatic fuzzy model identification for short-term load
forecast, IEEE Proceedings on Generation, Transmission and Distribution, 146,
477-482, 1999.

[82] Mastorocostas, P. A., Theocharis, J. B., and Bakirtzis, A. G., Fuzzy modeling for
short-term load forecasting using the orthogonal least squares method, IEEE
Transactions on Power Systems, 14, 29-36, 1999.

[83] Padmakumari, K., Mohandas, K. P., and Theruvengadam, S., Long-term distribution
demand forecasting using neuro fuzzy computations, Electrical Power and Energy
Systems Research, 21, 315-322, 1999.

[84] Srinivasan, D., Tan, S. S., Chang, C. S., and Chan, E. K., Parallel neural network-
fuzzy expert system for short-term load forecasting: system implementation and
performance evaluation, IEEE Transactions on Power Systems, 14, 1100-1106,
1999.

[85] Rahman, S., Formulation and Analysis of a rule based short term load forecasting
algorithm, Proceedings of IEEE, 78, 805-816, 1990.

[86] Rahman, S., Generalized knowledge-based short-term load forecasting technique,
IEEE Transactions on Power Systems, 8, 508-514, 1993.

[87] Ho, K., Hsu, Y., Chen, C., Lee, T., Liang, C., Lai, T., and Chen, K., Short term load
forecasting of Taiwan power system using a knowledge based expert system,
IEEE Transactions on Power Systems, 5, 1214-1221, 1990.

[88] Rahman, S., and Shreshta, G., A priority vector based technique for Load
forecasting, IEEE Transactions on Power Systems, 6, 1459-1465, 1991.

[89] Rahman, S., and Hazim, O., Generalized Knowledge-based short-term load
forecasting technique, IEEE Transactions on Power Systems, 8, 509-514, 1993.

[90] Rahman, S., and Hazim, O., Load Forecasting for Multiple Sites: development of an
expert system based technique, Electric Power Systems Research, 39, 161-169,
1996.

[91] Brown, R. E., Hanson, A. P., and Hagan, D. L., Spatial load forecasting using non-
uniform areas. IEEE transmission and Distribution Conference Proceedings, 1,
369-373, 1999.

[92] Dash, P. K., Dash, S., Rama Krishna, G., and Rahman, S., Forecasting of a load time
series using a fuzzy expert system and fuzzy neural networks, International
Journal of Engineering Intelligent Systems, 1, 103-118, 1993.

[93] Kim, K. –H., Park, J. -K., Hwang, K. -J., and Kim, S. -H., Implementation of hybrid
short-term load forecasting system using artificial neural networks and fuzzy
expert systems, IEEE Transactions on Power Systems, 10, 1534-1539, 1995.

[94] Mohamad, E. A., Mansour, M. M., El-Debeiky, S., Mohamad, K. G., Rao, N. D.,
and Ramakrishna, G., Results of Egyptian unified grid hourly load forecasting
using an artificial neural network with expert system interfaces, Electric Power
Systems Research, 39, 171-177, 1996.

[95] Bataineh, S., Al-Anbuky, A., and Al-Aqtash, S., Expert system for unit commitment
and power demand prediction using fuzzy logic and neural networks. Experts
Systems, 13, 29-40, 1996.

[96] Chiu, C. -C., Cook, D. F., Kao, J. -L., and Chou, Y. -C., Combining a neural
network and a rule-based expert system for short-term load forecasting,
Computers and Industrial Engineering, 32, 787-797, 1997.

[97] Chiu, C. -C., Cook, D. F., and Kao, J. -L., Combining a neural network with a rule-
based expert system approach for short-term power load forecasting in
Taiwan, Expert Systems with Applications, 13, 299-305, 1997.

[98] Chandrashekara, A. S., Ananthapadmanabha, T., and Kulkarni, A. D., A neuro-
expert system for planning and load forecasting of distribution systems, Electrical
Power and Energy Systems Research, 21, 309-314, 1999.

[99] Vapnik, V. N., The Nature of Statistical Learning Theory, New York, Springer
Verlag, 1995.

[100] Cristianini, N., and Shawe-Taylor, J., An Introduction to Support Vector Machines
and Other Kernel-Based Learning Methods, Cambridge University Press, Cambridge, 2000.

[101] Mohandes, M., Support Vector Machines for Short-Term Electrical Load
Forecasting, International Journal of Energy Research, 26:335-345, 2002.

[102] Chen, B. J., Chang, M. W., and Lin, C. J., “Load Forecasting Using Support Vector
Machines: A study on EUNITE Competition 2001”, Technical report, Department
of Computer Science and Information Engineering, National Taiwan University,
2002.

[103] Li, Y., and Fang, T., Wavelet and support vector machines for short-term electrical
load forecasting, Proceedings of International Conference on Wavelet Analysis
and its Applications, 1:399-404, 2003.
[104] Damborg, M. J., El-Sharkawi, M. A., Aggoune, M. E., and Marks, R. J., II,
Potential of artificial neural network to power system operation, Proceedings of
the IEEE International Symposium on Circuits and Systems, New Orleans, LA,
pp. 2933-2937, 1990.

[105] Ho, K., L., Hsu, Y., Y., and Yang, C.C., Short term load forecasting using a multi-
layer neural network with an adaptive learning algorithm, IEEE Transactions on
Power Systems, 7, 141-149, 1992.

[106] Park, D. C., and Osama, M., Artificial neural network based peak load forecasting,
IEEE Proceeding of the SOUTHEASTCON ’91, 1, pp.225-228, 1991.

[107] Park, D. C., El-Sharkawi, M. A., Marks, R. J., Atlas, L. E., and Damborg, M. J.,
Electric load forecasting using Artificial Neural Network, IEEE Transactions on
Power Systems, 6, 442-449, 1991a.

[108] Atlas, L., Connor, J., Park, D., Marks, R. J., Lippman, A., and Muhthusamy, Y., A
performance comparison of trained multilayer perceptrons and trained
classification trees, Proceedings of the IEEE International Conference on
Systems, Man, and Cybernetics, pp. 915-920, 1989.

[109] Hsu, Y. Y., and Yang, L. -C., Design of artificial neural networks for short term
load forecasting, IEEE Proceedings-C, 138, 407-413, 1992.

[110] Peng, T. M., Hubele, N. F., and Karady, G. G., Advancement in the application of
neural networks for short-term load forecasting, IEEE Transactions on Power
Systems, 7, 250-257, 1992.

[111] Peng, T. M., Hubele, N. F., and Karady, G. G., A conceptual approach to the
application of neural networks for short-term load forecasting, Proceedings of the
IEEE International Symposium on Circuits and Systems, New Orleans, LA, pp.
2942-2945, 1990.

[112] Peng, T. M., Hubele, N. F., and Karady, G. G., An adaptive Neural Network
approach to one-week ahead load forecasting, IEEE Transactions on Power
Systems, 8, 1195-1203, 1993.

[113] Ho, K., L., and Hsu, Y., Y., Short term load forecasting using a multi-layer neural
network with an adaptive learning algorithm, IEEE Transactions on Power
Systems, 7, 141-149, 1992a.

[114] Lee, K. Y., and Park, J. H., Short-term load forecasting using an artificial neural
network, IEEE Transactions on Power Systems, 7, 124-130, 1992.

[115] Chen, S. -T., Yu, D.C. and Moghaddamjo, A. R., Weather sensitive short-term load
forecasting using non-fully connected artificial neural network, IEEE Transactions
on Power Systems, 7, 1098-1105, 1992.

[116] Lu, C. N., Wu, H. T., and Vemuri, S., Neural network based short-term load
forecasting, IEEE Transactions on Power Systems, 8, 336-341, 1993.

[117] Djukanovic, M., Babic, B., Sobajic, O. J., and Pao, Y.H., 24-Hour load forecasting.
IEEE Proceedings-C, 140, 311-318, 1993.

[118] Papalexopoulos, A. D., Hao, S., and Peng, T. –M., An implementation of a neural
network based load forecasting model for the EMS, IEEE Transactions on Power
Systems, 9, 1956-1962, 1994.

[119] Dillon, T. S., Sestito, S., and Leung, S., Short term load forecasting using an
adaptive neural network, Electric Power and Energy Systems, 13, 186-192, 1991.

[120] Srinivasan, D., Liew, A. C. and Chen, S. P. J., A novel approach to electric load
forecasting based on neural networks, IEEE International Joint Conference on
Neural Networks, Singapore, 18-21, pp.1172-1177, November, 1991.

[121] Liu, X. O., Ang, B. W., and Goh, T. N., Forecasting of electricity consumption: a
comparison between an econometric model and a neural network model, IEEE
International Joint Conference on Neural Networks, Singapore, 18-21, pp. 1254-
1259, November 1991.

[122] Kalra, P. K., Srivastav, A., and Chaturvedi, D. K., Possible application of neural
nets to power system operation and control, Electric Power Systems Research, 25,
83-90, 1992.

[123] Azzam-ul-Asar, A., and McDonald, J. R., Specification of neural networks in
the load forecasting problem, IEEE Transactions on Control Systems Technology,
2, 135-141, 1994.

[124] Chen, C. S., Tzeng, Y. M., and Hwang, J. C., Application of artificial neural
networks to substation load forecasting, Electric Power Systems Research, 38,
153-160, 1996.

[125] Al-Fuhaid, A. S., El-Sayed, M. A., and Mahmoud, M. S., Neuro-short-term
forecast of the power system in Kuwait, Applied Mathematical Modeling, 21, 215-
219, 1997.

[126] Vermaak, J., and Botha, E. C., Recurrent neural networks for short-term load
forecasting, IEEE Transactions on Power Systems, 13, 126-132, 1998.

[127] McMenamin, J. S., and Monforte, F. A., Short-term energy forecasting with neural
networks, Energy Journals, 19, 43-62, 1998.

[128] Papadakis, S. E., Theocharis, J. B., Kiartzis, S. J., and Bakirtzis, A. G., Novel
approach to short-term load forecasting using fuzzy neural networks, IEEE
Transactions on Power Systems, 13, 480-489, 1998.

[129] Dash, P. K., Satpathy, H. P., and Liew, A. C., Real-time short-term peak and
average load forecasting system using a self-organizing fuzzy neural network,
Engineering Applications of Artificial Intelligence, 11, 307-316, 1998.

[130] Kung, C.-H., Devaney, M. J., Huang, C.-M., and Kung, C.-M., Adaptive power
system load forecasting scheme using a genetic algorithm embedded neural
network, Proceedings of the IEEE Instrumentation and Measurement Technology
Conference, part 1, St. Paul, MN, 18-21, pp. 308-311, May 1998.

[131] Chow, T. W. S., and Leung, C. T., Nonlinear autoregressive neural network model
for short-term load forecasting, IEE Proceedings: Generation, Transmission, and
Distribution, 143, 500-506, 1996.

[132] Choueiki, M. H., Mount-Campbell, C.A. and Ahalt, S., Implementing a weighted
least squares procedure in training a neural network to solve the short-term load
forecasting problem, IEEE Transactions on Power Systems, 12, 1689-1694, 1997.

[133] Neibur, D., Artificial neural networks in the power industry, survey and
applications, Neural Network World, 5, 951-964, 1995.

[134] Dorizzi, B., and Germond, A., Short term electrical load forecasting with artificial
neural networks, International Journal of Engineering intelligent Systems for
Electrical Engineering and Communications, 4, 85-99, 1996.

[135] Oonsivilan, A., and El-Hawary, M. E., Wavelet neural network based short-term
load forecasting of electric power system commercial load, Canadian Conference
on Electrical and Computer Engineering, 3, 1023-1028, 1999.

[136] Drenza, N., Sood, V., and Saad, M., Use of ANNs for short-term load forecasting,
IEEE Transactions on Electrical and Computer Engineering, 2, 1057-1061, 1999.

[137] Yoo, H., and Pimmel, R. L., Short term load forecasting using self-supervised
adaptive neural network, IEEE Transactions on Power Systems, 14, 779-784,
1999.

[138] Kandil, N., Sood, V., and Saad, M., Use of ANNs for short-term load forecasting,
Canadian Conference on Electrical and Computer Engineering, 2, 1057-1061,
1999.

[139] Leyan, X., and Chen, W. J., Artificial neural network short-term electrical load
forecasting techniques, TENCON 99: Proceedings of the IEEE Region 10
Conference, 2, pp.1458-1461, 1999.

[140] Nazarko, J., and Styczynski, Z. A., Application of Statistical and neural approaches
to the daily load profiles modeling in power distribution systems, IEEE:
Transmission and Distribution Conference, 1, 320-325, 1999.

[141] Ijumba, N. M., and Hunsley, J. P., Improved load forecasting techniques for newly
electrified areas, Africon 1999 IEEE, 2, 989-994, 1999.

[142] Sinha, A. K., and Mandal, J. K., Hierarchical dynamic state estimator using ANN-
based dynamic load prediction, IEEE Proceedings: Generation, Transmission and
Distribution, 146, 541-549, 1999.

[143] Drezga, I., and Rahman, S., Phase-space based short-term load forecasting for
deregulated electric power industry, Proceedings of the International Joint
Conference on Neural Networks, 5, 3405-3409, 1999a.

[144] Drezga, I., and Rahman, S. Short-term load forecasting with local ANN predictors,
IEEE Transactions on Power Systems, 14, 844-850, 1999b.

[145] Bakirtzis, A. G., Petridis, V., Kiartzis, S. J., Alexiadis, M. C., and Maissis, A. H.,
A Neural Network Short-Term Load Forecasting Model for the Greek Power
System, IEEE Transactions on Power Systems, 11, pp. 858-863, 1996.

[146] Taylor, J. W. and Buizza, R., Neural Network Load Forecasting With Weather
Ensemble Predictions, IEEE Transactions on Power Systems, 17:626-632, 2002.

[147] Ringwood, J. V., Bofelli, D., and Murray, F. T., Forecasting Electricity Demand on
Short, Medium and Long Time Scales Using Neural Networks, Journal of
Intelligent and Robotic Systems, Volume 31, Issue 1-3, pp.129-147, May 2001.

[148] Mosalman, F., Mosalman, A., Yazdi, H. M., and Yazdi, M. M., One day-ahead load
forecasting by artificial neural network, Scientific Research and Essays, 6(13),
pp. 2795-2799, 4 July, 2011.

[149] Gowri, T. M., and Reddy, V. V. C., Load Forecasting by a Novel Technique using
ANN”, ARPN Journal of Engineering and Applied Sciences, 3(2), pp.19-25, 2008.

[150] Wang, X., and Tsoukalas, L. H., Load Analysis and Prediction for Unanticipated
Situations, Bulk Power Systems Dynamics and Control VI, Cortina d’Ampezzo,
Italy, 2004.

[151] The MathWorks, Inc., Back-propagation (Neural Network Toolbox), 2004.
http://www.mathworks.com

[153] Hayati, M., and Shirvany, Y., “Artificial Neural Network Approach for Short Term
Load Forecasting for Illam Region”, World Academy of Science, Engineering
and Technology, 28, 2007.

[154] Osowski, S., Neural Network for Information Processing, Warsaw University of
Technology Press, Warsaw, Poland, 2006.

APPENDIX
MATLAB CODE
clc; clear all;
tic
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% The daily load data for the month of March is read into the system using
% three-letter variable names plus 2-digit numbers, interpreted thus: the
% first two letters represent the day of the week; the third letter the
% month; the digits the date. For example, tum01 means Tuesday, March 1st.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%time=[1.00 2.00 3.00 4.00 5.00 6.00 7.00 8.00 9.00 10.0 11.0 12.0 1.00 2.00 3.00 4.00 ...
%      5.00 6.00 7.00 8.00 9.00 10.0 11.0 12.0];
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
tum01=[42.5 38.8 30.5 39.1 58.0 64.7 82.0 78.8 61.2 60.2 56.4 55.6 48.1 47.8 53.0 56.5 ...
58.8 63.7 70.5 72.3 78.6 68.7 50.9 48.8];
wem02=[40.4 34.2 29.5 36.5 54.0 58.7 78.3 74.2 69.9 56.1 54.5 52.3 42.4 40.1 46.2 50.2 ...
53.1 59.8 65.2 70.8 80.3 71.1 58.2 48.3];
thm03=[49.3 41.9 36.4 43.7 58.3 65.2 85.0 71.5 70.1 69.5 52.2 50.4 46.3 44.9 51.0 54.7 ...
57.3 66.1 69.3 74.6 77.4 62.3 57.4 55.2];
frm04=[41.1 37.4 33.3 41.1 49.1 55.1 77.2 74.9 71.0 69.4 62.3 56.0 49.7 43.9 57.0 61.0 ...
65.2 66.9 71.5 75.6 79.5 70.3 51.4 43.5];
sam05=[38.2 33.3 30.3 38.8 55.4 62.7 75.9 71.6 65.6 60.2 46.7 44.2 41.9 38.7 55.1 65.3 ...
67.5 69.4 73.3 74.7 76.6 62.6 53.8 49.3];
sum06=[35.3 30.8 26.2 35.6 43.6 68.6 56.6 54.2 52.2 50.1 48.9 50.2 52.2 54.1 55.9 58.7 ...
59.3 61.0 63.9 67.6 69.3 65.7 48.7 43.2];
mom07=[45.4 44.7 35.0 39.9 45.5 57.7 86.7 71.9 68.5 65.2 58.5 53.8 52.6 50.2 58.6 59.6 ...
66.1 69.5 69.8 70.9 81.1 77.3 52.2 50.1];
tum08=[43.1 39.7 31.1 34.3 39.8 44.2 63.3 78.7 70.0 51.3 46.5 41.2 40.9 40.6 42.4 46.8 ...
55.1 58.2 66.4 68.9 72.3 72.4 69.3 57.4];
wem09=[50.2 47.2 35.5 38.9 41.7 68.1 79.8 74.4 68.3 65.1 61.5 57.6 58.8 50.9 54.2 63.9 ...
66.1 67.2 70.8 76.2 78.2 67.4 58.4 55.3];
thm10=[41.9 38.9 34.7 45.7 47.7 51.8 86.1 79.5 71.2 65.2 61.7 58.5 54.2 52.9 59.2 62.1 ...
63.2 64.4 72.1 75.4 79.1 73.1 69.4 53.4];
frm11=[40.4 30.6 28.9 39.5 38.4 49.1 79.7 78.3 74.4 68.8 61.9 59.5 51.3 50.9 58.2 60.6 ...
69.4 70.9 75.8 77.0 83.9 80.7 65.7 45.2];
sam12=[32.3 28.0 21.5 33.3 42.5 51.7 73.6 72.4 71.6 64.6 59.5 58.4 53.5 50.9 57.3 59.5 ...
63.7 65.3 68.5 74.2 77.4 60.5 48.9 33.6];
sum13=[35.4 31.2 24.3 29.1 40.7 59.4 53.0 50.0 48.3 45.4 43.7 40.8 47.1 52.6 55.0 58.4 ...
65.1 68.6 69.5 69.4 71.2 66.3 55.1 41.4];
mom14=[47.9 35.5 31.4 33.7 43.3 57.6 80.6 77.6 71.7 61.3 55.2 52.8 52.3 50.5 52.9 55.7 ...
61.7 65.9 69.3 71.4 75.1 70.2 62.3 59.5];

tum15=[43.6 41.2 33.1 38.6 44.4 48.3 70.0 73.4 71.5 60.1 53.5 46.1 39.1 33.1 38.1 46.0 ...
66.9 68.9 69.7 70.5 72.8 62.5 52.6 50.6];
wem16=[45.5 42.5 32.5 47.8 55.8 60.5 79.1 76.1 73.2 65.8 61.9 56.1 48.6 46.9 57.8 58.9 ...
63.7 68.7 69.1 73.7 79.9 73.4 60.1 55.5];
thm17=[48.7 44.1 34.1 44.1 52.2 64.0 84.5 80.8 77.6 72.6 63.4 60.7 53.4 52.8 54.6 66.9 ...
65.7 69.8 70.3 74.4 82.7 76.5 58.8 53.2];
frm18=[47.2 46.8 35.6 38.9 49.6 72.9 86.9 79.6 71.7 64.5 49.3 47.4 45.7 45.4 50.7 55.1 ...
59.1 62.3 69.8 74.3 77.6 73.6 57.6 56.4];
sam19=[43.4 32.3 30.7 46.3 54.3 66.8 75.7 74.2 71.4 66.6 58.8 54.6 52.3 49.3 53.4 54.4 ...
66.3 68.4 70.2 74.3 81.8 63.8 59.1 49.2];
sum20=[33.1 31.4 29.6 32.3 45.4 63.2 60.0 59.8 55.6 54.9 47.8 51.4 55.0 57.5 57.7 58.2 ...
61.6 67.3 69.4 72.2 72.0 68.1 45.3 33.2];
mom21=[44.5 40.5 32.5 34.3 39.7 52.1 63.5 75.4 74.4 69.7 65.4 63.3 60.7 51.8 64.5 68.8 ...
69.4 70.2 70.2 76.3 80.1 71.3 63.1 46.3];
tum22=[44.2 40.9 37.9 37.9 38.3 48.7 68.3 66.8 61.2 59.4 58.3 55.6 50.4 44.2 50.1 56.4 ...
65.5 72.5 77.4 79.9 80.7 70.8 50.3 41.7];
wem23=[40.4 38.2 31.1 39.1 45.9 46.9 83.2 77.6 72.4 65.4 61.5 53.2 49.2 46.1 56.4 57.2 ...
59.4 64.9 69.4 70.1 74.6 72.3 52.5 46.9];
thm24=[45.3 39.6 32.4 43.7 49.6 59.9 70.2 69.2 68.6 67.6 58.4 52.1 49.2 46.8 47.7 51.7 ...
55.8 59.6 61.1 75.1 78.8 72.5 66.4 50.5];
frm25=[42.9 41.3 36.2 37.1 48.3 63.5 73.3 70.8 61.5 58.8 52.0 50.4 46.9 45.6 51.2 53.4 ...
60.3 63.4 66.7 70.8 73.6 67.1 59.2 46.8];
sam26=[47.7 37.2 40.1 41.3 51.5 54.6 69.5 67.3 63.8 60.9 57.8 56.5 51.3 52.2 57.1 59.8 ...
63.2 66.8 70.6 72.1 76.2 69.8 52.7 49.9];
sum27=[38.2 35.6 33.0 41.9 48.2 57.3 59.5 58.0 53.9 50.3 47.9 52.3 55.0 59.4 59.9 60.6 ...
61.6 63.5 65.3 66.0 72.1 57.6 44.1 41.9];
mom28=[45.1 42.3 35.7 38.1 46.8 50.6 65.3 69.7 65.2 66.3 62.9 60.1 57.5 53.9 60.2 63.7 ...
64.2 67.2 67.7 69.1 75.9 63.8 56.1 46.3];
tum29=[43.7 40.2 38.0 40.7 42.7 53.5 67.1 64.2 62.7 60.2 58.4 56.0 54.0 50.9 53.8 56.3 ...
61.7 66.9 70.8 74.0 75.4 64.1 54.7 45.4];
wem30=[44.5 40.0 37.0 42.0 43.3 48.9 73.5 68.6 70.1 66.3 63.4 52.3 50.5 49.1 50.7 52.1 ...
60.1 67.3 69.6 72.6 73.2 67.1 51.1 48.9];
thm31=[48.0 38.8 35.9 44.9 47.6 56.4 70.9 66.8 66.3 64.6 53.9 50.7 49.8 48.0 46.5 49.2 ...
54.9 63.6 66.2 70.3 72.7 65.2 56.9 52.5];
%single variable with variable name march_load is now created to capture
%the entire working data
march_load=[tum01;wem02;thm03;frm04;sam05;sum06;mom07;tum08;wem09;thm10;frm11;sam12;sum13;mom14;tum15; ...
wem16;thm17;frm18;sam19;sum20;mom21;tum22;wem23;thm24;frm25;sam26;sum27;mom28;tum29;wem30;thm31];
%%
%%
data=[march_load];
p1=[data(1,:); data(7,:)];t1= [data(8,:)];
p2=[data(2,:); data(8,:)];t2= [data(9,:)];
p3=[data(3,:); data(9,:)];t3= [data(10,:)];
p4=[data(4,:); data(10,:)];t4=[data(11,:)];
p5=[data(5,:); data(11,:)];t5=[data(12,:)];
p6=[data(6,:); data(12,:)];t6=[data(13,:)];
p7=[data(7,:); data(13,:)];t7=[data(14,:)];
p8=[data(8,:); data(14,:)];t8=[data(15,:)];
p9=[data(9,:); data(15,:)];t9=[data(16,:)];
p10=[data(10,:); data(16,:)];t10=[data(17,:)];
p11=[data(11,:); data(17,:)];t11=[data(18,:)];
p12=[data(12,:); data(18,:)];t12=[data(19,:)];
p13=[data(13,:); data(19,:)];t13=[data(20,:)];
p14=[data(14,:); data(20,:)];t14=[data(21,:)];
p15=[data(15,:); data(21,:)];t15=[data(22,:)];
p16=[data(16,:); data(22,:)];t16=[data(23,:)];
p17=[data(17,:); data(23,:)];t17=[data(24,:)];
p18=[data(18,:); data(24,:)];t18=[data(25,:)];
p19=[data(19,:); data(25,:)];t19=[data(26,:)];
p20=[data(20,:); data(26,:)];t20=[data(27,:)];
p21=[data(21,:); data(27,:)];t21=[data(28,:)];
p22=[data(22,:); data(28,:)];t22=[data(29,:)];
p23=[data(23,:); data(29,:)];t23=[data(30,:)];
p24=[data(24,:); data(30,:)];t24=[data(31,:)];
% We define the above submatrices which will be needed to forecast the
% loads of the days (i.e., the inputs to the neural network) and the
% corresponding days which such data will be used to forecast (i.e., the
% targets).
%%
xx=1;yy=7;zz=8;hr=1:24;
% we present the training data set all at once to the network by means of
% the loop below
for jj=1:25
p=[data(xx,:); data(yy,:)]; t=[data(zz,:)];
if jj==18
p=[p1 p2 p3 p4 p5 p6 p7 p8 p9 p10 p11 p12 p13 p14 p15 p16 p17];
t=[t1 t2 t3 t4 t5 t6 t7 t8 t9 t10 t11 t12 t13 t14 t15 t16 t17];
net.trainParam.passes=100;
else
if jj>=19
break
end
end
%the input sets and targets are preprocessed and normalised thus:
[pn,meanp,stdp,tn,meant,stdt] = prestd(p,t);
%we create the network thus
net=newfftd(minmax(pn),[0 1],[28 38 1],{'tansig' 'tansig' 'purelin' },'traincgp');
% net=newelm(minmax(pn),[25 28 1],{'tansig' 'tansig' 'purelin'},'traincgb')
% net=newff(minmax(pn),[35 45 1],{'tansig' 'tansig' 'purelin' },'trains')
% we tune the network parameters below
net.biasConnect=[1;1;1];
net.performFcn='msereg';
% net.layersConnect=[1 0 0;1 1 0;0 1 0];
net.layerWeights{2,1}.delays=[0 1];
net.layerWeights{3,2}.delays=[0 1];
net.trainParam.passes=100;
net.performParam.ratio=0.01;
lp.lr=0.001;
lp.mc=0.75;
net.inputWeights{1,1}.learnFcn='learngdm';
net.layerWeights{2,1}.learnFcn='learngdm';
net.layerWeights{3,2}.learnFcn='learngdm';
net.trainParam.goal=1.0e-5;
net.trainParam.epochs=3000;
net.trainParam.show=100;
[net,tr]=train(net,pn,tn);
an=sim(net,pn);
a=poststd(an,meant,stdt);
error=(t-a)./t;
disp([t' a' error'])
perf=mae(error,net)
% figure(xx)
%plot(hr,a,'gd-',hr,t,'mp--');axis([0,25,0,90]);
xx=xx+1;yy=yy+1;zz=zz+1;
end
% the trained network is tested with new data set
xx=18;yy=24;zz=25;
for ll=1:8
pnew=[data(xx,:);data(yy,:)]; y=1:24;
if ll==8
break
end
pnewn=trastd(pnew,meanp,stdp);anewn=sim(net,pnewn);
anew=poststd(anewn,meant,stdt);
t=[data(zz,:)];
error=(t-anew);
e=(error)./t;
dd=abs(e);
figure(ll)
plot(y,anew,'rh-',y,t,'bd--'); axis([0,25,0,90]);
grid on
ylabel('load in Megawatt')
xlabel('time in hours')
title('plot of actual and predicted load values against time in hours')
legend('predicted values','actual values')
print -dtiff ogbagu
disp([t' anew' error' e' dd']); disp([xx yy zz]);
perf=mae(e,net)
xx=xx+1;yy=yy+1;zz=zz+1;
% a simulink equivalent of the model is generated by the code below
end
pnew=[p18 p19 p20 p21 p22 p23 p24];
pnewn=trastd(pnew,meanp,stdp);anewn=sim(net,pnewn);
anew=poststd(anewn,meant,stdt);
t=[t18 t19 t20 t21 t22 t23 t24];
y=1:168;
error=(t-anew);
e=(error)./t;
dd=abs(e);
figure(ll)
plot(y,anew,'rh-',y,t,'bd--'); axis([0,170,0,90]);grid on
ylabel('load in Megawatt')
xlabel('time in hours')
title('plot of actual and predicted load values against time in hours')
legend('predicted values','actual values')
print -dtiff ogbagu
disp([t' anew' error' e' dd']); disp([xx yy zz]);
perf=mae(e,net)
xx=xx+1;yy=yy+1;zz=zz+1;
toc

gensim(net,1)
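The pairing, normalization, and error-check steps of the script above can be sketched outside MATLAB. The following Python/NumPy fragment is an illustrative stand-in only: it uses synthetic data in place of the recorded March loads and an ordinary least-squares fit in place of the trained neural network, but it reproduces the same structure — each hourly input is the load at that hour on day k and day k+6, the target is the load on day k+7, inputs and targets are z-score normalized (as prestd does), and accuracy is judged by the mean absolute relative error (the script's mae of (t-a)./t).

```python
import numpy as np

# Synthetic stand-in for the 31x24 daily-hourly load matrix (MW);
# the real values come from the appendix listing above.
rng = np.random.default_rng(0)
base = 55 + 15 * np.sin(np.linspace(0, 2 * np.pi, 24))   # typical daily shape
data = base + rng.normal(0, 5, size=(31, 24))

# Sliding-window pairing used in the script: day k and day k+6 (same
# weekday, one week apart) form the 2-feature input; day k+7 is the target.
pairs = [(np.vstack([data[k], data[k + 6]]), data[k + 7]) for k in range(17)]
X = np.hstack([p for p, _ in pairs])    # shape (2, 17*24): columns are hours
y = np.hstack([t for _, t in pairs])    # shape (17*24,)

# z-score normalization, as prestd does in the script
mx, sx = X.mean(axis=1, keepdims=True), X.std(axis=1, keepdims=True)
my, sy = y.mean(), y.std()
Xn, yn = (X - mx) / sx, (y - my) / sy

# A plain least-squares fit stands in for the trained ANN here.
A = np.vstack([Xn, np.ones(Xn.shape[1])])          # add a bias row
w, *_ = np.linalg.lstsq(A.T, yn, rcond=None)

# Forecast day 25 from days 18 and 24, then undo the normalization
# (the poststd step of the script).
p_new = np.vstack([data[17], data[23]])
p_newn = (p_new - mx) / sx
y_hat = w @ np.vstack([p_newn, np.ones(24)]) * sy + my

t_true = data[24]
mape = np.mean(np.abs((t_true - y_hat) / t_true))   # mae of relative error
print(f"MAPE = {mape:.3f}")
```

The one-week-back input (day k+6) encodes the weekday effect directly, which is why the script pairs each day with the same weekday of the previous week rather than with the immediately preceding day.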

THE RESEARCH DATA
Table 1: Daily-hourly load for the month of February, 2011
Hr/Day Tue Wed Thu Fri Sat Sun Mon Tue Wed Thu Fri Sat Sun Mon
01 02 03 04 05 06 07 08 09 10 11 12 13 14
01.00 40.20 35.80 39.10 45.50 30.70 33.00 42.50 37.30 48.50 33.50 37.20 38.80 35.40 43.10
02.00 36.40 30.20 34.40 41.80 29.30 30.20 40.60 32.10 43.40 30.80 36.00 33.50 31.20 40.00
03.00 31.30 28.10 30.20 38.60 25.50 30.20 40.00 30.00 37.20 28.90 34.70 32.20 24.30 39.60
04.00 30.90 40.40 45.60 35.90 38.10 34.50 49.20 27.60 37.90 41.20 31.40 32.00 29.10 42.00
05.00 44.60 47.30 48.10 51.40 40.30 56.10 60.40 44.90 55.60 46.30 50.70 45.60 40.70 56.80
06.00 59.10 52.00 50.90 63.20 45.60 77.90 65.00 58.10 52.00 51.70 56.40 60.40 59.40 62.50
07.00 70.70 80.50 67.40 82.30 67.70 73.80 85.60 64.30 70.90 83.50 78.30 62.70 53.00 63.90
08.00 70.00 76.20 77.90 80.40 71.20 64.30 81.00 60.20 68.60 73.10 62.20 74.00 50.00 78.00
09.00 67.50 72.50 75.20 70.10 70.30 60.30 74.90 58.40 64.10 66.80 60.00 73.20 48.30 76.10
10.00 61.60 68.10 72.20 67.90 55.40 58.10 53.70 57.00 62.60 65.70 52.90 66.70 45.40 64.80
11.00 60.20 62.90 70.00 65.00 51.50 53.90 50.30 56.10 60.00 63.20 46.30 62.40 43.70 60.30
12.00 57.80 55.80 68.10 62.50 49.30 50.20 48.50 52.30 58.20 60.40 40.60 56.30 40.80 58.30
13.00 53.60 51.40 63.30 57.40 42.80 59.30 45.40 50.50 53.40 55.90 38.70 54.00 47.10 52.00
14.00 48.70 50.70 59.40 55.30 40.10 61.00 41.30 48.20 50.90 52.40 35.80 50.90 52.60 48.30
15.00 **** 55.90 56.10 48.80 45.00 60.40 51.90 45.70 55.10 40.20 44.60 57.30 55.00 46.60
16.00 **** 59.20 61.50 51.00 49.20 67.30 58.90 57.30 60.30 58.70 52.30 59.50 58.40 53.40
17.00 **** 61.10 66.20 58.30 57.60 70.00 63.30 64.00 66.20 67.00 58.00 63.70 65.10 66.20
18.00 **** 65.30 67.30 61.50 64.40 71.50 66.10 68.20 72.70 69.10 61.50 65.30 68.60 60.80
19.00 **** 71.70 67.10 64.70 70.50 74.90 69.00 73.40 78.10 69.00 68.90 68.50 72.20 69.40
20.00 **** 73.60 70.40 68.90 70.00 76.30 74.10 78.20 78.40 73.50 76.00 74.20 76.70 71.20
21.00 **** 78.90 85.30 72.10 73.80 69.20 76.50 81.30 79.00 64.40 70.20 77.40 61.50 66.70
22.00 45.20 56.30 80.00 60.20 58.00 65.60 51.40 72.40 65.60 44.90 48.10 60.50 40.00 54.00
23.00 43.70 43.30 50.70 47.40 42.90 38.70 50.00 46.20 53.50 40.80 42.70 48.90 40.00 51.20
24.00 36.40 38.70 41.40 46.20 32.30 35.20 40.60 43.10 50.30 38.40 40.00 33.60 33.10 42.80

Table 1 contd.
Hr/Day Tue Wed Thu Fri Sat Sun Mon Tue Wed Thu Fri Sat Sun Mon
15 16 17 18 19 20 21 22 23 24 25 26 27 28
01.00 31.20 45.50 28.90 40.70 34.60 41.70 46.90 40.20 36.40 42.30 45.90 39.70 31.20 47.10
02.00 25.40 40.80 26.30 40.00 34.20 38.00 43.20 37.90 33.10 37.60 42.40 37.00 30.30 40.10
03.00 20.90 32.70 23.20 33.30 34.20 27.80 41.00 34.10 31.80 36.20 40.20 32.80 27.70 36.50
04.00 18.50 34.40 25.90 56.70 46.90 22.00 48.10 39.00 39.10 41.30 45.00 36.80 36.20 43.50
05.00 27.70 48.90 34.10 52.80 49.00 38.60 52.60 44.70 45.90 43.30 46.20 48.70 39.10 49.40
06.00 47.80 55.60 61.30 54.00 52.70 59.00 59.00 46.20 52.40 63.50 48.30 52.60 65.30 51.80
07.00 69.00 80.10 77.20 65.90 58.30 51.40 77.00 71.40 60.70 76.10 66.10 72.30 52.60 81.60
08.00 58.10 78.40 70.90 68.20 61.40 50.00 72.00 70.30 74.30 65.30 73.40 70.10 45.20 73.40
09.00 53.20 64.30 70.30 66.40 73.10 47.60 70.50 68.50 65.20 61.30 70.40 67.40 42.60 68.00
10.00 50.60 61.20 70.00 64.10 66.50 41.30 65.40 64.30 61.20 55.30 66.40 62.70 40.10 65.10
11.00 50.00 57.90 64.20 60.80 59.10 38.00 62.20 62.10 58.40 52.70 50.90 57.80 47.90 62.60
12.00 46.30 53.60 60.10 56.20 48.40 35.60 60.10 57.30 56.90 50.10 45.30 56.50 52.30 57.30
13.00 41.90 50.10 59.20 52.30 50.80 47.30 54.30 53.20 50.40 44.50 48.50 51.30 55.00 54.10
14.00 40.70 48.30 55.60 50.00 49.70 53.20 53.60 49.10 43.20 40.90 44.20 52.20 59.40 49.70
15.00 47.80 49.00 50.40 45.30 56.20 58.90 48.40 53.40 45.10 51.00 49.40 57.10 59.90 58.10
16.00 55.20 53.50 62.70 49.10 59.00 64.70 51.70 55.20 47.80 56.30 53.20 59.80 60.60 60.30
17.00 59.20 56.10 65.00 54.90 63.40 66.30 58.20 61.40 53.20 59.50 62.10 63.20 61.60 63.40
18.00 61.50 58.90 68.30 57.20 66.10 61.50 65.10 66.30 60.40 65.30 69.70 66.80 61.10 66.00
19.00 67.30 64.00 71.90 61.00 70.80 68.20 66.80 69.00 64.10 67.60 72.30 70.60 66.00 71.30
20.00 69.40 72.20 76.70 66.20 73.10 68.00 70.50 74.60 69.30 72.40 78.10 72.10 69.30 72.50
21.00 72.60 81.30 78.20 69.10 65.90 70.20 82.70 75.30 70.40 83.60 79.20 75.90 66.40 83.60
22.00 62.50 60.70 50.30 56.20 51.30 40.60 63.00 68.30 51.70 65.30 72.50 63.40 60.30 70.30
23.00 52.60 49.30 44.10 47.90 50.00 37.30 49.20 56.10 50.30 61.80 65.30 47.80 50.20 65.40
24.00 50.60 47.20 35.00 40.10 39.70 32.20 48.70 50.40 47.60 50.30 48.70 42.60 37.70 53.20

NB: **** denotes a missing or unavailable load value.

Table 2: Daily-hourly load for the month of March, 2011


Hr/Day Tue 01 Wed 02 Thu 03 Fri 04 Sat 05 Sun 06 Mon 07 Tue 08 Wed 09 Thu 10 Fri 11 Sat 12 Sun 13 Mon 14 Tue 15 Wed 16
01.00 42.50 40.40 49.30 41.10 38.20 35.30 45.40 43.10 50.20 41.90 40.40 32.30 **** 47.90 43.60 ****
02.00 38.80 34.20 41.90 37.40 33.30 30.80 44.70 39.70 47.20 38.90 30.60 28.00 **** 35.50 41.20 42.50
03.00 30.50 29.50 36.40 33.30 30.20 26.20 35.00 31.10 35.50 34.70 28.90 21.50 **** 31.40 33.10 32.50
04.00 39.10 36.50 43.70 41.10 38.80 35.60 39.90 34.30 38.90 45.70 39.50 33.30 **** 33.70 38.60 47.80
05.00 58.00 54.00 58.30 49.10 55.40 43.60 45.50 39.80 41.70 47.70 38.40 42.50 **** 43.30 44.40 55.80
06.00 64.70 58.70 65.20 55.10 62.70 68.60 57.70 44.20 68.10 51.80 49.10 51.70 **** 57.60 48.30 60.50
07.00 82.00 78.30 85.00 77.20 75.90 56.60 86.70 63.30 79.80 86.10 79.70 73.60 **** 80.60 70.00 79.10
08.00 78.80 74.20 71.50 74.90 71.60 54.20 71.90 78.70 74.40 79.50 78.30 72.40 **** 77.60 73.40 76.10

09.00 61.20 69.90 70.10 71.00 65.60 52.20 68.50 70.00 68.30 71.20 74.40 71.60 **** 71.70 71.50 73.20
10.00 60.20 56.10 69.50 69.40 60.20 50.10 65.20 51.30 65.10 65.20 68.80 64.60 **** 61.30 60.10 65.80
11.00 56.40 54.50 52.20 62.30 46.70 48.90 58.50 46.50 61.50 61.70 61.90 59.50 **** 55.20 53.50 61.90
12.00 55.60 52.30 50.40 56.00 44.20 50.20 53.80 41.20 57.60 58.50 59.50 58.40 **** 52.80 46.10 56.10
13.00 48.10 42.40 46.30 49.70 41.90 52.20 52.60 40.90 58.80 54.20 51.30 53.50 **** 52.30 39.10 48.60
14.00 47.80 40.10 44.90 43.90 38.70 54.10 50.20 40.60 **** 52.90 50.90 **** **** 50.50 33.10 46.90
15.00 53.00 46.20 51.00 57.00 55.10 55.90 58.60 42.40 54.20 59.20 58.20 **** **** 52.90 38.10 57.80
16.00 56.50 50.20 54.70 61.00 65.30 58.70 59.60 48.80 63.90 62.10 60.60 **** **** 55.70 46.00 58.90
17.00 58.80 53.10 57.30 65.20 67.50 59.30 66.10 55.10 66.10 63.20 69.40 **** **** 61.70 66.90 63.70
18.00 63.70 59.80 66.10 66.90 69.40 61.00 69.50 58.20 67.20 64.40 70.90 **** **** 65.90 68.90 68.70
19.00 70.50 65.20 69.30 71.50 73.30 63.90 69.80 66.40 70.80 72.10 75.80 **** 69.50 69.30 69.70 69.10
20.00 72.30 70.80 74.60 75.60 74.70 67.60 70.90 68.90 76.20 75.40 77.00 **** 69.40 71.40 70.50 73.70
21.00 78.60 80.30 77.40 79.50 76.60 69.30 81.10 72.30 78.20 79.10 83.90 **** 71.20 75.10 72.80 79.90
22.00 68.70 71.10 62.30 70.30 62.60 65.70 77.30 72.40 67.40 73.10 80.70 **** 66.30 70.20 **** 73.40
23.00 50.90 58.20 57.40 51.40 53.80 48.70 52.20 69.30 58.40 69.40 65.70 **** 55.10 62.30 **** 60.10
24.00 48.80 48.30 55.20 43.50 49.30 43.20 50.10 57.40 55.30 53.40 45.20 **** 41.40 59.50 **** 55.50

Table 2 contd.
Hr/Day Thu 17 Fri 18 Sat 19 Sun 20 Mon 21 Tue 22 Wed 23 Thu 24 Fri 25 Sat 26 Sun 27 Mon 28 Tue 29 Wed 30 Thu 31
01.00 48.70 47.20 43.40 33.10 44.50 44.20 40.40 45.30 42.90 47.70 38.20 45.10 43.70 44.50 48.00
02.00 44.10 46.80 32.30 31.40 40.50 40.90 38.20 39.60 41.30 37.20 35.60 42.30 40.20 40.00 38.80
03.00 34.10 35.60 30.70 29.60 32.50 37.90 31.10 32.40 36.20 40.10 33.00 35.70 38.00 37.00 35.90
04.00 44.10 38.90 46.30 32.30 34.30 37.90 **** 43.70 37.10 41.30 41.90 38.10 40.70 42.00 44.90
05.00 52.20 49.60 54.30 45.40 39.70 38.30 **** 49.60 48.30 51.50 48.20 46.80 42.70 43.30 47.60
06.00 64.00 72.90 66.80 63.20 52.10 48.70 46.90 59.90 63.50 54.60 57.30 50.60 53.50 48.90 56.40
07.00 84.50 86.90 75.70 60.00 63.50 68.30 83.20 70.20 73.30 69.50 59.50 65.30 67.10 73.50 70.90
08.00 80.80 79.60 74.20 59.80 75.40 66.80 77.60 69.20 70.80 67.30 58.00 69.70 64.20 68.60 66.80
09.00 77.60 71.70 71.40 55.60 74.40 61.20 72.40 68.60 61.50 63.80 53.90 65.20 62.70 70.10 66.30
10.00 72.60 64.50 66.60 54.90 69.70 59.40 65.40 67.60 58.80 60.90 50.30 66.30 60.20 66.30 64.60
11.00 63.40 49.30 58.80 47.80 65.40 58.30 61.50 58.40 52.00 **** **** 62.90 58.40 63.40 53.90
12.00 60.70 47.40 54.60 51.40 63.30 55.60 53.20 52.10 50.40 **** **** 60.10 56.00 52.30 50.70
13.00 53.40 45.70 52.30 55.00 60.70 50.40 49.20 49.20 46.90 **** **** 57.50 54.00 50.50 49.80
14.00 52.80 45.40 49.30 57.50 51.80 44.20 46.10 46.80 45.60 **** **** 53.90 50.90 49.10 48.00
15.00 54.60 50.70 53.40 57.70 64.50 50.10 56.40 47.70 51.20 **** **** 60.20 53.80 50.70 46.50
16.00 66.90 55.10 54.40 58.20 68.80 56.40 57.20 51.70 53.40 **** **** 63.70 56.30 52.10 49.20
17.00 65.70 59.10 66.30 61.60 69.40 65.50 59.40 55.80 60.30 **** **** 64.20 61.70 60.10 54.90
18.00 69.80 62.30 68.40 67.30 70.20 72.50 64.90 59.60 63.40 **** 63.50 67.20 66.90 67.30 63.60
19.00 70.30 69.80 70.20 69.40 70.20 77.40 69.40 61.10 66.70 **** 65.30 67.70 70.80 69.60 66.20
20.00 74.40 74.30 74.30 72.20 76.30 79.90 70.10 75.10 70.80 **** 66.00 69.10 74.00 72.60 70.30
21.00 82.70 77.60 81.80 72.00 80.10 80.70 74.60 78.80 73.60 76.20 72.10 75.90 75.40 73.20 72.70
22.00 76.50 73.60 63.80 68.10 71.30 70.80 72.30 72.50 67.10 69.80 57.60 63.80 64.10 67.10 65.20
23.00 58.80 57.60 59.10 45.30 63.10 50.30 52.50 66.40 59.20 52.70 44.10 56.10 54.70 51.10 56.90
24.00 53.20 56.40 49.20 33.20 46.30 41.70 46.90 50.50 46.80 49.90 41.90 46.30 45.40 48.90 52.50
