
Power Systems Voltage Stability Using

Artificial Neural Network


Mohamad R. Khaldi, Member, IEEE

M. R. Khaldi is with the Department of Electrical Engineering, University of Balamand, P.O. Box 100, Tripoli, Lebanon (e-mail: m.khaldi@ieee.org).

Abstract--The steady-state operation of maintaining voltage stability is done by switching various controllers scattered all over the network. When a contingency occurs, whether forced or unforced, the dispatcher must alleviate the problem in minimum time, cost, and effort; a persistent problem may lead to a blackout. The dispatcher must apply the appropriate switching of controllers, in terms of type, location, and size, to remove the contingency and maintain voltage stability, since wrong switching may worsen the problem and itself lead to a blackout. This work proposes an Artificial Neural Network (ANN) to assist the dispatcher in this decision making. The ANN is used in static voltage stability to map a contingency instantaneously to a set of controllers from which the types, locations, and amounts of switching are induced. The work proposes the type and architecture of the ANN to be used and the training data size.
Index Terms--Neural network applications; Power system
control; Reactive power control; Voltage control.

I. INTRODUCTION

Scheduled maintenance, natural forces, severe load variations, and/or outages are classified as disturbances that often cause electromechanical oscillations [1] and can drive a power system to an abnormal steady-state operation. Following a disturbance, a power system's stable steady-state operating condition is disrupted. Stability here refers to the bus voltage profile being within the prescribed ±5% pu operational limits.
Reactive power compensation devices are placed in key
locations so that they can be used to control the bus voltage
profile. An operation engineer (or a dispatcher) coordinates
the compensation devices when a disturbance causes the
systems operating state to shift to an unstable but controllable
state. Therefore, one of the most important problems facing
power utilities is to coordinate the reactive power
compensation devices to maintain an acceptable bus voltage
profile while keeping operational cost minimum and assuring
systems stability to disturbances. In practice, the dispatcher
makes a decision on the location and the number of
compensators to be rescheduled and also the amount of compensation needed. The sequence, the timing, and the amount of switching are critical to avoid damaging devices and, ultimately, voltage collapse. Therefore, an Artificial Intelligence (AI) tool is justifiably needed to aid in the decision-making process.
Numerical optimization techniques have been used in power systems planning, contingency analysis, and control [2]-[4]. However, classical techniques are limited to problems that are quantitative in nature. On the other hand, AI systems are capable of handling both quantitative and qualitative problems [5]. The transformation of knowledge coupled with data processing is the quintessence of what is called a Knowledge-Based System (KBS). Accordingly, a KBS is chosen as a Computer-Aided Software Tool (CAST) to aid the operator in reaching a remedial action to the voltage problem. KBSs, as one form of AI, are used in power systems for load management and voltage stability [6],[7]. Another form of AI is the Artificial Neural Network (ANN), which has gained popularity and is used in various domains. Security assessment [8], voltage prediction [9], and load forecasting [10] are a few examples of ANN applications to power systems. Recently, an ANN was used online for the static voltage stability problem [11].
ANNs have proven effective in many areas such as medicine, engineering, and business, to name a few. In addition, ANNs have been used in applications such as known and unknown function mapping; image processing; pattern detection and clustering; filter design; data segmentation, compression, and mining; control systems; power systems; optimization and scheduling; and more. Despite this success, the design of an appropriate ANN for a specific application is not systematic and sometimes requires a trial-and-error approach. For instance, a supervised-learning Multi-Layer Perceptron (MLP) ANN requires a training data set. There are many unanswered questions, such as:
1. What is the best ANN type: Feed-Forward, Recurrent, Probabilistic, or another network?
2. What is the best or optimal size of the training data set?
3. How many hidden layers does the ANN require for the application to work best?
4. In a given hidden layer, what is the optimal number of neurons?
There have been many attempts to answer some of the above-mentioned questions. In fact, no one can claim that what works for one area or application necessarily works for another. In this work, we will try to answer some of the questions in the context of static voltage stability of a power system running at steady state.
It sounds obvious that the larger the training data set, the better the ANN performance will be. However, in reality this is not the case. Huang et al. found an optimal training data set for financial forecasting by using the mean-change-point test and showed that a larger training data set does not necessarily produce better forecasting performance [12]. This work will also show that a larger training data set does not necessarily lead to a better result.
There have been many attempts to determine the optimal number of neurons per hidden layer. Singular Value Decomposition and systematic approaches have been used to find the smallest possible number of neurons in a single-hidden-layer feedforward ANN [13]-[17].

II. PROBLEM FORMULATION

A. Power System Network Model
The admittance matrix of the interconnected power system, Y_{bus}, can be constructed by

    Y_{bus} = \Delta( \Sigma( Y_{prm} + Y_{chg} ) ) - Y_{prm}    (1)

where, for an n \times n square matrix A, B = \Sigma(A) is an n \times 1 vector whose elements are the sums of the corresponding rows of A, and C = \Delta(B) is an n \times n diagonal matrix whose diagonal elements are the elements of the n \times 1 vector B. Furthermore, Y_{prm} and Y_{chg} are the primary and the half-charging matrices, respectively. Note that the subscripts (\cdot)_{bus} are neglected for simplicity.

B. Mismatch Power Equation
Using the vectorized approach, the mismatch power equation is

    S_{bus} = \Delta( V_{bus} ) Y_{bus}^* V_{bus}^*    (2)

where S_{bus} = P_{bus} + j Q_{bus} \in C^{n \times 1} is the total injected complex power and V_{bus} = \Delta(|V_{bus}|) e^{j \theta_{bus}} \in C^{n \times 1} is the voltage profile. Equation (2) represents a vectorized set of highly coupled nonlinear equations. Thus, the power flow problem is to solve (2) for the PQ-bus voltage profile, the required reactive power for the PV buses, and the complex power for the slack bus. Thus far, an analytical solution to (2) has not been found! Normally, numerical methods are used to find a solution, if it exists. The Newton-Raphson method, a fast and efficient numerical approach that is commonly used to solve (2), is based upon the Taylor series expansion with respect to the voltage magnitudes and the voltage phase angles about a nominal steady-state operating point,

    dS = (\partial S / \partial |V|) d|V| + (\partial S / \partial \theta) d\theta + h.o.t.    (3)

Separating real and imaginary parts and collecting terms, (3) expressed in matrix-vector form results in

    \begin{bmatrix} dP \\ dQ \end{bmatrix} = \begin{bmatrix} J_{P\theta} & J_{P|V|} \\ J_{Q\theta} & J_{Q|V|} \end{bmatrix} \begin{bmatrix} d\theta \\ d|V| \end{bmatrix} + h.o.t.    (4)

or dW = J dX + h.o.t., where dW = [dP^T dQ^T]^T, dX = [d\theta^T d|V|^T]^T, and J_{xy} \in C^{n \times n}. Clearly, J \in C^{2n \times 2n} is the Jacobian matrix whose entries are the partial derivatives of the active and reactive powers with respect to the phasor voltage magnitudes and the phasor voltage angles. The sensitivity matrices of the total injected complex power with respect to the magnitude and the phase angle of the voltage profile are shown in (5) and (6), respectively:

    \partial S / \partial |V| = \Delta( e^{j\theta} ) \Delta( Y^* V^* ) + \Delta( V ) Y^* \Delta( e^{-j\theta} )    (5)

    \partial S / \partial \theta = j \Delta( V ) \{ \Delta( Y^* V^* ) - Y^* \Delta( V^* ) \}    (6)

For a detailed derivation of the system's model and the sensitivity matrices, please refer to [18]. After neglecting the higher-order terms, (4) can be written as

    dX = J^{-1} dW    (7)

The solution of (7) is obtained iteratively,

    X(k+1) = X(k) + J^{-1} ( W_{sch} - W(k) )    (8)

where k is the iteration index, W_{sch} \in R^{2n \times 1} is a known vector whose entries are the scheduled complex powers, and W(k) \in R^{2n \times 1} is a vector whose entries are the calculated real and reactive powers, using (2), at the kth iteration.

C. The Load-Bus Voltage Profile
The PQ or Load-Bus voltage magnitude profile computed in (8) is clearly affected by the PV or Generation-Bus voltage magnitude profile, the reactive bank compensation, and the settings of the under-load tap changing transformers. Consequently, the following function:

    |V_l| = f( |V_g|, Y_b, t )    (9)

where |V_l| \in R^{l \times 1} and |V_g| \in R^{g \times 1} represent the Load-Bus and the Generation-Bus voltage magnitude profiles, respectively, and Y_b \in R^{b \times 1} and t \in R^{t \times 1} represent the susceptances of the static reactive power (VAR) compensators and the tap settings of the under-load tap changing transformers. |V_g|, Y_b, and t are viewed as the compensators or the controllers. The function in (9) is a highly nonlinear and coupled set of equations and is very difficult, if not impossible, to find analytically.
When a contingency occurs (i.e., a stable but abnormal state), some of the Load-Bus voltage magnitudes fall outside the allowable operational limit of ±5% pu. The static control problem of voltage stability can be stated as follows: select and switch a compensator or a group of compensators so that the contingency is lifted.
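For illustration only (the paper's experiments use Matlab), the iterative solution (8) can be sketched in Python/NumPy as below. This is a minimal sketch under stated assumptions: all names are illustrative, every bus is treated as a PQ bus for brevity (slack- and PV-bus handling is omitted), and a finite-difference Jacobian with a pseudo-inverse stands in for the analytic sensitivities (5) and (6).

    import numpy as np

    def calc_W(Ybus, V, theta):
        # eq. (2): S = Delta(V) Y* V*, stacked as W = [P; Q]
        Vc = V * np.exp(1j * theta)
        S = Vc * np.conj(Ybus @ Vc)
        return np.concatenate([S.real, S.imag])

    def jacobian(Ybus, V, theta, eps=1e-7):
        # finite-difference stand-in for the analytic sensitivities (5)-(6)
        n = len(V)
        W0 = calc_W(Ybus, V, theta)
        J = np.zeros((2 * n, 2 * n))
        for i in range(2 * n):
            th, Vm = theta.copy(), V.copy()
            if i < n:
                th[i] += eps       # perturb one phase angle
            else:
                Vm[i - n] += eps   # perturb one voltage magnitude
            J[:, i] = (calc_W(Ybus, Vm, th) - W0) / eps
        return J

    def power_flow(Ybus, Wsch, V0, theta0, tol=1e-6, max_iter=20):
        # eq. (8): X(k+1) = X(k) + J^-1 (Wsch - W(k)), with X = [theta; |V|]
        V, theta = V0.copy(), theta0.copy()
        n = len(V)
        for _ in range(max_iter):
            dW = Wsch - calc_W(Ybus, V, theta)   # power mismatch
            if np.max(np.abs(dW)) < tol:
                break
            J = jacobian(Ybus, V, theta)
            dX = np.linalg.pinv(J) @ dW          # eq. (7); pinv tolerates the
            theta += dX[:n]                      # angle-reference singularity
            V += dX[n:]                          # left by omitting a slack bus
        return V, theta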
An ANN is trained to map a profile of controller settings to alleviate a contingency and to put the power system back into operation instantaneously, as shown in Fig. 1 [11].

Fig. 1. Simultaneous voltage control of a power system using ANN.
III. ARTIFICIAL NEURAL NETWORK
A. ANN Overview
An ANN, which simulates the behavior of the human brain, is an interconnected network of neurons. A neuron is a summing junction of n weighted inputs and a weighted bias that yields a single output. A set of neurons forms a layer. A typical ANN architecture has inputs, one output layer, and one or more hidden layers. This type of ANN is known as a Multi-Layer Perceptron Feed-Forward (MLPFF) network. Fig. 2 shows the architecture of a two-layer MLPFF ANN.
The input layer receives information from an external source (in our case, the power system) and passes it to the hidden layer, where the information is processed. The hidden layer passes the processed information to the output layer, which sends the results out to an external receptor, again the power system, as shown in Fig. 1. Every input is connected to every neuron in the hidden layer. A weight is assigned to every connection and a bias is added to every neuron. Similarly, there are weighted connections between the hidden and the output layers, and biases are added to every neuron in the output layer.
The most important part in the development of an ANN is the training or learning phase. In this phase, the weights are adjusted according to every training set of input-output patterns. This process is repeated so that the desired outputs are obtained for given sets of inputs. When the weights are adjusted, a recall phase begins. In the recall or remember phase, several training sets of inputs are presented to the ANN and the outputs are observed. Refinement and adjustment of the weights take place in this phase to assure the reliability and robustness of the ANN.
There are two main architectures of an ANN: Feed-Forward and Recurrent/Feedback networks. Examples of the Feed-Forward type are the Single-Layer Perceptron, the Multi-Layer Perceptron, and Radial Basis Functions. Competitive, Kohonen Self-Organizing Map (SOM), Hopfield, and Adaptive Resonance Theory (ART) models are examples of Recurrent/Feedback networks. The training techniques of an ANN can be either supervised (inputs and target outputs are required) or unsupervised (target outputs are not required).
B. ANN Mathematical Modeling
The two-layer MLPFF ANN, shown in Fig. 2, has an input vector, v = [v_1 v_2 \cdots v_n]^T \in R^{n \times 1}, that represents the load-bus voltage magnitudes, and an output vector, c = [c_1 c_2 \cdots c_m]^T \in R^{m \times 1}, that represents the controllers organized in the following order: generators, capacitive banks, inductive banks, and under-load tap-changing transformers. The outputs of the 1st hidden layer are represented by o^{h_1} = [o_1^{h_1} o_2^{h_1} \cdots o_{h_1}^{h_1}]^T \in R^{h_1 \times 1}. Moreover, the weights of the additional input of constant one to every neuron, or simply the biases, are b^{h_1} = [b_1^{h_1} b_2^{h_1} \cdots b_{h_1}^{h_1}]^T \in R^{h_1 \times 1} for the hidden layer and b^m = [b_1^m b_2^m \cdots b_m^m]^T \in R^{m \times 1} for the output layer. The functions f^{h_1} and f^m represent the transfer functions of the 1st hidden and the output layers, respectively. The weights, not shown in Fig. 2, of the connections between the inputs and the hidden layer and between the hidden layer and the output layer are w^{nh_1} \in R^{n \times h_1} and w^{h_1 m} \in R^{h_1 \times m}, respectively.

Fig. 2. Architecture of a two-layer MLPFF ANN.

The outputs of the hidden and output layers are computed as shown in (10) and (11), respectively:

    o^{h_1} = f^{h_1}( w^{nh_1} v + b^{h_1} )    (10)

    c = f^m( w^{h_1 m} o^{h_1} + b^m )    (11)

And the outputs in terms of the inputs are as follows:

    c = f^m( w^{h_1 m} f^{h_1}( w^{nh_1} v + b^{h_1} ) + b^m )    (12)

The transfer (or activation) functions are limiting functions. The threshold, piecewise-linear, sigmoid, and Gaussian functions are a few popular examples.
The error between the kth desired output or target, y_k^t, and the network's kth output, c_k^t, at instant t is shown in (13):

    E_k^t = y_k^t - c_k^t    (13)

It is common practice to use the Least Squared Error (LSE) to define the total error:

    E = \frac{1}{2} \sum_{t=1}^{T} \sum_{k=1}^{m} (E_k^t)^2    (14)

where T is the number of instances used for training and m is the total number of output neurons.
Thus, in order to find the minimum value of E, it is necessary to solve an optimization problem for all the weights and biases. Since the function E is known analytically and is differentiable, it is possible to use gradient-based methods like steepest descent or the more efficient conjugate gradient methods.
The weights and the biases are updated as

    w^{ih}(k+1) = w^{ih}(k) - \eta (\partial E / \partial w^{ih})|_{w^{ih}(k)}    (15)

where \eta is the learning rate, also known as the coefficient of proportionality, that determines the step size in the steepest-descent algorithm. Expressions similar to (15) are used for w^{ho}, b^h, and b^o.
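A minimal NumPy sketch of the forward pass (10)-(12) and one steepest-descent update (15) is given below, assuming tanh as the transfer function for both layers and illustrative dimensions; none of the names come from the paper's code, and targets are assumed scaled into (-1, 1).

    import numpy as np

    rng = np.random.default_rng(0)
    n, h1, m = 24, 10, 14                # illustrative: inputs, hidden, outputs
    W1 = rng.normal(0.0, 0.1, (h1, n))   # input-to-hidden weights (w^{nh1})
    b1 = np.zeros(h1)                    # hidden biases (b^{h1})
    W2 = rng.normal(0.0, 0.1, (m, h1))   # hidden-to-output weights (w^{h1m})
    b2 = np.zeros(m)                     # output biases (b^m)

    def forward(v):
        o_h1 = np.tanh(W1 @ v + b1)      # eq. (10), tanh for tan-sigmoid
        c = np.tanh(W2 @ o_h1 + b2)      # eq. (11); composed, eq. (12)
        return o_h1, c

    def sgd_step(v, y, eta=0.01):
        # one steepest-descent update, eq. (15), on the LSE of (13)-(14)
        global W1, b1, W2, b2
        o_h1, c = forward(v)
        e = c - y                            # negative of the error in (13)
        d2 = e * (1.0 - c**2)                # output delta (tanh derivative)
        d1 = (W2.T @ d2) * (1.0 - o_h1**2)   # hidden delta, backpropagated
        W2 -= eta * np.outer(d2, o_h1); b2 -= eta * d2
        W1 -= eta * np.outer(d1, v);    b1 -= eta * d1

    # example update on one random input-target pair
    sgd_step(rng.uniform(0.95, 1.05, n), rng.uniform(-1, 1, m))

The paper itself trains with the more efficient Levenberg-Marquardt algorithm (Section IV); the plain gradient step above only illustrates update rule (15).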

IV. SIMULATION

A. Test Power System


To illustrate the effectiveness of the proposed ANN voltage control, the IEEE 30-bus power system, shown in Fig. 3, is considered. The control devices are six generators at buses 1, 2, 5, 8, 11, and 13; four capacitive banks at buses 7, 15, 17, and 20; and four ULTC transformers between buses 6 and 9 (line 11), 6 and 10 (line 12), 4 and 12 (line 15), and 28 and 27 (line 36). Thus, the total number of compensators, which constitutes the number of neurons of the output layer, is 14. The total number of inputs to the ANN is 24, corresponding to the 24 load buses.
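For concreteness, the controller set just described can be laid out as a plain data listing (bus and line numbers are taken from the text above):

    controllers = (
        [("generator", f"bus {b}") for b in (1, 2, 5, 8, 11, 13)]
        + [("capacitive bank", f"bus {b}") for b in (7, 15, 17, 20)]
        + [("ULTC transformer", f"line {l}") for l in (11, 12, 15, 36)]
    )
    load_buses = 24                  # ANN inputs: load-bus voltage magnitudes
    assert len(controllers) == 14    # ANN outputs: one per controller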
B. Data Set Selection
Should a contingency occur, either an experienced dispatcher or an AI expert system would recommend the type and location of the controllers and the amount of switching of the selected controllers. For a given combination of controllers and their settings, the voltage profile is found by running the power flow algorithm. Thus, to generate input and output data (simply, a data set), one must go through several contingencies (cases) and try to find their remedial solutions (targets). However, this approach is time consuming, especially for large power systems, and it is not comprehensive. Instead, we propose to work backward: the controllers are switched randomly from their minimum to their maximum values with changeable incremental values and, for each set of controllers, the voltage profile is calculated. Since we are interested in stable operation, all sets of controllers that lead to voltage profiles outside the allowable limits are not included in the data set. The data set is then used to train, validate, and test the ANN (a sketch of this procedure follows below).

Fig. 3. The IEEE 30-Bus Power System.
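A sketch of this backward data generation is given below, assuming a hypothetical run_power_flow solver (the Newton-Raphson procedure of Section II) and elementwise controller limit vectors u_min and u_max; the ±5% pu filter is the operational limit quoted earlier.

    import numpy as np

    rng = np.random.default_rng(1)

    def generate_data_set(size, u_min, u_max, run_power_flow):
        # u_min/u_max: vectors of the 14 controller limits; run_power_flow
        # maps a controller setting to the 24 load-bus voltage magnitudes.
        inputs, targets = [], []
        while len(inputs) < size:
            u = rng.uniform(u_min, u_max)          # random controller settings
            V = run_power_flow(u)                  # load-bus voltage magnitudes
            if np.all(np.abs(V - 1.0) <= 0.05):    # keep only profiles within
                inputs.append(V)                   # the +/-5% pu limits
                targets.append(u)                  # ANN target: the settings
        return np.array(inputs), np.array(targets)

Note the inversion of roles: the voltage profile becomes the ANN input and the controller settings become the target, which is exactly the mapping the dispatcher needs.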
No one knows exactly how large the data set ought to be; in fact, this is still an unanswered question. If the data set is too small, the ANN may underfit, and if the data set is too large, the ANN is said to memorize, or overfit. To shed some light on this issue, we generated 1000 contingencies (i.e., voltage profiles outside ±5% pu) and ran the algorithm shown in Fig. 4.
Fig. 4. Algorithm for testing the ANN with different training data set dimensions. (Starting with a data set of size 500, the ANN is trained, validated, and tested; each fabricated contingency is applied to the ANN and the MSE and the percent hit are calculated; the data set is then increased by 500 and the procedure is repeated until the maximum data set size is reached.)
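The loop of Fig. 4 could be sketched as follows, where train_ann and the restores_profile predicate are hypothetical stand-ins for the training, validation, testing, and percent-hit evaluation described above:

    def sweep_data_set_size(data, contingencies, train_ann,
                            step=500, max_size=10500):
        # train_ann(subset) -> (ann, mse); ann.restores_profile(v) -> bool is
        # a hypothetical predicate: the controller set returned by the ANN
        # brings the faulted profile v back within the allowable limits.
        results = {}
        for size in range(step, max_size + 1, step):
            ann, mse = train_ann(data[:size])          # train, validate, test
            hits = sum(ann.restores_profile(v) for v in contingencies)
            results[size] = (mse, 100.0 * hits / len(contingencies))
        return results   # MSE and percent hit per size, as plotted in Fig. 5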

The data set ranges from 500 to 10500 different pairs of input/output vectors. Fig. 5 shows that although the Mean Squared Error (MSE) was relatively higher for the smaller data sets, the percent hit is 100% for all data set sizes. The percent hit means that the ANN was able to generate a set of controllers that brought the voltage profile back to its normal condition. Therefore, one can conclude that, for this particular problem, the size of the data set is not a critical issue. As a rule of thumb, one would select a data set size that leads to a 100% hit with the least MSE.

Fig. 5. The ANN performance with different training data set dimensions.

C. Network Topology
In an MLPFF ANN, the number of hidden layers and the number of neurons per hidden layer are still being researched heavily. Some say that an MLPFF ANN with one hidden layer with enough neurons works well [19]. The number of hidden neurons was recommended to be

    m_{h_l} = \frac{2}{3} (n + m)    (16)

where m_{h_l}, n, and m are the numbers of neurons in the lth hidden layer, inputs, and outputs, respectively. Thus, the topology of the MLPFF ANN to be used is n : m_{h_1} : \cdots : m_{h_l} : m. The IEEE 30-bus power system has 24 inputs and 14 outputs; consequently, the number of neurons of the first hidden layer was chosen to be 26 in accordance with (16), and the MLPFF ANN topology is 24:26:14. Other researchers recommend that the number of hidden neurons be

    m_{h_l} = c \sqrt{nm}    (17)

where c is an arbitrary coefficient that depends on the nature of the application [13]. In accordance with (17), the number of hidden neurons will be 19 if c is assumed to be 1.
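As a quick check, evaluating (16) and (17) for n = 24 and m = 14, rounding up, reproduces the 26 and 19 hidden neurons quoted above:

    from math import ceil, sqrt

    n, m = 24, 14                          # ANN inputs and outputs
    m_h_rule_16 = ceil(2 * (n + m) / 3)    # eq. (16): 2/3 * 38 -> 26 neurons
    m_h_rule_17 = ceil(1 * sqrt(n * m))    # eq. (17), c = 1: sqrt(336) -> 19
    print(m_h_rule_16, m_h_rule_17)        # prints: 26 19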
A data set of 1000 pairs of input/output vectors is selected in accordance with the discussion in Section IV.B. Of the data set used to train the MLPFF ANN, 25% is used for validation and 10% is used for testing. The tan-sigmoid transfer function was used for both the hidden and the output layers. The Levenberg-Marquardt optimization algorithm is used to train the network, updating the weight and bias values.
Since there are no precise rules for selecting the number of hidden neurons, we selected a network with one hidden layer with 1 neuron (i.e., 24:1:14), then increased the number of neurons by one until 10 neurons were reached, and then by 5 neurons until 100 neurons were reached. For every ANN architecture, the network is trained, validated, tested, and then assessed. The results are shown in Fig. 6, where the percent hits and the MSEs are plotted versus the number of neurons in the 1st hidden layer. If we choose as a criterion the number of neurons that leads to a 100% percent hit with the least MSE, Fig. 6 clearly shows that an increase in the number of neurons in the hidden layer does not necessarily lead to better results, as the percent hit deteriorated noticeably. As a result, the best MLPFF ANN architecture is 24:10:14, which is not what was recommended in (16) and (17). (A sketch of this sweep follows below.)
To test the effectiveness of the trained network, 1000 new contingencies (i.e., not part of the training data set) were fabricated. When one of the new contingencies is presented to the trained network, an instant solution is deduced. Fig. 7 shows a sample of the faulted voltage profile after a contingency and the repaired voltage profile after the implementation of the trained network's solution. The recommended settings of the controllers are shown in Table I.
Choosing the appropriate training data set size and the number of neurons in the hidden layer is critical if the ANN is to be trained online. The training time per epoch increased almost exponentially with the training data set size. The same observation is true for the training time per epoch with respect to the number of neurons in the hidden layer, Fig. 8. The Neural Network Toolbox of Matlab is used and the simulation is carried out on a PC with a 2.66 GHz Intel Core2 Duo processor and 4 GB of RAM.
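The neuron sweep described above can be sketched as follows; train_and_score is a hypothetical helper that trains and assesses one 24:h:14 network and returns its MSE and percent hit:

    def neuron_sweep(train_and_score):
        # train_and_score(n_in, h, n_out) -> (mse, percent_hit), hypothetical
        sizes = list(range(1, 11)) + list(range(15, 101, 5))  # 1..10, 15..100
        scores = {h: train_and_score(24, h, 14) for h in sizes}
        perfect = [h for h in sizes if scores[h][1] == 100.0]  # 100% hit only
        best = min(perfect, key=lambda h: scores[h][0])        # least MSE
        # in the paper's experiment this criterion gave h = 10 (24:10:14)
        return best, scores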

Fig. 6. The ANN performance with different numbers of neurons in the 1st hidden layer.

Fig. 7. The faulted (dashed-line) and the corrected (solid-line) voltage profiles (voltage level versus bus number).

TABLE I
THE TRAINED NETWORK'S RECOMMENDED SETTINGS OF THE CONTROLLERS FOR THE CASE OF FIG. 7

Type        | Location | Action   | Amount
Generator 1 | Bus 1    | Decrease | 1.68%
Generator 2 | Bus 2    | Decrease | 1.37%
Generator 3 | Bus 5    | Decrease | 0.13%
Generator 4 | Bus 8    | Increase | 1.53%
Generator 5 | Bus 11   | Increase | 0.47%
Generator 6 | Bus 13   | Decrease | 1.18%
Capacitor 1 | Bus 7    | Increase | 5.42%
Capacitor 2 | Bus 15   | Increase | 19.03%
Capacitor 3 | Bus 17   | Decrease | 12.51%
Capacitor 4 | Bus 20   | Decrease | 6.77%
ULTC 1      | Line 11  | Decrease | 5.84%
ULTC 2      | Line 12  | Decrease | 0.93%
ULTC 3      | Line 15  | Increase | 2.68%
ULTC 4      | Line 36  | Decrease | 3.32%
Fig. 8. The training time per epoch with respect to the number of neurons in the hidden layer.

V. CONCLUSION
A Multi-Layer Perceptron Feed-Forward Artificial Neural Network with one hidden layer was used to instantaneously map a solution to a faulted power system. For the IEEE 30-bus power system, it was shown that the best network architecture was 24:10:14. This network led to a 100% percent hit and a relatively small MSE. The work also showed that, for this particular case, the size of the training data set does not have to be excessively large. Although the results demonstrate the effectiveness of the proposed work, it has the following shortcomings. The architecture of the trained network depends on the topology of the power system. The trained network utilizes all available controllers, whereas a problem may be eliminated with one or two controllers. Although one may choose not to consider certain controllers, the trained network still cannot prioritize the switching of the controllers.
VI. REFERENCES
[1] M. R. Khaldi, A. K. Sarkar, K. Y. Lee, and Y. M. Park, "The Modal Performance Measure for Parameter Optimization of Power System Stabilizer," IEEE Transactions on Energy Conversion, vol. 8, no. 4, pp. 660-666, December 1993.
[2] K. R. C. Mamandur, "Emergency Adjustments to VAR Control Variables to Alleviate Over-Voltages, Under-Voltages and Generated VAR Limit Violations," IEEE Transactions on Power Apparatus and Systems, vol. PAS-101, no. 5, pp. 1040-1047, May 1982.
[3] J. Zaborszky, G. Huang, and K. W. Lu, "A Textured Model for Computationally Efficient Reactive Power Control and Management," IEEE Transactions on Power Apparatus and Systems, vol. PAS-104, pp. 1718-1727, July 1985.
[4] K. Iba, H. Suzuki, K. I. Suzuki, and K. Suzuki, "Practical Reactive Power Allocation/Operational Planning Using Successive Linear Programming," IEEE/PES Winter Meeting 1987, paper no. 055-7.
[5] K. Tomsovic and D. M. Faleo, "Advanced power system controls using intelligent systems," IEEE Power Engineering Society Summer Meeting 2000, vol. 1, pp. 336-33.
[6] M. R. Khaldi, "An Intelligent Cognitive Expert System for Voltage Control in Power Systems," Proceedings of the 2003 IEEE Conference on Control Applications (CCA 2003), vol. 1, pp. 319-324, June 2003.
[7] C. C. Liu and K. Tomsovic, "An Expert System Assisting Decision-Making of Reactive Power/Voltage Control," IEEE Transactions on Power Systems, vol. 1, no. 3, pp. 195-201, August 1986.
[8] M. Aggoune, M. A. El-Sharkawi, D. C. Park, M. J. Damborg, and R. J. Marks II, "Preliminary Results on Using Artificial Neural Networks for Security Assessment," Proceedings of the IEEE Conference on Decision and Control, pp. 252-258, 1989.
[9] K. C. Hui and M. J. Short, "Voltage Security Monitoring, Prediction and Control by Neural Networks," IEE International Conference on Advances in Power System Control, Operation and Management, pp. 889-894, November 1991.
[10] K. Y. Lee, Y. T. Cha, and J. H. Park, "Short-Term Load Forecasting Using Artificial Neural Network," IEEE Transactions on Power Systems, vol. 7, pp. 1-8, February 1992.
[11] M. R. Khaldi, "Neural Networks and Static Voltage Stability in Power Systems," IEEE International Conference on Industrial Technology (IEEE ICIT 2008), Sichuan University, Chengdu, China, April 21-24, 2008, paper ID ZD-007498.
[12] W. Huang, Y. Nakamori, S. Wang, and H. Zhang, "Select the Size of Training Set for Financial Forecasting with Neural Networks," Lecture Notes in Computer Science 3497, Springer-Verlag, Berlin/Heidelberg, pp. 879-884, 2005.
[13] M. Hayashi, "A Fast Algorithm for the Hidden Units in a Multilayer Perceptron," Proceedings of the 1993 International Joint Conference on Neural Networks, pp. 339-342, 1993.
[14] H. Su, B. Zhao, and S. Xia, "A Construction Method of Feedforward Neural Network for Selecting Effective Hidden Nodes," Proceedings of the IEEE International Conference on Neural Networks, vol. 3, pp. 1901-1906, June 9-12, 1997.
[15] S. Tamura, "Method of Determining an Optimal Number of Neurons Contained in Hidden Layers of a Neural Network," U.S. Patent 5 596 681, January 21, 1997.
[16] E. J. Teoh, K. C. Tan, and C. Xiang, "Estimating the Number of Hidden Neurons in a Feedforward Network Using the Singular Value Decomposition," IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1623-1629, November 2006.
[17] E. Rigoni and A. Lovison, "Automatic sizing of neural networks for function approximation," IEEE International Conference on Systems, Man and Cybernetics, pp. 2005-2010, October 7-10, 2007.
[18] M. R. Khaldi, "Sensitivity Matrices for Reactive Power Dispatch and Voltage Control of Large-Scale Power Systems," WSEAS Transactions on Circuits and Systems, vol. 3, issue 9, pp. 1918-1923, November 2004.
[19] S. Lawrence, C. L. Giles, and A. C. Tsoi, "What Size Neural Network Gives Optimal Generalization? Convergence Properties of Backpropagation," University of Maryland Technical Report CS-TR-3617.

VII. BIOGRAPHIES
Mohamad R. Khaldi (M'1992) was born in Tripoli, Lebanon, on April 4, 1962. He received the B.S. and M.S. degrees in Electrical Engineering from the State University of New York at Binghamton in 1987 and 1989, respectively. He earned his Ph.D. in Electrical Engineering from the Pennsylvania State University in 1995. He has been with the faculties of engineering of Pennsylvania State University, Notre Dame University, and the University of Balamand, where he is currently an Associate Professor in Electrical Engineering. His main interests include systems and control, and power systems.
