
EE0102

NEURAL NETWORK APPLICATIONS IN POWER ELECTRONICS AND MOTOR DRIVES

1. MR. J. HARISH REDDY    2. MR. MD. IMTIAZ

Abstract:

Artificial intelligence (AI) techniques, particularly neural networks, have recently had a significant impact on power electronics and motor drives. Neural networks have created a new and advancing frontier in power electronics, which is already a complex and multidisciplinary technology that has been going through dynamic evolution in recent years. This paper gives a comprehensive introduction and perspective on neural network applications in intelligent control and estimation for the power electronics and motor drives area. The principal neural network topologies that are currently most relevant for applications in power electronics are reviewed, including a detailed description of their properties. Both feedforward and feedback (recurrent) architectures are covered. The application examples discussed in this paper include nonlinear function generation, delayless filtering and waveform processing, feedback signal processing of a vector drive, space vector PWM of two-level and multilevel inverters, adaptive flux vector estimation, and their combination in a vector-controlled ac drive. From the current trend of the technology, it appears that neural networks will find widespread application in power electronics and motor drives in the future.

Key words:

Back propagation network, induction motor drive, intelligent control and estimation, neural network,
perceptron, recurrent network, space vector PWM.

Conclusion:

This paper gives a broad and comprehensive discussion of neural network principles and their applications in power electronics and motor drives, including several application examples. With the current trend of R&D, it is expected that viable and inexpensive neural network ASIC chips will be available in the future, and this will promote widespread neural network applications in power electronics and motor drives.

1. III Year EEE, CVR COLLEGE OF ENGG (harishredish@yahoo.co.in)
2. III Year EEE, CVR COLLEGE OF ENGG (sriram_valluri@yahoo.com)
Introduction:

The model of an artificial neuron that closely matches the biological neuron is shown in Fig. 1(b). Basically, it has an op-amp summer-like structure. The artificial neuron (or simply neuron) is also called a processing element (PE), neurode, node, or cell. Each input signal (continuous variable or discrete pulses) flows through a gain or weight (called a synaptic weight or connection strength), which can be positive (excitatory) or negative (inhibitory), integer or noninteger. The summing node accumulates all the input-weighted signals, adds the weighted bias signal b, and then passes the result to the output through the nonlinear (or linear) activation or transfer function (TF), as shown.

Fig. 2. Common activation functions: (a) linear bipolar, (b) threshold (hard limit), (c) signum, (d) sigmoidal or log-sigmoid, F(S) = 1/(1 + e^(-αS)), (e) hyperbolic tan or tan-sigmoid, F(S) = (e^(αS) - e^(-αS))/(e^(αS) + e^(-αS)), plotted for α = 0.5, 1, and 2.
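The neuron computation described above is simple enough to state in a few lines of code. The sketch below is a minimal Python/NumPy illustration of one neuron (weighted sum of the inputs plus the bias, passed through a transfer function); the weight and input values are made up for the example.

import numpy as np

def neuron(x, w, b, tf=np.tanh):
    # Summing node: S = sum of the weighted inputs plus the bias signal b
    s = np.dot(w, x) + b
    # Output through the activation/transfer function (TF), here hyperbolic tan
    return tf(s)

# Made-up example: two positive (excitatory) weights and one negative (inhibitory) weight
x = np.array([0.8, -0.3, 0.5])      # input signals
w = np.array([1.2, 0.7, -0.9])      # synaptic weights (connection strengths)
b = 0.1                             # bias
print(neuron(x, w, b))              # neuron response, squashed into (-1, +1)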
1) Activation Functions: Several common types of activation functions used in an artificial neuron are shown in Fig. 2. These are defined, respectively, as linear (bipolar), threshold, signum, sigmoidal (or log-sigmoid), and hyperbolic tan (or tan-sigmoid). Another type of function that is often used is the Gaussian function, but it is not included here. The magnitude of these functions varies between 0 and 1, or between -1 and +1, as indicated. The linear function can be unipolar or bipolar. With a slope of infinity, it transforms into the threshold or signum function, respectively. The sigmoidal and hyperbolic tan functions are commonly used in power electronic systems. Their mathematical expressions are included in the respective figures, where α (alpha) is the gain or coefficient that adjusts the slope or sensitivity. Both of these functions are differentiable, and the derivative dF/dS is maximum at S = 0. All these functions are squashing in nature, i.e., they limit the neuron response between the asymptotic values. Note that the nonlinear activation function contributes to the nonlinear transfer characteristics of the neuron that permit the nonlinear input–output mapping of the NNW, which will be discussed later. With a linear activation function, however, this nonlinearity is lost.

III. STRUCTURES OF ARTIFICIAL NEURAL NETWORK

A. Network Models

The interconnection of artificial neurons results in a NNW (often called a neurocomputer or connectionist system in the literature), and its objective is to emulate the function of the human brain in a certain domain to solve scientific, engineering, and many other real-life problems. The structure of the biological neural network is not well understood, and therefore, many NNW models have been proposed. A few NNW models can be listed from the literature [1], [2] as follows.

1) Perceptron
2) Adaline and Madaline
3) Backpropagation (BP) Network
4) Radial Basis Function Network (RBFN)
5) Modular Neural Network (MNN)
6) Learning Vector Quantization (LVQ) Network
7) Fuzzy Neural Network (FNN)
8) Kohonen's Self-Organizing Feature Map (SOFM)
9) Adaptive Resonance Theory (ART) Network
10) Real-Time Recurrent Network
11) Elman Network
12) Hopfield Network
13) Boltzmann Machine
14) Recirculation Network
15) Brain-State-In-A-Box (BSB)
16) Bi-Directional Associative Memory (BAM) Network

Generally, NNWs can be classified as feedforward and feedback (recurrent) types. In the feedforward class, the signals flow only in the forward direction (see Figs. 3 and 4), whereas in a recurrent neural network (RNN), the signals can flow in the forward as well as backward or lateral directions (see Fig. 5). A network can be defined as static or dynamic, depending on whether it emulates a static or a dynamic system. A NNW is characterized by its input–output mapping property. For static mapping, the feedforward NNWs are important, whereas for dynamic or temporal mapping, the RNNs are important [1]. A detailed description of all the NNW topologies is beyond the scope of this paper. Only a few topologies that are most relevant for power electronic systems will be discussed. Currently, around 90% of NNW applications use the feedforward architecture; in particular, the backpropagation network is the most popular [1]. Therefore, the description of this topology and its applications will be emphasized in this paper.

Fig. 3. (a) Perceptron network. (b) Illustration of pattern classification boundary for the upper perceptron.
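As a quick numerical check of the squashing behavior and the role of the gain α described above, the following Python/NumPy snippet evaluates the log-sigmoid and tan-sigmoid functions and verifies that the derivative dF/dS peaks at S = 0; the sample values of α are the ones plotted in Fig. 2.

import numpy as np

def log_sigmoid(s, alpha=1.0):
    # F(S) = 1 / (1 + e^(-alpha*S)); output squashed into (0, 1)
    return 1.0 / (1.0 + np.exp(-alpha * s))

def tan_sigmoid(s, alpha=1.0):
    # F(S) = (e^(alpha*S) - e^(-alpha*S)) / (e^(alpha*S) + e^(-alpha*S)); squashed into (-1, +1)
    return np.tanh(alpha * s)

s = np.linspace(-5.0, 5.0, 1001)
for alpha in (0.5, 1.0, 2.0):               # alpha adjusts the slope or sensitivity
    f = tan_sigmoid(s, alpha)
    dfds = np.gradient(f, s)                # numerical derivative dF/dS
    k = dfds.argmax()
    print(f"alpha={alpha}: max dF/dS = {dfds[k]:.2f} at S = {s[k]:.2f}")
    # the maximum occurs at S = 0, where dF/dS equals alpha for the tan-sigmoid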

B. Perceptron Network

A perceptron is a simple type of single-layer feedforward NNW which is used for classification of input signal patterns that are linearly separable. It was invented by Frank Rosenblatt in the late 1950s [1]. The general structure of a perceptron network (multiple perceptrons) with S neurons that use a hard-limit unipolar (or threshold) TF is shown in Fig. 3(a). The general input signal vector p has R elements and is multiplied by the weight matrix W (S × R) before summation. An element of W is given by Wi,j, where the weight connects the j-th input element with the i-th neuron. The resulting Wp vector is added to the bias vector b to constitute the vector n = Wp + b. Each bias signal of a perceptron is generated from a +1 source through a weight, as shown. The output vector is then given by a = hardlim(Wp + b). The weight is normally indicated by a dot in each link, but is not shown here. The pattern classification property for the upper perceptron only is explained in Fig. 3(b) for the P1 and P2 inputs, where W1,1 = -1 and W1,2 = +1. The shaded area in the figure classifies the P1 and P2 inputs that give a +1 output, whereas the unshaded area gives a 0 output. The "decision" boundary line is indicated in the figure. The bias shifts the boundary line horizontally with the slope remaining the same, whereas the input weights alter its slope. For more than two inputs, the classification boundary is given by a hyperplane. The boundary hyperplane for each perceptron can be designed to be different by assigning different weights and biases. A supervised training algorithm (the perceptron learning rule), with the help of a computer program [2], can design all the weights for the desired classification boundaries. NNW training will be discussed later. A simple application of a perceptron network may be to isolate healthy signals from noisy signals in a power electronic system and use them for control and monitoring. It can also be trained to solve complex Boolean functions [2] with logical input signals, and for identification of drive system response characteristics. It may be mentioned here that the Adaline and Madaline (multiple Adalines) NNW has the same structure as the perceptron, except that the activation function is bipolar linear. The linear NNW can give a linear input–output mapping only. Besides being used for pattern classification as discussed above, it can be used for linearizing nonlinear functions (linear function approximation) or for pattern association.
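A minimal sketch of the perceptron computation a = hardlim(Wp + b) is given below in Python/NumPy, using the example weights W1,1 = -1 and W1,2 = +1 from Fig. 3(b); the bias value and the test points are assumed for illustration, and the final comment shows the form of the perceptron learning rule.

import numpy as np

def hardlim(n):
    # hard-limit (threshold) TF: +1 if n >= 0, otherwise 0
    return (n >= 0).astype(int)

# Upper perceptron of Fig. 3(b): W1,1 = -1, W1,2 = +1, plus an assumed bias value
W = np.array([[-1.0, 1.0]])
b = np.array([0.5])

def perceptron(p):
    # a = hardlim(W p + b); the decision boundary line is -P1 + P2 + 0.5 = 0,
    # shifted by the bias and sloped by the input weights
    return hardlim(W @ p + b)

print(perceptron(np.array([0.0, 1.0])))   # point above the boundary -> [1]
print(perceptron(np.array([1.0, 0.0])))   # point below the boundary -> [0]

# Perceptron learning rule (supervised): for each pattern p with target t,
#   e = t - a;  W <- W + e * p^T;  b <- b + e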
C. Backpropagation Network

The feedforward multilayer backpropagation topology, shown in Fig. 4, is the one most commonly used in power electronics and motor drives. The name "backpropagation" comes from the method of supervised training used for the NNW, shown by the lower two blocks in the figure; this will be explained later. The network is normally called the multilayer perceptron (MLP) type because of its evolution from the single-layer perceptron discussed before, but its TF can be different from the threshold function. The MLP-type NNW is computationally very powerful compared with the perceptron NNW. It uses the error backpropagation supervised training algorithm, which was first described by Paul Werbos in 1974. Subsequently, Rumelhart, Hinton, Williams, McClelland, Parker, LeCun, and others made further contributions to
this method [1]. The example NNW shown in the figure has three input signals and two output signals. The circles represent the neurons, which have an associated TF (not shown), and the weights are indicated by dots (often omitted). The network shown has three layers: (a) input layer, (b) hidden layer, and (c) output layer. With five neurons in the hidden layer, as indicated, it is normally defined as a 3–5–2 network. The input layer is nothing but the nodes that distribute the signals to the middle layer; therefore, the topology is often defined as a two-layer network. The bias source is normally coupled to both the hidden and output layer neurons, although it is shown here for the hidden layer only, for simplicity. Although the network handles continuous variables, the input and output signals can be continuous, logical, or discrete bidirectional. If the I/O signals are bipolar, the hidden layer neurons usually have a hyperbolic tan TF and the output layer has a bipolar linear TF. On the other hand, for unipolar signals, these TFs can be sigmoidal and unipolar linear, respectively. The signals within the NNW are processed in per unit (pu) magnitude, and therefore, input signal normalization and output signal descaling or denormalization are used as shown. Although, theoretically, a three-layer NNW can solve all the mapping problems, occasionally more than three layers are used. The NNW has an analogy with the biological neural network, as mentioned before. Like a biological network, where the memory or intelligence is contributed in a distributed manner by the synaptic junctions of the neurons, the NNW synaptic weights contribute similar distributed intelligence. This intelligence permits the basic input–output mapping or pattern recognition property of the NNW. This is also defined as associative memory, by which, when one signal pattern is impressed at the input, the corresponding pattern is generated at the output. This pattern generation or pattern recognition is possible by adequate training of the NNW. The discussion on training and the application examples given later will make the concepts adequately clear. One distinct feature of neurocomputation is that it is very fast when massive parallel computation is done with the help of an ASIC chip. An ASIC chip can have embedded read-only memory/random-access memory (ROM/RAM), where the weights can be stored and the TFs can be implemented by a look-up table. This is unlike digital signal processor (DSP)/microcomputer-based sequential computation, which
is slow. Because of parallel computation, the computation is fault-tolerant, i.e., deterioration of a few weights and/or missing links will not significantly deteriorate the output signal quality. Besides, the NNW has a noise or harmonic filtering property, which will be discussed later.

Fig. 4. Three-layer backpropagation network (3–5–2).

1) Backpropagation Training: A NNW requires supervised training by example data rather than the traditional programming in a computer system. This is similar to the training of a biological neural network. Backpropagation is the most popular training method for a multilayer feedforward network. In the beginning, input–output example data patterns can be obtained from experimental results, or from a simulation study if a mathematical model of the plant is available. Analytical data patterns can also be used. An initial NNW configuration is created with the desired input and output layer neurons dictated by the number of respective signals, a hidden layer with a few neurons, and appropriate TFs. Small random weights are selected so that the neuron outputs do not get saturated. With one input pattern, the output is calculated (defined as a forward pass) and compared with the desired or target output pattern. With the calculated error (sum-squared error (SSE) or mean-squared error (MSE), as described later), the weights are then altered in the backward direction by the backpropagation algorithm until the error between the output pattern and the desired pattern is very small and acceptable (see Fig. 4). A round trip (forward and reverse passes) of calculations is defined as an epoch. Similar training is repeated with all the patterns so that matching occurs for all the patterns. At this point, the network is said to have been trained satisfactorily to perform useful functions. If the error does not converge sufficiently, it may be necessary to increase the number of neurons in the hidden layer, and/or to add extra hidden layer(s). It has been demonstrated that a three-layer NNW can solve practically all pattern matching problems. Instead of selecting one pattern at a time in sequence, the batch method of training can be used, where all the patterns are presented simultaneously and the final weight updates are made after processing all the patterns. The weight adjustment for the minimization of error uses the standard gradient descent technique, where the weights are iterated one at a time, starting backward from the output layer. Consider that a network is being trained with a given input pattern; the squared output error, summed over all the output layer neurons of the network, is the quantity minimized by these weight updates.

Fig. 5. (a) Structure of real-time recurrent network (RNN). (b) Block diagram for training.

2) Online Training: In the offline training method, as discussed above, the NNW weights remain fixed or nonadaptive during its operation. In many applications, such as power electronic systems, the NNW has to emulate nonlinear and time-varying functions, where the functions might change depending on the plant operating condition and parameter variation. In such cases, the NNW requires continuous training online. This type of NNW is called adaptive because of the online variation of the weights or the structure. A fast and improved version of the backpropagation algorithm (such as the Levenberg-Marquardt (L-M) algorithm) can be used for online training by a high-speed PC or DSP to tune the weights, if the process speed is not very fast. Fortunately, DSP speed has improved dramatically and the cost of memory has fallen sharply in recent years. If the range of the plant parameter variation is known ahead of time, the NNW can be trained offline with the nominal parameters, and then the tuning of the weights can be done online by a high-speed DSP. The random weight change (RWC) algorithm was proposed to enhance the speed of online learning.
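The following Python/NumPy sketch makes the training procedure above concrete for the 3–5–2 network of Fig. 4 (tanh hidden layer, bipolar linear output layer), using batch gradient descent on the MSE. The training patterns, learning rate, and epoch count are made-up illustration values; the same update loop, run continuously on fresh samples, is the essence of the online (adaptive) training just described.

import numpy as np

rng = np.random.default_rng(0)

# 3-5-2 backpropagation network of Fig. 4: 3 inputs, 5 tanh hidden neurons, 2 linear outputs
W1 = rng.uniform(-0.1, 0.1, (5, 3)); b1 = np.zeros(5)   # small random initial weights
W2 = rng.uniform(-0.1, 0.1, (2, 5)); b2 = np.zeros(2)

# Hypothetical input-output training patterns, already normalized to per-unit values
X = rng.uniform(-1.0, 1.0, (200, 3))
D = np.column_stack([np.sin(np.pi * X[:, 0]) * X[:, 1], X[:, 2] ** 2 - 0.5])

lr = 0.05
for epoch in range(2000):                  # one epoch = forward pass + backward (weight update) pass
    H = np.tanh(X @ W1.T + b1)             # hidden layer (hyperbolic tan TF)
    Y = H @ W2.T + b2                      # output layer (bipolar linear TF)
    E = Y - D
    mse = np.mean(E ** 2)                  # mean-squared error over all patterns (batch method)
    dY = 2.0 * E / len(X)                  # gradient of the MSE with respect to the outputs
    dW2 = dY.T @ H;  db2 = dY.sum(0)
    dH = (dY @ W2) * (1.0 - H ** 2)        # backpropagate through the tanh TF
    dW1 = dH.T @ X;  db1 = dH.sum(0)
    W1 -= lr * dW1;  b1 -= lr * db1        # standard gradient descent weight adjustment
    W2 -= lr * dW2;  b2 -= lr * db2

print(f"MSE after training: {mse:.4f}")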
NEURAL NETWORK APPLICATIONS

The history of neural network applications in power electronics and motor drives is very recent, and hardly extends beyond the last 10–15 years, as mentioned before. In this section, several example applications, which are mainly based on the author's own research, will be discussed briefly because of the length constraint of this paper. The details of the NNW design and their performances for these applications can be found in the references.

A) Delayless Filtering and Waveform Processing:

1) Single-Phase Square Wave at Constant Frequency: A NNW can be used for delayless filtering of harmonic-rich waveforms in power electronics, as well as for waveform processing, as shown in Fig. 15. A traditional low-pass filter (LPF) causes frequency-sensitive phase delay and amplitude attenuation at the output. A square-wave line current, which is usually encountered for a thyristor or diode rectifier with a highly inductive load, is considered in this case, although other types of waveforms are also possible. The wave (Va) and its auxiliary form (Va') through an arbitrary LPF are given as inputs to the NNW as shown. The output (Vaf ∠0°) comes out as a sine wave at the same frequency and proportional to the square-wave amplitude, but locked at 0 phase angle. The auxiliary wave is needed because, with a constant-amplitude square-wave input, a variable-amplitude sine wave output is not possible. Multiphase balanced output waves (in this case three-phase) can be generated easily by programming the NNW. Such waveform processing may be important for a sinusoidal PWM inverter or cycloconverter, where the command can be generated from a simple square-wave signal. The NNW was trained offline with the actual Va and Va' data at the input and the desired output (generated by prior FFT analysis of the input square wave). The backpropagation NNW (2–16–3) uses a bipolar linear TF at the input and output layers, but a hyperbolic tan TF in the hidden layer. The MSE and the number of epochs required for the training are included in the figure title. A small deviation in frequency and distortion of the input square wave will give a tolerable error at the output. Generalizing the waveform processing, it can be said that any arbitrary waveform can be transformed into an arbitrary output wave, single or multiphase, with the help of a NNW, and the input–output amplitude tracking relation can be programmed to be linear or nonlinear with an arbitrary functional relation. The study also suggests that the NNW can be used as a zero-crossing detector (ZCD) for a distorted wave.
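A sketch of how the offline training patterns for this single-phase example could be generated is shown below in Python/NumPy: a unit square wave Va, its auxiliary wave Va' from a simple first-order LPF, and FFT-derived fundamental sine targets (three-phase, locked at 0 phase angle). The sampling rate, LPF corner frequency, and record length are assumed values for illustration, not taken from the paper.

import numpy as np

fs = 12_000                                  # sampling frequency in Hz (assumed)
f0 = 60                                      # fundamental frequency of the square wave in Hz
t = np.arange(0, 10 / f0, 1 / fs)            # ten fundamental cycles (2000 samples)

va = np.sign(np.sin(2 * np.pi * f0 * t))     # unit-amplitude square wave Va

# Auxiliary wave Va' through an arbitrary first-order LPF (assumed 300 Hz corner)
tau = 1.0 / (2 * np.pi * 300.0)
va_aux = np.zeros_like(va)
for k in range(1, len(va)):
    va_aux[k] = va_aux[k - 1] + (va[k] - va_aux[k - 1]) * (1 / fs) / (tau + 1 / fs)

# Desired output: fundamental sine at the same frequency and zero phase shift,
# with its amplitude obtained by FFT analysis of the input square wave (about 4/pi)
spectrum = np.fft.rfft(va) * 2 / len(va)
fund = np.abs(spectrum[int(round(f0 * len(va) / fs))])

# Input patterns (Va, Va') and three-phase balanced target patterns for a 2-16-3 network
X = np.column_stack([va, va_aux])
Y = np.column_stack([fund * np.sin(2 * np.pi * f0 * t),
                     fund * np.sin(2 * np.pi * f0 * t - 2 * np.pi / 3),
                     fund * np.sin(2 * np.pi * f0 * t + 2 * np.pi / 3)])
print(X.shape, Y.shape, f"fundamental amplitude = {fund:.3f}")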

Fig. 15. Single-phase square-wave delayless filtering and multiphasing at the output: (a) three-phase output, (b) six-phase output.

2) Three-Phase PWM Waves at Variable Frequency: It is possible to have delayless filtering of the three-phase PWM waves of a voltage-fed inverter at variable frequency with the help of a NNW, as shown in Fig. 16. The fundamental voltages at zero phase-shift angle permit correct estimation of the feedback signals for vector control of an ac drive. The PWM line voltage waves are generated by the sinusoidal PWM technique, where only one wave is shown. The PWM waves are initially filtered by identical LPFs, as shown, to convert the discrete waves into continuous waves before processing through the NNW. In Fig. 16(b), the NNW input waves and the corresponding desired and estimated waves are shown for a 10 kHz carrier frequency, whereas Fig. 16(c) shows the results at a 1.0 kHz carrier frequency. The MSE and the number of epochs required in the training are included in the figure title. In the latter case, the MSE failed to
converge below the large value shown. In the latter case, the estimated waves also indicate more distortion. Only the test results for 60 Hz fundamental frequency are shown, although the performance was tested to be satisfactory at variable frequency in the range of 5–60 Hz. Note that in this case the NNW compensates the phase delay and attenuation of the LPF (which vary with the fundamental frequency) besides filtering the PWM waves. With a more complex NNW structure and extensive training, the harmonic quality at the output can be improved.

Fig. 16. Three-phase PWM wave delayless filtering at 60 Hz: (a) NNW input and output waves at fc = 10 kHz, (b) NNW input and output waves at fc = 1.0 kHz.

Induction Motor Vector Drive With Neural Network-Based SVM and Flux Estimation:

The backpropagation NNW-based SVM and the RNN-based stator flux vector estimation described in Figs. 20 and 29, respectively, are integrated in a core EV drive shown in Fig. 31. As usual, the torque control is considered in the outer loop with closed-loop stator flux control. The flux loop requires injection of a decoupling current, as shown. The synchronous current control loops generate the voltage commands to the vector rotator, which correspondingly generates the voltage vector for the SVM modulator.

Fig. 30. (a) Hybrid neural network for stator flux vector estimation. (b) RNN weights as a function of frequency.
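To illustrate the kind of recurrent computation used in the RNN-based flux estimator (compare the real-time recurrent structure of Fig. 5 and the hybrid network of Fig. 30), the sketch below shows one update step y(k+1) = F(W·[y(k); x(k)]) in Python/NumPy. The network size, weight values, and input signals are placeholders; in the actual drive, the trained (and frequency-dependent) weights of Fig. 30(b) would be used.

import numpy as np

def rnn_step(y, x, W, tf=np.tanh):
    # One update of a real-time recurrent network (Fig. 5): the delayed
    # outputs y(k) are fed back, stacked with the external inputs x(k),
    # weighted by W, and passed through the TF to give y(k+1)
    z = np.concatenate([y, x])
    return tf(W @ z)

# Hypothetical 2-state, 2-input network (e.g., two stator flux components
# estimated from voltage/current-derived input signals)
rng = np.random.default_rng(1)
W = rng.uniform(-0.5, 0.5, (2, 4))     # placeholder weights; trained values go here
y = np.zeros(2)
for k in range(5):
    x = np.array([np.sin(0.1 * k), np.cos(0.1 * k)])   # example input samples
    y = rnn_step(y, x, W)
    print(k, y)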
