Centre for Resource and Environmental Studies, Australian National University, Canberra,
Australia
Available online: 21 May 2007
To cite this article: PETER YOUNG & ANTHONY JAKEMAN (1980): Refined instrumental variable methods of recursive time-series analysis. Part III. Extensions, International Journal of Control, 31:4, 741-764
To link to this article: http://dx.doi.org/10.1080/00207178008961080
1. Introduction
In the first two parts of this paper (Young and Jakeman 1979 a, Jakeman and Young 1979 a) we have been concerned with the description and comprehensive evaluation of the refined instrumental variable (IV) approach to time-series analysis for single input, single output (SISO) and multivariable dynamic systems described by discrete-time series models. In this, the third and final part of the paper, we consider how the refined IV method can be extended in various directions to handle continuous time-series models and discrete or continuous time-series models with time-variable parameters. We also discuss briefly other extensions including off-line and on-line adaptive methods of designing state reconstruction (Kalman) filters for stochastic systems; the development of IV estimation procedures for specific time-series models, such as the multiple input-single output transfer function model; and finally the estimation of parameters in multivariable system models in those situations where the observation space is less than the dimension of the model space. For convenience, in all cases except the latter, we shall consider refined IV estimation algorithms with non-symmetric matrix gains. Bearing in mind the results of the first two parts of the paper, however, it is clear that symmetric matrix gain alternatives could be implemented and it is likely that, at least for reasonable sample size, they would perform in a similar manner.
2. Continuous-time models

Consider the continuous-time equivalent of the SISO model discussed in the previous parts of the paper,

(i) x(t) = [B(s)/A(s)] u(t)

(ii) xi(t) = [D(s)/C(s)] e(t)

(iii) y(t) = x(t) + xi(t)     (1)

where s is the differential operator, i.e. s x(t) = dx(t)/dt (loosely interpreted here as the Laplace operator); A, B, C and D are polynomials in s of the following form,

A(s) = s^n + a_1 s^(n-1) + ... + a_n;  B(s) = b_0 s^m + b_1 s^(m-1) + ... + b_m

with C(s) and D(s) defined similarly; and e(t) is a continuous-time 'white noise' process. It is well known (e.g. Åström 1970, Jazwinski 1970) that theoretical and analytical difficulties arise because of the use of continuous white noise in mathematical models of dynamic systems, particularly transfer function formulations such as eqn. (1) (ii). In the present context, this difficulty is manifested in the form of practical problems associated with the recursive estimation of the parameters in the C and D polynomials characterizing the noise model. For the moment, however, it is convenient to assume a continuous-time model of the form (1) (ii) although, as we shall see, it is necessary in practice to evaluate the noise components of the model in discrete-time in order to circumvent estimation problems.
2.1. Discrete and continuous time recursive algorithms

It is clear that the model (1) is algebraically equivalent to the discrete-time SISO model discussed in previous parts of this paper. Let us consider, therefore, the situation where we wish to implement the estimation algorithm in discrete-time using sampled data from the continuous-time system; we will refer to this as CD (continuous-discrete) analysis (Young 1979 a). Using an approach similar to that used in previous parts of the paper, it is then possible to obtain estimates of the parameters in the continuous-time model polynomials. Here y = [y_01, y_02, ..., y_0T]^T and u = [u_01, u_02, ..., u_0T]^T, where the first zero subscript on u and y indicates that the variables are, respectively, the basic input and output variables (i.e. the 'zeroth' derivatives of u and y), while the second subscript i = 1, 2, ..., T denotes the sampled values of the variables at time t_i, i.e. y(t_i) and u(t_i).
Figure 1. [Block diagram of the CD implementation: the input u(t), the output y(t) and the auxiliary model output x̂(t) are passed through state variable prefilters C/DA; the refined IV algorithm updates the prefilter and auxiliary model parameters by recursive or iterative update.]
(i) â_k = â_(k-1) − P_(k-1) x̂_k* [σ̂² + ẑ_k*^T P_(k-1) x̂_k*]^(−1) {ẑ_k*^T â_(k-1) − y_0k*}

or

(ii) â_k = â_(k-1) − P_k x̂_k* {ẑ_k*^T â_(k-1) − y_0k*}

and

(iii) P_k = P_(k-1) − P_(k-1) x̂_k* [σ̂² + ẑ_k*^T P_(k-1) x̂_k*]^(−1) ẑ_k*^T P_(k-1)     (4)
where the variables are filtered by adaptive 'prefilters' C/DÂ, again as shown in Fig. 1. The i = 1, 2, ..., n subscript on the variables within the square brackets in (5) denotes the ith time-derivative of the variable, while the k subscript outside the brackets indicates that the enclosed variables are all sampled at the kth sampling instant.
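The prediction-error-correction structure of (4) can be made concrete with a small numerical sketch. This is only an illustration of the recursion, not the authors' refined algorithm: the adaptive prefilters are omitted, σ̂² is taken as 1, the model is first order, and the instrumental variable is built from an assumed noise-free auxiliary signal.

```python
import numpy as np

def iv_recursive(y, u, x_aux, sigma2=1.0):
    """Recursive IV update with the correction structure of eqn. (4):
    data vector z_k = [y_{k-1}, u_{k-1}]; instrumental vector built
    from the noise-free auxiliary signal x_aux."""
    theta = np.zeros(2)
    P = 1e3 * np.eye(2)
    for k in range(1, len(y)):
        z = np.array([y[k - 1], u[k - 1]])       # data vector z_k
        xh = np.array([x_aux[k - 1], u[k - 1]])  # instrumental vector xhat_k
        gain = P @ xh / (sigma2 + z @ P @ xh)
        theta = theta - gain * (z @ theta - y[k])  # correction step
        P = P - np.outer(gain, z @ P)              # covariance update
    return theta

# simulated first order system: x_k = a x_{k-1} + b u_{k-1}, y_k = x_k + e_k
rng = np.random.default_rng(0)
a_true, b_true, N = 0.7, 0.5, 4000
u = rng.choice([-1.0, 1.0], size=N)
x = np.zeros(N)
for k in range(1, N):
    x[k] = a_true * x[k - 1] + b_true * u[k - 1]
y = x + rng.normal(0.0, 0.3, N)
theta = iv_recursive(y, u, x)
```

Because the instruments are correlated with the regressors but not with the output noise, the recursion converges close to the true parameter values despite the noise-induced bias that plain recursive least squares would suffer.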
This algorithm has close similarity with the IV algorithm suggested some years ago by Young (1969); the only difference lies in the nature of the prefilters. In the previous algorithm, these were termed 'state variable filters' and were introduced mainly to avoid direct differentiation of noisy signals. In this sense, the function of the present filters is identical: their presence means that it is not the direct derivatives of the variables y(t), x̂(t) and u(t) that are required for estimation but the derivatives of the filtered variables y*(t), x̂*(t) and u*(t). And these filtered derivatives, unlike the direct derivatives, are physically realizable as a product of the filtering operation (Young 1964, 1969). Of course the prefilters here do more than just avoid differentiation of noisy signals; they also represent the mechanism for inducing asymptotic statistical efficiency.
In the present case, the 'optimal' prefilters are defined in terms of estimates of the a priori unknown polynomials A, C and D. It is necessary, therefore, to define some adaptive procedure for synthesizing the prefilters as the estimation proceeds. In the situation where C = D = 1.0, i.e. xi(t) is white noise, the adaptive synthesis of the prefilters 1/Â is fairly straightforward: both the prefilter and auxiliary model parameters can be updated either recursively or iteratively as shown in Fig. 1, exactly as in the discrete-time model case described in Part I of this paper. When the noise xi(t) is coloured (i.e. C ≠ 1.0 and/or D ≠ 1.0), however, the situation is not so straightforward: in contrast to the discrete-time model situation, it is not easy to construct a similarly motivated recursive estimator for the continuous-time noise model parameters, since the derivatives of the white noise e(t) do not exist in theory.
While it may be possible to solve this noise estimation problem by considering either band-limited noise or purely autoregressive noise (where derivatives of e(t) do not occur), we feel that it may be better to consider a hybrid approach. Here, the noise is estimated in purely discrete-time (DD) terms by the use of the AML or refined AML algorithms described previously. This does not create any implementation problems because the noise model is only required for adaptive prefiltering operations, which can easily be carried out in discrete-time when using CD analysis. The general implementation in this case is shown in Fig. 2 (a) and the detailed structure of the derivative generating filters 1/Â(s) is illustrated in Fig. 2 (b). It should be noted here that the filter in Fig. 2 (b) is similar to the 'state variable filter' suggested by Kohr (see, e.g. Kohr and Hoberock 1966): the only difference is that the coefficients â_i, i = 1, 2, ..., n, are not constant, as in the Kohr case, but are adaptively adjusted, either iteratively or recursively, as the estimation proceeds.

Up to this point we have assumed that, while the algorithm (4) is implemented in discrete-time, the signals y(t), x̂(t) and u(t) are available in continuous-time form so that they can be passed through the continuous-time prefilters prior to sampling. In practice, however, it could well be that both input and output signals are naturally in sampled data form. This difficulty can be circumvented, albeit in an approximate manner, by assuming that the signals remain constant over the sampling interval and passing them directly into the continuous-time filters. In other words, the sampled data are converted to a continuous-time 'staircase' form prior to filtering. In this manner, the prefilters perform an additional, useful, 'interpolation' role and provide 'estimates' of the continuous-time filtered variables.
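The 'staircase' conversion and state variable filtering can be sketched as follows. The filter coefficients (a1 = 2, a2 = 1), the sampling interval and the Euler integration step are illustrative assumptions; the point is that the filtered variable and its derivative are produced by integration alone, with no differentiation of the raw samples.

```python
def svf_filter(samples, Ts, a1=2.0, a2=1.0, dt=1e-3):
    """Hold each sample constant over Ts (the 'staircase') and integrate
    the state variable filter a2/(s^2 + a1 s + a2), returning the filtered
    signal and its first derivative at each sampling instant."""
    xf, dxf = 0.0, 0.0
    n_sub = int(round(Ts / dt))
    out = []
    for u_k in samples:
        for _ in range(n_sub):               # Euler steps within one interval
            ddxf = a2 * (u_k - xf) - a1 * dxf
            xf, dxf = xf + dt * dxf, dxf + dt * ddxf
        out.append((xf, dxf))
    return out

out = svf_filter([1.0] * 200, Ts=0.1)   # constant input held for 20 s
xf_end, dxf_end = out[-1]
```

For a constant input the filter has unity steady-state gain, so the filtered signal settles to the input value and the filtered derivative settles to zero, which is the 'interpolation' role described above.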
Figure 2. [(a) General implementation of the hybrid CD analysis, with the noise model estimated in discrete time; (b) detailed structure of the derivative-generating state variable filters 1/Â(s), with hold circuits on the sampled inputs and adaptively adjusted coefficients â_i.]
(i) dâ(t)/dt = −P(t) x̂*(t) {z*(t)^T â(t) − y_0*(t)}

(ii) dP(t)/dt = −P(t) x̂*(t) z*(t)^T P(t)     (6)

which is a continuous-time equivalent of the discrete-time recursive algorithm. Algorithms of this form are also discussed by Solo (1978).
Note that it would be difficult to implement the estimation algorithm (6) for other than C = D = 1.0 because of the difficulty in estimating the C and D polynomials (unless we once again consider some hybrid mechanization, which would be rather impractical). Thus when xi(t) is not white noise, the estimates produced by the algorithm will not have any optimal properties. They will, however, be consistent, asymptotically unbiased and, on the basis of previous experience, they should be reasonably efficient (see Jakeman 1979). Note also that we can reduce the computational complexity further by replacing P(t) in (6) by a simpler stochastic approximation (SA) gain (e.g. Young 1976). This would be a continuous-time equivalent of the SA algorithms discussed in § 7.
2.2. Experimental results

The CD approach to the continuous-time model estimation discussed in the previous section has been evaluated by Monte Carlo simulation analysis applied to two systems described by second order differential equations. In the first case, u(t) was chosen as a random binary signal with levels plus and minus 1.0. In the second, the system was modified to one with K = 0.781, omega_n = 1.6 and zeta = 0.5, and u(t) was chosen as the following combination of three sinusoidal signals,

u(t) = sin (0.5 omega_d t) + sin (omega_d t) + sin (1.5 omega_d t)

where omega_d is the damped natural frequency of the system, i.e. omega_d = omega_n sqrt(1 − zeta²). In both of these examples, the noise xi(t) was simulated white noise adjusted to give several different signal/noise ratios S (defined as in Young and Jakeman 1979 a).
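The second test setup can be sketched numerically with the quoted values K = 0.781, omega_n = 1.6 and zeta = 0.5. The standard second order transfer function form K omega_n²/(s² + 2 zeta omega_n s + omega_n²) and the Euler integration step are assumptions made for illustration only.

```python
import math

K, wn, zeta = 0.781, 1.6, 0.5
wd = wn * math.sqrt(1.0 - zeta ** 2)       # damped natural frequency

def u(t):
    # combination of three sinusoids used as the test input
    return math.sin(0.5 * wd * t) + math.sin(wd * t) + math.sin(1.5 * wd * t)

# assumed form: x'' + 2 zeta wn x' + wn^2 x = K wn^2 u(t), Euler integration
dt, t_end = 1e-3, 50.0
x, dx, peak, t = 0.0, 0.0, 0.0, 0.0
while t < t_end:
    ddx = K * wn ** 2 * u(t) - 2.0 * zeta * wn * dx - wn ** 2 * x
    x, dx = x + dt * dx, dx + dt * ddx
    peak = max(peak, abs(x))
    t += dt
```

With zeta = 0.5 the system is well damped, so the response to the bounded three-sine input remains bounded; omega_d evaluates to approximately 1.386 rad/s.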
Table 1 (a), (b). [Monte Carlo results: true parameter values and estimates for 100 and 500 samples at different sampling rates.]

Table 1 (c). [Dye tracer model: dynamic characteristics and parameter estimates; the tabulated values include â_1 = −0.9724, b̂_0 = 0.0276, b̂_0 = 1.00, and time constants 7.575 and 7.624 hours.]
Figure 3 shows the observed and estimated dye concentration in a river, where the estimated concentration is generated by a second order differential equation model estimated using algorithm (4). The data used in this exercise were collected during dye tracer experiments carried out on the Murrumbidgee River system in Australia (Whitehead et al. 1978). Here, as can be seen from Fig. 3, it was not possible to maintain a completely regular sampling interval, but T_s is approximately half an hour. This demonstrates how data with irregular sampling intervals can be used, provided the longest sampling interval does not lead to serious interpolation errors and estimation bias.
3. Time-variable parameter estimation

If the parameter vector a_k is assumed to evolve according to the Gauss-Markov model

a_k = Phi a_(k-1) + Gamma eta_(k-1)     (7)

then Phi and Gamma are assumed known and possibly time variable matrices, while eta_k is a discrete white noise vector with zero mean and covariance matrix Q which is independent of the 'observational' white noise source e_k.
prediction:

(i) â_(k|k-1) = Phi â_(k-1)

(ii) P_(k|k-1) = Phi P_(k-1) Phi^T + Gamma Q Gamma^T

correction on receipt of the kth sample:

(iii) â_k = â_(k|k-1) − P_(k|k-1) x̂_k* [σ̂² + ẑ_k*^T P_(k|k-1) x̂_k*]^(−1) {ẑ_k*^T â_(k|k-1) − y_k*}

(iv) P_k = P_(k|k-1) − P_(k|k-1) x̂_k* [σ̂² + ẑ_k*^T P_(k|k-1) x̂_k*]^(−1) ẑ_k*^T P_(k|k-1)     (8)
Equations (8) (i) to (iv) constitute the refined IV algorithm for estimating stochastically variable parameters in a discrete time-series model of a SISO system†.

† It will be noted that the derivation of this algorithm is made a little more obvious if the symmetric gain matrix form of the refined IV algorithm is utilized; P̂_k from this algorithm is a somewhat closer approximation to P_k than in the non-symmetric gain case (see Young and Jakeman 1979 a).
The RW model (9) was first used in the early nineteen sixties (see, e.g. Kopp and Orford 1963, Lee 1964). The IRW model (11) was suggested in the parameter estimation context by Norton (1975), who has used it successfully in a number of practical applications. The SRW model (10) is of more recent origin (Young and Kaldor 1978) and seems to provide a good compromise between models (9) and (11), although it requires the specification of one additional parameter, the smoothing constant alpha = 1/tau, where tau is the approximate exponential smoothing constant in sampling intervals.

All of the models (9) to (11) are non-stationary in a statistical sense and so they allow for wide variation in the parameters. Their different characteristics are described fully by Norton (1975) and Young and Kaldor (1978). Put simply, the progression from model (9) through (10) to (11) allows for greater overall variation in the estimated parameters for any specified covariance matrix Q, accompanied by greater 'smoothing' of the short-term variations.
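For a single parameter, the prediction-correction recursion (8) with the RW model reduces to a scalar Kalman filter. The sketch below is an illustration only: a one-parameter regression rather than the full time-series model, with assumed values for the RW variance q and the observation noise variance.

```python
import numpy as np

rng = np.random.default_rng(1)
N, q, sig2 = 200, 0.01, 0.04                       # assumed Q and noise variance
a_true = np.where(np.arange(N) < 100, 0.5, 1.5)    # parameter steps at k = 100
u = rng.choice([-1.0, 1.0], size=N)
y = a_true * u + rng.normal(0.0, np.sqrt(sig2), N)

a_hat, P, est = 0.0, 100.0, []
for k in range(N):
    P = P + q                                      # prediction: RW, Phi = Gamma = 1
    gain = P * u[k] / (sig2 + u[k] * P * u[k])
    a_hat = a_hat + gain * (y[k] - u[k] * a_hat)   # correction on sample k
    P = P - gain * u[k] * P
    est.append(a_hat)
```

Because the RW model keeps the covariance P (and hence the gain) from collapsing to zero, the estimate follows the step change in the parameter instead of freezing at its early value.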
In the case where more general Phi and Gamma matrices are considered it may often be possible to assume that, for physical reasons, the variations in the parameters are correlated with the variations in other measured variables affecting the system. For example, the parameters in an aerospace vehicle are known to be functions of variables such as dynamic pressure, Mach number, altitude, etc. (Young 1979 b). Or again, the numerator coefficients in a transfer function model may vary as known functions of such measured variables. If the parameter vector is modelled as

a_k = T_k a_k*     (12)

where T_k is a known, possibly time-variable, matrix and a_k* evolves as the random walk

a_k* = a_(k-1)* + eta_(k-1)     (13)

we see that, upon substitution from (13) into (12), the variations in a_k are given by a Gauss-Markov model such as (7) with Phi = Phi_k = T_k T_(k-1)^(−1) and Gamma = Gamma_k = T_k. It is clearly possible, therefore, to utilize the refined IV algorithm (8) with Phi_k and Q in the prediction eqns. (8) (i) and (ii) defined accordingly. Such an approach has been used previously with other recursive algorithms by Young (1969, 1979 b).
The implementation of the algorithm defined by eqns. (8) (i) to (iv) poses several problems. In particular, the equations imply the parallel implementation of the refined AML algorithm and its interactive use with the refined IV algorithm, as described by Young and Jakeman (1979 a). This introduces considerable complexity and, for the present paper, we have once again implemented only the special case where xi_k is white noise, i.e. C(z^-1) = D(z^-1) = 1.0. Here, the full refined AML is not required and the prefilters (nominally Ĉ/D̂Â) are defined as 1/Â. This simpler algorithm works very well and seems to give good results even if xi_k is coloured noise. Moreover, the algorithm in this form has also been modified further to allow for an off-line 'smoothing' solution in which the recursive estimate at any sampling instant k is a conditional estimate â_(k|N) based on the whole data set of N samples. The smoothing algorithm is an extension of Norton's work (Norton 1975) within an IV context and it requires both forward and backward recursive processing of the data (Young and Kaldor 1978, Kaldor 1978, Gelb 1974).

Finally, it should be remarked that the above approach to time variable parameter estimation can be extended straightforwardly both to the multivariable and continuous-time situations. Such extensions are fairly obvious and so they are not considered in detail in the present paper.
3.1. Experimental results

The IV algorithm (8) has been applied to a second order discrete-time system with time-variable parameters defined by

a_1k = −0.35, k = 1, ..., 60;  a_1k = −0.6, k = 61, ..., 100

Figures 4 (a), (b) and (c) show the estimation results obtained when an IRW model (11) is assumed, with Q selected as a diagonal matrix with elements 0.001, 0, and 0.001 respectively. Both the recursive filtering and smoothing estimates are shown in all cases (for the constant parameter, the smoothed estimate is, of course, itself a constant). It is interesting to note that very similar results to these were obtained using the SRW model with alpha = 0.9 and the diagonal elements of Q set to 0.05, 0, and 0.05, respectively.
It is clear that in this example, where step changes in parameters occur, the smoothing algorithm is not particularly appropriate, since it attempts to provide a smooth transition where abrupt changes are actually being encountered. In practice, however, it is quite likely that smoother changes in parameters will often occur, and it is here that the smoothing algorithm will have maximum potential. But it should be emphasized that the smoothing algorithm used here is an off-line procedure and is computationally expensive in comparison to the filtering algorithm (8). On-line, 'fixed lag' smoothing algorithms (Gelb 1974) could be developed, however, if circumstances so demanded.
Figure 4. Time variable parameter estimation for second order, stochastic system. [Recursive and smoothed estimates over 100 samples.]
4. Adaptive design of state reconstruction (Kalman) filters

In particular, it can be shown that the state estimate x̂_k is generated theoretically from a relationship of the following form,

x̂_k = Z_k p     (16)

where

Z_k = [N_1 zeta_1k : ... : N_n zeta_1k : N_1 zeta_2k : ... : N_n zeta_2k]     (17)

and

p = [a^T : b^T]^T     (18)

Here zeta_1k and zeta_2k are generated from the output and input variables respectively, while N_i are n × n matrices, i = 1, 2, ..., n, composed of the numerator coefficients of [I − Fz^-1]^(−1) delta_i, where delta_i is the ith unit vector and F is the companion matrix associated with the system model.
† This would seem to satisfy Kalman's requirement (1960) that 'the two problems (parameter and state estimation) should be optimized jointly if possible'.
Figures 5 (a) and (b) show the results obtained when the off-line Kalman filter design procedure is applied to the 'innovations' or Kalman filter description of the model considered in Part I of the paper. These results were obtained using Monte Carlo analysis with ten random simulations, and the figures show the ensemble averages of the two state variable estimates compared with the true state variables generated by the model. The variance associated with the ensemble averages was quite small, as shown by the standard error bounds marked on the plots.

Figure 5. [Ensemble averages of the two state variable estimates (true value ---, estimate - - -) over 100 samples.]
Here ŷ_k is the optimally filtered output of the system, which corresponds with the optimal one step ahead prediction of the output.
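The covariance iteration underlying such an off-line Kalman filter design is easily illustrated in the scalar case: iterating the prediction-correction covariance recursion drives P to the steady-state solution of the algebraic Riccati equation, from which the constant innovations-form gain follows. The system values below are assumptions for illustration.

```python
phi, q, r = 0.9, 0.5, 1.0   # assumed scalar model x_{k+1} = phi x_k + w_k, y_k = x_k + v_k

P = 10.0
for _ in range(200):
    P_pred = phi * P * phi + q      # prediction covariance
    K = P_pred / (P_pred + r)       # filter gain
    P = (1.0 - K) * P_pred          # corrected covariance

# at convergence P satisfies the (scalar) algebraic Riccati equation
P_pred = phi * P * phi + q
K_ss = P_pred / (P_pred + r)        # steady-state (innovations form) gain
residual = abs((1.0 - K_ss) * P_pred - P)
```

The fixed point is reached geometrically, so a few hundred iterations suffice; the resulting gain K_ss defines the one step ahead output predictor of the innovations description.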
5. The multiple input-single output transfer function model

Consider the multiple input, single output (MISO) transfer function model

y_k = [B_1(z^-1)/A_1(z^-1)] u_1k + ... + [B_m(z^-1)/A_m(z^-1)] u_mk + [D(z^-1)/C(z^-1)] e_k     (24)

This model can be considered as the 'dynamic' equivalent of regression analysis, with the regression coefficients replaced by transfer functions. In this sense, such a model has wide potential for application.
Considering the two input case for convenience, we note from (24) that the white noise source e_k is defined as

e_k = [C(z^-1)/D(z^-1)] {y_k − [B_1/A_1] u_1k − [B_2/A_2] u_2k}     (25)

It is now straightforward to show that (25) can be written in two GEE forms. First, if a single star superscript is utilized to denote prefiltering by C/DA_1, then

e_k = A_1 xi_1k* − B_1 u_1k*     (26)

Here xi_1k is defined as

xi_1k = y_k − x̂_2k     (27)

where x̂_2k is the output of the auxiliary model between the second input u_2k and the output, i.e. it is that part of the output 'explained' by the second input alone. Similarly, e_k can be defined in terms of xi_2k, where

xi_2k = y_k − x̂_1k     (28)

In this case,

e_k = A_2 xi_2k** − B_2 u_2k**     (29)

where the double star superscript denotes prefiltering by C/DA_2.
By decomposing the problem into the two expressions (26) and (29), we have been able to define two separate GEEs which are linear in the unknown model parameters for each transfer function in turn. Now let us define

x̂_ik* = [x̂_(i,k-1)*, ..., x̂_(i,k-n)*, u_ik*, ..., u_(i,k-n)*]^T
Algorithm (30) is used twice for i = 1, 2, but when i = 2, the single star superscripts are replaced by double star superscripts. The adaptive prefiltering is then executed in the same manner as for the SISO case, with the refined AML algorithm providing estimates of the C and D polynomial coefficients. The extension to the general case of m inputs is obvious. There is also a symmetric gain version of (30), with ẑ_ik*^T replaced by x̂_ik*^T everywhere except within the braces.
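The decomposition (26)-(29) can be illustrated numerically: subtracting the part of the output 'explained' by the second input leaves a signal that depends linearly on the first channel's parameters. The sketch below is only an illustration of this idea, with an assumed first order channel 1, a one-sample-delay gain for channel 2, ordinary least squares in place of the refined IV machinery, and the true channel 2 standing in for its auxiliary model.

```python
import numpy as np

rng = np.random.default_rng(2)
N, a, b, c = 2000, 0.7, 0.5, 0.8
u1 = rng.choice([-1.0, 1.0], size=N)
u2 = rng.choice([-1.0, 1.0], size=N)

x1 = np.zeros(N)                       # channel 1: x1_k = a x1_{k-1} + b u1_{k-1}
for k in range(1, N):
    x1[k] = a * x1[k - 1] + b * u1[k - 1]
x2 = np.zeros(N)
x2[1:] = c * u2[:-1]                   # channel 2: one-sample-delay gain
y = x1 + x2 + rng.normal(0.0, 0.05, N)

xi1 = y - x2                           # as in eqn (27): remove channel-2 part
# fit xi1_k ~ a xi1_{k-1} + b u1_{k-1} by ordinary least squares
Z = np.column_stack([xi1[:-1], u1[:-1]])
theta, *_ = np.linalg.lstsq(Z, xi1[1:], rcond=None)
```

With the second channel removed, a single-channel fit recovers the channel-1 parameters; in the full algorithm this step is iterated over both channels with the auxiliary model outputs replacing the (here assumed known) true channel responses.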
5.3. The tanks-in-series model
Here m is the number of tanks in series and xi(t) is a noise term. Using a GEE approach, it is possible to obtain refined IV estimates of a and b for different m and so identify and estimate the tanks-in-series model. We will not discuss this in detail here since it is done elsewhere (Jakeman and Young 1979 b), but an example of its use will be described in the next section.
5.4. Experimental results

Jakeman et al. (1979) have evaluated the MISO transfer function model estimation procedure using both simulated and real data. Figure 6 compares the deterministic output of a MISO air pollution model obtained in this manner with the measured data. Here the data are in the form of atmospheric ozone measurements at a 'downstream' location in the San Joaquin Valley of California. These are modelled in terms of two 'input' ozone measurements at upstream locations.
Figure 6. [MISO air pollution model output compared with measured ozone data over 100 samples.]
6. Multivariable models with restricted observations
where x̂_k*, ẑ_k* and y_k* are defined in a similar manner to their counterparts in equation (21) of Part II of the paper, except that the prefilters are now D^-1CHA^-1 and y_k is made dimension p where necessary by appending zeros after the qth element. The refined AML algorithm and the recursive algorithm for Q̂ remain the same, but note that ê_k = y_k − Hx̂_k.
The algorithm (34) is perfectly general for multivariable systems in which the output observations are a linear combination of the system variables. It includes as a special case, therefore, the discrete state-space model with deterministic input vector u_k and output measurement noise (not necessarily white), but no system noise. In this case x_k is the state of the system.
Furthermore, a similar algorithm consisting simply of the refined AML equations could be developed in the purely stochastic case, i.e. when (33) (i) is removed so that the model becomes

(i) C(z^-1) xi_k = D(z^-1) e_k

(ii) y_k = H xi_k + v_k     (35)
The algorithm (34) has not yet been evaluated satisfactorily. Like any multivariable estimation algorithm, it is not only computationally quite complex but also likely to be very sensitive, particularly when the data base is small: we have seen in the ordinary MIMO situation (Jakeman and Young 1979 a), for example, that the symmetric gain algorithm often fails to converge in such a small sample situation, and it is almost certain that (34) will prove even more sensitive.
7. Stochastic approximation versions of the refined IV algorithms

It is well known that the recursive least squares and related algorithms, such as those discussed in this series of papers, can be interpreted as special examples of matrix-gain, multi-dimensional, stochastic approximation (SA) procedures (Tsypkin 1971, Young 1976). It is clearly possible, therefore, to modify any of the refined IV algorithms to form simpler, scalar gain alternatives. While such SA procedures are computationally efficient, they will not usually possess the rapid convergence characteristics and small sample statistical efficiency of their matrix gain equivalents. They may prove advantageous, however, where data are plentiful but computational load must be kept to a minimum.
In the basic SA algorithms, the matrix gain P_k is replaced by a scalar gain which obeys the conditions of Dvoretzky (see, e.g. Tsypkin 1971). In the discrete-time case, the best known gain sequence of this type is gamma_k = gamma/k, where gamma is a constant scalar: in other words, the gain sequence is made a monotonically decreasing function of the recursive step number, k. In the continuous-time case the best known example is simply the continuous-time equivalent of gamma_k.

Such SA algorithms can also be modified (normally heuristically) to allow for variation in the estimated parameters: this is achieved by restricting the monotonic decrease in gain in some manner, usually by making gamma_k or gamma(t) approach a constant gamma_0 exponentially as k or t approaches infinity. This modification is based on a partial analogy with the behaviour of the P_k matrix in algorithm (8) when a RW model (9) for parameter variations is used to define Phi and Gamma (Young 1979 d).
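The two gain sequences just described are easily sketched. With gamma_k = gamma/k (gamma = 1) the scalar SA recursion reproduces the running mean exactly, while the modified sequence decays exponentially toward an assumed constant floor gamma_0 so that the gain never vanishes and parameter variations can still be tracked. The numerical values are illustrative assumptions.

```python
import math

def sa_estimate(data, gains):
    # scalar SA recursion: est_k = est_{k-1} + gamma_k * (y_k - est_{k-1})
    est = 0.0
    for g, y in zip(gains, data):
        est = est + g * (y - est)
    return est

def gains_floored(gamma, gamma0, lam, n):
    # gain decays exponentially toward the constant floor gamma0
    return [gamma0 + (gamma - gamma0) * math.exp(-lam * (k - 1))
            for k in range(1, n + 1)]

data = [2.0, 4.0, 6.0, 8.0]
est = sa_estimate(data, [1.0 / k for k in range(1, 5)])  # gamma_k = 1/k
g = gains_floored(1.0, 0.05, 0.1, 200)
```

With gamma_k = 1/k the estimate after four samples is exactly their arithmetic mean (5.0); the floored sequence starts at gamma and settles at gamma_0 = 0.05 for large k.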
The simple SA versions of the refined IV algorithms cannot be recommended for general application since their rates of convergence can be intolerably low (see, e.g. Ho and Blaydon 1966). But it is possible to consider somewhat more complicated algorithms which represent a compromise between the simplicity of basic SA and the complexity of the fully recursive matrix-gain algorithms. A simple example would be the following modification to the refined IV algorithm given by eqn. (4) (i) in Part I of the paper (Young and Jakeman 1979 a, p. 4)

â_k = â_(k-1) − gamma_k P_k^D x̂_k* {ẑ_k*^T â_(k-1) − y_k*}     (36)

Here P_k^D is a (2n + 1) × (2n + 1) diagonal matrix with elements (x̂_(k-i)*)^(−2), i = 1, 2, ..., n, and (u_(k-j)*)^(−2), j = 0, 1, ..., n; while gamma_k is a SA sequence, say gamma/k. In other words, the gain matrix P_k is replaced by a purely diagonal 'approximation' gamma_k P_k^D, so that the computational burden is proportional to n rather than n² as for the full refined IV algorithm.
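A single update of the diagonal-gain form (36) can be sketched as follows. The prefiltered vectors and the gain value are hypothetical numbers chosen for illustration; the point is that only element-wise products are needed, so the cost per step is O(n) rather than O(n²).

```python
import numpy as np

def diagonal_gain_step(theta, z, xh, y, gamma_k):
    """One update of form (36): the matrix gain is replaced by the
    diagonal 'approximation' gamma_k * diag(1/xh_i^2), so only
    element-wise products are needed."""
    PD = 1.0 / np.square(xh)            # diagonal elements of P^D
    err = z @ theta - y                 # generalized equation error
    return theta - gamma_k * PD * xh * err

theta0 = np.zeros(3)
z = np.array([0.5, -1.0, 1.0])    # hypothetical prefiltered data vector
xh = np.array([0.4, -0.9, 1.0])   # hypothetical prefiltered IV vector
theta1 = diagonal_gain_step(theta0, z, xh, 2.0, gamma_k=0.1)
```

Starting from theta = 0 the update is exactly −gamma_k (1/xh) err element-wise, and with a suitably small gamma_k the magnitude of the equation error is reduced after the step.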
Algorithms such as (36) seem to work reasonably well (see, e.g. Kumar and Moore 1979). As might be expected, their performance seems to fall somewhere between that of the full algorithm and the scalar gain equivalent. In general, however, the simpler algorithms should only be used when this is necessitated by computational limitations, as for example in on-line applications using special low storage capacity microprocessors.
8. Self adaptive control

Perhaps the prime motivation for the development of recursive estimation algorithms during the early nineteen sixties was their potential use in self-adaptive control. Of late, this early stimulus has been revived with the development of the 'self-tuning regulator' (STR) based on recursive least squares (RLS) estimation (e.g. Åström and Wittenmark 1973, Clarke and Gawthrop 1975). In the STR the effect of the noise induced asymptotic bias on the RLS parameter estimates is cleverly neutralized by embedding the algorithm within a closed adaptive loop which automatically adjusts the estimates and the control law to yield either minimum variance regulation or some other objective, such as closed loop pole assignment (Wellstead et al. 1979).
The concept of the STR can be contrasted with the earlier concept of self adaptive control by 'identification and synthesis' (SAIS), where the objective is to obtain explicitly unbiased parameter estimates and then to synthesize the control law separately using these estimates (e.g. Kalman 1958, Young 1965 b, Young 1979 b).
While the STR seems to possess good practical potential, there are certain situations where the alternative of SAIS will have some advantages. For example, the stability of the adaptive loop in the STR is not easy to ensure a priori because of the close integration between the recursive algorithm and the control law, and the highly non-linear nature of the resulting closed loop system. On the other hand, the separation of estimation and synthesis in the SAIS system means that the question of convergence and stability is largely one of ensuring the identifiability of the system under closed loop control. This will always be possible provided an external command input is present which is both 'sufficiently exciting' and statistically independent of the noise in the closed loop. The requirement for such an input can be problematical, however, in the pure regulatory situation, where the STR clearly comes into its own.
In cases where the SAIS procedure seems advantageous, the refined IV algorithm provides the best currently available recursive estimation strategy: it is robust, can be applied in continuous or discrete-time, and its results can be used for either deterministic or stochastic control system design. The efficiency of such an SAIS strategy is demonstrated in the self adaptive autostabilization system described by Young (1979 b): here the recursive estimation is used to synthesize a deterministically designed control system based on closed loop pole assignment using state variable feedback. This system achieved tight control of the simulated missile over the whole of the mission, which included a difficult boost phase where parameters changed rapidly by factors of up to 30 in 5 sec.
In the case of adaptive, stochastic control by SAIS, the present paper has shown that the refined IV approach provides an added bonus: the single IV-AML algorithm yields not only the estimates of the model parameters but also estimates of the state variables, which can then be used in state variable feedback control. And in the discrete-time, linear case, such an SAIS system could be considered optimally adaptive, since the state variable estimates would then, as we have seen, be optimal in a Kalman sense.
9. Conclusions

This is the third of three papers which have described and comprehensively evaluated the refined IV approach to time-series analysis. In the present paper, we have seen how this approach can be extended in various important directions.

This paper was completed while the authors were visitors in the Control and Management Systems Division of the Engineering Department, University of Cambridge.
REFERENCES

ÅSTRÖM, K. J., 1970, Introduction to Stochastic Control Theory (New York: Academic Press).
ÅSTRÖM, K. J., and WITTENMARK, B., 1973, Automatica, 9, 185.
BOX, G. E. P., and JENKINS, G. M., 1970, Time Series Analysis (San Francisco: Holden Day).
CLARKE, D. W., and GAWTHROP, P. J., 1975, Proc. Instn elect. Engrs, 122, 929.
GELB, A. (ed.), 1974, Applied Optimal Estimation (Boston: M.I.T. Press).
HO, Y. C., and BLAYDON, C., 1966, Proceedings of the N.E.C. Conference, U.S.A.
JAKEMAN, A. J., 1979, Proc. IFAC Symp. on Identification and System Parameter Estimation, Darmstadt, Federal Republic of Germany.
JAKEMAN, A. J., STEELE, L. P., and YOUNG, P. C., 1978, Rep. No. AS/R26, Centre for Resource and Environmental Studies, Australian National University; 1979, Rep. No. AS/R35.
JAKEMAN, A. J., and YOUNG, P. C., 1979 a, Int. J. Control, 29, 621; 1979 b, Rep. No. AS/R36, Centre for Resource and Environmental Studies, Australian National University; 1979 c, Rep. No. AS/R37 (submitted to Electron. Lett.).
JAZWINSKI, A. H., 1970, Stochastic Processes and Filtering Theory (New York: Academic Press).
JOHNSTON, J., 1963, Econometric Methods (New York: McGraw-Hill).
KALDOR, J., 1978, M.A. Thesis, Centre for Resource and Environmental Studies, Australian National University.
KALMAN, R. E., 1958, Trans. Am. Soc. mech. Engrs, 80-D, 468; 1960, Trans. Am. Soc. mech. Engrs, 82-D, 35.
KALMAN, R. E., and BUCY, R. S., 1961, Trans. Am. Soc. mech. Engrs, 83-D, 95.
KAYA, Y., and YAMAMURA, S., 1962, A.I.E.E. Trans. Appl. Ind., 80 II, 378.
KOHR, R. H., and HOBEROCK, L. L., 1966, Proc. J.A.C.C., p. 616.
KOPP, R. E., and ORFORD, R. J., 1963, AIAA J., 1, 2300.
KREISSELMEIER, G., 1977, IEEE Trans. autom. Control, 22, 2.
KUMAR, R., and MOORE, J. B., 1979, Automatica (to appear).
LEE, R. C. K., 1964, Optimal Estimation, Identification and Control (Boston: M.I.T. Press).
LEVADI, V. S., 1964, International Conference on Microwaves, Circuit Theory and Information Theory, Tokyo.
LJUNG, L., 1976, System Identification: Advances and Case Studies, edited by R. K. Mehra and D. G. Lainiotis (New York: Academic Press).
NORTON, J. P., 1975, Proc. Instn elect. Engrs, 122, 663.