Neurocomputing 2 (1990) 17-27

Elsevier

Stock price prediction using neural networks: A project report

E. Schöneburg

Expert Informatik GmbH, Roenneberg Str. 5-4, D-1000 Berlin (West) 41, FRG

Abstract. We analyzed the possibility of predicting stock prices on a short-term, day-to-day basis with the help of neural networks by studying three important German stocks chosen at random (BASF, COMMERZBANK, MERCEDES). We examined the use of PERCEPTRON, ADALINE, MADALINE and BACK-PROPAGATION networks.
The results were encouraging. Within short prediction time spans (10 days), we achieved a very high degree of accuracy of up to 90%. With a BACK-PROPAGATION network we carried out an absolute-value prediction. The network was thereby able to recognize on its own an obvious heuristic and showed a behaviour similar to the exponential smoothing algorithm.
The results we achieved lead us to expect that neural networks could considerably improve the prognosis of stock prices (and, more generally, the prognosis of semi-chaotic time series) in the future.
Nevertheless, considerable improvements are needed in the theory of neural networks, as practicable methods to support the design of neural networks for specific applications are not yet available.

1. Introduction

In the course of the past two years, the international financial world has suffered two serious stock exchange collapses. Surprisingly, however, these crashes did not have the devastating effects on the international economy which could have been expected from a historical point of view. A certain separation of the stock exchange from the economic situation seems to have emerged, the stock exchange no longer being the direct barometer of economic development in some areas.

This partial disappearance of the narrow cause and time relationship between the stock exchange and the actual economic fluctuations makes the prediction of stock prices even more difficult than it had been until now. In addition to this, stock trading has become able and forced to react to political and other relevant events faster and faster due to the ever increasing use of data processing technology and to the improvement of communication systems.

Banks, financial institutions, large-scale investors and stockbrokers therefore now more than ever face the problem of having to buy and resell at a profit a maximum number of stocks within the shortest possible time. In this regard, a time span of only a few hours between buying and selling stocks is not unusual.

Under these conditions, what are the possibilities for predicting stock prices with the help of data processing and computer programs?

Conventional (i.e., nonadaptive) prediction programs work best in situations where the real course of stock prices can be smoothed for the prediction by means of statistical procedures without loss of prediction relevance. In the case of medium-term investment, this is realistic and useful, for instance, for private investors, who usually keep their stocks for a number of weeks or months. Large-scale investors and brokers can use such procedures on a limited scale only. They make their profits, for the most part, not from a maximum price increase of fewer stocks over a longer period of time, as is the case for private in-

0925-2312/90/$03.50 © 1990 Elsevier Science Publishers B.V.


E. Schöneburg / Stock price prediction

vestors, but rather as a rule from a relatively small price increase of a great number of stocks in an extremely short time.

Under these conditions, conventional prediction methods can only be used with a great amount of effort. The price fluctuations are for the most part too minimal to be successfully recognized and predicted with the statistical procedures which smooth the course of stock prices for prediction purposes. Statistical smoothing usually disregards just that factor which is most important for predictions in short periods of time: small upward or downward price fluctuations.

A further and very important disadvantage of conventional nonadaptive program techniques is that for the analysis of the data, all relevant influencing factors must already be known in advance. Conventional programs must "know" all parameters and influencing factors, i.e., they must contain the parameters along with their assessments and weights a priori in their program code. The designer of this type of program must therefore know and determine exactly which influencing factors he thinks are important and to what extent he wants to consider them.

In the case of the economic development and intertwinement of today's world, such as we are witnessing with the upheavals in Eastern Europe, it is practically impossible to recognize all the relevant influencing factors and explicitly formulate them. Fundamental analysis is basically restricted by the incredible abundance and complexity of the information to be considered. No one can determine today what relevant factors will affect the world economy in the next few years. Too many factors are unsure.

As a result, it is highly improbable that a conventional program should be able to correctly recognize and predict, even with only approximate accuracy, the trends implicitly contained in the economic data of the future. This would only be possible if the program were constantly adapted to the current situation, which is as a rule very costly and difficult, and which would probably take more time than the life-span of the trends themselves.

(We do not criticize adaptive algorithms for statistical prediction, since most of them use ADALINE-like neural network algorithms introduced by Widrow and Hoff; see [4], [1, p. 199 ff.] and [2].)

1.1. The alternative: Neural networks

In the current situation, neural networks offer a genuine alternative to conventional nonadaptive prediction techniques. They have two great advantages over conventional methods: (1) they can recognize "on their own" implicit dependencies and relationships in data, and (2) they can "learn" to adapt their behavior (their prediction) quickly and without complication to changed conditions. These important capabilities of neural networks solve, at least in principle, some of the problems mentioned above.

The following results show that it is possible to make surprisingly good predictions even with relatively simple and almost "old-fashioned" types of neural networks. These results are part of a comprehensive study which we carried out in preparation for a contract to develop neural networks for stock price prediction for a major German bank (see [3]).

2. The results

For the purpose of the following study, we have assumed that stock prices do not represent completely chaotic time series. This is assumed from time to time. We have been working on the basis of the axiom that stock prices show, at least in part, certain tendencies and trends which may be very difficult to recognize but nevertheless exist. We have also presupposed that price predictions, even in a very restricted form, are possible and useful using only methods of technical stock analysis. We have until now not considered fundamental analytical methods.

We have analyzed several classical network types in terms of their suitability for stock price prediction. In particular, we have examined ADALINE, MADALINE, PERCEPTRON and BACK-PROPAGATION networks. Our aim was mainly a short-term rise-fall prediction for the next day (in the case of the ADALINE, MADALINE and PERCEPTRON networks) and an absolute-value prediction (BACK-PROPAGATION network) for the next day.

As data for training the networks, we used the prices of three randomly chosen major German shares (BASF, COMMERZBANK and MERCEDES) over a period of 42 days (from 9 February 1989 until 14 April 1989). After the 42 training days, predictions were made for up to a maximum of 56 days.

The input vector had a breadth of 40 elements for most of the networks (the BACK-PROPAGATION network had 10 input elements), i.e., for each learning step 40 (10) data items were fed in parallel into the networks' input layer. These are the input information items we used:
• K = the current day stock price,
• VV = the absolute variation of the price in relation to the previous day,
• RV = the direction of this variation (rise, fall),
• RG = the direction of the variation from two days previously,
• G = major (>10% of stock price) variations in relation to the previous day,
• the prices of the last 10 days in the case of BACK-PROPAGATION.

In our studies, the element K has for the most part not been directly necessary for prediction, as it is only the relative variation in prices, and not the absolute price value, which is of significance for price prediction.

Figure 1 shows some of the linearizations of input data we examined and their distribution on the input vector. The predictions were strongly dependent on the linearization chosen, as shown by the result charts below.

2.1. The ADALINE network

Training was over a period of 42 days, predicting over 19. A comparison of the dependencies between various linearizations and the learning cycles (between 2000 and 3000 presentations of data to be learned were "shown" to the network) is presented in Table 1.

Our best result, with the BASF stock (linearization (c), 2500 learning steps), was an accuracy rate of 79% for a rise-fall prediction for the following day. The COMMERZBANK stock, with linearization (d) and also 2500 learning steps, was predicted

Fig. 1. Different splittings of the input vectors: rows (a)-(e) show how the elements VV, RV, RG and G are distributed over the 40-element input vector.



with a maximum accuracy of 74%. In contrast, the results for the MERCEDES stock were rather modest. The maximum accuracy achieved here, with linearizations (a) and (d) and 2000 learning steps, and with linearization (a) and 3000 learning steps, was only 58%.

We found the dependency of the networks' prediction capacity on the planned prediction period and on the distance in time from the actual date to be of particular interest. The results are shown in Table 2, Fig. 2, Table 3, Figs. 3 and 4. Figures 2-4 are to be interpreted in the follow-

Table 1
Prediction accuracy (%) dependent on linearization and number of learning cycles with the ADALINE network

                   BASF              COMMERZBANK       MERCEDES
Cycles             2000  2500  3000  2000  2500  3000  2000  2500  3000

Linearization (a)    58    58    58    47    68    68    58    53    58
Linearization (b)    47    68    63    63    58    58    42    53    53
Linearization (c)    63    79    74    68    63    63    53    53    53
Linearization (d)    68    68    68    63    74    68    58    53    53
Linearization (e)    47    63    63    58    63    63    53    53    53
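The ADALINE training behind Table 1 is, in essence, the Widrow-Hoff least-mean-squares rule cited above [4]. As a minimal sketch (the toy price-change inputs, learning rate and cycle count below are invented for illustration, not the study's data or settings), a rise-fall predictor can be trained like this:

```python
# Minimal ADALINE (Widrow-Hoff LMS) sketch for a rise/fall prediction.
# The sample data, learning rate and cycle count are hypothetical.

def train_adaline(samples, lr=0.05, cycles=500):
    """samples: list of (input_vector, target) with target in {-1, +1}."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(cycles):
        for x, t in samples:
            y = b + sum(wi * xi for wi, xi in zip(w, x))  # linear output
            err = t - y                                   # LMS error
            b += lr * err
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w, b

def predict_rise_fall(w, b, x):
    y = b + sum(wi * xi for wi, xi in zip(w, x))
    return +1 if y >= 0 else -1  # +1 = rise, -1 = fall

# Toy linearized inputs (recent relative price changes); target = next move.
samples = [([0.2, 0.1, 0.3], +1), ([-0.3, -0.1, -0.2], -1),
           ([0.1, 0.2, 0.1], +1), ([-0.2, -0.3, -0.1], -1)]
w, b = train_adaline(samples)
hits = sum(predict_rise_fall(w, b, x) == t for x, t in samples)
accuracy = 100.0 * hits / len(samples)
```

The table entries above are accuracy rates of exactly this kind, measured on predictions rather than on the training days.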

Table 2
Prediction accuracy (%) dependent on linearization and prediction time for BASF stock with the ADALINE network

       Prediction period in days
       10     20     30     40     50     58     Avg.

(a)    70.0   55.0   46.7   40.0   42.0   41.4   49.2
(b)    70.0   65.0   56.7   50.0   50.0   48.3   56.7
(c)    80.0   70.0   60.0   55.0   60.0   56.9   63.7
(d)    80.0   75.0   70.0   57.5   58.0   55.2   66.0
(e)    70.0   60.0   50.0   40.0   38.0   37.9   49.3
Avg.   74.0   65.0   56.7   48.5   49.6   47.9   57.0

Table 3
Prediction accuracy (%) dependent on linearization and prediction time for COMMERZBANK stock with the ADALINE network

       Prediction period in days
       10     20     30     40     50     58     Avg.

(a)    60.0   65.0   63.3   52.5   54.0   51.7   57.8
(b)    80.0   65.0   63.3   60.0   62.0   58.6   64.8
(c)    70.0   65.0   56.7   47.5   54.0   53.4   57.8
(d)    90.0   75.0   60.0   52.5   58.0   56.9   65.4
(e)    70.0   65.0   60.0   52.5   56.0   53.4   59.5
Avg.   74.0   67.0   60.7   53.0   56.8   54.8   61.1

ing way: the thin line represents the training period, the bold line the prediction period, and the dotted line the average prediction success in % over the successive 10-day periods. (The thin line is placed in front of the bold line; it is included to allow a comparison between the training data and the recall data. The straight line represents the overall trend in prediction accuracy.)
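The bookkeeping behind the dotted lines and the averages in Tables 2 and 3 can be sketched as follows; the hit/miss record below is invented for illustration:

```python
# Sketch of the accuracy bookkeeping used in Tables 2/3 and the dotted
# lines of Figs. 2-4: a hit/miss record of predictions is (a) grouped
# into 10-day periods and (b) accumulated over the first n days.
# The hit/miss sequence below is a hypothetical example.

def period_accuracies(hits, period=10):
    """hits: sequence of 1 (correct) / 0 (wrong) predictions. Returns
    the success rate in % for each (possibly partial) period."""
    out = []
    for start in range(0, len(hits), period):
        chunk = hits[start:start + period]
        out.append(100.0 * sum(chunk) / len(chunk))
    return out

def cumulative_accuracy(hits, n):
    """Accuracy in % over the first n prediction days."""
    return 100.0 * sum(hits[:n]) / n

hits = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1,   # first 10 days: 8 hits
        1, 0, 0, 1, 1, 0, 1, 0, 0, 1]   # next 10 days: 5 hits
per_period = period_accuracies(hits)     # [80.0, 50.0]
first_20 = cumulative_accuracy(hits, 20)  # 65.0
```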

Fig. 2. BASF stock prediction with the ADALINE network: training (thin line), prediction (bold line), average prediction (dotted line), and overall trend (straight line). [Chart spans 19.04.89 to 06.07.89; price in DM, accuracy in %.]

Fig. 3. COMMERZBANK stock prediction with the ADALINE network: training (thin line), prediction (bold line), average prediction (dotted line), and overall trend (straight line). [Chart spans 19.04.89 to 06.07.89; price in DM, accuracy in %.]

Fig. 4. MERCEDES stock prediction with the ADALINE network: training (thin line), prediction (bold line), average prediction (dotted line), and overall trend (straight line). [Chart spans 19.04.89 to 05.07.89; price in DM, accuracy in %.]

It must be noted that, although the success rate drops along an almost linear pattern as the time distance to the actual date increases (which was to be expected), it was nevertheless possible to make very good predictions for some ten-day periods even though the time distance to the actual date was very large. This is probably because the network can make good predictions for periods which have occurred in a similar form in the past and for which the future then was similar to the future to be predicted. In these cases the network was able to recognize similar historical situations relatively well.

2.2. The MADALINE networks

A MADALINE network can be seen as a combination of ADALINE networks, in which ADALINE neurons in the intermediate layer of the network generate a result by using the majority function in the output layer. For a MADALINE network, therefore, it is necessary to determine how many ADALINE processors are to be used in the intermediate layer. We tested MADALINE networks with 3 to 21 ADALINE elements (due to combinatorial considerations, we only used odd numbers of ADALINE PE's). At first, we worked with a fixed linearization (a) and put the network through 4000 learning steps.

The results of these tests (based on a rise-fall prediction for 19 days) can be seen in Table 4. For all three stocks, 17 ADALINE processors in the intermediate layer were a local optimum (as we guess). The relationship between the number of ADALINE processors needed, the basic linearization and the prediction time is presented in a chart (Table 5), using the COMMERZBANK stock as an example. We have, however, not yet analyzed these interrelationships mathematically, as they are quite complex.

2.3. The PERCEPTRON network

The PERCEPTRON network showed the worst prediction capacity of all, a fact which we had expected in view of the well-known limitations of this type of network. The best result was 68% accuracy, but surprisingly enough it was achieved very often for the MERCEDES stock, which was not very successfully predicted by the other network types.

Determining optimal learning coefficients was difficult. The final tests were carried out with learning coefficients c1 = 0.01 and c2 = 0.05, as they

Table 4
Prediction accuracy (%) with the MADALINE network

ADALINE PE's   BASF   COMMERZBANK   MERCEDES   Avg.

 3              47        68           58       58
 5              58        58           63       60
 7              53        63           58       58
 9              58        63           47       56
11              58        74           63       65
13              58        68           58       62
15              68        58           53       60
17              68        74           63       69
19              58        74           47       60
21              53        63           47       54

Avg.            58        66           56       60
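The majority decision described above can be sketched as follows (the ADALINE weights and inputs are invented for illustration; an odd number of units avoids ties, as noted):

```python
# Sketch of a MADALINE decision: several ADALINE processing elements
# each emit a rise/fall vote (+1/-1) and the output layer applies the
# majority function. The weights and inputs below are hypothetical.

def adaline_vote(w, b, x):
    y = b + sum(wi * xi for wi, xi in zip(w, x))
    return +1 if y >= 0 else -1

def madaline_predict(units, x):
    """units: list of (weights, bias) pairs, one per ADALINE PE;
    the majority of the individual votes is the network output."""
    votes = [adaline_vote(w, b, x) for w, b in units]
    return +1 if sum(votes) > 0 else -1  # odd unit count: no tie possible

# Three hypothetical trained ADALINE elements in the intermediate layer.
units = [([0.5, -0.2], 0.0), ([0.1, 0.4], -0.1), ([-0.3, 0.6], 0.05)]
rise = madaline_predict(units, [0.2, 0.3])
fall = madaline_predict(units, [-0.5, -0.5])
```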

Table 5
Relationship between number of ADALINE processors, basic linearization and prediction time for COMMERZBANK stock with the MADALINE network

Linearization / PE's   Prediction period in days
                       10   20   30   40   50   58   Avg.

(a) 11                 80   70   63   60   58   57   65
(a) 17                 70   75   63   63   56   57   64
(a) 19                 80   70   60   63   56   57   64
(d)  7                 70   65   60   55   56   57   61
(d) 11                 80   65   53   50   56   57   60

Avg.                   76   69   60   58   56   57   63

Table 6
Prediction accuracy (%) with the PERCEPTRON network (c1 = 0.01, c2 = 0.05)

                   BASF              COMMERZBANK       MERCEDES
Cycles             2500  3000  3500  2500  3000  3500  2500  3000  3500

Linearization (a)    58    58    58    47    47    47    63    63    63
Linearization (b)    42    42    42    53    58    63    58    42    47
Linearization (c)    58    58    58    47    47    47    68    63    63
Linearization (d)    47    47    47    58    58    58    68    68    68
Linearization (e)    68    52    52    42    42    42    63    68    68

achieved the best results. Linearizations (a)-(e) were used, and stock prices for 43 days were learned 2500 to 3500 times. Afterwards, 19 predictions were assessed; the results are presented in Table 6.

2.4. The BACK-PROPAGATION network

To train the BACK-PROPAGATION network, the input data first had to be transformed (scaled) to the real interval [0.1, 0.9]. Finding the best topology for the BACK-PROPAGATION network required much work. The best results were produced by the topology with the corresponding transfer functions shown in Fig. 5.

As learning rules, we used the cum-delta rule as well as the delta rule; however, only the connections to the output layer learned with the cum-delta rule. This led to slightly better results than the exclusive use of one of the two rules. The summation function was the simple summation and the output function was direct. We used the following figures for the learning coefficients: c1 (learning rate) = 0.6 and c2 (momentum) = 0.9.

A partial result of our test series for 60 days of training and 40 recall figures (predictions) can be seen in Figs. 6 and 7. It can be seen that the predicted figures exhibit a time shift of almost one day.

There are two reasons for this time shift. Firstly, the network was trained with a "window" of 10 days being placed on the learning data to predict the eleventh day. The "window" was then shifted one day to the right and the new set of data was then learned.

Secondly, and interestingly, the network itself has apparently discovered the following heuristic: "take for the next-day prediction the stock price of the last day and modify it only slightly". This is evident in the fact that the prediction varies only slightly from the previous day's stock price, and creates the impression of a time shift of the prediction in relation to the actual stock price.

This apparent time shift also occurs when exponential smoothing of first or second order is used for prediction purposes instead of neural networks. The network therefore discovered the importance of the last day's stock price for the prediction, which is the essence of the exponential smoothing algorithm. It used the 10-day data window only to a minor extent, as a support for the prediction.

The simple heuristic detected by the network often leads to quite good results, e.g. in weather forecasting, since as a rule a weather change is less probable than a steady weather pattern. This is also true for stock prices. The critical point with

Fig. 5. Topology and transfer functions of the BACK-PROPAGATION network: an input layer of 10 neurons feeds hidden layers 1 and 2 (5 + 5 neurons), which feed hidden layers 3 and 4 (5 + 5 neurons), followed by a single output neuron with bias; the transfer functions combine sigmoid, sine and linear units.
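The data preparation described in Section 2.4 (scaling the prices to [0.1, 0.9] and sliding a 10-day window over the series to predict the eleventh day) can be sketched as follows; the price list is a toy series, not the study's data:

```python
# Sketch of the input preparation for the BACK-PROPAGATION network:
# prices are scaled linearly into [0.1, 0.9], then a sliding "window"
# of 10 days is paired with the 11th day as the training target, and
# the window is shifted one day to the right for the next pair.
# The price list below is a hypothetical example.

def scale(prices, lo=0.1, hi=0.9):
    """Linear transformation of the price series into [lo, hi]."""
    pmin, pmax = min(prices), max(prices)
    return [lo + (hi - lo) * (p - pmin) / (pmax - pmin) for p in prices]

def windows(series, width=10):
    """Return (input_window, target) pairs for every window position."""
    return [(series[i:i + width], series[i + width])
            for i in range(len(series) - width)]

prices = [250.0 + i for i in range(15)]  # 15 toy trading days
scaled = scale(prices)
pairs = windows(scaled, width=10)        # 5 training pairs from 15 days
```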



Fig. 6. COMMERZBANK stock prediction with the BACK-PROPAGATION network: real data vs. predicted data, 18.4.89 to 13.7.89 (price in DM).
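The exponential-smoothing behaviour that the network apparently rediscovered can be sketched as a first-order smoother used as a next-day predictor (the price series and the smoothing constant alpha below are invented for illustration):

```python
# Sketch of first-order exponential smoothing as a next-day predictor,
# for comparison with the network's heuristic: the forecast stays close
# to the last observed price, producing the apparent one-day time shift
# visible in Figs. 6 and 7. Prices and alpha are hypothetical.

def exp_smooth_forecasts(prices, alpha=0.7):
    """Return one-step-ahead forecasts: s_t = alpha*p_t + (1-alpha)*s_(t-1),
    where s_t serves as the forecast for day t+1."""
    s = prices[0]
    forecasts = []
    for p in prices:
        s = alpha * p + (1 - alpha) * s
        forecasts.append(s)  # forecast for the following day
    return forecasts

prices = [250.0, 252.0, 251.0, 255.0, 254.0]
f = exp_smooth_forecasts(prices, alpha=0.7)
# With a large alpha, each forecast tracks the previous day's price closely.
```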

Fig. 7. MERCEDES stock prediction with the BACK-PROPAGATION network: real data vs. predicted data, 18.5.89 to 13.7.89 (price in DM).

this simple heuristic is, however, the "slight variation" from the previous day's price as a recommendation for the following day. The quality with which this "slight variation" is recognized and characterized is the degree to which the prediction can be improved compared with the obvious heuristic.

3. Further tests

The results presented above were encouraging and have led us to further tests. We are currently concentrating on the following areas:

(1) The selection of other, perhaps more suitable input information for the networks (e.g. variation

quotients from previous days' prices as input, or information about stock price behavior on the basis of reference stocks from the same field of business (key word: field dependencies), etc. We are also currently examining the possibility of designing neural networks as constraint-satisfaction networks for fundamental analytical purposes. We have estimated that the amount of work necessary to design such a network would be about the same as the amount of work required for the acquisition of knowledge about stock behavior to be used in an expert system for this task. We believe that this would be very time-consuming and expensive.

(2) The study of further network types in terms of their suitability for price predictions (among others, counter-propagation, Hopfield, Kohonen, ARTx). The first analyses with Hopfield networks required considerable effort for a suitable transformation and preparation of the input and the definition of an energy function. Other tests using feature extraction with Kohonen-like networks have not yet produced results worth mentioning. Nevertheless, we presume that competitive self-organizing networks are most suitable for predictions. We do, however, intend to systematically examine other types of networks in terms of their suitability, as we are not sure how much better complex networks are compared to simpler ones. It could be that the increase in prediction capacity of the more complex networks is relatively small, in which case the work involved would probably no longer be worthwhile in practical use.

(3) The discovery of "best" network types and topologies and the determination of relevant network parameters (e.g. the number of layers and neurons, learning coefficients, connections of layers, threshold values, learning rules and strategies, etc.).

(4) The improvement of the theoretical means at our disposal as aids in designing and analyzing networks.

(5) The development of distinct, problem-oriented network simulators in sequential and parallel C. For the simulations described above, some 5000 lines of C code were necessary. For the analysis of the performance of complex networks we are currently testing neurocomputers and transferring our simulator to transputer hardware.

(6) We have estimated the statistical relevance of our results. Since we had an accuracy of 63% over 285 predictions in total, the probability of this outcome is as low as 1/100 000, which is not too bad. On the 10 to 20 day basis the results were even better, as we had an average of 71.3% correct predictions over 300 predictions in total. The statistical relevance of these results is yet to be confirmed; the test series were too short to permit a full statistical assessment. In 1990 we will carry out comprehensive statistical analyses over a continuous prediction period of 10 months.

(7) In order to become less dependent on the network types and the various parameters, we are currently exploring the possibility of determining the weighting between the neurons by means of a genetic algorithm and an evolution-theoretical approach. We will be reporting on our progress in due time.

4. Problems

We cannot deny that we have been faced with complex problems in some of our attempts to determine the suitability of neural networks for the prediction of stock prices; for some of these, there is no solution in sight, not even in theory.

Most of the time spent in the project was related to the difficulties of finding the "best" network topology and setting the corresponding network parameters (see point (3) in Section 3). As a rule, even slight parameter changes caused major variations in the behavior of almost all networks. There is no theory available yet which could be used as a guideline to find "best" networks. There is a real practical need for a theory of equivalence classes of neural networks and the development of a partial order on these classes defining the relation of complexity of networks
and minimal elements (i.e., networks with the smallest number of neurons necessary to solve certain kinds of problems, etc.). As classification theory in logic shows, this is possible even for complex mathematical theories. It should at least in principle be possible for subclasses of neural networks. Without any theory of this kind at hand, one is almost always forced to proceed using a time-consuming and laborious trial-and-error strategy.

References

[1] M. Hüttner, Prognoseverfahren und ihre Anwendung (De Gruyter, Berlin, 1986).
[2] P. Mertens, ed., Prognoserechnung (Physica-Verlag, Würzburg, 1981).
[3] E. Schöneburg, N. Hansen, M. Gantert and M. Reiner, Kurzfristige Aktienkursprognose mit Neuronalen Netzen (Expert Informatik GmbH, Berlin, 1989).
[4] B. Widrow and M.E. Hoff, Adaptive switching circuits, in: Anderson and Rosenfeld, eds., Neurocomputing (MIT Press, Cambridge, MA, 1988) 126 ff.

Professor Eberhard Schöneburg, born in 1956, studied mathematics, computer science, physics and philosophy at the Berlin Free University and Technical University from 1979 to 1984. After the completion of his studies (degree: Dipl. Math.), he worked as a software and systems engineer for SIEMENS AG in Berlin and Boca Raton, FL, USA (1984-1986). At SIEMENS, his area of involvement consisted of the development of intelligent self-test and diagnosis modules for real-time applications and the automatic generation of programs.

Professor Schöneburg accepted an offer from DORNIER to take over the direction of the department for new technologies, expert systems and computer security in 1986. In 1988 he founded Expert Informatik GmbH. This company specializes in expert systems applications in banks and in the production area (solvency analyses, production planning), and was the first company in Germany to be asked to develop neural networks for a major bank; it also rates among the leaders in the development of anti-computer-virus systems.

With the chair of artificial intelligence at the Furtwangen/Schwarzwald Technical College, Professor Schöneburg's scientific interest has concentrated on applications of nonclassical logic in robot technology, on the prediction of semi-chaotic time series with neural networks (dissertation topic) and on expert systems security. Professor Schöneburg has just completed writing a German textbook on neural networks containing a software simulator in C (to appear in 1990).
