Lecture Notes
Copyright 2006 by Wolfgang Marquardt
Lehrstuhl für Prozesstechnik, RWTH Aachen University
Templergraben 55, D-52056 Aachen, Germany
Tel.: +49 (0)241 80-94668
Fax: +49 (0)241 80-92326
E-mail: secretary@lpt.rwth-aachen.de
WWW: http://www.lpt.rwth-aachen.de
The copyright of these lecture notes is reserved. Copies may only be made for use within the lecture Introduction to Simulation Techniques at RWTH Aachen University. Any further use requires written permission. In these lecture notes, materials of other authors are used for educational purposes. This does not imply that these materials are free of copyright. These notes for the lecture Introduction to Simulation Techniques have been created to the best of the authors' knowledge. However, neither the correctness of the given information nor the absence of typing errors can be guaranteed. The assessment load for examinations in Introduction to Simulation Techniques conforms to the presentations in the lectures and exercises, not to these notes.
Preface
This manuscript accompanies the lecture Introduction to Simulation Techniques, which may be attended by students of the master programme Simulation Techniques in Mechanical Engineering, students of the Lehramtsstudiengang Technische Informatik, students of Mechanical Engineering whose major course of study is Grundlagen des Maschinenbaus, as well as students taking it as a third technical elective course in Mechanical Engineering. The lecture was offered for the first time in the summer semester 2001. The manuscript aims at minimizing the effort of taking notes during the lecture and tries to present the basics of simulation techniques in a compact manner. The topics treated in the manuscript are very extensive and can therefore be discussed only in a summarizing way in a one-term lecture. Well-known material from other lectures is covered only briefly. It is presupposed that the reader is familiar with the basics of numerics, mechanics, thermodynamics, and programming.

Above all, Martin Schlegel contributed to the success of this manuscript, both with critical remarks and helpful comments and with the continuous revision of the text and the figures. Aidong Yang did a lot of work in polishing the first English version of this manuscript. Beyond that, Ngamraht Kitiphimon and Sarah Jones have to be mentioned, who provided the first German and English versions of the manuscript, respectively. My thanks to all of them.

The lecture is based on the lecture Simulationstechnik offered by Professor M. Zeitz at the University of Stuttgart. I would like to express cordial thanks to him for his permission to use his lecture notes.

Despite repeated and careful revision of the manuscript, incorrect representations cannot be excluded. I am grateful for every hint about (possible) errors, gaps in the material selection, didactical weaknesses, or unclear representations, so that the manuscript can be improved further.
These lecture notes are also offered on the homepage of Process Systems Engineering (http://www.lpt.rwth-aachen.de), where they can be downloaded by any interested reader. I hope that the publication on the internet allows a faster correction of errors. I would like to ask the readers to submit suggestions for changes and corrections by email (simulationtechniques@lpt.rwth-aachen.de). Each email will be answered.
Ralph Schneider
Contents
1 Introduction
  1.1 What is Simulation?
  1.2 Simulation Procedure
  1.3 Introductory Example for the Simulation Procedure
    1.3.1 Problem
    1.3.2 Abstraction
    1.3.3 Mathematical Model
    1.3.4 Simulation Model
    1.3.5 Graphical Representation
    1.3.6 Analysis of the Model
    1.3.7 Numerical Solution
    1.3.8 Simulation
    1.3.9 Applications of Simulators

2 Representation of Dynamic Systems
  2.1 State Representation of Linear Dynamic Systems
  2.2 The State Space
  2.3 State Representation of Nonlinear Dynamic Systems
    2.3.1 Example
    2.3.2 Generalized Representation
  2.4 Block-Oriented Representation of Dynamic Systems
    2.4.1 Block-Oriented Representation of Linear Systems
    2.4.2 Block-Oriented Representation of Nonlinear Systems

3 Model Analysis
  3.1 Lipschitz Continuity
  3.2 Solvability
  3.3 Stationary States
  3.4 Jacobian Matrix
  3.5 Linearization of Real Functions
  3.6 Linearization of a Dynamic System around the Stationary State
  3.7 Eigenvalues and Eigenvectors
  3.8 Stability
    3.8.1 One state variable
    3.8.2 System matrix with real eigenvalues
    3.8.3 Complex eigenvalues of a 2x2 system matrix
    3.8.4 General case
  3.9 Time Characteristics
  3.10 Problem: Stiff Differential Equations
  3.11 Problem: Discontinuous Right-Hand Side of a Differential Equation

4 Basic Numerical Concepts
  4.1 Floating Point Numbers
  4.2 Rounding Errors
  4.3 Conditioning

5 Numerical Integration of Ordinary Differential Equations
  5.1 Principles of Numerical Integration
    5.1.1 Problem Definition and Terminology
    5.1.2 A Simple Integration Method
    5.1.3 Consistency
  5.2 One-Step Methods
    5.2.1 Explicit Euler Method (Euler Forward Method)
    5.2.2 Implicit Euler Method (Euler Backward Method)
    5.2.3 Semi-Implicit Euler Method
    5.2.4 Heun's Method
    5.2.5 Runge-Kutta Method of Fourth Order
    5.2.6 Consistency Condition for One-Step Methods
  5.3 Multiple-Step Methods
    5.3.1 Predictor-Corrector Method
  5.4 Step Length Control

6 Algebraic Equation Systems
  6.1 Linear Equation Systems
    6.1.1 Solution Methods for Linear Equation Systems
  6.2 Nonlinear Equation Systems
    6.2.1 Solvability of the Nonlinear Equation System
    6.2.2 Solution Methods for Nonlinear Equation Systems
      6.2.2.1 Newton's Method for Scalar Equations
      6.2.2.2 Newton-Raphson Method for Equation Systems
      6.2.2.3 Convergence Problems of the Newton-Raphson Method

7 Differential-Algebraic Systems
  7.1 Depiction of Differential-Algebraic Systems
    7.1.1 General Nonlinear Implicit Form
    7.1.2 Explicit Differential-Algebraic System
    7.1.3 Linear Differential-Algebraic System
  7.2 Numerical Methods for Solving Differential-Algebraic Systems
  7.3 Solvability of Differential-Algebraic Systems

8 Partial Differential Equations
  8.1 Introductory Example
  8.2 Representation of Partial Differential Equations
  8.3 Numerical Solution Methods
    8.3.1 Method of Lines
      8.3.1.1 Finite Differences
      8.3.1.2 Problem of the Boundaries
    8.3.2 Method of Weighted Residuals
      8.3.2.1 Collocation Method
      8.3.2.2 Control Volume Method
      8.3.2.3 Galerkin Method
      8.3.2.4 Example
  8.4 Summary

9 Discrete Event Systems
  9.1 Classification of Discrete Event Models
    9.1.1 Representation Form
    9.1.2 Time Basis
    9.1.3 States and State Transitions
  9.2 State Model
    9.2.1 Example
  9.3 Graph Theory
    9.3.1 Basic Concepts
    9.3.2 Representation of Graphs and Digraphs with Matrices
      9.3.2.1 Models for Discrete Event Systems
      9.3.2.2 Simulation Tools
  9.4 Automaton Models
  9.5 Petri Nets
    9.5.1 Discrete Time Petri Nets
    9.5.2 Simulation of Petri Nets
    9.5.3 Characteristics of Petri Nets
      9.5.3.1 Reachability
      9.5.3.2 Boundedness and Safety
      9.5.3.3 Deadlock
      9.5.3.4 Liveness
    9.5.4 Continuous Time Petri Nets

10 Parameter Identification
  10.1 Example
  10.2 Least Squares Method
  10.3 Method of Weighted Least Squares
  10.4 Multiple Inputs and Parameters
  10.5 Recursive Regression
  10.6 General Parameter Estimation Problem
    10.6.1 Search Methods
      10.6.1.1 Successive Variation of Variables
      10.6.1.2 Simplex Methods
      10.6.1.3 Nelder-Mead Method

11 Summary
  11.1 Problem Definition
  11.2 Modeling
  11.3 Numerics
  11.4 Simulators
    11.4.1 Application Level
    11.4.2 Level of Problem Orientation
    11.4.3 Language Level
    11.4.4 Structure of a Simulation System
  11.5 Parameter Identification
  11.6 Use of Simulators
  11.7 Potentials and Problems

Bibliography
1 Introduction
1.1 What is Simulation?
Simulation (virtual reality, the experiment on the computer) is also called the third pillar of science next to theory and experiment. We all know examples of simulation techniques from the area of computer games, e.g. the flight simulator (see Fig. 1.1).
Figure 1.1: Flight simulator as an example of a simulator.

In this example the reality is represented in the form of a mathematical model. The model equations are solved with a numerical algorithm. Finally, the results can be displayed visually. A more rigorous definition of (computer-aided) simulation can be found in Shannon (1975, p. 2):

Simulation is the process of designing a model of a real system and conducting experiments with this model for the purpose either of understanding the behavior of the system and its underlying causes or of evaluating various designs of an artificial system or strategies for the operation of the system.

The VDI guideline 3633 (Verein Deutscher Ingenieure, 1996) defines simulation in the following way:
Simulation is the process of emulating a system with its dynamic processes in an experimental model in order to gain knowledge that is transferable to reality. In a broader sense, simulation means the preparation, execution, and evaluation of targeted experiments by means of a simulation model. With the help of simulation, the temporal behavior of complex systems can be explored (simulation method).

Examples of application areas where simulation studies are used are:
flight simulators, weather forecast, stock market, war gaming, software development, flexible manufacturing, chemical processes, power plants.
Simulation became well-known in connection with the book The Limits to Growth (Meadows et al., 1972), which presented and interpreted a world model in the seventies. The obtained simulation results predicted that, with a continuation of the economic and population growth of those days, only a few decades were needed to lead to the exhaustion of raw material resources, worldwide undernourishment, environmental destruction and pollution, and thereby to a dramatic population breakdown.
As Fig. 1.2 shows, you should be aware of the differences between reality and its representation on the computer. This is because modeling is an intended simplification of reality through abstraction. Essentially, it is not reality that is represented on the computer, but solely an approximation! According to the approximation used, different models are obtained. This becomes clear through the definition of a model by Minsky (1965) and is illustrated in Fig. 1.3: To an observer B, an object M is a model of an object A to the extent that B can use M to answer questions that interest him about A.
Figure 1.3: Model definition by Minsky (1965).

Although reality is not completely reproducible, models are useful. Reasons for this are that (computer) simulations (also called simulation experiments) are usually
simpler, faster, less dangerous to people, less harmful for the environment, and much more economical
than real experiments. For the significance of modeling and simulation, the following two quotations should be mentioned: Karl Ganzhorn, IBM (IBM Nachrichten, 1982): Models and simulation techniques are considered the most important area of future information processing in technology, economy, administration, society, and politics.
Ralph P. Schlenker, Exxon (Conference on Foundations of Computer-Aided Process Operations, 1987): Modeling and simulation technology are keys to achieve manufacturing excellence and to assess risk in unit operation. [...] As we make our plant operations more flexible to respond to business opportunities, efficient modeling and simulation techniques will become commonly used tools.
Figure 1.4: Simulation procedure.

Later, we will deal with methods of parameter estimation (identification, optimization) in the context of adjusting models to experiments. Simulation techniques are not tied to any concrete field of application. Rather, they constitute a methodology that can be widely used in many applications. The examples in the lecture and the tutorials will illustrate this interdisciplinary character.
1.3.1 Problem
First, the real-world problem has to be formulated for which we want to find answers with the help of simulation. This example deals with the determination of the vertical movement of a motor vehicle during a ride over a wavy road (see Fig. 1.5).
1.3.2 Abstraction
Generally it is neither our intention nor within our capability to capture all aspects of the real-world problem. For this reason, an appropriate selection of the effects to be considered and practical simplifications of the reality have to be made. In our example, you can simplify the problem e.g. by regarding the wheels and the car body as two mass points. In Fig. 1.6 this simplified model of the vehicle is depicted.
Figure 1.6: Depiction of the simplified motor vehicle model.

Newton's law for the wheel:

m y'' = sum of forces,   (1.1)
m y'' = -d y' - c_A y - c_R (y - y_s),   (1.2)

or, with c := c_A + c_R and k(t) := c_R y_s(t),

m y'' + d y' + c y = k(t),   t > t0.   (1.3)

Equation (1.3) represents a second-order linear differential equation with the initial conditions

y(t0) = y0,   y'(t0) = y0'   (1.4)

for the distance y and the velocity y' at time t0. The differential equation and the initial conditions form the model of the problem. The simulation experiment is defined through the following known quantities:

- the model structure (equation (1.3)),
- the values of the parameters (m, d, c), which in general may be time-variant,
- the time-variant input k(t), and
- the initial conditions y0, y0'.

The wanted quantities are y(t), y'(t), y''(t) for t > t0.
which leads to the following differential equations:

x1' = dx1/dt = y' = x2,   (1.7)
x2' = dx2/dt = y'' = (1/m) (-d x2 - c x1 + k(t)),   (1.8)

with x1(t0) = y0, x2(t0) = y0'.

This is a time-continuous dynamic system of order n (here n = 2). So the equation-oriented description of the model is:

x1' = x2,   (1.9)
x2' = (1/m) (-d x2 - c x1 + k(t)),   (1.10)

with x1(t0) = y0, x2(t0) = y0'.
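The state-space model (1.9) - (1.10) translates directly into a right-hand-side function for a numerical integrator. A minimal Python sketch follows; the parameter values M, D, C and the constant excitation are hypothetical, chosen only to make the sketch runnable:

```python
# State-space form (1.9)-(1.10) of the vehicle model as a right-hand-side
# function f(t, x).  The numerical values below are assumptions for
# illustration, not values from the lecture.
M = 250.0   # mass m (assumed)
D = 0.0     # damping d (assumed; d = 0 as in the later approximation)
C = 1.0e4   # spring constant c (assumed)

def k(t):
    """Road excitation k(t); a constant input is assumed here."""
    return 2.0e3

def f(t, x):
    """Right-hand side: x1' = x2, x2' = (1/m)(-d*x2 - c*x1 + k(t))."""
    x1, x2 = x
    return [x2, (-D * x2 - C * x1 + k(t)) / M]
```

Any standard ODE integrator can then be applied to f together with the initial conditions x1(t0) = y0, x2(t0) = y0'.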
Figure 1.8: Dynamic parameters.

In our example, the following approximations can be made:

d = 0,   k(t) = const.   (1.11)
With this and equation (1.3), you obtain the differential equation of a harmonic oscillation:

y'' + (c/m) y = (1/m) k = const,   with ω² := c/m,   (1.12)

with the frequency f = ω/(2π) = (1/(2π)) √(c/m) and the period τ = 1/f = 2π √(m/c).
A model is characterized through the time parameters Tmax and Tmin:

Tmax = max(period, excitation time),   (1.13)
Tmin = min(period, excitation time),   (1.14)

where the period is defined as above and the excitation time is a typical time for the characterization of the input k(t). As an empirical formula, the simulation time T can be determined by:

T = 5 Tmax.   (1.15)
c. Time Step Length for the Integrator

For many numerical integration algorithms you need a clue on how to choose an appropriate time step length Δt (see Fig. 1.9).

Figure 1.9: Time step length Δt.

As an empirical formula, the following choice of the time step length is valid:

Δt = min(Tmin/10, T/200),   (1.16)

where Tmin/10 accounts for accuracy and T/200 for the graphical resolution.
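The rules of thumb (1.13) - (1.16) can be evaluated mechanically. In the sketch below, the values of m, c and of the excitation time are assumptions for illustration; only the formulas themselves come from the text:

```python
import math

# Rules of thumb (1.13)-(1.16) for hypothetical parameter values.
m, c = 250.0, 1.0e4                        # assumed mass and spring constant
period = 2 * math.pi * math.sqrt(m / c)    # tau = 2*pi*sqrt(m/c), eq. (1.12)
excitation = 1.0                           # assumed typical time of k(t)

T_max = max(period, excitation)            # (1.13)
T_min = min(period, excitation)            # (1.14)
T = 5 * T_max                              # simulation time, (1.15)
dt = min(T_min / 10, T / 200)              # time step length, (1.16)
```

For these assumed values the period is roughly one second, so the empirical simulation time is five seconds and the graphics criterion T/200 dictates the step length.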
d. Expected Solution Space

For the visualization of the solution, an idea of its order of magnitude is useful. You can get this through approximate calculations or on the basis of process knowledge. In our example, you can determine e.g. the extreme values of the wheel movement. For a constant k(t) = k, we can calculate the steady-state solution of the system. The steady state is characterized by the fact that no time dependencies of the state variables x1 = y and x2 = y' occur, i.e. it can be determined with the conditions

x1' = y' = 0,   x2' = y'' = 0.   (1.17)

Then equation (1.3) yields an expression for the steady-state solution ŷ:

c ŷ = k,   (1.18)

hence

ŷ = k/c.   (1.19)

The dynamic solution of equation (1.3) is given by (assuming d = 0):

y(t) = x1(t) = ŷ (1 - cos(ωt)),   (1.20)
y'(t) = x2(t) = ŷ ω sin(ωt),   (1.21)

therefore

x1,max = 2 ŷ,   x1,min = 0,   (1.22)
x2,max = ŷ ω,   x2,min = -ŷ ω.   (1.23)
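As a cross-check of (1.19) - (1.23), one can sample the undamped solution y(t) = ŷ(1 - cos(ωt)) on a fine grid and read off its extreme values. The numerical values of m, c, k below are hypothetical:

```python
import math

# Numerical check of (1.20)-(1.23): for d = 0 and constant input k, the
# solution swings between 0 and 2*y_hat.  m, c, k are assumed values.
m, c, k = 250.0, 1.0e4, 2.0e3
y_s = k / c                      # steady-state solution y_hat = k/c, (1.19)
w = math.sqrt(c / m)             # natural frequency omega, omega^2 = c/m

ts = [i * 0.001 for i in range(2000)]            # a bit over one period
y = [y_s * (1 - math.cos(w * t)) for t in ts]    # solution (1.20)

y_max, y_min = max(y), min(y)    # should be close to 2*y_hat and 0
```

With these assumed values ŷ = 0.2, so the sampled maximum is close to 0.4 and the minimum is 0, as predicted by (1.22).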
x(t) = x(0) + ∫ from 0 to t of f(x(τ)) dτ.   (1.25)

The time axis will now be subdivided into an equidistant time grid (see Fig. 1.11). With this, equation (1.25) can be rewritten as:

x(t_{k+1}) = x(0) + ∫ from 0 to t_{k+1} of f(x(τ)) dτ
           = x(0) + ∫ from 0 to t_k of f(x(τ)) dτ + ∫ from t_k to t_{k+1} of f(x(τ)) dτ
           = x(t_k) + ∫ from t_k to t_{k+1} of f(x(τ)) dτ.   (1.26)
The term ∫ from t_k to t_{k+1} of f(x(τ)) dτ in equation (1.26) can be approximated (see Fig. 1.11). An example for this is the (explicit) Euler method:

x_{k+1} = x_k + h f_k   (1.27)

with

x_{k+1} ≈ x(t_{k+1}),   x_k ≈ x(t_k),   (1.28)
f_k = f(x_k),   h = t_{k+1} - t_k.   (1.29)

The numerical method is coded in a computer program (see Fig. 1.12).
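A minimal program in the spirit of Fig. 1.12 might look as follows; it applies the explicit Euler step (1.27) to the vehicle model (1.9) - (1.10). The parameter values, the step length, and the constant excitation are assumptions for illustration:

```python
# Explicit Euler integration of the vehicle model (1.9)-(1.10).
# All numerical values are hypothetical.
m, d, c, k = 250.0, 0.0, 1.0e4, 2.0e3

def f(x1, x2):
    """Right-hand side of the state-space model (1.9)-(1.10)."""
    return x2, (-d * x2 - c * x1 + k) / m

h = 0.0005                 # step length, chosen small for accuracy
x1, x2 = 0.0, 0.0          # initial conditions y0 = 0, y0' = 0
t, y_max = 0.0, 0.0
while t < 5.0:             # simulation time as suggested by rule (1.15)
    dx1, dx2 = f(x1, x2)
    x1, x2 = x1 + h * dx1, x2 + h * dx2    # explicit Euler step (1.27)
    t += h
    y_max = max(y_max, x1)                 # track the largest deflection
```

For the undamped case the explicit Euler method slightly amplifies the oscillation amplitude over time, so the computed maximum exceeds the analytic value 2k/c a little; such numerical effects are taken up again in the chapter on numerical integration.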
1.3.8 Simulation
Up to now, the solution of simulation problems has been discussed. It is important to compare the results of these simulations with real experiments in order to be able to make statements on e.g. whether model simplifications are acceptable and the numerical solution method applied is suitable:

simulation experiment: simulated distances and velocities
real experiment: measured distances and velocities on a test track
matching through identical test conditions

If relevant deviations between simulation and experiment occur (Fig. 1.13), then (after examination of comparable experimental conditions) the simulation has to be modified through
parameter adaptation (c, d, m), or model modification (friction, sophisticated vehicle model).
Often only an iterative model adjustment leads to a satisfactory correspondence between simulation and reality.
Figure 1.13: Deviations between simulation and real experiment.

1.3.9 Applications of Simulators

a. Analysis, e.g. of the spring behavior: |y_R| < M_R (safety), |y_A| < M_A (comfort).
b. Synthesis, e.g. an active spring c(y, y').
c. Hardware in the loop, e.g. an active spring as a component in a simulated suspension model.
d. Training, e.g. education.
e. Predictive simulation, e.g. prediction.
These different applications can be divided into off-line and online applications:

off-line: a, b, (e)
online, real time: c, d, (e)
Simulation can also be divided with regard to its time scales: real time, time extension, time compression.
2 Representation of Dynamic Systems

2.1 State Representation of Linear Dynamic Systems
Figure 2.1: Linear transfer system.

The model for this system is given through a generic linear differential equation of order n:

a0 y + a1 y' + ... + an y^(n) = b0 u + b1 u' + b2 u'' + ... + bm u^(m),   t > 0.   (2.1)
Note that for real systems the condition m ≤ n always holds: the system state and the output depend only on previous states and inputs. The initial conditions for y(0), y'(0), ..., y^(n-1)(0) and u(t) as well as all derivatives of u(t) for t ≤ 0 have to be known. Attention has to be paid to the time derivatives of the input variables! In Fig. 2.2, a more detailed model of the vertical movement of a motor vehicle depicted in section 1.3 is given. Its mathematical representation is
m y'' = -c1 y - d1 y' + c2 (u - y) + d2 (u' - y')   for t > 0.   (2.2)
Figure 2.2: Modeling of the vertical movement of a vehicle.

The initial conditions at t = 0 are

y(0) = y0,   (2.3)
y'(0) = y0'.   (2.4)
After a simple transformation of equation (2.2), it becomes clear that the differential equation of this model is a special case of the generic equation (2.1) with n = 2 and m = 1:
y'' + ((d1 + d2)/m) y' + ((c1 + c2)/m) y = (c2/m) u + (d2/m) u' (+ 0 u''),   (2.5)

with a2 = 1, a1 = (d1 + d2)/m, a0 = (c1 + c2)/m and b0 = c2/m, b1 = d2/m, b2 = 0.
Note that in favor of a more general solution, we have introduced the term b2 u''. In our example, this term equals zero. Indeed, the representation (2.5) is unsuitable for simulation purposes because in general the inputs u(t) may not be differentiable. For example, with a step change of u as depicted in Figure 2.3, the derivatives u' and u'' are not defined at the step time.
As such discontinuities cause difficulties in simulation (i.e. in the numerical methods), equations like (2.1) are transformed through successive integration at time 0, in the interval from t = 0- to t = 0+. As we want to eliminate all derivatives u', u'', ..., u^(m), m integrations are required:

∫ ... ∫ (differential equation) dτ ... dτ   (m integrals from t = 0- to t = 0+).   (2.6)
Consider, for instance, the general case n = m = 2 as in equation (2.5). By solving this equation with respect to the highest-order derivative of y, you obtain

y'' = b0 u - a0 y + b1 u' - a1 y' + b2 u''   (2.7)

with the initial conditions

y(0) = y0,   (2.8)
y'(0) = y0'.   (2.9)
As equation (2.7) is of second order in u, two integrations are required to obtain a representation free of derivatives of u. After the first integration, we get

y'(t) = ∫ (b0 u - a0 y) dτ + b1 u - a1 y + b2 u',   with x1 := ∫ (b0 u - a0 y) dτ,   (2.10)

hence

y'(t) = x1 + b1 u - a1 y + b2 u',   (2.11)

and the second integration yields

y(t) = ∫ (x1 + b1 u - a1 y) dτ + b2 u = x2 + b2 u,   with x2 := ∫ (x1 + b1 u - a1 y) dτ.   (2.12)

We get the following equation system, consisting of two differential equations, namely the definitions of x1 and x2 in equations (2.10) and (2.12), and one algebraic equation, i.e. equation (2.12):

x1' = b0 u - a0 y,   (2.13)
x2' = x1 + b1 u - a1 y,   (2.14)
y = x2 + b2 u.   (2.15)
As y is an output variable of the system, it is reasonable to replace y in (2.13) and (2.14) by inserting equation (2.15):

x1' = -a0 x2 + (b0 - a0 b2) u,   (2.16)
x2' = x1 - a1 x2 + (b1 - a1 b2) u,   (2.17)
y = x2 + b2 u.   (2.18)
This form is called the state representation of the linear system (2.7). In this representation, the evolution of the system, i.e. the temporal derivatives of the state variables x1 and x2, is given as a function of the current state of the system (the xi) and the input variable u. The solution of the equation system (2.16) - (2.18) requires the initial conditions x1(0) and x2(0) at time t = 0. They can be obtained with the help of the initial conditions y(0) = y0 and y'(0) = y0':

y(0) = x2(0) + b2 u(0)   =>   x2(0) = y0 - b2 u(0).   (2.19)
After differentiation of equation (2.18), you obtain

y'(0) = x2'(0) + b2 u'(0).   (2.20)

Applying (2.17) leads to

y'(0) = x1(0) + b1 u(0) - a1 y(0) + b2 u'(0),   (2.21)

hence

x1(0) = y0' + a1 y0 - b1 u(0) - b2 u'(0).   (2.22)
So the initial conditions are known if u(t), t ≥ 0, is known and if lim_{t→0+} u(t) exists. For instance, if the input u(t) is a step function,

u(t) = 0 for t < 0,   u(t) = 1 for t ≥ 0,   (2.23)

then

lim_{t→0+} u(t) = 1.   (2.24)
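The bookkeeping of (2.16) - (2.22) is easy to get wrong by hand, so a small script that assembles the state representation from the coefficients can serve as a check. The coefficient values and the unit-step input below are hypothetical:

```python
# Assemble the state representation (2.16)-(2.18) and the initial
# conditions (2.19), (2.22) for the second-order model (2.7) with a2 = 1.
# All numerical values are assumptions for illustration.
a0, a1 = 4.0, 0.5
b0, b1, b2 = 4.0, 0.2, 0.0

A = [[0.0, -a0],          # coefficients of (2.16), (2.17)
     [1.0, -a1]]
B = [b0 - a0 * b2,        # input coefficients of (2.16), (2.17)
     b1 - a1 * b2]
C = [0.0, 1.0]            # output equation (2.18): y = x2 + b2*u
D = b2

# Initial conditions for a unit step input: u(0+) = 1, u'(0+) = 0 assumed.
y0, dy0 = 0.0, 0.0
u0, du0 = 1.0, 0.0
x2_0 = y0 - b2 * u0                          # (2.19)
x1_0 = dy0 + a1 * y0 - b1 * u0 - b2 * du0    # (2.22)
```

With b2 = 0 (as in the vehicle example) the transformation reduces to x2(0) = y0 and x1(0) = y0' + a1 y0 - b1 u(0).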
A generalization of the state representation (2.16) - (2.18) renders the following representation in matrix notation:

x = [x1, x2, ..., xn]^T ∈ R^n,
u = [u1, u2, ..., um]^T ∈ R^m,
y = [y1, y2, ..., yp]^T ∈ R^p.

For the example (2.16) - (2.18), the output equation reads

y = [0 1] x + [b2] u,   (2.29)

where [0 1] is the output matrix C (dimension p × n) and [b2] the transmission matrix D (dimension p × m). Therefore, you obtain a generic linear state representation in the following form:

x' = A x + B u,   x(0) = x0,   (2.30)
y = C x + D u.   (2.31)
Consequently, the simulation task is to solve a system of linear differential equations of first order.

2.2 The State Space
Figure 2.4: Trajectory in the state space (according to Föllinger and Franke, 1982).

For given inputs u(t), t ≥ t0, the trajectory is uniquely determined through the initial conditions x(t0) = x0. For different u(t) with the same x0, you obtain different trajectories. The special case n = 2 (two-dimensional state space) is illustrated in Fig. 2.5, left. This illustration is also called a phase plot. Fig. 2.5, right, shows in contrast a time domain depiction.
2.3.1 Example
Now we study a common example from ecology: the predator-prey model (Lotka, 1925; Volterra, 1926). In Fig. 2.6 the corresponding ecological system is depicted schematically. We make the following assumptions:
- as an input quantity we only use the catch,
- there is no immigration or emigration,
- species 1 is the prey, species 2 is the predator,
- the death rates are proportional to the number of prey or predators, respectively,
- the birth rate of the prey is proportional to the number of prey,
- the birth rate of the predator is proportional to the numbers of prey and predators.
As this state space model includes the term x1 x2, it is nonlinear. Hence the question arises what the solutions x1(t) and x2(t) look like. A closed-form representation of x1(t) and x2(t) is hardly possible. We simplify the model (2.32) - (2.33) by assuming that no catch takes place (v1 = v2 = 0). Now we can determine the stationary states (equilibrium states) x̄1 and x̄2, for which the conditions x1' = 0 and x2' = 0 hold (see also Section 3.3):

(a1 - b1 x2 - c1) x1 = 0,   (2.34)
(b2 x1 - c2) x2 = 0.   (2.35)

Besides the trivial solution x̄1 = x̄2 = 0 you obtain

from equation (2.35):   x̄1 = c2 / b2,   (2.36)
from equation (2.34):   x̄2 = (a1 - c1) / b1.   (2.37)
In the following step the non-stationary solution trajectory has to be worked out. Here one can use a trick: divide equation (2.33) by equation (2.32) and then separate the variables x1 and x2:

dx2/dx1 = (dx2/dt)/(dx1/dt) = [(b2·x1 − c2)·x2] / [(a1 − b1·x2 − c1)·x1],   (2.38)

[(a1 − b1·x2 − c1)/x2] dx2 = [(b2·x1 − c2)/x1] dx1,   (2.39)

(a1 − c1) ∫ dx2/x2 − b1 ∫ dx2 = b2 ∫ dx1 − c2 ∫ dx1/x1,   (2.40)

(a1 − c1) ln|x2| − b1·x2 = b2·x1 − c2·ln|x1| + C.   (2.41)
C is an integration parameter. With equation (2.41) you obtain a bundle of closed trajectories. Fig. 2.7 shows an illustration in the state space. The trajectories are run through in the arrow direction with increasing time. We observe steady-state oscillations with different amplitudes and frequencies. Which of the curves in Fig. 2.7 describes a particular system depends on the initial conditions x10 and x20. Fig. 2.7 also shows the behavior of the system for the two special cases x10 = 0 and x20 = 0. If x10 = 0, i.e. no prey at t = 0, the number of predator fish decreases; the system converges towards the stationary state [0, 0]^T. If x20 = 0 and x10 > 0, i.e. no predators but some prey at t = 0, the amount of prey fish increases. The system converges towards [∞, 0]^T.
[Axes: x1 (prey), x2 (predator); stationary state 1 at the origin, stationary state 2 at (c2/b2, (a1 − c1)/b1); marked points p1, p̃1, p2.]
Figure 2.7: State trajectories of the predator-prey model.

The representation in the state space already gives an insight into the influences of external effects on the system. For example, you can assume that the prey is a harmful insect, which
shall be reduced through the use of chemicals. Using chemicals to reduce both the predator and the prey, from p1 to p̃1, has the consequence that later a much greater amount of prey will appear. A better time for combating would therefore be a moment with a large amount of prey, e.g. p2. An alternative would be a continuous intervention u(t). However, this leads to the question of determining the optimal trajectory of u(t).
If f and g are both linear in the input quantity u, this leads to a so-called nonlinear, control-affine state space model:

ẋ = f1(x) + f2(x)·u,  x(0) = x0,   (2.44)
y = g1(x) + g2(x)·u.   (2.45)
So the simulation task consists of solving a system of nonlinear differential equations. Note that you deal with a vector model:

x = [x1, ..., xn]^T,  f = [f1, ..., fn]^T, ...   (2.46)

or, written out componentwise,

ẋ1 = f1(x, u), ..., ẋn = fn(x, u),
x1(0) = x10, ..., xn(0) = xn0,
y1 = g1(x, u), ..., yp = gp(x, u).   (2.47)
If the time t does not appear explicitly, or implicitly through u(t), in the functions f and g, these systems are called autonomous. Every non-autonomous system

ẋ = f(x, u(t)) = f̃(x, t),   (2.48)
y = g(x, u(t)) = g̃(x, t)   (2.49)

can be transformed into an autonomous one with the additional state variable xn+1 = t and the supplementary equation ẋn+1 = 1, xn+1(0) = 0.
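As a small sketch (in Python; the right-hand side is an illustrative toy system, not from the notes), the transformation amounts to appending one state variable that carries the time:

```python
# Sketch: turning a non-autonomous system x' = f(x, t) into an autonomous one
# by appending the state x_{n+1} = t with x_{n+1}' = 1.

def f_nonautonomous(x, t):
    # example right-hand side with explicit time dependence (illustrative only)
    return [-x[0] + t]

def f_autonomous(z):
    # z = [x_1, ..., x_n, t]; the last component plays the role of time
    x, t = z[:-1], z[-1]
    return f_nonautonomous(x, t) + [1.0]   # supplementary equation dt/dt = 1

# The augmented right-hand side no longer references time explicitly:
z0 = [2.0, 0.0]            # [x(0), x_{n+1}(0) = t0 = 0]
print(f_autonomous(z0))    # -> [-2.0, 1.0]
```

Any solver for autonomous systems can then be applied to the augmented state z.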
3 Model Analysis
Before solving the model equations, a model analysis should be performed (see simulation procedure, Fig. 1.4) in order to support the implementation of the model in a simulator. We look at an autonomous system of nonlinear differential equations of the form:

dx/dt = f(x),  x(0) = x0.   (3.1)
The output equations do not have to be considered because they are an explicit function of the states and inputs. Therefore, the outputs can also be determined after the simulation (which is normally not done for practical reasons). In the following sections, different aspects of the model analysis will be briefly introduced.
‖·‖ may be an arbitrary vector norm on R^m or R^n respectively, e.g. the maximum norm

‖x‖∞ = max_{1≤i≤n} |xi|   (3.3)

or the Euclidean norm

‖x‖2 = sqrt(x1² + ... + xn²).   (3.4)
In general, continuity of a function is a necessary condition for Lipschitz continuity. For differentiable functions, the following theorem is valid: A function f is Lipschitz continuous if and only if its derivative is bounded, i.e. a constant L ∈ R exists such that

‖f′(x)‖ ≤ L  ∀ x ∈ R^n.   (3.5)
Figure 3.1: Lipschitz-continuous and non-Lipschitz-continuous functions.

Example
a. The function f: R → R, f(x) = e^x (Fig. 3.1(a)) is not Lipschitz continuous, because its derivative f′(x) = e^x is not bounded.
b. As the function in Figure 3.1(b) is not even continuous, it is not Lipschitz continuous.
c. Consider the function f: R → R, f(x) = 1 + |x| (Fig. 3.1(c)). By applying the definition above, we will show that f is Lipschitz continuous. For any x1, x2 ∈ R, the following relations hold (the last step is a direct consequence of the triangle inequality):

|f(x1) − f(x2)| = |(1 + |x1|) − (1 + |x2|)| = ||x1| − |x2|| ≤ |x1 − x2|.   (3.6)
With the definition above, it follows immediately that f is Lipschitz continuous; L = 1 is a Lipschitz constant of f.
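The bound from (3.6) can be spot-checked numerically; a minimal sketch (the sampling range and tolerance are arbitrary choices):

```python
import random

# Numerical spot-check of the Lipschitz bound |f(x1) - f(x2)| <= L*|x1 - x2|
# for f(x) = 1 + |x| with Lipschitz constant L = 1, cf. equation (3.6).
def f(x):
    return 1.0 + abs(x)

random.seed(0)
L = 1.0
for _ in range(10000):
    x1 = random.uniform(-100.0, 100.0)
    x2 = random.uniform(-100.0, 100.0)
    assert abs(f(x1) - f(x2)) <= L * abs(x1 - x2) + 1e-12
print("Lipschitz bound holds on all sampled pairs")
```

Such a random check cannot prove Lipschitz continuity, but it quickly falsifies a wrongly guessed constant L.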
3.2 Solvability
We consider an ordinary differential equation of the form

ẋ = f(x) for t > 0,  x(0) = x0.   (3.7)

If f is Lipschitz continuous, then the differential equation has a unique solution x(t). Note that this is a sufficient, but not a necessary condition (see case a of the following example).

Example
[(a) ẋ = e^(−x), x(0) = 0;  (b) ẋ = 1 + |x|, x(0) = −1.]
Figure 3.2: Unique solutions x(t).

a. In Section 3.1 we have seen that exponential functions of this type are not Lipschitz continuous; the derivative of f(x) = e^(−x) is likewise unbounded. Nevertheless, the differential equation

ẋ = e^(−x) for t > 0,  x(0) = 0   (3.8)

has a unique solution (Fig. 3.2(a)), which can be calculated analytically:

dx/dt = e^(−x),   (3.9)
dt = e^x dx = d(e^x),   (3.10)
t = e^x − e^(x(0)) = e^x − 1,   (3.11)

and therefore

x(t) = ln(1 + t) for t ≥ 0.   (3.12)
b. As f(x) = 1 + |x| is Lipschitz continuous (cf. Section 3.1), the differential equation

ẋ = 1 + |x| for t > 0,  x(0) = −1   (3.13)

has a unique solution (Fig. 3.2(b)), with the analytical solution

x(t) = 1 − 2e^(−t)      for t ∈ [0, ln 2],
x(t) = e^(t−ln 2) − 1   for t ∈ (ln 2, ∞).   (3.14)
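Both analytical solutions can be verified against their differential equations by a finite-difference check; a small sketch (step size and test points chosen arbitrarily, away from the kink at t = ln 2):

```python
import math

# Finite-difference check that the analytical solutions satisfy their ODEs.
# (a) x(t) = ln(1 + t) should satisfy x' = exp(-x):
def x_a(t):
    return math.log(1.0 + t)

# (b) the piecewise solution (3.14) should satisfy x' = 1 + |x|:
def x_b(t):
    if t <= math.log(2.0):
        return 1.0 - 2.0 * math.exp(-t)
    return math.exp(t - math.log(2.0)) - 1.0

h = 1e-6
for t in [0.2, 1.0, 3.0]:
    dx = (x_a(t + h) - x_a(t - h)) / (2 * h)       # central difference
    assert abs(dx - math.exp(-x_a(t))) < 1e-6
for t in [0.1, 0.5, 1.5]:                          # avoid the kink at t = ln 2
    dx = (x_b(t + h) - x_b(t - h)) / (2 * h)
    assert abs(dx - (1.0 + abs(x_b(t)))) < 1e-6
print("both analytical solutions satisfy their ODEs")
```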
Therefore, for a given input ū, the stationary states x̄ of a system can be calculated by solving the equation

f(x̄, ū) = 0.   (3.18)
This condition shows that the stationary states of a system do not depend on the initial values; they are a property of the system itself. Nevertheless, it generally depends on the initial values x(0) to which stationary state a system converges and whether it converges at all.

Example
In Section 2.3.1 we have seen that the predator-prey model has two stationary states:

x̄¹ = [0, 0]^T,  x̄² = [c2/b2, (a1 − c1)/b1]^T.   (3.19)

If x(0) = x̄¹ or x(0) = x̄², then the system will remain in the respective stationary state. If x1(0) = 0 and x2(0) > 0, then the system will converge towards x̄¹, but it will never reach this state. If x2(0) = 0 and x1(0) > 0, then the system will converge towards [∞, 0]^T. For all other x(0) with x1(0) > 0, x2(0) > 0, the system does not converge towards a stationary state; it runs through closed trajectories (cf. Fig. 2.7).
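The stationary-state condition f(x̄) = 0 can be checked numerically; a minimal sketch for the predator-prey model without catch, with arbitrarily chosen parameter values (a1, b1, c1, b2, c2 are assumptions for illustration):

```python
# Spot-check of the stationary states (3.19) of the predator-prey model
# x1' = (a1 - b1*x2 - c1)*x1,  x2' = (b2*x1 - c2)*x2  (no catch).
a1, b1, c1, b2, c2 = 1.0, 0.4, 0.2, 0.3, 0.6

def f(x1, x2):
    return ((a1 - b1 * x2 - c1) * x1,
            (b2 * x1 - c2) * x2)

steady_states = [(0.0, 0.0), (c2 / b2, (a1 - c1) / b1)]
for x1, x2 in steady_states:
    dx1, dx2 = f(x1, x2)
    assert abs(dx1) < 1e-12 and abs(dx2) < 1e-12
print("stationary states:", steady_states)
```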
For the calculation of stationary states, the linear and the nonlinear case are to be distinguished:
Linear case (ẋ = A·x + B·u):
If the matrix A has full rank, rank(A) = n, i.e. det A ≠ 0, then for any input ū a unique solution x̄ of A·x̄ + B·ū = 0 exists, namely x̄ = −A⁻¹·B·ū.

Nonlinear case (ẋ = f(x, u)):
In the nonlinear case, no sufficient conditions exist. As a necessary condition (implicit function theorem),

det( ∂f/∂x |_(x̄) ) ≠ 0   (3.21)

has to be satisfied: the Jacobian matrix must have full rank for all x in a neighborhood of the searched stationary state x̄. The Jacobian matrix is

∂f/∂x = ( ∂fi/∂xj )_(i,j=1,...,n),   (3.22)

which can be evaluated numerically by central differences:

∂fi/∂xj ≈ [ fi(..., xj + Δxj, ...) − fi(..., xj − Δxj, ...) ] / (2·Δxj).   (3.23)
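The central-difference formula (3.23) translates directly into a short routine; a sketch for a generic vector field (the perturbation size dx is a heuristic choice):

```python
# Central-difference approximation (3.23) of the Jacobian matrix
# of a vector field f: R^n -> R^n, given as a Python function.
def jacobian(f, x, dx=1e-6):
    n = len(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += dx
        xm[j] -= dx
        fp, fm = f(xp), f(xm)
        for i in range(n):
            J[i][j] = (fp[i] - fm[i]) / (2 * dx)   # (3.23)
    return J

# Example: f(x) = [x1^2, x1*x2] has the Jacobian [[2*x1, 0], [x2, x1]].
J = jacobian(lambda x: [x[0] ** 2, x[0] * x[1]], [1.0, 2.0])
assert abs(J[0][0] - 2.0) < 1e-6 and abs(J[1][0] - 2.0) < 1e-6
```

The approximation error of the central difference is of order O(Δxj²), at the price of two function evaluations per column.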
The pattern of the Jacobian matrix reflects the coupling structure of the differential equation system. Take a look at the following example, where a non-zero entry ∂fi/∂xj ≠ 0 is marked by × and a zero entry by 0:

            x1  x2  x3  x4  x5
      f1  [  0   ×   0   ×   0 ]
      ...
      f5  [ ...                ]

In this example, the function f1 depends only on x2 and x4. One distinguishes between dense matrices (many more non-zero than zero entries) and sparse matrices (many more zero entries). Among other reasons, this is important for the storage as well as for the numerical evaluation of the matrices (sparse numerical methods).
A scalar function f(x) can be expanded into a Taylor series at a point x̄:

f(x) = Σ_(i=0..∞) (1/i!)·(d^i f/dx^i)|_(x=x̄)·(x − x̄)^i   (3.24)
     = f(x̄) + df/dx|_(x=x̄)·(x − x̄) + (1/2)·d²f/dx²|_(x=x̄)·(x − x̄)² + ...   (3.25)

If the Taylor expansion is truncated after the second (linear) term, we get a linear approximation of f at x̄:

f_lin,x̄(x) = f(x̄) + df/dx|_(x=x̄)·(x − x̄).   (3.26)

If Δx := x − x̄ is sufficiently small, then f_lin,x̄(x) ≈ f(x). Note that the linearization of a function requires differentiability of the function at the respective point. In particular, a function cannot be linearized at jumps or breakpoints (cf. Fig. 3.3).

Example
Let f(x) = e^x. The linearization of f at x̄ is

f_lin,x̄(x) = e^x̄ + (d/dx e^x)|_(x=x̄)·(x − x̄)
           = e^x̄ + e^x̄·(x − x̄)
           = e^x̄·(1 − x̄ + x).

Here df/dx|_(x=x̄) denotes the value of f′ = df/dx at x̄.
Figure 3.3: Possibilities of linearization.

For instance, in the neighborhood of x̄ = 0 (see Figure 3.4), the linearization is

f_lin,0(x) = 1 + x,   (3.30)

and in the neighborhood of x̄ = 1, we get

f_lin,1(x) = e·x.   (3.31)
For a vector-valued function f(x), the linearization at x̄ reads analogously

f_lin,x̄(x) = f(x̄) + ∂f/∂x|_(x=x̄)·(x − x̄),   (3.32)

with Δx := x − x̄.

Example
Let f(x) = [x1 + x2² + exp(x1); 4·sin(x1)]. The linearization at x̄ = [0, 1]^T is

f_lin(x) = [0 + 1² + exp 0; 4·sin 0] + [1 + exp(x1), 2·x2; 4·cos(x1), 0]|_(x=x̄)·Δx   (3.33)
         = [2; 0] + [2, 2; 4, 0]·Δx.   (3.34)
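A short numerical check of this linearization, assuming the example function reads f(x) = [x1 + x2² + exp(x1); 4·sin(x1)] as reconstructed above:

```python
import math

# Check of the linearization example at xbar = [0, 1]^T:
# f(xbar) = [2, 0], Jacobian at xbar = [[2, 2], [4, 0]].
def f(x1, x2):
    return [x1 + x2 ** 2 + math.exp(x1), 4.0 * math.sin(x1)]

fbar = f(0.0, 1.0)
assert fbar == [2.0, 0.0]

# analytical Jacobian entries, evaluated at the expansion point
J = [[1.0 + math.exp(0.0), 2.0 * 1.0],
     [4.0 * math.cos(0.0), 0.0]]
assert J == [[2.0, 2.0], [4.0, 0.0]]

# the linearization f_lin(x) = f(xbar) + J*(x - xbar) is close to f near xbar:
dx = [0.01, -0.02]
flin = [fbar[i] + J[i][0] * dx[0] + J[i][1] * dx[1] for i in range(2)]
fex = f(0.0 + dx[0], 1.0 + dx[1])
assert all(abs(flin[i] - fex[i]) < 1e-3 for i in range(2))
```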
Figure 3.4: f(x) = e^x with the linearizations f_lin,0(x) = 1 + x and f_lin,1(x) = e·x.
For a function f(x, u) of states and inputs, the linearization at a point (x̄, ū) is carried out in both arguments:

f_lin(x, u) = f(x̄, ū) + ∂f/∂x|_(x=x̄,u=ū)·(x − x̄) + ∂f/∂u|_(x=x̄,u=ū)·(u − ū),   (3.36)

with Δx := x − x̄ and Δu := u − ū.
At a stationary state, we always have f(x̄, ū) = 0. Furthermore, as x̄ is a constant that does not depend on t, we have ẋ = dx/dt = d(x̄ + Δx)/dt = dΔx/dt. Thus, we get

dΔx/dt = A·Δx + B·Δu   (3.37)
with

A = ∂f/∂x|_(x=x̄,u=ū) = [ ∂f1/∂x1  ∂f1/∂x2  ...  ∂f1/∂xn ;
                          ...                            ;
                          ∂fn/∂x1  ∂fn/∂x2  ...  ∂fn/∂xn ]|_(x=x̄,u=ū)   (3.38)

and

B = ∂f/∂u|_(x=x̄,u=ū) = [ ∂f1/∂u1  ∂f1/∂u2  ...  ∂f1/∂um ;
                          ...                            ;
                          ∂fn/∂u1  ∂fn/∂u2  ...  ∂fn/∂um ]|_(x=x̄,u=ū).   (3.39)
Here, I denotes the n-dimensional identity matrix. As v ≠ 0, equation (3.41) holds if and only if

det(λI − A) = 0.   (3.42)

det(λI − A) is the characteristic polynomial of A.
The characteristic polynomial of A is

det(λI − A) = det [ λ−1,   0,  −1 ;
                      0, λ−1,   1 ;
                      0,  −1, λ−1 ]
            = (λ − 1)³ + (λ − 1) = (λ − 1)·[(λ − 1)² + 1],   (3.44)

and its roots are λ1 = 1, λ2 = 1 + i, and λ3 = 1 − i.
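The roots can be checked by evaluating det(λI − A) directly; a sketch with a hand-rolled 3×3 determinant (the matrix entries follow the reconstruction above):

```python
# Evaluate det(lam*I - A) for the 3x3 example and confirm its roots.
def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[1][0] * (m[0][1] * m[2][2] - m[0][2] * m[2][1])
            + m[2][0] * (m[0][1] * m[1][2] - m[0][2] * m[1][1]))

A = [[1, 0, 1], [0, 1, -1], [0, 1, 1]]

def char_poly(lam):
    M = [[lam * (i == j) - A[i][j] for j in range(3)] for i in range(3)]
    return det3(M)

for lam in [1.0, 1 + 1j, 1 - 1j]:        # the claimed eigenvalues
    assert abs(char_poly(lam)) < 1e-12
assert abs(char_poly(2.0)) > 0.5         # a non-eigenvalue does not vanish
```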
3.8 Stability
A further important issue in the model analysis is the stability of a system after a perturbation Δx0 of a stationary state x̄. In order to make statements about the stability of a system, we will solve the homogeneous differential equation system (3.37) under the assumption that there are no perturbations of the input variables, i.e. Δu = 0. Furthermore, we simplify the notation by writing x instead of Δx. Then equation (3.37) becomes

ẋ = A·x,  x(0) = x0.   (3.45)
A ∈ R^(n×n) is an n×n matrix, where n is the number of state variables xi. Assume that v ∈ C^n is an eigenvector of A and that λ is the corresponding eigenvalue, i.e.

A·v = λ·v.   (3.46)

It is evident that

x(t) = v·e^(λt)   (3.47)

is a solution of ẋ = A·x, because

d/dt (v·e^(λt)) = λ·v·e^(λt) = A·v·e^(λt).   (3.48)
It becomes obvious that the eigenvalues of A provide valuable information about the dynamic behavior of the system.
We first consider the one-dimensional case ẋ = k·x with a real coefficient k. According to the value of k, the following statements about the system's stability can be made:

- If k < 0, then lim_(t→∞) x(t) = 0. Hence, the system will return to its stationary state after a (small) perturbation. The system is stable.
- If k > 0 (and x0 ≠ 0), then lim_(t→∞) |x(t)| = ∞. The system will not return to the stationary state; rather, the absolute value of the state variable x grows towards infinity. The system is unstable.
Now let A ∈ R^(n×n) be a square matrix whose eigenvalues λi are all real,

λi ∈ R  ∀ i ∈ {1, ..., n}.   (3.51)

It is known from linear algebra that (for a diagonalizable A) a matrix T ∈ R^(n×n) with det(T) ≠ 0 exists such that

T⁻¹·A·T = diag(λi) = Λ   (3.52)

is a diagonal matrix whose entries are the eigenvalues of A. Now, the differential equation (3.45) can be transformed as follows:

ẋ = A·x = A·T·T⁻¹·x,   (3.53)
T⁻¹·ẋ = (T⁻¹·A·T)·T⁻¹·x = Λ·T⁻¹·x.   (3.54)
We apply a coordinate transformation to the state vector x and get a new state vector z:

z = T⁻¹·x  ⇔  x = T·z.   (3.55)

In terms of the transformed state vector, equation (3.54) yields the following differential equation:

ż = Λ·z   (3.56)

or

żi = λi·zi  ∀ i ∈ {1, ..., n}.   (3.57)

These are n independent differential equations, one for each of the transformed state variables zi. Hence, for each zi the argumentation in Section 3.8.1 is valid. We get the following criteria for the stability of the system:
- If all eigenvalues are negative, i.e. λi < 0 ∀ i ∈ {1, ..., n}, then the system is stable.
- If there is at least one positive eigenvalue, i.e. ∃ i ∈ {1, ..., n}: λi > 0, then the system is unstable.
The eigenvalues are the roots of the characteristic polynomial

0 = det(A − λI) = (a − λ)² + b²,   (3.59)

hence

λ± = a ± b·i.   (3.60)
By equation (3.47), the two components x1 and x2 of the solution x(t) are linear combinations of e^(λ+t) and e^(λ−t). With Euler's formula,

e^(λ±t) = e^((a±bi)t) = e^(at)·(cos(bt) ± i·sin(bt)).   (3.61)/(3.62)

Therefore, the real parts of x1 and x2 are oscillations of the frequency b/2π, which are damped (for a < 0) or excited (for a > 0) by the factor e^(at). For the stability of the system, this means that
- for a < 0, the system is stable, and
- for a > 0, the system is unstable.
Hence, by a similar consideration as in Section 3.8.2, we can reduce the general case to the one-dimensional case with a real eigenvalue (Section 3.8.1) and the two-dimensional case with complex eigenvalues (Section 3.8.3). As a condition for the stability of a system ẋ = A·x, we get:

- If all real eigenvalues of A are negative and the real parts of all complex eigenvalues of A are negative, then the system is stable.
- If there is at least one positive real eigenvalue or at least one complex eigenvalue with a positive real part, then the system is unstable.
- If A has pairs of conjugate complex eigenvalues a ± bi, then the system shows oscillations with frequency b/2π.

Figure 3.5 shows a graphical representation of these criteria. The solution of ẋ = A·x takes the form of a linear combination

x(t) = Σ_(i=1..n) e^(λi·t)·ci.   (3.64)
The values of the ci depend on the initial conditions x(0) = x0. The addends e^(λi·t)·ci are also called the eigenmodes of the system.

Example
Consider the differential equation system

ẋ = A·x = [ 1, 0,  1 ;
            0, 1, −1 ;
            0, 1,  1 ]·x,   (3.65)
x(0) = [1, 2, 0]^T.   (3.66)

In Section 3.7 we have calculated the eigenvalues

λ1 = 1,  λ2 = 1 + i,  λ3 = 1 − i.   (3.67)

As there is at least one eigenvalue with a positive real part (in fact, all real parts are positive), the system is unstable. Furthermore, the system shows an oscillation with frequency 1/2π. Although this is rarely done in practice, we apply the coordinate transformation described above for illustrative purposes. The matrix of the eigenvalues is

Λ̃ = [ λ1,      0,        0      ;
      0,  Re(λ2), −Im(λ2) ;
      0,  Im(λ2),  Re(λ2) ]  =  [ 1, 0,  0 ;
                                  0, 1, −1 ;
                                  0, 1,  1 ],   (3.68)
and the transformed system reads ż = Λ̃·z, i.e.

ż1 = z1,  ż2 = z2 − z3,  ż3 = z2 + z3,   (3.69)

with the initial condition

z(0) = T⁻¹·x(0) = [3, 2, 0]^T.   (3.70)

The first equation is decoupled and has the solution

z1(t) = 3·e^t,   (3.72)

while the solutions of the coupled pair are linear combinations of e^((1+i)t) and e^((1−i)t):

z2(t) = c1·e^((1+i)t) + c2·e^((1−i)t),   (3.74)
z3(t) = c3·e^((1+i)t) + c4·e^((1−i)t).   (3.75)

The coefficients ci are

c1 = 1,  c2 = 1,   (3.76)
c3 = −i,  c4 = i,   (3.77)

and thus

z2(t) = e^((1+i)t) + e^((1−i)t) = 2·e^t·cos t,
z3(t) = −i·e^((1+i)t) + i·e^((1−i)t) = 2·e^t·sin t;

for the latter transformation, we have used Euler's formula e^(iφ) = cos φ + i·sin φ. Finally, we get the solution for x = T·z:

x1(t) = z1(t) − z2(t) = 3·e^t − 2·e^t·cos t,
x2(t) = z2(t) = 2·e^t·cos t,
x3(t) = z3(t) = 2·e^t·sin t.

The calculation of a matrix T fulfilling the given condition is not the subject of this course. The calculation of the coefficients ci is left as an exercise to the reader. Hint: solve the linear equation system for c1, ..., c4 that consists of the initial conditions for z2 and z3 as well as two further equations resulting from the differential equations ż2 = z2 − z3 and ż3 = z2 + z3.
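The closed-form solution can be checked against the differential equation by finite differences; a sketch (A and the solution formulas as reconstructed in the example above):

```python
import math

# Finite-difference check that the closed-form solution solves x' = A*x
# with x(0) = [1, 2, 0]^T for the example system.
A = [[1, 0, 1], [0, 1, -1], [0, 1, 1]]

def x(t):
    return [3 * math.exp(t) - 2 * math.exp(t) * math.cos(t),
            2 * math.exp(t) * math.cos(t),
            2 * math.exp(t) * math.sin(t)]

h = 1e-6
for t in [0.0, 0.7, 2.0]:
    xp, xm, xt = x(t + h), x(t - h), x(t)
    for i in range(3):
        dxi = (xp[i] - xm[i]) / (2 * h)                # central difference
        rhs = sum(A[i][j] * xt[j] for j in range(3))
        assert abs(dxi - rhs) < 1e-4
assert x(0.0) == [1.0, 2.0, 0.0]                       # initial condition
```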
For real eigenvalues λi ∈ R, the eigenmodes e^(λi·t)·ci decay (λi < 0) or grow (λi > 0) with the time constant

Ti^(time constant) = 1/|λi| = 1/|Re(λi)|.   (3.85)

For complex eigenvalues λi = ai ± bi·i ∈ C with ai, bi ≠ 0, the corresponding eigenmodes show damped or excited oscillations (depending on the sign of ai) with the time constant

Ti^(time constant) = 1/|ai| = 1/|Re(λi)|   (3.86)

and the period of oscillation

Ti^(oscillation) = 2π/|bi| = 2π/|Im(λi)|.   (3.87)
The minimal and the maximal time characteristic of the entire system are

Tmin = min_i ( Ti^(time constant), Ti^(oscillation) )   (3.88)

and

Tmax = max_i ( Ti^(time constant), Ti^(oscillation) ).   (3.89)
As already introduced, you choose as simulation time tsim = 5·Tmax if the system is stable. Otherwise, tsim depends on an upper limit M, |x(t)| ≤ M, or on the problem formulation. The time step length is chosen as a fraction of Tmin, h = Δt = ν·Tmin with ν = 1/20 ... 1/5, leading to (5 ... 20)·h = Tmin.
Example
For the example in Section 3.8, we have found the real eigenvalue λ1 = 1 with the time constant

T1^(time constant) = 1/|1| = 1   (3.90)

and the conjugate complex eigenvalues λ2,3 = 1 ± i with the time constant

T2,3^(time constant) = 1/|1| = 1   (3.91)

and the period of oscillation

T2,3^(oscillation) = 2π/|±1| = 2π.   (3.92)

The maximal time characteristic of the system is 2π, and its minimal time characteristic is 1.
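This bookkeeping is easily automated; a sketch computing Tmin and Tmax from a list of eigenvalues (formulas (3.85)-(3.89) as given above):

```python
import math

# Time characteristics from eigenvalues: time constants 1/|Re(lam)| and,
# for complex eigenvalues, oscillation periods 2*pi/|Im(lam)|.
def time_characteristics(eigenvalues):
    times = []
    for lam in eigenvalues:
        if lam.real != 0:
            times.append(1.0 / abs(lam.real))          # time constant (3.85)/(3.86)
        if lam.imag != 0:
            times.append(2 * math.pi / abs(lam.imag))  # oscillation period (3.87)
    return min(times), max(times)

tmin, tmax = time_characteristics([1 + 0j, 1 + 1j, 1 - 1j])
assert tmin == 1.0 and abs(tmax - 2 * math.pi) < 1e-12
# For a stable system, h = tmin/20 ... tmin/5 and t_sim = 5*tmax would follow.
```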
Figure 3.6: Stiff systems: very different time constants.

The stiffness measure SM compares the maximal to the minimal time constant:

SM = Tmax/Tmin > 10³ ... 10⁷  →  stiff.   (3.93)
The computation time can be considered proportional to the number of integration steps, tsim/h ≈ 5·Tmax/Tmin = 5·SM. Because of this, stiff systems require long computation times. This problem can be addressed by using integration methods with variable time steps hk (k = 0, 1, ...), as will be described in Chapter 5.
Figure 3.7: Discontinuities in u and f . An improvement can be achieved with an integration method which detects the discontinuity and solves the dierential equations piecewisely with the corresponding new initial conditions.
Numerical solution methods should provide:

- robustness (the solution should be found),
- reliability (high accuracy), and
- efficiency (short computation time).
This chapter deals with major issues regarding possible numerical errors and their effects. Generally, the following kinds of errors can be distinguished:
In these examples, numbers are represented in the form f·10^e, where f is a three-digit number with 1 ≤ f < 10. The exponent e is an integer. This notation can be generalized in several respects. For a particular floating point representation, we have to choose:
- a base b ∈ N, b ≥ 2,
- a minimum exponent r ∈ Z and a maximum exponent R ∈ Z with r ≤ R, and
- the number of digits for the representation of a number, d ∈ N.
For instance, if we choose b = 10, r = −5, R = 5, and d = 3, the examples in equation (4.1) are valid floating point numbers. In general, a floating point number x can be written as

x = f·b^e,   (4.2)

where the mantissa f is a number with d digits, 1 ≤ f < b, and the exponent e is an integer with r ≤ e ≤ R. If we choose the base b = 2, then we get the following representations for the examples above:

12 = 1·2³ + 1·2² = (1·2⁰ + 1·2⁻¹)·2³ = 1.1′ · 2^(11′),
0.375 = 1·2⁻² + 1·2⁻³ = (1·2⁰ + 1·2⁻¹)·2⁻² = 1.1′ · 2^(−10′).

Binary numbers are marked with a prime (′). In a binary representation, only the two digits 0 and 1 are needed. Therefore, binary numbers are very well suited for the representation of numbers in computers: the two digits can be mapped to the two states "voltage" and "no voltage" in an electrical circuit. As the reader is assumed to be much more familiar with decimal numbers, we will use b = 10 in the following discussion. Note that all remarks are in principle valid for any choice of the base b.
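Python exposes this decomposition directly; a short sketch (note that `math.frexp` normalizes to 0.5 ≤ |f| < 1, a slightly different convention than 1 ≤ f < b used above):

```python
import math

# math.frexp decomposes a float into mantissa and binary exponent, x = f * 2**e:
assert math.frexp(12.0) == (0.75, 4)      # 12 = 0.75 * 2**4  (= 1.1' * 2^(11'))
assert math.frexp(0.375) == (0.75, -1)    # 0.375 = 0.75 * 2**-1 (= 1.1' * 2^(-10'))

# float.hex shows the stored machine representation with 1 <= f < 2:
assert (12.0).hex() == '0x1.8000000000000p+3'
assert (0.375).hex() == '0x1.8000000000000p-2'
```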
Therefore, we introduce a rounding function rd: R → A that maps a real number x to the nearest floating point number (machine number); i.e. for any x ∈ R, the rounding function fulfils the condition

|x − rd(x)| ≤ |x − g|  for all g ∈ A.   (4.4)

Consequently, a floating point number x* represents all real x within an interval of length Δx* (see Fig. 4.1). Note that Δx* depends on the absolute value of x*. As x ≠ rd(x) for x ∉ A, numeric calculations with floating point numbers contain rounding errors. As an example, we consider the calculation of the sum 1/3 + 6.789. We choose the base b = 10 and the mantissa size d = 3.
Figure 4.1: Representation of a number on the computer.

a. Input errors. The input values xi of a calculation must be represented as floating point numbers x*i = rd(xi). The application of the rounding function causes an input error Δxi:

xi = x*i + Δxi.   (4.5)

In the example, we have

rd(1/3) = 3.33·10⁻¹,  Δx1 = 1/3 − 0.333 = 1/3000 ≈ 0.000333,   (4.6)
rd(6.789) = 6.79·10⁰,  Δx2 = 6.789 − 6.79 = −0.001.   (4.7)
b. Computation errors. In general, the precise result of a calculation f(x*1, ..., x*n) is no valid floating point number and must be rounded:

f* = rd( f(x*1, ..., x*n) ),   (4.8)
f(x*1, ..., x*n) = f* + Δf.   (4.9)

Example:

f(x*1, x*2) = 3.33·10⁻¹ + 6.79·10⁰ = 7.123·10⁰,
f* = rd(7.123·10⁰) = 7.12·10⁰,
Δf = 7.123 − 7.12 = 0.003.
The deviation between a real number x and its floating point representation x* is called the relative rounding error εx, referring to the true value:

εx = (x − x*)/x.   (4.13)
The machine accuracy eps (resolution of the computer) is an upper limit for the relative rounding error:

|εx| ≤ eps  for all x.   (4.14)
For an ordinary PC (double precision), this value is about 10⁻¹⁶. As an example of the effects of rounding errors, we will study the calculation of the cosine function. Its series expansion is the following:
cos x = Σ_(k=0..∞) [cos^(k)(0)/k!]·x^k
      = (cos 0/0!)·x⁰ − (sin 0/1!)·x¹ − (cos 0/2!)·x² + (sin 0/3!)·x³ + ...
      = 1 − x²/2! + x⁴/4! − x⁶/6! + ...
      = Σ_(k=0..∞) (−1)^k · x^(2k)/(2k)!.   (4.15)
In a computer program (written e.g. in Fortran or C), the calculation can be realized for instance in the following way:
function cosine(x, eps)
    # initialization
    cosine = 0.0
    term   = 1.0
    k      = 0
    xx     = x*x
    # loop: sum terms until they fall below the tolerance eps
    DO WHILE ABS(term) > eps
        cosine = cosine + term
        k      = k + 1
        term   = -term * xx / ((2*k-1) * (2*k))
    END DO
    return cosine
If you apply this program, you obtain for x = 3.14159265 the value cosine(x) = −1.000000E+00, but for x = 31.4159265 (≈ 10π) the completely wrong value cosine(x) = 8.593623E+04 (the correct value would be approximately 1).
The cause for this effect are so-called cancellation errors:

30th sum term: term30 = 3.0962506·10¹²,
60th sum term: term60 = 8.1062036·10⁷.

Because the size of the mantissa is limited to d = 7, one obtains in the addition:

  3.0962506E+12
+ 0.0000811E+12   (from the third digit on, all numerals of term60 get lost)
= 3.0963317E+12

The computed value of the series expansion thus depends on the summation order. Therefore, it is sensible to change the summation order and to calculate the small terms first.
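The effect can be reproduced by emulating a short mantissa; a sketch that rounds every partial sum to d = 7 significant decimal digits (the tolerance eps is an arbitrary choice):

```python
import math

# Emulation of a floating point accumulator with d = 7 significant decimal
# digits, exposing the cancellation effect in the cosine series (4.15).
def rd(x, d=7):
    """Round x to d significant decimal digits."""
    if x == 0.0:
        return 0.0
    e = math.floor(math.log10(abs(x)))
    return round(x, d - 1 - e)

def cosine(x, d=7, eps=1e-9):
    s, term, k, xx = 0.0, 1.0, 0, x * x
    while abs(term) > eps:
        s = rd(s + term, d)                       # every partial sum is rounded
        k += 1
        term = -term * xx / ((2 * k - 1) * (2 * k))
    return s

print(cosine(math.pi))        # close to the correct value -1
print(cosine(10 * math.pi))   # far from the correct value +1: cancellation
```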
4.3 Conditioning
The conditioning of a problem indicates how strongly errors in the input data, e.g. rounding errors, affect the solution. As an introductory example, we consider the calculation of the square root of an input value x:

f(x) = √x.   (4.16)
Figure 4.2: Effect of an input error in square root calculation (the slope of f at x̄ is 1/(2√x̄)).

For a given input value x̄, let f = √x̄ be the precise output value. As shown in Figure 4.2, an input error Δx causes an output error Δf. Furthermore, for small Δx, the approximation

Δf ≈ df/dx|_(x=x̄) · Δx   (4.17)
is valid. For the square root, this yields

Δf ≈ 1/(2√x̄) · Δx,   (4.18)

and for the relative error

εf = Δf/f ≈ (Δx/(2√x̄))/√x̄ = (1/2)·(Δx/x̄) = (1/2)·εx.   (4.19)
In general, an output value f can depend on n input values xj. Consequently, an error in each of the input values affects the output. This influence can be calculated using a Taylor expansion of f which is truncated after the first-order term:

f(x̄ + Δx) = f(x̄) + Σ_(j=1..n) ∂f/∂xj|_(x=x̄) · Δxj + ... .   (4.20)
For the relative error εf, we obtain under the conditions f(x̄) ≠ 0 and x̄j ≠ 0 ∀ j ∈ {1, ..., n} the expression

εf = [f(x̄ + Δx) − f(x̄)]/f(x̄) = Σ_(j=1..n) [ ∂f/∂xj|_(x=x̄) · x̄j/f(x̄) ] · (Δxj/x̄j)   (4.21)

and, with Kj := ∂f/∂xj|_(x=x̄) · x̄j/f(x̄) and εxj := Δxj/x̄j,

εf = Σ_(j=1..n) Kj·εxj.   (4.22)
The relative error εxj of each input value contributes to the relative output error. The amplification factor or relative condition number Kj is a measure for the contribution of an error in the corresponding input value. Hence, a problem is ill-conditioned with respect to an input xj if |Kj| is big (> 1). The smaller the absolute values of the amplification factors are, the better the conditioning of the problem is. We have already determined the amplification factor for the calculation of the square root (equation (4.19)):

εf = (1/2)·εx,  i.e.  K = 1/2.   (4.23)
For multiplication, f(x1, x2) = x1·x2, one obtains

K1 = ∂f/∂x1 · x̄1/f = x̄2·x̄1/(x̄1·x̄2) = 1

and, by analogy, K2 = 1. The conditioning of the arithmetic operations is given in Table 4.1.
f(x1, x2)  | K1           | K2            | εf                                   | Restrictions
x1 + x2    | x1/(x1+x2)   | x2/(x1+x2)    | x1/(x1+x2)·εx1 + x2/(x1+x2)·εx2      | x1, x2 > 0
x1 − x2    | x1/(x1−x2)   | −x2/(x1−x2)   | x1/(x1−x2)·εx1 − x2/(x1−x2)·εx2      | x1, x2 > 0; x1 ≠ x2
x1 · x2    | 1            | 1             | εx1 + εx2                            |
x1 / x2    | 1            | −1            | εx1 − εx2                            |
√x1        | 1/2          |               | (1/2)·εx1                            | x1 > 0

Table 4.1: Conditioning of some mathematical operations (x1, x2 ≠ 0).

Multiplying, dividing, and square root calculation are not dangerous operations, because the relative errors of the input values do not propagate into the results in an amplified way; in particular, the Kj are constant. Addition is also a well-conditioned operation, because x1/(x1+x2) and x2/(x1+x2) lie in a range between 0 and 1. Especially the case x1 ≪ x2 (and vice versa) yields a relatively small value of x1/(x1+x2). This is called error damping:

x1 = 1, εx1 = 10%,  x2 = 1000:  x1 + x2 = 1001,  K1 = 1/1001 ≈ 0.001,  so εf ≈ 0.001·10% = 0.001%.
For subtraction, x1/(x1−x2) and/or x2/(x1−x2) can become greater than 1 in absolute value; thus the errors εx1 and/or εx2 become amplified. A big amplification occurs if x1 ≈ x2, which causes a cancellation error during the calculation of x1 − x2:

x1 = 100,  x2 = 99,  εx2 = 1%:  x1 − x2 = 1,  |K2| = 99,  so εf ≈ 99·1% = 99%.

In this example, a perturbation of the input value x2 by 1% results in a cancellation error of 99% with respect to the correct result!
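Both effects can be reproduced in a few lines (the concrete numbers match the two examples above):

```python
# Error amplification in subtraction (Table 4.1): x1 = 100, x2 = 99
# with a 1% perturbation of x2.
x1, x2 = 100.0, 99.0
exact = x1 - x2                       # 1.0
perturbed = x1 - x2 * 1.01            # x2 disturbed by 1%
rel_err = abs(perturbed - exact) / abs(exact)
assert abs(rel_err - 0.99) < 1e-9     # ~99% relative error in the result

# By contrast, addition damps the error of the small input:
y1, y2 = 1.0, 1000.0
exact = y1 + y2
perturbed = y1 * 1.10 + y2            # y1 disturbed by 10%
rel_err = abs(perturbed - exact) / abs(exact)
assert rel_err < 1e-3                 # ~0.01% in the result
```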
5 Numerical Integration of Ordinary Differential Equations

The simulation of processes which are described by differential equations (see the vehicle example in Section 1.3.3) requires methods for numerical integration. Simulation tools such as e.g. Simulink (MathWorks, 2002) often have various numerical integration procedures available for selection (compare Fig. 5.1). In this chapter the basics of numerical integration will be discussed and different integration methods will be introduced.

Figure 5.1: List of different integration methods available in Simulink (MathWorks, 2002).

The general (implicit) form of a scalar ordinary differential equation of first order is

0 = F(ẋ, x, t),  t ≥ t0,   (5.1)
x(t0) = x0.   (5.2)

Example
An implicit differential equation of this form is, e.g.,

0 = 2ẋ + x,  t ≥ 0,   (5.3)
x(0) = 1.   (5.4)
We will restrict our presentation to differential equations that are given in the explicit form

ẋ = f(x, t),  t ≥ t0,   (5.5)
x(t0) = x0.   (5.6)

Sometimes it is possible to transform a DE in implicit form into the explicit form.

Example
The example above can easily be transformed into

ẋ = −x/2,  t ≥ 0,   (5.7)
x(0) = 1.   (5.8)
Solving a differential equation (DE) means to find a function x: R → R such that the differential equation (5.1) as well as the initial condition (5.2) are fulfilled. We will denote this exact solution as x(t). In Section 3.2 we have discussed a sufficient criterion for the existence of a unique solution. In general, x(t) cannot be calculated analytically. Rather, we must apply a numerical integration method in order to get an approximation of x(t). More precisely, a numerical method yields a set of N pairs (tk, xk) such that

xk ≈ x(tk)   (5.9)

for all k. Each tk represents a discrete time point, and therefore the set {tk | k = 1, ..., N} is called a discrete time grid. The time step length between tk and tk+1 is denoted hk, i.e.

tk+1 = tk + hk.   (5.10)

To simplify the following discussion, we will assume a constant time step length h = hk, k = 1, ..., N, such that we get an equidistant time grid.
5.1 Principles of Numerical Integration

Integrating both sides of the explicit DE from t0 to t1 = t0 + h yields

x(t1) − x(t0) = ∫_(t0)^(t0+h) f(x(t), t) dt,   (5.11)
x(t1) = x(t0) + ∫_(t0)^(t0+h) f(x(t), t) dt.   (5.12)

Figure 5.2: Illustration of the rectangle rule.

Equation (5.12) is fulfilled by the solution x(t), but it cannot be used to calculate x(t1), as an exact evaluation of the integral is not possible. Numerical integration methods are based on the idea of approximating the integral term. A simple strategy is to replace the integral, represented by the dashed area in Figure 5.2, with the product h·f(x0, t0), represented by the rectangle in Figure 5.2:

∫_(t0)^(t0+h) f(x(t), t) dt ≈ h·f(x0, t0).   (5.13)
The error of the approximation corresponds to the area between the curve and the rectangle. By inserting equation (5.13) and the initial condition x(t0) = x0 into equation (5.12), we get an approximation x1 for x(t1):

x1 = x0 + h·f(x0, t0).   (5.14)

In a similar way, an approximation x2 for x(t2) is calculated:

x2 = x1 + h·f(x1, t1).   (5.15)
Note that for the calculation of x2 ≈ x(t2), we use the approximation x1 for x(t1). Therefore, in addition to the error of the integral approximation, the error of x1 influences the accuracy of x2. The general form of this integration method, known as the explicit Euler method (cf. Section 5.2.1), is

xk+1 = xk + h·f(xk, tk).   (5.16)
The accuracy of each xk depends on the accuracy of its predecessor and on the accuracy of the approximation of the integral.
Figure 5.3: Illustration of the tangent rule.

The explicit Euler method can also be deduced by means of an alternative consideration (see Figure 5.3): the derivative of x(t) with respect to t, evaluated at tk, can be approximated by the difference quotient (tangent rule)

f(xk, tk) = ẋ(tk) ≈ [x(tk+1) − x(tk)]/h,   (5.17)

hence

x(tk+1) ≈ x(tk) + h·f(xk, tk).   (5.18)

By replacing the exact values of x(tk) and x(tk+1) with the corresponding approximations, we get the integration rule

xk+1 = xk + h·f(xk, tk).   (5.19)

Example
We apply the explicit Euler method to solve the differential equation

ẋ = −x/2,  x(0) = 1,   (5.20)
i.e. f(x, t) = −x/2, t0 = 0, x0 = 1. The application of equation (5.16) to this problem leads to

xk+1 = xk + h·f(xk, tk) = xk − h·xk/2 = (1 − h/2)·xk.   (5.21)

If we choose the step size h = 0.5, we get the following approximations of the exact result x(t) = e^(−t/2) for t1 = t0 + 1·h and t2 = t0 + 2·h (see Figure 5.4):

x(t0 + 1·h) = x(0.5) ≈ x1 = (3/4)·x0 = 0.75,   (5.22)
x(t0 + 2·h) = x(1.0) ≈ x2 = (3/4)·x1 = 0.5625.   (5.23)
Figure 5.4: Illustration of the explicit Euler method (exact solution e^(−t/2) vs. explicit Euler with h = 0.5).

Numerical integration methods differ in how the area/gradient is determined. Numerical integration methods can be divided into
- one-step methods,
- multiple-step methods, and
- extrapolation methods (which are not subject of the lecture),

as well as into explicit and implicit methods.
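The explicit Euler iteration can be sketched in a few lines; the following reproduces the example values x1 = 0.75 and x2 = 0.5625 from above:

```python
# Minimal explicit Euler integrator, cf. equation (5.16).
def euler_explicit(f, x0, t0, h, n_steps):
    x, t, out = x0, t0, [x0]
    for _ in range(n_steps):
        x = x + h * f(x, t)     # rectangle-rule step
        t += h
        out.append(x)
    return out

xs = euler_explicit(lambda x, t: -x / 2, 1.0, 0.0, 0.5, 2)
assert xs == [1.0, 0.75, 0.5625]    # matches (5.22)-(5.23)
```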
5.1.3 Consistency
The difference quotient of the exact solution x(t) between t0 and t1 = t0 + h (see Figure 5.5) is

Δ(x0, t0, h) = [x(t0 + h) − x0]/h   if h ≠ 0,
Δ(x0, t0, h) = f(x0, t0)            if h = 0,   (5.24)
Figure 5.5: Difference quotient of exact solution and approximation.

whereas the difference quotient of a numerical approximation of x(t) is given by

Δ̃(x0, t0, h) = [x1(x0, t0, h) − x0]/h.   (5.25)
Δ̃ depends on the integration method by which x1 is calculated. The local discretization error is defined as

τ(x0, t0, h) = Δ(x0, t0, h) − Δ̃(x0, t0, h).   (5.26)
τ is a measure for the deviation of the numerical approximation x1 from the exact solution x(t1). The term local error emphasizes that τ describes the error of one integration step under the precondition that the calculation of xk+1 is based on an exact value xk = x(tk); this is why we have defined Δ and Δ̃ for k = 0, where we know the exact value x0. For a reasonable integration method, we demand that the local discretization error is small for a small step size h:

lim_(h→0) τ(x0, t0, h) = 0.   (5.27)
If this equation holds for an integration method, we say that the method is consistent. As lim_(h→0) Δ(x0, t0, h) = f(x0, t0), equation (5.27) can be rewritten as

lim_(h→0) Δ̃(x0, t0, h) = f(x0, t0).   (5.28)
Example
We show that the explicit Euler method is consistent. By inserting equation (5.16) into (5.25), we get

Δ̃(x0, t0, h) = [x0 + h·f(x0, t0) − x0]/h = f(x0, t0).   (5.29)
Consistency is a condition for a reasonable integration method, but the definition above does not allow for the comparison of different integration methods. Therefore, we define the order of consistency p as the error order of the local discretization error:

τ(x0, t0, h) ∈ O(h^p).   (5.30)
If an integration method is of consistency order p for all problems, i.e. all admissible f, x0, and t0, we say that the integration method itself is of order p.

Example
For the explicit Euler method, we have

Δ(x0, t0, h) = (1/h)·[x(t0 + h) − x0]
             = (1/h)·[x(t0) + h·(dx/dt)|_(t=t0) + O(h²) − x0]
             = (dx/dt)|_(t=t0) + O(h)
             = f(x0, t0) + O(h),   (5.31)

Δ̃(x0, t0, h) = (1/h)·[x1 − x0]
             = (1/h)·[x0 + h·f(x0, t0) − x0]
             = f(x0, t0),   (5.32)

and

τ(x0, t0, h) = Δ(x0, t0, h) − Δ̃(x0, t0, h) ∈ O(h).   (5.33)

The explicit Euler method is of order 1.
General one-step methods (Runge-Kutta methods) take the form

xk+1 = xk + h·Σ_(i=1..s) wi·si,  k = 0, ..., K,   (5.34)

where the si are gradient evaluations (stages) and the weights wi satisfy the consistency condition

Σ_(i=1..s) wi = 1.   (5.35)
With constant step size h, the explicit Euler method reads

xk+1 = xk + h·f(xk, tk).   (5.36)

Figure 5.7: Explicit Euler method (solutions of the example for step sizes h = 1, h = 4, and h = 6).

The characteristics of the explicit Euler method are:
Eciency: Quick calculation of one step. Accuracy: For a high accuracy small step lengths are required. This leads to stability
problems for sti systems. 60
The implicit Euler method evaluates the gradient at the new point:

    xk+1 = xk + h f(xk+1).   (5.37)

Figure 5.8: Implicit Euler method.

The method is called implicit due to the fact that fk+1 = f(xk+1). Therefore an iteration or an approximation is required, because the nonlinear equation

    0 = xk+1 − xk − h f(xk+1)   (5.38)

has to be solved for xk+1. The characteristics of the implicit Euler method are:

Efficiency: For every step the nonlinear equation (5.38) has to be solved. This increases the computational effort significantly.

Accuracy: For a high accuracy small step lengths are required. However, the stability of the algorithm is ensured.
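A minimal sketch of one possible implementation (the stiff test problem dx/dt = −50x, the tolerances and the iteration limit are assumptions made for illustration) shows how the nonlinear equation (5.38) is solved by a Newton iteration in every step:

```python
def implicit_euler(f, dfdx, x0, h, steps):
    """Implicit Euler: every step solves 0 = x_new - x - h*f(x_new), cf. (5.38),
    with a Newton iteration."""
    x = x0
    for _ in range(steps):
        x_new = x                                    # initial guess for Newton
        for _ in range(50):
            g = x_new - x - h * f(x_new)
            x_new -= g / (1.0 - h * dfdx(x_new))     # Newton correction
            if abs(g) < 1e-12:
                break
        x = x_new
    return x

# Stiff test problem dx/dt = -50x: implicit Euler remains stable for h = 0.1,
# where the explicit method (amplification factor 1 - 50h = -4) would diverge.
x_end = implicit_euler(lambda x: -50.0 * x, lambda x: -50.0, 1.0, 0.1, 20)
```

Each implicit step damps the solution by the factor 1/(1 + 50h), so the numerical solution decays monotonically, as the exact solution does.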
requires the solution of a nonlinear equation system for xk+1 because f(xk+1) is unknown. The semi-implicit Euler method is based on the idea to approximate f(xk+1) by means of a linearization of f at xk:

    f(xk+1) = f(xk) + ∂f/∂x|_{x=xk} Δxk + O(‖Δxk‖²),   (5.40)

where Δxk = xk+1 − xk. Thus, equation (5.39) becomes

    xk+1 = xk + h [ f(xk) + ∂f/∂x|_{x=xk} (xk+1 − xk) ].   (5.41)

In contrast to (5.38), this equation is linear in xk+1 and can be solved without iteration:

    xk+1 = xk + h [ I − h ∂f/∂x|_{x=xk} ]⁻¹ f(xk).   (5.42)
! "
+
+
#$
xp = xk + hfk . k+1
Figure 5.9: Heuns method. In a predictor step you obtain a rst approximation of the value xp : k+1 (5.43)
In the subsequent corrector step the gradient is averaged with this value and the initial value: xk+1 fk + f (xp ) k+1 , = xk + h 2
()
&
(5.44)
The term () corresponds to an average of two gradients (compare Fig. 5.9). It is a two-stage one-step method, which is also called Runge-Kutta method of second order.
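The predictor-corrector structure (5.43)-(5.44) can be sketched as follows (the test problem dx/dt = −x is an assumption chosen for illustration). The measured error ratio confirms that Heun's method is of second order:

```python
import math

def heun_step(f, x, t, h):
    # Predictor (explicit Euler), then corrector with averaged gradients (5.44).
    xp = x + h * f(x, t)
    return x + h * (f(x, t) + f(xp, t + h)) / 2.0

def integrate(step, f, x0, t0, t1, n):
    x, t, h = x0, t0, (t1 - t0) / n
    for _ in range(n):
        x = step(f, x, t, h)
        t += h
    return x

f = lambda x, t: -x      # test problem with exact solution exp(-t)
err = lambda n: abs(integrate(heun_step, f, 1.0, 0.0, 1.0, n) - math.exp(-1.0))

# Order 2: halving the step size reduces the global error by about a factor 4.
ratio = err(50) / err(100)
```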
Through the averaging, the accuracy of the approximation of the gradient is strongly increased.

The consistency of the method can again be verified via the limit of the difference quotient:

a) Heun's method:

    lim_{h→0} (xk+1 − xk)/h = lim_{h→0} (1/2) [ fk + f(xp_k+1) ] = f(x(tk)) = ẋ(tk).   (5.53)-(5.55)
Figure 5.11: Multiple-step method.

Hereby you proceed in the following way (see Fig. 5.11):

a. Interpolation of a polynomial pn(t) of order n through fk−n, fk−n+1, . . . , fk.

b. Integration of the polynomial over the next step:

    xk+1 = xk + ∫_{tk}^{tk+1} pn(t) dt.

The general form of a linear m-step method reads

    xk+1 = Σ_{i=1}^{m} αi xk+1−i + h Σ_{i=0}^{m} βi fk+1−i,   k = m − 1, m, . . .   (5.56)

Hereby you distinguish again between implicit and explicit methods:

β0 = 0: explicit multiple-step method (e.g. α1 = 1, αi = 0 otherwise: Adams-Bashforth method),
β0 ≠ 0: implicit multiple-step method (e.g. α1 = 1, αi = 0 otherwise: Adams-Moulton method).

The following points have to be noted:
The start-up values xk, k = 0, . . . , m − 1, require a start-up calculation, e.g. with a one-step method of the same accuracy.

The computational effort for explicit multi-step methods is only one f-evaluation per step, if fk−1, fk−2, . . . are stored in the interim. However, this increases the organizational effort significantly.
As an example we will investigate the Gear method of second order (β0 ≠ 0, implicit method):

    xk+1 = (4/3) xk − (1/3) xk−1 + (2/3) h fk+1.   (5.57)

The consistency of this method is ensured:

    lim_{h→0} (xk+1 − xk)/h = lim_{h→0} (1/h) [ (4/3)xk − (1/3)xk−1 + (2/3) h fk+1 − xk ]
                            = lim_{h→0} [ (xk − xk−1)/(3h) + (2/3) fk+1 ]
                            = (1/3) ẋ(tk) + (2/3) ẋ(tk)
                            = ẋ(tk).   (5.58)
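For the linear test problem dx/dt = λx (an assumption chosen for illustration), the implicit equation (5.57) can be solved for xk+1 in closed form, so the Gear method can be sketched without a Newton iteration:

```python
import math

def gear2(lam, x0, h, steps):
    """Gear (BDF2) method (5.57) for the linear test problem dx/dt = lam*x.
    For f = lam*x the implicit equation is solved for x_{k+1} directly; the
    start-up value x_1 is taken from the exact solution."""
    xs = [x0, x0 * math.exp(lam * h)]            # start-up calculation
    for _ in range(steps - 1):
        x_new = (4.0 * xs[-1] / 3.0 - xs[-2] / 3.0) / (1.0 - 2.0 * h * lam / 3.0)
        xs.append(x_new)
    return xs[-1]

# Second order: halving h reduces the global error by about a factor 4.
exact = math.exp(-1.0)
err = lambda n: abs(gear2(-1.0, 1.0, 1.0 / n, n) - exact)
ratio = err(50) / err(100)
```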
Predictor-corrector methods combine, for example, the Adams-Bashforth method of third order (polynomial of second order) as a predictor and the Adams-Moulton method of fourth order (polynomial of third order) as a corrector.
As an introductory example we consider two parallel equilibrium reactions, in which a component A reacts to the components B and C:

    A ⇌ B,   (6.1)
    A ⇌ C.   (6.2)

Solely component A exists at time t0. The mole fractions xA, xB, xC present in the equilibrium have to be found. In the equilibrium the reaction rates ri, which depend on the reaction rate constants ki and the concentrations ci, vanish:

    rAB = k1 cA − k2 cB = 0,   (6.3)
    rAC = k3 cA − k4 cC = 0.   (6.4)

The equilibrium constants Ki are defined as:

    KAB = xB / xA   or   KAB xA − xB = 0,   (6.5)
    KAC = xC / xA   or   KAC xA − xC = 0.   (6.6)

In terms of the mole fractions, the total mole balance can be set up:

    xA + xB + xC = 1.   (6.7)

Thus, we have three equations (6.5) - (6.7) for the three unknown variables.
The general form of a linear equation system can be written as:

    A x = b   (6.8)

with A an (n×n)-matrix and x, b ∈ Rⁿ. The equation system is uniquely solvable if rank A = n, i.e. if the determinant det A ≠ 0. In our example this means:

    A = [ 1      1    1 ]        [ xA ]        [ 1 ]
        [ KAB   −1    0 ],   x = [ xB ],   b = [ 0 ].   (6.9)
        [ KAC    0   −1 ]        [ xC ]        [ 0 ]

The Gauss elimination can be interpreted as an LR decomposition of A, carried out in three steps:

1. Decomposition A = L R into a left (lower) and a right (upper) triangular matrix,
2. Forward substitution L z = b → z,
3. Backward substitution R x = z → x.
Eliminating the first column of the augmented system (6.9) gives

    [ 1        1           1       |   1   ]
    [ 0   −(1 + KAB)     −KAB      |  −KAB ]   (6.12)
    [ 0     −KAC      −(1 + KAC)   |  −KAC ]

and eliminating the second column (after scaling the third row by (1 + KAB)):

    [ 1        1              1             |   1   ]   (a)
    [ 0   −(1 + KAB)        −KAB            |  −KAB ]   (b)   (6.13)
    [ 0        0      −(1 + KAB + KAC)      |  −KAC ]   (c)

(6.13 c) gives:

    xC = KAC / (1 + KAB + KAC).   (6.14)

Inserting this into (6.13 b) yields:

    xB = KAB / (1 + KAB + KAC).   (6.15)

This solution is then inserted into (6.13 a):

    xA = 1 / (1 + KAB + KAC).   (6.16)
The Gauss elimination is considered to be the most important algorithm in numerical mathematics. There are many numerically robust modifications, which are often implemented in standard subroutines of commercial software (e.g. MATLAB (MathWorks, 2000)).
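The elimination steps above can be sketched as a small routine (hand-rolled for this text rather than a robust library implementation; the numerical values of KAB and KAC are assumptions). It reproduces the closed-form solution (6.14)-(6.16):

```python
def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting (matrices as nested lists)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]            # partial pivoting
        for r in range(col + 1, n):                # forward elimination
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                 # backward substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

K_AB, K_AC = 2.0, 3.0                              # assumed example values
A = [[1.0, 1.0, 1.0], [K_AB, -1.0, 0.0], [K_AC, 0.0, -1.0]]
xA, xB, xC = gauss_solve(A, [1.0, 0.0, 0.0])
```

With these values, xA = 1/6, xB = 2/6 and xC = 3/6, consistent with (6.14)-(6.16) and the mole balance (6.7).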
In a second example, component A reacts in two parallel equilibrium reactions to B + D and C + D:

    A ⇌ B + D,   (6.17)
    A ⇌ C + D.   (6.18)

In the equilibrium the reaction rates again vanish:

    rABD = k1 cA − k2 cB cD = 0,   (6.19)
    rACD = k3 cA − k4 cC cD = 0.   (6.20)
Again, the mole fractions xA, xB, xC, xD in the equilibrium have to be found. The total mole balance for the four components is:

    xA + xB + xC + xD = 1.   (6.21)

Furthermore, the mole balance of the component D is given by:

    xD = xB + xC.   (6.22)

Together with the two nonlinear equations for the equilibrium constants Ki,

    KABD = xB xD / xA   or   KABD xA − xB xD = 0,   (6.23)
    KACD = xC xD / xA   or   KACD xA − xC xD = 0,   (6.24)

you obtain four equations (6.21) - (6.24) for the four unknown mole fractions. The general form of a nonlinear equation system can be written as:

    f1(x1, x2, . . . , xn) = 0
    f2(x1, x2, . . . , xn) = 0      or   f(x) = 0.   (6.25)
        . . .
    fn(x1, x2, . . . , xn) = 0

In our example this means:

    f1(xA, xB, xC, xD) = xA + xB + xC + xD − 1 = 0,   (6.26)
    f2(xA, xB, xC, xD) = xB + xC − xD = 0,   (6.27)
    f3(xA, xB, xC, xD) = KABD xA − xB xD = 0,   (6.28)
    f4(xA, xB, xC, xD) = KACD xA − xC xD = 0.   (6.29)
A necessary condition for the (local) solvability of the nonlinear equation system is the regularity of the Jacobian matrix ∂f/∂x. Here, the problem is that in many cases the Jacobian matrix can only be determined with high computational effort. An alternative formulation for the solvability of the nonlinear equation system, also a necessary condition, reads: The nonlinear equation system is only solvable if exactly one variable xj can be assigned to every equation fi = 0, so that this equation can be used for calculating the assigned variable under the prerequisite that all other variables are known.
For an illustration of this necessary condition, we consider the following example:

    f1: x1 + x4 − 10 = 0,   (6.31)
    f2: x2 x3 x4 x5 − 6 = 0,   (6.32)
    f3: x1^1.7 x4 x5 − 8 = 0,   (6.33)
    f4: x4 − 3 x1 + 6 = 0,   (6.34)
    f5: x1 x3 x5 + 4 = 0.   (6.35)

We make use of the incidence matrix, which provides information on the structural regularity of the problem. In the incidence matrix it is indicated which equation fi depends on which variable xj (marked by x):

          x1    x2    x3    x4    x5
    f1   (x)                x
    f2         (x)    x     x     x
    f3    x                 x    (x)
    f4    x                (x)
    f5    x          (x)          x

One then tries to assign to each equation exactly one variable as its solution (marked by (x)). If this succeeds, as in our example, the structural regularity is given.
You truncate the Taylor series of f around the current iterate xi after the first-order terms and solve for the next iterate:

    xi+1 = xi − f(xi) / fx(xi),   i = 0, 1, . . .   (6.37)

This method is called Newton's method for scalar equations and is graphically illustrated in Fig. 6.1.
Figure 6.1: Newton's method for scalar equations.

6.2.2.2 Newton-Raphson Method for Equation Systems

In the vector case f(x) = 0 the procedure is similar to the scalar case. One also expands a Taylor series for f(xi+1) = 0 around the vector xi with the iteration index i:

    f(xi+1) = f(xi) + ∂f/∂x(xi) (xi+1 − xi) + . . . = 0   (6.38)

with the Jacobian matrix

    ∂f/∂x(xi) = [ ∂f1/∂x1(xi)   ∂f1/∂x2(xi)   . . .   ∂f1/∂xn(xi) ]
                [ ∂f2/∂x1(xi)   ∂f2/∂x2(xi)   . . .   ∂f2/∂xn(xi) ]
                [      .              .         .          .       ]   (6.39)
                [ ∂fn/∂x1(xi)   ∂fn/∂x2(xi)   . . .   ∂fn/∂xn(xi) ]

The solution procedure for the determination of the zero vector x*, as shown below, is referred to as Newton-Raphson method:

    ∂f/∂x(xi) Δxi = −f(xi).   (6.40)

This is a linear equation system of the form A Δxi = b and contains n linear equations for Δxi. After its solution (see Section 6.1), starting with a sensible initial value x0, an update (called Newton step or Newton correction) to xi+1 is carried out:

    xi+1 = xi + Δxi,   i = 0, 1, . . . , i0 < imax.   (6.41)
The solution of the linear equation system and the following correction continue until either the maximum iteration index imax or the desired absolute and/or relative accuracy is reached:

    ‖Δx_{i0}‖ < ε_abs + ε_rel ‖x_{i0}‖.   (6.42)
In order to come to a better understanding of the Newton-Raphson method, we want to consider the first steps of the solution procedure in a simple example. The example contains two nonlinear equations:

    x1³ − x2³ − x1² x2 = 7,   (6.43)
    x1² + x2² = 4,   (6.44)

or, in residual form,

    f1(x1, x2) = x1³ − x2³ − x1² x2 − 7 = 0,   (6.45)
    f2(x1, x2) = x1² + x2² − 4 = 0.   (6.46)

The Jacobian matrix reads:

    ∂f/∂x = [ 3x1² − 2x1 x2    −3x2² − x1² ]
            [ 2x1               2x2        ].   (6.47)

With the initial value x0 = [2, 0]ᵀ we obtain

    ∂f/∂x(x0) = [ 12   −4 ]
                [  4    0 ]   (6.48)

and

    f(x0) = [ 1 ]
            [ 0 ].   (6.49)

The linear equation system (6.40),

    [ 12   −4 ] Δx0 = − [ 1 ]
    [  4    0 ]         [ 0 ],   (6.50)

yields Δx0 = [0, 1/4]ᵀ, and the first Newton correction gives

    x1 = x0 + Δx0 = [2, 1/4]ᵀ.   (6.51)

With this solution you carry out the second iteration step f(x1) = . . .
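The full iteration (6.40)-(6.41) for this example can be sketched as follows (the 2×2 linear system is solved by Cramer's rule instead of Gauss elimination, a simplification chosen for this sketch; the start value is the first iterate x1 = [2, 1/4]ᵀ from above):

```python
def newton_raphson(f, jac, x, tol=1e-10, imax=50):
    """Newton-Raphson iteration (6.40)-(6.41) for a 2x2 system."""
    for _ in range(imax):
        (a, b), (c, d) = jac(x)
        f1, f2 = f(x)
        det = a * d - b * c
        dx1 = (-f1 * d + f2 * b) / det     # solve J * dx = -f by Cramer's rule
        dx2 = (-f2 * a + f1 * c) / det
        x = (x[0] + dx1, x[1] + dx2)       # Newton correction
        if abs(dx1) + abs(dx2) < tol:
            break
    return x

f = lambda x: (x[0]**3 - x[1]**3 - x[0]**2 * x[1] - 7.0,   # (6.45)
               x[0]**2 + x[1]**2 - 4.0)                    # (6.46)
jac = lambda x: ((3 * x[0]**2 - 2 * x[0] * x[1], -3 * x[1]**2 - x[0]**2),
                 (2 * x[0], 2 * x[1]))                     # (6.47)

root = newton_raphson(f, jac, (2.0, 0.25))
```

Starting close to the solution, the iteration converges quadratically; the residuals of both equations vanish at the computed root.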
6.2.2.3 Convergence Problems of the Newton-Raphson Method

Convergence problems within the Newton-Raphson method can occur for various reasons. In the following we study graphical interpretations of various problems.

Convergence Range in the Case of Multiple Solutions (see Fig. 6.2): The zero x*2 can only be found if the initial value x0 is chosen between the two extreme values of the function. If the initial value is chosen outside this range, the iteration converges towards one of the two outer zeros x*1 or x*3. That means that the convergence range of the Newton-Raphson method is problem specific.

Divergence and Singularity (see Fig. 6.3): The zero x* of the function can only be found if the initial value x0 is chosen to the right of the pole. If it is chosen on the left-hand side, the method cannot converge to the root. The method then leads to the proximity of the extreme value and fails there, because the reciprocal value of the gradient of the function tends towards infinity. The method has to be aborted due to numerical singularity.

Difficult Problem (see Fig. 6.4): In this example, divergence appears in spite of an initial value x0 in the proximity of the roots x*1, x*2. This is due to the fact that the search direction leads into an area from which the method can no longer reach the roots. The Newton-Raphson method is basically not suitable for such functions.
7 Differential-Algebraic Systems
Besides differential equation systems and algebraic systems, which were both introduced in the previous chapters, a combination of both can also appear: the so-called differential-algebraic (equation) systems (DAE systems). The introductory example comes from the area of electrical engineering.
Example 1
We observe an electrical circuit consisting of two resistors (R1, R2), a coil with the inductance L and a capacitor with the capacitance C (see Fig. 7.1).
"
+
% &
Figure 7.1: Electrical circuit example 1.

The voltage source U0 as well as R1, R2, L and C are assumed to be known. The electrical currents i0, i1, i2, iL, iC and the voltages u1, u2, uC, uL are to be found. Ohm's law and
Kirchhoff's laws yield the model equations:

    u1 = R1 i1,   (7.1)
    u2 = R2 i2,   (7.2)
    uL = L diL/dt,   (7.3)
    iC = C duC/dt,   (7.4)
    U0 = u1 + uC,   (7.5)
    uC = u2,   (7.6)
    uL = u1 + u2,   (7.7)
    i0 = i1 + iL,   (7.8)
    i1 = i2 + iC.   (7.9)

This leads to an equation system which consists of two differential equations, (7.3) and (7.4), and seven algebraic equations. Therefore, it is called a differential-algebraic system. For the representation of differential-algebraic systems, several different characteristic classes can be distinguished.

The most general representation is the fully implicit form

    0 = F(ż, z, d)   (7.10)

with the vector z of all unknown variables (7.11) and the vector d of known parameters and inputs (7.12). The system can be transformed to:

    0 = f(ż, z, d),   f ∈ R^(n−k),   (7.13)
    0 = g(z, d),      g ∈ R^k.   (7.14)

(7.13) comprises a system of ordinary differential equations, (7.14) comprises a system of (non-)linear algebraic equations.
y consists of variables which are not differentiated. Therefore, the vector z can be divided into differential and algebraic variables. Example 1 is an example of such an explicit DAE system. This can easily be seen after the following transformations:

    ẋ = f(x, y, d):
        diL/dt = (1/L) uL,
        duC/dt = (1/C) iC,

    0 = g(x, y, d):
        g1: 0 = u1 − R1 i1,
        g2: 0 = u2 − R2 i2,
        g3: 0 = U0 − u1 − uC,
        g4: 0 = uC − u2,
        g5: 0 = uL − u1 − u2,
        g6: 0 = i0 − i1 − iL,
        g7: 0 = i1 − i2 − iC,

with x = [iL, uC]ᵀ, y = [i0, i1, i2, iC, u1, u2, uL]ᵀ, d = [U0, R1, R2, L, C]ᵀ.

A frequently used special case is the linear DAE system

    E ż = A z + B d.   (7.20)

You obtain such models e.g. through linearization of nonlinear implicit or explicit differential-algebraic systems. Many control engineering and system theoretical concepts are based on systems of the form (7.20).
The incidence matrix of the algebraic equations g1 to g7 with respect to the algebraic variables i0, i1, i2, iC, u1, u2, uL shows that the electrical current i0 can only be determined from equation g6. From this it follows that iC can be determined from g7, uL from g5, i2 from g2, i1 from g1, u1 from g3 and u2 from g4. The structural regularity is given. In this example, the assignment is even unique. This is not always the case, as the following example shows.
Example 2

In example 2 a third resistor R3 is introduced, replacing the capacitor in example 1 (see Fig. 7.2).

Figure 7.2: Electrical circuit example 2.

In addition to the already known model equations

    u1 = R1 i1,   u2 = R2 i2,   diL/dt = (1/L) uL,
    uL = u1 + u2,   i0 = i1 + iL,   i1 = i2 + i3,

you obtain

    u3 = R3 i3,   U0 = u1 + u3,   u3 = u2.

This is an explicit differential-algebraic system, which consists of one differential equation and eight algebraic equations:
The incidence matrix relates the algebraic equations g1 to g8 to the algebraic variables i0, i1, i2, i3, u1, u2, u3, uL.
i0 can only be calculated from equation g6, uL only from g5. Now you have to decide with which variable and equation you want to proceed further. E.g. u3 can be determined from g3. Doing so, you decide that u1 has to be determined from g1, i1 from g7, i2 from g2, i3 from g8 and u2 from g4. In contrast to example 1, you see here that no unique assignment exists, but the problem is still structurally solvable. The decision to calculate u3 with g3 leads to the problem of algebraic loops:

    u3 = g3(u1),   u1 = g1(i1),   i1 = g7(i2, i3),   i3 = g8(u3),

and therefore

    u3 = g(u3, . . . ).

The output variable u3 is at the same time an input variable. The problem of algebraic loops can only be solved iteratively. In Simulink, a solution method based on Newton's method is used. Models with algebraic loops run slower than models without. If possible, algebraic loops should be avoided. As an example (see Fig. 7.3(a)) we consider the algebraic loop in

    y = A(x − By).   (7.33)

Here, remedy can be achieved through a simple transformation (see Fig. 7.3(b)):

    (1 + AB) y = A x.   (7.34)
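The two variants (7.33) and (7.34) can be compared directly (the numerical values of A, B and x are assumptions chosen for illustration). The fixed-point iteration mimics what a simulator does inside an algebraic loop; it converges here because |AB| < 1:

```python
# Scalar algebraic loop y = A*(x - B*y), cf. (7.33): solve it iteratively, as a
# simulator would, and compare with the transformed direct form (7.34).
A, B, x = 2.0, 0.3, 1.0

y = 0.0
for _ in range(100):                 # fixed-point iteration around the loop
    y = A * (x - B * y)              # converges because |A*B| = 0.6 < 1

y_direct = A * x / (1.0 + A * B)     # loop eliminated by the transformation
```

For loop gains |AB| ≥ 1 the simple iteration would diverge, which is why simulators fall back on Newton-type solvers for algebraic loops.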
Figure 7.3: Elimination of an algebraic loop: (a) original, (b) transformed.
Example 3
We consider a further example 3 (see Fig. 7.4), where capacitor and coil from example 1 are exchanged.

Figure 7.4: Electrical circuit example 3.

In addition to the already known model equations

    u1 = R1 i1,   (7.35)
    u2 = R2 i2,   (7.36)
    uL = L diL/dt,   (7.37)
    iC = C duC/dt,   (7.38)

you obtain

    U0 = u1 + uL,   (7.39)
    uL = u2,   (7.40)
    uC = u1 + u2,   (7.41)
    i0 = i1 + iC,   (7.42)
    i1 = i2 + iL.   (7.43)
The model includes two differential equations and seven algebraic equations. The vectors x, y and d correspond to those of example 1. The incidence matrix again relates the algebraic equations g1 to g7 to the algebraic variables i0, i1, i2, iC, u1, u2, uL.

It can be seen that for the calculation of both i0 and iC only equation g6 is available. Therefore, the differential-algebraic system is not solvable in this form. This is referred to as structural singularity.
Figure 8.1: Schematic representation of a heat conductor.

The temperature T of the heat conductor is a function of the position in space and of time. The differential heat balance for a volume element with the cross-sectional area A and the thickness dx reads:

    A dx ρ c ∂T(x,t)/∂t = A [ q(x, t) − q(x + dx, t) ]   (8.1)

with the density ρ and the specific heat capacity c. The heat flux density q can be represented according to Fourier's law:

    q(x, t) = −λ ∂T(x, t)/∂x,   λ: heat conductivity.   (8.2)

You insert this equation into (8.1) and obtain:

    A dx ρ c ∂T(x,t)/∂t = A λ ∂²T(x,t)/∂x² dx,   (8.3)

    ∂T(x,t)/∂t = λ/(ρ c) ∂²T(x,t)/∂x² = a ∂²T(x,t)/∂x²,   (8.4)

with the temperature conductivity a = λ/(ρ c).
(8.4) is the heat conduction equation. This is a partial differential equation, because it contains derivatives with respect to space as well as derivatives with respect to time. For a complete mathematical model, additional initial and boundary conditions have to be specified. The number of conditions for every independent variable (x, t) corresponds to the order of its highest derivative. The time t occurs in the first derivative ∂T/∂t. Therefore one condition in t is needed, e.g. an initial condition

    T(x, t0) = T0(x),   (8.5)

the temperature profile along the heat conductor at time t0. The spatial coordinate x occurs in the second derivative ∂²T/∂x², so two boundary conditions are needed for x. Examples for these are:

a) Dirichlet conditions

    T(0, t) = T1(t),   (8.7)
    T(l, t) = T2(t),   (8.8)

b) Neumann conditions

    ∂T(0, t)/∂x = 0,   (8.9)
    ∂T(l, t)/∂x = 0.   (8.10)
You distinguish between linear, quasi-linear and nonlinear partial differential equations of the general second-order form a(·) z_tt + 2 b(·) z_tx + c(·) z_xx + . . . = 0:

    linear:       a(·), b(·), c(·) constant or functions of t and x,
    quasi-linear: a(·), b(·), c(·) functions of t, x, z, z_t, z_x,
    nonlinear:    in all other cases.
Linear partial differential equations of second order can be further distinguished into hyperbolic, parabolic and elliptic differential equations:

    hyperbolic: b² − ac > 0,
    parabolic:  b² − ac = 0,
    elliptic:   b² − ac < 0.

The choice of a suitable numerical solution method strongly depends on the type of the partial differential equation. Except for linear partial differential equations that contain only a small number of variables, in most cases an analytical solution of partial differential equations is not possible. The heat conduction equation (8.4),

    T_t − a T_xx = 0,   (8.12)

is a linear, parabolic partial differential equation of second order, because a = b = 0 and c = const, so that b² − ac = 0. In our example we choose:

    a = 1,   (8.13)
    T(x, 0) = sin(π x / l),   (8.14)
    T(0, t) = T(l, t) = 0.   (8.15)
The analytical solution then reads:

    T(x, t) = e^(−a (π/l)² t) sin(π x / l).   (8.16)
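A numerical solution of (8.12)-(8.15) by the method of lines, sketched here ahead of the detailed discussion in the following sections (grid size, step size and the explicit Euler time integration are choices made only for this illustration), can be checked against (8.16):

```python
import math

# Method of lines for T_t = a*T_xx with the data (8.13)-(8.15).
a, l, N = 1.0, 1.0, 50
dx = l / N
T = [math.sin(math.pi * k * dx / l) for k in range(N + 1)]   # initial profile

t, dt = 0.0, 0.4 * dx * dx / a          # explicit Euler with a stable step size
while t < 0.1:
    Txx = [0.0] * (N + 1)
    for k in range(1, N):               # central differences in space
        Txx[k] = (T[k - 1] - 2.0 * T[k] + T[k + 1]) / (dx * dx)
    T = [Tk + dt * a * Txxk for Tk, Txxk in zip(T, Txx)]
    T[0] = T[N] = 0.0                   # Dirichlet boundary conditions (8.15)
    t += dt

exact = lambda x: math.exp(-a * (math.pi / l) ** 2 * t) * math.sin(math.pi * x / l)
max_err = max(abs(T[k] - exact(k * dx)) for k in range(N + 1))
```

The maximum deviation from the analytical solution (8.16) stays well below one percent of the initial amplitude for this grid.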
The evaluation is done at N + 1 selected nodes xk (see Fig. 8.2):

    xk = k l / N,   k = 0, . . . , N,
    zk(t) = z(t, xk),
    z(t, x) ≈ [z0(t), z1(t), . . . , zN(t)].
Figure 8.2: Discretization of the solution function z(x, t).

Inserting the above into the partial differential equation, one receives N ordinary differential equations which have to be solved at the points xk. This is illustrated in Fig. 8.3. In the example of the heat conduction equation, the equations are:

    T_t|_xk − a T_xx|_xk = 0   or   T_tᵏ − a T_xxᵏ = 0,   k = 1, . . . , N,

with the initial conditions

    Tᵏ(t = t0) = T0ᵏ,   k = 1, . . . , N.
For the use of the method of lines, the differentials with respect to the spatial coordinate have to be determined. For this purpose you use the method of finite differences.

8.3.1.1 Finite Differences

First of all, zk+1 is expanded into a Taylor series:

    zk+1 = zk + (dzk/dx) Δx + (1/2)(d²zk/dx²) Δx² + . . .   (8.24)

In the case of a first-order approximation, one solves this equation with respect to dzk/dx:

    dzk/dx = (zk+1 − zk)/Δx − (1/2)(d²zk/dx²) Δx − . . . = (zk+1 − zk)/Δx + O1(Δx).   (8.25)

Figure 8.3: Illustration of the method of lines.

Admittedly, this is a bad approximation. An approximation of second order is better, in which both zk+1 and zk+2 are expanded into a Taylor series:

    zk+1 = zk + (dzk/dx) Δx + (1/2)(d²zk/dx²) Δx² + . . . ,   (8.26)
    zk+2 = zk + (dzk/dx)(2Δx) + (1/2)(d²zk/dx²)(2Δx)² + . . .   (8.27)

In this way, a linear equation system in dzk/dx and d²zk/dx² is obtained. By multiplying (8.26) by four and subtracting (8.27) from this equation, you obtain:

    dzk/dx = (−3zk + 4zk+1 − zk+2)/(2Δx) + O2(Δx²),   (8.28)
    d²zk/dx² = (zk − 2zk+1 + zk+2)/Δx² + O(Δx).   (8.29)
In the approximation by nite dierences, there are the following degrees of freedom:
Coefficients of forward difference approximations of the first derivative, Δx · ∂z/∂x|k:

    error order |    zk      zk+1    zk+2    zk+3    zk+4    zk+5    zk+6
    1           |   −1        1
    2           |   −3/2      2      −1/2
    3           |  −11/6      3      −3/2     1/3
    4           |  −25/12     4      −3       4/3    −1/4
    5           | −137/60     5      −5      10/3    −5/4     1/5
    6           |  −49/20     6     −15/2    20/3   −15/4     6/5    −1/6

Coefficients of forward difference approximations of the second derivative, Δx² · ∂²z/∂x²|k:

    error order |    zk      zk+1    zk+2    zk+3    zk+4    zk+5    zk+6
    1           |    1       −2       1
    2           |    2       −5       4      −1
    3           |   35/12   −26/3    19/2   −14/3    11/12
    4           |   15/4    −77/6   107/6   −13      61/12   −5/6
    5           |  203/45   −87/5   117/4  −254/9    33/2   −27/5   137/180
At the boundaries of the defined range of the spatial coordinate x, problems may arise in using finite differences, if points which are necessary for the evaluation of the differences are not available. In the central five-point formula, this applies for example to zk−1 and zk−2 at the left boundary. Among others, there is the following solution:

Sliding Differences (see Fig. 8.5): The center point is shifted successively.

However, such methods reduce the approximation order at the boundaries.
Coefficients of central difference approximations of the first derivative, h · ∂z/∂x|k:

    error order |  zk−3    zk−2    zk−1     zk     zk+1    zk+2    zk+3
    2           |                  −1/2      0      1/2
    4           |          1/12    −2/3      0      2/3    −1/12
    6           | −1/60    3/20    −3/4      0      3/4    −3/20    1/60

Coefficients of central difference approximations of the second derivative, h² · ∂²z/∂x²|k:

    error order |  zk−3    zk−2    zk−1      zk     zk+1    zk+2    zk+3
    2           |                    1       −2       1
    4           |         −1/12     4/3     −5/2     4/3   −1/12
    6           |  1/90   −3/20     3/2    −49/18    3/2   −3/20    1/90
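A quick check of the fourth-order central difference row for the first derivative (the test function sin(x), the evaluation point and the step size are assumptions for this illustration):

```python
import math

# Fourth-order central difference for z'(x), weights for z_{k-2} ... z_{k+2}.
coeffs = [1.0 / 12, -2.0 / 3, 0.0, 2.0 / 3, -1.0 / 12]
h, xk = 0.1, 0.7

approx = sum(c * math.sin(xk + j * h) for c, j in zip(coeffs, range(-2, 3))) / h
err = abs(approx - math.cos(xk))       # exact derivative of sin is cos
```

With h = 0.1 the error is already of the order 10⁻⁶, consistent with the O(h⁴) entry in the table.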
    z(x, t) ≈ Σ_{i=0}^{N} αi(t) φi(x)   (8.30)

with φi . . . known basis functions of the function system, αi . . . coefficients. The quality of the approximation depends on the dimension N and the choice of the basis
Figure 8.5: Sliding differences.

functions. The task is now to determine the coefficients αi. In the example of the heat conduction equation you proceed as follows:
The equation and its initial and boundary conditions are first rewritten such that only zeros occur on the right-hand sides; this corresponds to a representation in residual form. The sought solution T is approximated by applying (8.30):

    T̃(x, t) = Σ_{i=0}^{N} αi(t) φi(x).   (8.35)

This is inserted into the heat conduction equation, leading to equation residuals R, because only an approximation of the real solution is used:

    ∂T̃/∂t − a ∂²T̃/∂x² = R_PDGL(x, t) = R_PDGL(α, α̇, x) ≠ 0 in general,
    T̃(x, 0) − T0(x) = R_AB(x) ≠ 0 in general,
    T̃(0, t) − T1(t) = R_RB0(t) ≠ 0 in general.
An approximation z̃ is considered acceptable if the suitably weighted equation residual R disappears in the mean over the considered range:

    ∫₀ˡ R(α, dα/dt, x) wi(x) dx = 0,   i = 1, . . . , N.   (8.40)
The choice of the weights wi characterizes the weighted residuals method. The three most important methods are briefly introduced in the following sections (see Fig. 8.6).

8.3.2.1 Collocation Method

In the collocation method Dirac impulses are used as weighting functions, wi(x) = δ(x − xi):

    ∫₀ˡ R(α, dα/dt, x) δ(x − xi) dx = 0,   i = 1, . . . , N.   (8.41)

As a consequence, you do not have to solve an integral system, but rather only an algebraic equation system:

    R(α, dα/dt, xi) = 0,   i = 1, . . . , N.   (8.42)

8.3.2.2 Control Volume Method

The control volume method uses weights wi only in an interval between two active nodes:

    wi(x) = 1 if xi−1 < x < xi,  0 elsewhere,   (8.43)

    ∫_{xi−1}^{xi} R(α, dα/dt, x) dx = 0,   i = 1, . . . , N.   (8.44)

8.3.2.3 Galerkin Method

The Galerkin method uses as weighting functions the sensitivity of the approximation function with respect to the parameters to be determined. In other words, the weighting functions are the basis functions:

    wi(x) = ∂z̃(x, t)/∂αi = φi(x),   i = 1, . . . , N.   (8.45)
Figure 8.6: Different weighting functions (according to Lapidus and Pinder, 1982).

8.3.2.4 Example

The method of weighted residuals shall be illustrated with the example of an ordinary differential equation of first order, Newton's cooling law. An object with the temperature T is exposed to its environment with the temperature Tu and cools down:

    dT/dt + k(T − Tu) = 0,   T(0) = 1.   (8.46)

k is a proportionality constant. Let k = 2 and Tu = 1/2, and let the considered time interval be normalized to length one. As basis functions φi(t) the so-called hat functions are used (see Fig. 8.7):

    φi(t) = (t − ti−1)/(ti − ti−1)   for ti−1 ≤ t ≤ ti,
            (ti+1 − t)/(ti+1 − ti)   for ti ≤ t ≤ ti+1,
            0                        elsewhere.   (8.47)

The approximation reads

    T̃ = Σ_{j=1}^{3} Tj φj(t).   (8.48)
Tj is the value of the temperature T̃ at the node j. The condition for the weighted residuals reads:

    ∫₀¹ R(t) wi dt = 0,   i = 1, 2, 3,   (8.49)

with

    R(t) = dT̃/dt + k (T̃ − Tu).   (8.50)

Inserting (8.48), you obtain

    Σ_{j=1}^{3} ∫₀¹ [ (dφj/dt + k φj) Tj − k Tu ] wi dt = 0,   i = 1, 2, 3.   (8.51)

This equation is to be solved both with the Galerkin method and the collocation method.

Galerkin Method

With the Galerkin method you obtain the following expression as a result of the weighting with the basis functions:

    Σ_{j=1}^{3} Tj ∫₀¹ (dφj/dt + k φj) φi dt = ∫₀¹ k Tu φi dt,   i = 1, 2, 3.   (8.52)
Written out, the Galerkin equations (8.52) form the linear equation system

    [ ∫ (dφj/dt + k φj) φi dt ]_ij [T1, T2, T3]ᵀ = [ ∫ k Tu φi dt ]_i,   (8.53)

where the integrals extend over the common supports of the hat functions with the nodes t1 = 0, t2 = 1/2, t3 = 1. As an example, the first diagonal entry is

    ∫₀^(1/2) (dφ1/dt + k φ1) φ1 dt = ∫₀^(1/2) ( −2 + 4t + k (1 − 2t)² ) dt
                                   = [ −2t + 2t² − (k/6)(1 − 2t)³ ]₀^(1/2)
                                   = −1/2 + k/6.

Scaling all equations by a factor of two leads to

    [ −1 + k/3    1 + k/6     0       ] [ T1 ]   [ k Tu / 2 ]   (a)
    [ −1 + k/6    2k/3        1 + k/6 ] [ T2 ] = [ k Tu     ]   (b)   (8.54)
    [  0         −1 + k/6     1 + k/3 ] [ T3 ]   [ k Tu / 2 ]   (c)

With T1 = 1 known from the initial condition, we therefore have three equations for the two unknown variables T2 and T3, yielding

    from (b) and (c): [T1, T2, T3] = [1, 0.678, 0.571],   (8.55)
    from (a) and (c): [T1, T2, T3] = [1, 0.625, 0.550],   (8.56)
    from (a) and (b): [T1, T2, T3] = [1, 0.625, 0.625].   (8.57)
The exact analytical solution of (8.46) reads:

    T(t) = T(0) e^(−kt) + Tu (1 − e^(−kt)),

leading to [T1, T2, T3] = [1, 0.684, 0.568]. If you compare the exact solution with the numerically calculated ones, you can see that the best solution in this case is (8.55). Generally speaking, in the Galerkin method one should delete the rows and columns that belong to known coefficients.
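The recommended reduced system, i.e. equations (b) and (c) of (8.54) with the known value T1 = 1 moved to the right-hand side, can be solved directly (a sketch; Cramer's rule is used for the remaining 2×2 system):

```python
# Reduced Galerkin system from (8.54) with k = 2, Tu = 1/2 and T1 = 1 known.
k, Tu, T1 = 2.0, 0.5, 1.0

# (b): (2k/3)*T2 + (1 + k/6)*T3 = k*Tu   - (-1 + k/6)*T1
# (c): (-1 + k/6)*T2 + (1 + k/3)*T3 = k*Tu/2
a11, a12, b1 = 2 * k / 3, 1 + k / 6, k * Tu - (-1 + k / 6) * T1
a21, a22, b2 = -1 + k / 6, 1 + k / 3, k * Tu / 2

det = a11 * a22 - a12 * a21
T2 = (b1 * a22 - a12 * b2) / det
T3 = (a11 * b2 - b1 * a21) / det
```

The result T2 = 19/28 ≈ 0.678 and T3 = 4/7 ≈ 0.571 reproduces (8.55), the solution closest to the exact values 0.684 and 0.568.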
Collocation Method

Applying the collocation method to our example, you receive an equation system without the integrals of the equation residuals (due to the use of Dirac impulses for weighting):

    Σ_{j=1}^{3} Tj ∫₀¹ (dφj/dt + k φj) δ(t − ti) dt − k Tu = 0,   i = 1, 2, 3.   (8.58)

The first equation is replaced by the initial condition T1 = 1, and the collocation points are chosen as t2 = 0.25 and t3 = 0.75. Evaluating the hat functions and their derivatives at these points, e.g.

    [ dφ1/dt + k φ1 ]_{t=0.25} = −2 + k/2,   [ dφ2/dt + k φ2 ]_{t=0.25} = 2 + k/2,   (8.59)

you obtain from (8.58):

    [  1           0          0       ] [ T1 ]   [  1   ]
    [ −2 + k/2     2 + k/2    0       ] [ T2 ] = [ k Tu ]   (8.60)
    [  0          −2 + k/2    2 + k/2 ] [ T3 ]   [ k Tu ]
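Because the collocation matrix (8.60) is lower triangular, the node values follow by forward substitution (a sketch with k = 2, Tu = 1/2, comparing against the exact solution of (8.46)):

```python
import math

k, Tu = 2.0, 0.5
T1 = 1.0                                                  # initial condition
T2 = (k * Tu + (2.0 - k / 2.0) * T1) / (2.0 + k / 2.0)    # collocation at t = 0.25
T3 = (k * Tu + (2.0 - k / 2.0) * T2) / (2.0 + k / 2.0)    # collocation at t = 0.75

exact = lambda t: 1.0 * math.exp(-k * t) + Tu * (1.0 - math.exp(-k * t))
```

This yields T2 = 2/3 ≈ 0.667 and T3 = 5/9 ≈ 0.556, within a few percent of the exact node values 0.684 and 0.568.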
8.4 Summary
- In collocation methods only the equation residuals have to be determined, but not their integrals, which are frequently not analytically evaluable. If an analytical evaluation is not possible, the computational effort increases drastically.

- There are no general statements on the most suitable method, because the best choice is strongly problem dependent.

- The approximation quality of the method of weighted residuals is strongly dependent on the choice of the basis functions. Using the collocation method, the choice of the location of the collocation points is crucial for a high approximation quality.

- In the differences method many degrees of freedom exist (order, centering, boundary treatment).
Figure 9.1: Trajectories for continuous and time discrete systems (according to Kiencke, 1997).

In discrete event systems, the states change at unpredictable moments of time, which only depend on the events which cause the state change (Fig. 9.1 b, d). The input signals are the discrete events. In continuous systems, the states change continuously in time and (usually) have continuous input variables (Fig. 9.1 a). A discrete time system (Fig. 9.1 c) only approximates a
continuous time system. The variables still have continuous values, but are captured only at discrete moments. Discrete events occur either as natural discrete events or through the reduction of a continuous state transition.
Quantities are defined as discrete events if the transition process is relatively short or can even be neglected. Examples are:

- the signal change in traffic controls,
- the number of trains in a railway station, neglecting the arriving and departing trains,
- the floors during an elevator trip.
As an example we observe the driving ability of a motor vehicle driver. The ability decreases with the alcohol content (a continuous variable) in the driver's blood; thus the ability to drive is itself a continuous variable. The law, however, declares that one loses the fitness to drive at an alcohol content of > 0.5 ‰, which defines a discrete event. A graphical illustration as a discrete state graph is shown in Fig. 9.2.
The objects are represented through mathematical variables as well as input and output functions (see Section 9.2). The variables are depicted as graphical symbols. The input and output functions are executed through graphical operations (see Section 9.3). The advantage of this representation is its vividness. On the other hand, graphical symbols often cannot completely represent a real process. Every graphical model can be transformed into a mathematical model (e.g. with matrix operations).

A discrete event model is described by the tuple

    M = (x, z, y, φ_IN, φ_EX, ψ, τ)   (9.1)

with

    φ_IN . . . state transition function for internal events,
    φ_EX . . . state transition function for external events,
    ψ . . . output function,
    τ . . . residence time function.
State transition function for internal events: The system is in the state zi with its corresponding residence time τ(zi). A new state will be reached after the end of the residence time:

    z[t + τ(zi)] = φ_IN(zi).   (9.2)

State transition function for external events: The system is in the state z(t). Because of an external event x(t) it switches to the new state z':

    z'(t) = φ_EX[z(t), x(t)].   (9.3)

Output function:

    y(t) = ψ[z(t), x(t)].   (9.4)

Input variables x: A model is called autonomous if it works without being influenced by the environment (without inputs x):

    M = (z, y, φ_IN, ψ, τ).

Time dependency: If the state transition functions φ_IN, φ_EX, the output function ψ, and the residence time function τ depend on time, one calls the system time-variant, otherwise time-invariant.

Dependency of the state transitions on the system's previous history: A model is called memoryless if the conditional transition probability for the successive state depends only on the current state zi. This characteristic eases the mathematical analysis.
9.2.1 Example
We consider a liquid vessel as an illustrative example. The filling and emptying of the vessel is supposed to be realized by means of a digital feedforward control (see Fig. 9.3). Three process states (filling, emptying, equilibrium) are considered in the model:

    dh/dt > 0   if v1 ≠ 0, v2 = 0   (filling),
    dh/dt < 0   if v1 = 0, v2 ≠ 0   (emptying),
    dh/dt = 0   if v1 = v2          (equilibrium).
Figure 9.3: Model of a liquid vessel.

The inputs are x1 = v1 and x2 = v2. For these we have:

    xi = 1 if vi ≠ 0,   xi = 0 if vi = 0,   i = 1, 2.

The process states can be represented with the Boolean variables x1 and x2 in the following way:

    filling:      x1 ∧ ¬x2,
    emptying:     ¬x1 ∧ x2,
    equilibrium:  (x1 ∧ x2) ∨ (¬x1 ∧ ¬x2).

The state vector z must contain two components z1 and z2, because three states have to be represented:

    z = [1, 0]ᵀ   filling,
    z = [0, 1]ᵀ   emptying,
    z = [0, 0]ᵀ   equilibrium.
Figure 9.4: Illustration of a graph. In the following some important concepts of the graph theory will be explained.
Directed graph
A directed graph (see Fig. 9.5) is a graph, in which all edges are directed. A directed edge is called arrow.
Vertices, which are directly connected with an arrow to another vertex k, are called predecessor and successor of k, respectively (see Fig. 9.6). Sources are vertices without any predecessor. Sinks are vertices without any successor (see Fig. 9.7). Two arrows are called parallel, if they have the same starting and ending vertex (see Fig. 9.8). A degenerate edge (arrow) of a graph which joins a vertex to itself is called loop (see Fig. 9.9).
Simple graph
A graph which contains neither loops nor parallel edges is called a simple graph; a graph in which loops and multiple edges are permitted is called a general graph (see Fig. 9.10).
Finite graph
A graph is called nite, if the sum of vertices as well as the sum of arrows or edges of this graph are nite.
A vertex w of a digraph is called reachable from a vertex v, if there exists a path, i.e. a sequence of arrows, from the starting vertex v to the ending vertex w. The graph of all vertices reachable from v is called the reachability graph.
In the adjacency matrix A, the direct connections between the n vertices of a graph are represented:

    A = [ a11  a12  . . .  a1n ]
        [ a21  a22  . . .  a2n ]
        [  .    .    . .    .  ]   (9.7)
        [ an1  an2  . . .  ann ]

n is the number of vertices. For the elements of the adjacency matrix, the following holds:

    aij = 1 if the edge from vertex i to j exists, 0 otherwise.

In the incidence matrix I the n vertices and m edges of a graph are represented. The rows correspond to the vertices, and the columns to the edges:

    I = [ i11  i12  . . .  i1m ]
        [ i21  i22  . . .  i2m ]
        [  .    .    . .    .  ]   (9.8)
        [ in1  in2  . . .  inm ]
The edge el is called incident with the vertex vk, if it starts at vk or if it ends at vk:

    ikl = 1 if el is incident with vk, 0 otherwise,   k = 1, . . . , n,   l = 1, . . . , m.   (9.9)

In a digraph, the directions of the edges are distinguished:

    ikl = +1 if el is positively incident with vk (edge el leaves the vertex vk),
          −1 if el is negatively incident with vk (edge el leads to the vertex vk),
           0 otherwise.   (9.10)

In a digraph the elements of the incidence matrix satisfy:

    Σ_{k=1}^{n} ikl = 0,   l = 1, . . . , m.   (9.11)
The column sum equals zero because every edge starts at one vertex and leads to another one. This characteristic can be used for error checking. The above introduced matrices shall be illustrated in the following example (see Fig. 9.11):
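The construction (9.10) and the check (9.11) can be sketched as follows (the small example digraph is an assumption made for this sketch, not the graph of Fig. 9.11):

```python
# Build the incidence matrix (9.10) of a small digraph and check the
# column-sum property (9.11): every column adds up to zero.
edges = [(1, 2), (2, 3), (3, 1), (2, 4)]      # arrows (from, to), vertices 1..4
n = 4

I = [[0] * len(edges) for _ in range(n)]
for l, (start, end) in enumerate(edges):
    I[start - 1][l] = 1                        # edge leaves vertex  -> +1
    I[end - 1][l] = -1                         # edge enters vertex  -> -1

col_sums = [sum(I[row][l] for row in range(n)) for l in range(len(edges))]
```

A nonzero column sum would immediately reveal a corrupted matrix, which is exactly the error check mentioned above.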
(9.12)
The incidence matrix of the graph in Fig. 9.11 reads correspondingly, with rows corresponding to the vertices 1 to 6 and columns to the edges a to h.   (9.13)

9.3.2.1 Models for Discrete Event Systems

The following model types are described in more detail in the following sections:
common programming languages: FORTRAN, C(++), PASCAL simulation languages for general DES: GPSS, SIMAN, HPSIM simulation systems for special DES: Simple++ for manufacturing systems
9.4 Automaton Models
Examples of automaton models are: pay phones, washing machines, money transaction machines. One characteristic of an automaton is that it is operated from outside. Thus, a sequence of discrete time events from outside is present. A time basis does not exist; therefore the residence time function is not needed. Because the system reacts exclusively to inputs, no state transition function \delta_{IN} for the internal events is needed either. The input x_i at the state z_i uniquely determines the successor state z_{i+1} and the output y_i:

\[
\delta_{EX}(z_i, x_i) = z_{i+1}, \qquad (9.14)
\]
\[
\lambda(z_i, x_i) = y_i. \qquad (9.15)
\]

Therefore, an automaton model is a 5-tuple (x, z, y, \delta_{EX}, \lambda).
Figure 9.12: State graph in an automaton model.

The state transition in the automaton model (state graph) is illustrated in Fig. 9.12. A soda machine serves as an example of an automaton. After inserting a coin you can choose between two possible drinks, or you can recall the coin. In case of maloperation, a signal sounds. First of all we define the inputs, the states, and the outputs of the model:
1. inputs
   x1 . . .
   x2 . . .
   x3 . . .
   x4 . . .
2. states
   z1 . . . soda machine ready
   z2 . . . amount of money sufficient
3. outputs
   y1 . . . signal sounds
   y2 . . . output drink 1
   y3 . . . output drink 2
   y4 . . . output coin
Fig. 9.13 shows the resulting state graph of the soda machine model.
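The soda machine automaton can be sketched as two lookup tables for \delta_{EX} and \lambda. Since the meanings of the inputs x1 . . . x4 are not spelled out above, the assignment used here (x1 = insert coin, x2/x3 = choose drink 1/2, x4 = recall coin) is an assumption for illustration only:

```python
# Sketch of the soda machine as an automaton (z, x) -> (z', y).
# Input meanings are assumptions: x1 = insert coin, x2 = choose drink 1,
# x3 = choose drink 2, x4 = recall coin. States/outputs follow the notes:
# z1 ready, z2 money sufficient; y1 signal, y2/y3 drink, y4 coin returned.
delta_ex = {  # external state transition function (9.14)
    ("z1", "x1"): "z2",  # coin inserted -> money sufficient
    ("z1", "x2"): "z1", ("z1", "x3"): "z1", ("z1", "x4"): "z1",  # maloperation
    ("z2", "x2"): "z1", ("z2", "x3"): "z1", ("z2", "x4"): "z1",
    ("z2", "x1"): "z2",  # second coin: treated as maloperation here
}
lam = {  # output function (9.15)
    ("z1", "x1"): None,
    ("z1", "x2"): "y1", ("z1", "x3"): "y1", ("z1", "x4"): "y1",
    ("z2", "x2"): "y2", ("z2", "x3"): "y3", ("z2", "x4"): "y4",
    ("z2", "x1"): "y1",
}

def run(inputs, z="z1"):
    """Feed an input sequence to the automaton, collect the outputs."""
    outputs = []
    for x in inputs:
        outputs.append(lam[(z, x)])
        z = delta_ex[(z, x)]
    return z, outputs

print(run(["x1", "x2"]))  # insert coin, then take drink 1
```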
The actual system states are represented by tokens, which are stored in the places. Places are represented by circles and describe (possible) states. Transitions describe (possible) events and are represented as bars. Places and transitions are connected by directed edges. Thereby, connections between places and places as well as connections between transitions and transitions are not allowed.

Tokens are produced, deleted, or moved by the firing of the transitions. The dynamic flow of a Petri net is represented by the transitions between the marked states (token distributions over all places). One distinguishes between two Petri net models:

- Discrete time (causal) models: Petri nets describe logically what happens and in which sequence.
- Continuous time models: timed Petri nets additionally predict when the events occur.
The weights state how many tokens can be taken away or can enter, according to the capacity restriction of a node. With the matrices W^- and W^+ the incidence matrix W of a Petri net can be calculated: W = W^+ - W^-. As an example we will study the message exchange between a message receiver and a message transmitter. A transmitter sends a message to a receiver and waits for an answer (see Fig. 9.15).
Figure 9.15: Petri net model for message exchange.

The incidence matrices W^- and W^+ of this Petri net read (columns t_1, ..., t_4; rows P_1, ..., P_6):

\[
W^- = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad
W^+ = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix} \qquad (9.16)
\]

\[
W = W^+ - W^- = \begin{pmatrix} -1 & 0 & 0 & 1 \\ 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 0 & 1 & -1 \\ 1 & 0 & 0 & -1 \end{pmatrix} \qquad (9.17)
\]
The capacities of the places are collected in the capacity vector

\[
k = [k_1, k_2, \dots, k_n]^T. \qquad (9.19)
\]

The marking of the net is described by the marking vector

\[
m(r) = [m_1(r), m_2(r), \dots, m_n(r)]^T, \qquad 0 \le m_i \le k_i, \quad i = 1, 2, \dots, n. \qquad (9.20)
\]

Transitions

In order to activate transitions, which leads to a change of the marking vector (m(r-1) \to m(r)), the following conditions have to be fulfilled:

1. In all pre-places a sufficient amount of tokens has to be available:

\[
\forall\, p_i \in {}^\bullet t_j: \quad m_i(r-1) \ge w^-_{ij}, \qquad w^-_{ij} \ne 0. \qquad (9.21)
\]

2. The number of tokens in all post-places is not allowed to exceed the maximal capacity after the firing of the transition:

\[
\forall\, p_i \in t_j^{\bullet}: \quad m_i(r-1) \le k_i - w^+_{ij} + w^-_{ij}. \qquad (9.22)
\]

An individual verification of both activation conditions is necessary. As the result one obtains the activation function u_j for the transition t_j:

\[
u_j(r) = \begin{cases} 1 & t_j \text{ activated} \\ 0 & \text{otherwise} \end{cases} \qquad (9.24)
\]

In the following, we assume that no conflicts occur. Then we obtain the new state with:

\[
m_i(r) = m_i(r-1) - w^-_{ij}\, u_j(r) + w^+_{ij}\, u_j(r). \qquad (9.25)
\]

The new marking vector can be described by the introduction of a switching vector u at the time moment r:

\[
u(r) = [u_1(r), u_2(r), \dots, u_m(r)]^T,
\]
\[
m(r) = m(r-1) - W^- u(r) + W^+ u(r) = m(r-1) + W\, u(r).
\]
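A single switching step of a Petri net, including the activation conditions (9.21) and (9.22), can be sketched as follows; the two-place, one-transition net used here is made up for illustration:

```python
# One switching step m(r) = m(r-1) - W_minus u + W_plus u of a Petri net,
# including the activation conditions. The example net is made up.
def activated(m, k, w_minus, w_plus, j):
    n = len(m)
    pre_ok = all(m[i] >= w_minus[i][j] for i in range(n))          # (9.21)
    post_ok = all(m[i] <= k[i] - w_plus[i][j] + w_minus[i][j]
                  for i in range(n))                               # (9.22)
    return pre_ok and post_ok

def fire(m, w_minus, w_plus, u):
    # (9.25): remove tokens from the pre-places, add them to the post-places
    return [m[i]
            - sum(w_minus[i][j] * u[j] for j in range(len(u)))
            + sum(w_plus[i][j] * u[j] for j in range(len(u)))
            for i in range(len(m))]

# Tiny example net: two places, one transition moving a token P1 -> P2.
w_minus = [[1], [0]]
w_plus = [[0], [1]]
k = [1, 1]
m0 = [1, 0]
u = [1 if activated(m0, k, w_minus, w_plus, 0) else 0]
m1 = fire(m0, w_minus, w_plus, u)
print(m1)  # the token has moved to P2
```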
Figure 9.16: Reachability graph for the message exchange model.

One sees clearly that all states can be reached from any starting state. One uses the reachability graph, for instance, to check whether desired states are reachable and undesired states are not. If this is not the case, the Petri net design has to be improved. Furthermore, one can avoid predecessors of dangerous states by the use of suitable design methods.

9.5.3.2 Boundedness and Safety

A Petri net is said to be bounded if in no place p_i more than a certain maximal number k_i of tokens is ever present. In the case of k_i = 1 the Petri net is also said to be safe.

9.5.3.3 Deadlock

A deadlock in a Petri net is a transition (or a set of transitions) which cannot fire. We consider again the slightly modified example of the message exchange (see Fig. 9.17). Through the introduction of the new transitions t5 and t6, deadlocks appear.
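Such a reachability check can be sketched as a breadth-first search over the markings of a bounded net; the tiny two-place token cycle used here is made up for illustration:

```python
# Sketch: enumerate the reachability graph of a bounded Petri net by
# breadth-first search over markings. w_minus/w_plus are the pre-/post-
# weight matrices; k holds the place capacities. The net is made up.
from collections import deque

def successors(m, w_minus, w_plus, k):
    n, nt = len(m), len(w_minus[0])
    for j in range(nt):
        if all(m[i] >= w_minus[i][j] for i in range(n)) and \
           all(m[i] <= k[i] - w_plus[i][j] + w_minus[i][j] for i in range(n)):
            yield tuple(m[i] - w_minus[i][j] + w_plus[i][j] for i in range(n))

def reachability_graph(m0, w_minus, w_plus, k):
    seen, queue, arcs = {tuple(m0)}, deque([tuple(m0)]), []
    while queue:
        m = queue.popleft()
        for m2 in successors(list(m), w_minus, w_plus, k):
            arcs.append((m, m2))
            if m2 not in seen:
                seen.add(m2)
                queue.append(m2)
    return seen, arcs

w_minus = [[1, 0], [0, 1]]   # t1 consumes P1, t2 consumes P2
w_plus  = [[0, 1], [1, 0]]   # t1 produces P2, t2 produces P1
markings, arcs = reachability_graph([1, 0], w_minus, w_plus, [1, 1])
print(sorted(markings))      # both markings of the cycle are reachable
```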
9.5.3.4 Liveness

Transitions that can no longer be activated are said to be non-live. Fig. 9.18 shows an example of a deadlock-free but non-live Petri net, including the reachability graph.
Figure 9.18: Non-live Petri net.

A live Petri net is always deadlock free. However, the converse does not hold. For practical uses, a Petri net should be live. Fig. 9.19 shows a live variant of the message exchange model.
Another possibility is a delay in the flow of the tokens (see Fig. 9.21). Here, the transition fires with a delay, after the delay time has elapsed.
10 Parameter Identification
10.1 Example
We consider the simulation of a bioprocess as an introductory example. It concerns the fermentation of Escherichia coli. In Fig. 10.1 the measured values for biomass, substrate (glucose), and product, which are obtained during a fermentation, are represented.
Figure 10.1: Measured values for a high cell density fermentation of Escherichia coli.

After a so-called lag phase the biomass grows exponentially. Through the addition of an inductor (IPTG) the product generation is started. Shortly afterwards, the product generation stops and the biomass dissolves (dies). The goal of the simulation task is to determine an optimal feed profile, which delivers as much product as possible at the end of the fermentation. This task is performed according to the simulation procedure (see Fig. 1.4). A so-called compartment model is used (Schneider, 1999) as the model structure. The structure of a simulation model is determined by setting up basic mathematical relations (e.g. balance equations). As the result of the model building in this case, one obtains a differential-algebraic system which consists of ten coupled nonlinear differential equations of first order and two nonlinear algebraic equations. This system can be solved according to the methods introduced in Chapter 7. The parameters of the model, which finally determine the concrete behavior of the model, still have to be defined and/or identified. The compartment model contains 30 parameters
(20 model parameters and 10 initial conditions). There are 15 parameters which can be determined using preliminary considerations and biological knowledge. The task of the parameter identification is to determine the remaining parameters of the simulation model on the basis of measurements. The general parameter identification procedure is depicted in Fig. 10.2 (Norton, 1997). The measurements are compared to the simulated values. If the deviations are significant, the parameters are changed until a good fit is obtained.
Figure 10.2: Parameter identification procedure.

In order to evaluate the deviations between the real outputs (measurements) and the simulated outputs (output quantities), an objective function is formulated. With the help of identification methods the parameters are determined in such a way that the objective function is minimized. The degrees of freedom for parameter identification are the choice of the objective function and the calculation rule for the determination of the model parameters.
The model equation reads:

\[
\hat{y}_k = a\, x_k. \qquad (10.2)
\]

The error between the real system and the model is given by

\[
e_k = y_k - \hat{y}_k. \qquad (10.3)
\]

The aim of the parameter identification is the determination of the parameter a in such a way that the objective function Q is minimized. The objective function evaluates the deviations between the simulated and the measured values. In case of the least squares method the objective function has the following form:

\[
Q = \sum_{k=1}^{N} e_k^2 = \sum_{k=1}^{N} (y_k - \hat{y}_k)^2 = \sum_{k=1}^{N} (y_k - a x_k)^2 \qquad (10.4)
\]
N denotes the number of measurements. The error is counted quadratically in the objective function in order to prevent the positive and negative deviations of the measurements from the model predictions from balancing each other. In vector form one obtains the following representation:

\[
e = y - a\, x,
\]
\[
Q = e^T e = (y - a x)^T (y - a x) = y^T y - 2 a\, y^T x + a^2\, x^T x.
\]
We study the necessary and sufficient conditions for a minimum of Q. The necessary condition for the minimum of Q is:

\[
\frac{\partial Q}{\partial a} = -2\, y^T x + 2 a\, x^T x = 0. \qquad (10.10)
\]

The sufficient condition is always satisfied, because of

\[
\frac{\partial^2 Q}{\partial a^2} = 2\, x^T x > 0. \qquad (10.11)
\]

Therefore, the sought parameter a is given by:

\[
a = \frac{y^T x}{x^T x}. \qquad (10.12)
\]

This equation is called the regression equation.
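A minimal sketch of the regression equation (10.12) on synthetic, noise-free data (all values made up):

```python
# Least-squares fit of the single parameter a in y = a*x via the
# regression equation a = (y^T x)/(x^T x). Data are synthetic.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.5 * x  # noise-free measurements generated with the true a = 2.5

a = (y @ x) / (x @ x)
print(a)
```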
If the individual measurements are disturbed to a different extent, the errors are weighted in the objective function:

\[
Q = \sum_{k=1}^{N} \frac{e_k^2}{\phi_k} = e^T \Phi^{-1} e. \qquad (10.13)
\]

\Phi is a weighting matrix. Information about the disturbance can be available e.g. in the form of the variance matrix C. Therefore, this variance matrix is a sensible choice for \Phi:

\[
\Phi = C. \qquad (10.14)
\]

The variance matrix has the following form:

\[
C = \begin{pmatrix} \sigma^2(n_1) & 0 & \cdots & 0 \\ 0 & \sigma^2(n_2) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma^2(n_N) \end{pmatrix} \qquad (10.15)
\]
With this you obtain as the objective function

\[
Q = (y - a x)^T C^{-1} (y - a x) = y^T C^{-1} y - 2 a\, y^T C^{-1} x + a^2\, x^T C^{-1} x. \qquad (10.16)
\]

The necessary condition for the minimum of the objective function leads to:

\[
\frac{\partial Q}{\partial a} = -2\, y^T C^{-1} x + 2 a\, x^T C^{-1} x = 0. \qquad (10.17)
\]

The sought parameter a arises as follows:

\[
a = \frac{y^T C^{-1} x}{x^T C^{-1} x}. \qquad (10.18)
\]
This identification method is called Markov estimation if the weighting matrix is the covariance matrix of the disturbance. In many cases the disturbance n can be characterized by

\[
E[n(t_i)] = E[n] = 0, \qquad (10.19)
\]
\[
E[n_i n_j^T] = \delta_{ij}\, C(t_i). \qquad (10.20)
\]
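A sketch of the Markov estimation (10.18) with a made-up diagonal variance matrix C and made-up data:

```python
# Weighted least squares (Markov estimation) for y = a*x with a diagonal
# variance matrix C; data and variances are invented for illustration.
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.1, 5.9])    # noisy measurements of roughly y = 2*x
C = np.diag([0.1, 0.2, 0.1])     # variances of the disturbances
Ci = np.linalg.inv(C)

a = (y @ Ci @ x) / (x @ Ci @ x)  # a = y^T C^-1 x / x^T C^-1 x
print(a)
```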
If the model contains m parameters a^1, ..., a^m, the measurement equations can be collected in matrix form, y = X a + n:

\[
\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{pmatrix} =
\begin{pmatrix} x_1^1 & x_1^2 & \cdots & x_1^m \\ x_2^1 & x_2^2 & \cdots & x_2^m \\ \vdots & \vdots & \ddots & \vdots \\ x_N^1 & x_N^2 & \cdots & x_N^m \end{pmatrix}
\begin{pmatrix} a^1 \\ a^2 \\ \vdots \\ a^m \end{pmatrix} +
\begin{pmatrix} n_1 \\ n_2 \\ \vdots \\ n_N \end{pmatrix} \qquad (10.23)
\]

The least squares estimate is then given by

\[
a = (X^T X)^{-1} X^T y. \qquad (10.28)
\]
If X^T X is ill-conditioned, numerical problems arise when inverting the matrix.
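A sketch of this point: instead of forming (X^T X)^{-1} explicitly, an orthogonalization-based solver such as numpy.linalg.lstsq can be used, which avoids the explicit inverse; the data below are synthetic:

```python
# Compare the explicit normal-equation solution with numpy's lstsq,
# which solves the least squares problem without forming (X^T X)^-1.
# Data are synthetic and noise-free.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
a_true = np.array([1.0, -2.0, 0.5])
y = X @ a_true

a_normal = np.linalg.inv(X.T @ X) @ X.T @ y      # normal equations (10.28)
a_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)  # orthogonalizing solver
print(np.allclose(a_normal, a_true), np.allclose(a_lstsq, a_true))
```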
The k-th estimation (up to the present, k measurements are available) is given as (see (10.28)):

\[
a_k = (X_k^T X_k)^{-1} X_k^T y_k. \qquad (10.29)
\]

The (k+1)-th estimation is:

\[
a_{k+1} = (X_{k+1}^T X_{k+1})^{-1} X_{k+1}^T y_{k+1}. \qquad (10.30)
\]

With the extension by the new measurement,

\[
X_{k+1} = \begin{pmatrix} X_k \\ x_{k+1}^T \end{pmatrix}, \qquad y_{k+1} = \begin{pmatrix} y_k \\ y_{k+1} \end{pmatrix},
\]

you obtain

\[
X_{k+1}^T y_{k+1} = X_k^T y_k + x_{k+1}\, y_{k+1}, \qquad (10.31)
\]

and therefore

\[
a_{k+1} = (X_{k+1}^T X_{k+1})^{-1} (X_k^T y_k + x_{k+1}\, y_{k+1}). \qquad (10.32)
\]

(10.29) can be transformed to

\[
X_k^T X_k\, a_k = X_k^T y_k. \qquad (10.33)
\]

This equation is inserted into (10.32):

\[
a_{k+1} = (X_{k+1}^T X_{k+1})^{-1} X_k^T X_k\, a_k + (X_{k+1}^T X_{k+1})^{-1} x_{k+1}\, y_{k+1} + a_k - a_k. \qquad (10.34)
\]

We now define the matrix S_k as

\[
S_k = (X_k^T X_k)^{-1} \qquad (10.35)
\]

and obtain

\[
a_{k+1} = a_k + S_{k+1} \left[ x_{k+1}\, y_{k+1} - (S_{k+1}^{-1} - S_k^{-1})\, a_k \right]. \qquad (10.36)
\]

S_{k+1} reads

\[
S_{k+1} = (X_k^T X_k + x_{k+1} x_{k+1}^T)^{-1} = (S_k^{-1} + x_{k+1} x_{k+1}^T)^{-1}, \qquad (10.37)
\]

therefore S_k^{-1} can be written as

\[
S_k^{-1} = S_{k+1}^{-1} - x_{k+1} x_{k+1}^T. \qquad (10.38)
\]

The estimation equation of the recursive regression is

\[
a_{k+1} = a_k + S_{k+1}\, x_{k+1} \underbrace{(y_{k+1} - x_{k+1}^T a_k)}_{\text{error}}, \qquad a_0 = 0, \qquad (10.39)
\]

with S_{k+1} given (because of the matrix inversion lemma) by

\[
S_{k+1} = S_k - \frac{1}{1 + x_{k+1}^T S_k x_{k+1}}\, S_k x_{k+1} x_{k+1}^T S_k, \qquad S_0 = \alpha I. \qquad (10.40)
\]
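The recursion (10.39)/(10.40) can be sketched as follows on synthetic data; the large initial matrix S_0 used here is a common practical choice that makes the influence of the start value a_0 = 0 vanish quickly:

```python
# Recursive regression: the estimate is updated with each new measurement
# instead of re-solving the batch problem. Data are synthetic, noise-free.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 2))
a_true = np.array([3.0, -1.0])
y = X @ a_true

a = np.zeros(2)                  # a_0 = 0
S = 1e6 * np.eye(2)              # large S_0: start value quickly forgotten
for xk, yk in zip(X, y):
    Sx = S @ xk
    S = S - np.outer(Sx, Sx) / (1.0 + xk @ Sx)   # covariance update (10.40)
    a = a + S @ xk * (yk - xk @ a)               # estimate update (10.39)

print(np.round(a, 6))
```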
The already introduced quadratic objective function is used here as well. A direct solution for p is not possible; rather, one obtains a nonlinear optimization problem which has to be solved with optimization methods. An optimization method is an algorithm which changes the sought parameters until the value of the objective function is as small as possible. Optimization methods, as briefly introduced in the following, can be used for different purposes. Once a model exists which represents the measured values well (for example after a parameter identification), one can use optimization methods e.g. for design improvement etc. (compare with the simulation procedure, use of the simulator). Out of the wide field of mathematical optimization methods, only the so-called hill-climbing methods shall be considered in the lecture. Different hill-climbing methods can be distinguished (Hoffmann and Hofmann, 1971):
- Search methods: only the objective function Q(\hat{p}) is evaluated.
- Gradient methods: the objective function Q(\hat{p}) and the derivative (the gradient) \partial Q / \partial \hat{p} are evaluated.
- Newton methods: the objective function Q(\hat{p}), the gradient \partial Q / \partial \hat{p}, and the second derivative \partial^2 Q / \partial \hat{p}^2 are evaluated.

In the following sections the search methods are briefly discussed. In order to understand these methods, one has to clarify the connection between the sought parameters and the objective function. For this reason we consider in Fig. 10.3 the representation of an objective function in a two-dimensional space with respect to the sought parameters. The lines with the same value of the objective function (level curves) can be e.g. ellipses. On such a level curve the value of the objective function remains the same independently of the combination of the parameters. In this example the minimum is at Q*.
10.6.1.1 Successive Variation of Variables

The method of the successive variation of variables starts at an initial point p^{(1)}. One varies p_1 (for constant p_2) until Q(\hat{p}) reaches a minimum. Then one varies p_2 (with the new, constant value for p_1) until the objective function reaches a minimum (see Fig. 10.4). Then again p_1 is varied, and so forth.

Figure 10.4: Successive variation of variables.

The new parameter vector is given by:

\[
p^{(j+1)} = p^{(j)} + \lambda_j\, v^{(j)}, \qquad j = 1, \dots, n. \qquad (10.44)
\]

The search direction v^{(j)} is determined through the choice of the coordinate system. The change of the objective function determines the step length \lambda_j along v^{(j)}.
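The procedure can be sketched with a crude grid-based line search along each coordinate; the objective function and step sizes below are made up for illustration:

```python
# Successive variation of variables (coordinate search): minimize Q along
# one coordinate at a time using a simple grid-based line search.
def Q(p):
    p1, p2 = p
    return (p1 - 1.0) ** 2 + 2.0 * (p2 + 0.5) ** 2

def line_min(p, axis, step=0.01, span=200):
    # vary only coordinate `axis` on a grid around the current point
    candidates = []
    for s in range(-span, span + 1):
        q = list(p)
        q[axis] += s * step
        candidates.append((Q(q), tuple(q)))
    return min(candidates)[1]

p = (0.0, 0.0)
for _ in range(5):            # alternate the p1-search and the p2-search
    p = line_min(p, 0)
    p = line_min(p, 1)
print(p)
```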
10.6.1.2 Simplex Methods

For the simplex method, at the beginning n + 1 points are determined (in Fig. 10.5 the points 1, 2, 3), defining an n-dimensional simplex.
"
Figure 10.5: Simplex method.

For these points the objective function is evaluated. The point with the worst objective function value is reflected at the centroid of the remaining n points (reflection). For this new parameter combination (point 4 in Fig. 10.5) the objective function is evaluated, and again the point with the worst objective function value is reflected, and so on. When using such optimization methods it is not guaranteed that the global optimum is always found.
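The reflection step can be sketched as follows on a made-up quadratic objective; full simplex methods add further rules (e.g. contraction) that are omitted in this sketch:

```python
# Basic simplex step: reflect the worst of the n+1 points at the centroid
# of the remaining n points. Objective and start simplex are made up.
def Q(p):
    return (p[0] - 2.0) ** 2 + (p[1] - 1.0) ** 2

simplex = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # n+1 = 3 starting points

for _ in range(10):
    simplex.sort(key=Q)                 # best first, worst last
    worst = simplex[-1]
    # centroid of the n remaining points
    cx = sum(p[0] for p in simplex[:-1]) / 2.0
    cy = sum(p[1] for p in simplex[:-1]) / 2.0
    reflected = (2.0 * cx - worst[0], 2.0 * cy - worst[1])
    if Q(reflected) < Q(worst):
        simplex[-1] = reflected         # accept the reflection
    else:
        break                           # pure reflection cannot improve

print(min(simplex, key=Q))
```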
"
125
10 Parameter Identication
In Fig. 10.6 such a case is depicted for the simplex method. Because of the structure of the level curves the simplex method remains in a local minimum; the global optimum is not found.

10.6.1.3 Nelder-Mead Method

The Nelder-Mead method comprises a modified simplex method. In addition to the reflection used in the simplex method, other possibilities of determining new parameter combinations are considered (see Fig. 10.7). In doing so, a search is performed in one direction as long as the objective function improves.
Figure 10.7: Nelder-Mead method.

With the Nelder-Mead method, 15 parameters of the compartment model of the introductory example are determined. In Fig. 10.8 the simulation results calculated with the optimal parameter set are compared to the measurements.
11 Summary
This chapter gives a summary of the course oriented along the simulation procedure (see Fig. 1.4) and Page (1991).
Figure 11.1: Basic concepts in systems theory (according to Page, 1991).

If a system component cannot be further decomposed, it is called a system element. These system elements have specific characteristics (attributes). Changes in these characteristics correspond to changes in the state variables. The set of all values of the state variables forms the state of the system. You can distinguish between the following systems:
- open system: at least one input or output
- closed system: no inputs or outputs
- static system: no time-dependent change of the state
- dynamic system: time-dependent change of the state
11.2 Modeling
Among other things, models can be classified according to their transformation and analysis method (see Fig. 11.2), according to their state transition (see Fig. 11.3), or according to the application of the model (see Fig. 11.4).
Figure 11.2: Classification of models according to the used transformation and analysis method (according to Page, 1991).
Classification of Dynamic Mathematical Models

As illustrated in the course, dynamic (graphical) mathematical models play an important role in simulation techniques. Therefore, they are discussed and classified in more detail in this section.
Figure 11.3: Classification of models according to the type of state transition (according to Page, 1991).
Figure 11.4: Classification of models according to their application (according to Page, 1991).
In time continuous systems the state variables are continuous functions of time. Their mathematical representation can be given by systems of ordinary differential equations of first order, by algebraic equation systems, or by a combination of both (DAE system). Examples, which have been discussed during the course, are the vertical movement of a motor vehicle and an electrical circuit. Discrete event systems are a special case of time discrete systems. The time steps during the process are determined by randomly occurring external events or by functions of the actual values of the state variables. The graphical mathematical representation is carried out by e.g. Petri nets. In the course we studied the examples of a soda machine and a traffic light circuit. In contrast to distributed systems, lumped systems include a finite number of states. The mathematical representation of distributed systems is given by partial differential equations, which comprise not only derivatives with respect to time but also with respect to the space coordinates. In the course, the example of a heat conductor has been studied.
In time invariant systems the system function is independent of time (autonomous system). Therefore, the model parameters are time independent as well.
In linear systems the superposition principle holds true: f (u1 + u2 ) = f (u1 ) + f (u2 ) (11.1)
The predator-prey model has been introduced as an example for a nonlinear model.
In contrast to dynamic models, the behavior of steady-state models is time independent. An example is a chemical reaction system.
The numerical methods treated in the course are listed in note form in this section:
equation systems
linear: Gauß elimination
11.4 Simulators
Simulation software can be distinguished according to different criteria:
- compilers or interpreters of a programming language which is suitable for the implementation of simulation models (e.g. Matlab)
- completely implemented models, where only the input parameters are freely selectable (e.g. flight simulator)
- integrated, interactive working environments (e.g. Simulink)
- universally usable software: e.g. compilers for general high-level programming languages
- simulation-specific software: software especially developed for the purpose of simulation (e.g. Simulink)
- software oriented along specific classes of problems and specific model types
- software oriented along the problem definition of one application domain

General programming languages: e.g. FORTRAN, C(++), PASCAL

Simulation packages: collections (libraries) of procedures and functions

Simulation languages:
- low-level simulation languages
- high-level simulation languages: implementation of models is eased
- system-oriented simulation languages: direct modeling of specific system classes
Figure 11.5: Structure of a simulation system (according to Page, 1991).

Simulation environments can be classified according to the following criteria and characteristics:
- with or without programming environment
- which model classes are supported
- complexity of the model library
- complexity of the numerical methods
- on-line or offline graphics
- possible simulation experiments
- open or closed structure (interfaces)
- carrier language: FORTRAN, C, ...
- compiler or interpreter
- platform: PC, VAX, UNIX, ...
- equation- or block-oriented
The differences between equation- and block-oriented simulators are illustrated by the following example.
Figure 11.6: Block-oriented representation of a simulation model.

Example: In Fig. 11.6, the block-oriented representation of a simulation model is given.
The equations of all model blocks are given by:

f1(x11, x12, y11) = 0   (11.2)
f2(x21, y21) = 0   (11.3)
f3(x31, y31, y32) = 0   (11.4)
f4(x41, x42, y41) = 0   (11.5)

The connections between the model blocks are described by:

x12 - y31 = 0   (11.6)
x21 - y11 = 0   (11.7)
x31 - y21 = 0   (11.8)
x42 - y32 = 0   (11.9)
An equation-oriented simulator solves (11.2) - (11.9) simultaneously. There is no block structure identifiable in the equations anymore. For this reason, an equation-oriented simulator has no graphical representation (input) of the model structure. A block-oriented simulator solves each block separately in a specific order. From (11.2) - (11.5) the equations of the block-oriented simulator can be derived:

y11 = g11(x11, x12)   (11.10)
y21 = g21(x21)   (11.11)
y31 = g31(x31)   (11.12)
y32 = g32(x31)   (11.13)
y41 = g41(x41, x42)   (11.14)
The known inputs of the system are: x11 and x41 . One possible solution strategy for the equations is the following:
1. Estimate x12.
2. Calculate y11, then y21, . . . , y31, y32.
3. Check x12 - y31 = ?
   a) = 0: calculate y41
   b) ≠ 0: set x12 = y31 and repeat step 2
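This strategy can be sketched as a fixed-point iteration on the tear variable x12; the concrete block functions g11, g21, g31 below are made-up stand-ins for the abstract blocks of Fig. 11.6:

```python
# Sketch of the block-oriented solution strategy: guess the tear variable
# x12, evaluate the blocks in sequence, and iterate until the connection
# equation x12 = y31 holds. The block functions are invented examples.
def g11(x11, x12): return 0.5 * (x11 + x12)   # block 1
def g21(x21):      return 0.8 * x21           # block 2
def g31(x31):      return x31 + 1.0           # block 3

x11 = 1.0          # known system input
x12 = 0.0          # step 1: initial estimate of the tear variable

for _ in range(100):
    y11 = g11(x11, x12)   # step 2: evaluate the blocks in order,
    y21 = g21(y11)        # using x21 = y11, x31 = y21 from the
    y31 = g31(y21)        # connection equations
    if abs(x12 - y31) < 1e-10:
        break             # step 3a: connection equation satisfied
    x12 = y31             # step 3b: update the estimate and repeat

print(x12)
```

With these (contractive) example blocks the iteration converges; for other block functions a plain substitution loop like this may diverge.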
- direct solution: (weighted) least squares method
- indirect solution using optimization methods
11.6 Use of Simulators
The major reasons for the use of simulators are:
- The system is not available.
- Experiments within the system are too dangerous.
- The experiment costs are too high.
- The time constants of the system are too large.
- The variables of interest are difficult to measure or not measurable at all.
- There are disturbances within the real system.
Figure 11.7: Potentials and problems of modeling and simulation (according to Page, 1991).
Bibliography
Åström, K., Elmqvist, H. and Mattsson, S. (1998). Evolution of continuous-time modeling and simulation, 12th European Simulation Multiconference, ESM'98, Manchester, pp. 1-10.

Bachmann, B. and Wiesmann, H. (2000). Objektorientierte Modellierung Physikalischer Systeme, Teil 15, at - Automatisierungstechnik 48: A57-A60.

Bauknecht, K., Kohlas, J. and Zehnder, C. (1976). Simulationstechnik, Springer, Berlin.

Beater, P. (2000). Objektorientierte Modellierung Physikalischer Systeme, Teil 10, at - Automatisierungstechnik 48: A37-A40.

Bennett, B. (1995). Simulation Fundamentals, Prentice Hall.

Bischoff, K. and Himmelblau, D. (1967). Fundamentals of Process Analysis and Simulation, number 1 in AIChE Continuing Education Series, American Institute of Chemical Engineers, New York.

Bossel, H. (1992). Modellbildung und Simulation, Vieweg & Sohn Verlagsgesellschaft mbH, Braunschweig.

Bratley, P., Fox, B. and Schrage, L. (1987). A Guide to Simulation, Springer, New York.

Breitenecker, F., Troch, I. and Kopacek, P. (eds) (1990). Simulationstechnik, Vieweg, Braunschweig.

Cellier, F. (1992). Simulation modelling formalism: Ordinary differential equations, in D. Atherton and P. Borne (eds), Concise Encyclopedia of Modelling & Simulation, Pergamon Press, Oxford, pp. 420-423.

Clauß, C., Schneider, A. and Schwarz, P. (2000a). Objektorientierte Modellierung Physikalischer Systeme, Teil 13, at - Automatisierungstechnik 48: A49-A52.

Clauß, C., Schneider, A. and Schwarz, P. (2000b). Objektorientierte Modellierung Physikalischer Systeme, Teil 14, at - Automatisierungstechnik 48: A53-A56.

Dahmen, W. (1994). Einführung in die numerische Mathematik (für Maschinenbauer), Institut für Geometrie und Praktische Mathematik, RWTH Aachen.

Engeln-Müllges, G. and Reutter, F. (1993). Numerik-Algorithmen mit Fortran 77-Programmen, BI Wissenschaftsverlag, Mannheim.
Fishman, G. (1978). Principles of Discrete Event Simulation, John Wiley, New York.

Föllinger, O. and Franke, D. (1982). Einführung in die Zustandsbeschreibung dynamischer Systeme, Oldenbourg, München.

Fritzson, P. (1998). Modelica - a language for equation-based physical modeling and high performance simulation, Applied Parallel Computing 1541: 149-160.

Fritzson, P. and Engelson, V. (1998). Modelica - a unified object-oriented language for system modeling and simulation, ECOOP'98 (the 12th European Conference on Object-Oriented Programming).

Gipser, M. (1999). Systemdynamik und Simulation, B.G. Teubner, Stuttgart.

Gordon, G. (1969). System Simulation, Prentice-Hall, Englewood Cliffs.

Hanisch, H.-M. (1992). Petri-Netze in der Verfahrenstechnik, Oldenbourg, München.

Hoffmann, U. and Hofmann, H. (1971). Einführung in die Optimierung, Verlag Chemie, Weinheim.

Hohmann, R. (1999). Methoden und Modelle der Kontinuierlichen Simulation, Shaker, Aachen.

Holzinger, M. (1996). Modellorientierte Parallelisierung von Modellen mit verteilten Parametern, Master's thesis, Technische Universität Wien.

Jain, R. (1991). The Art of Computer Systems Performance Analysis: Techniques for Experimental Design, Measurement, Simulation, and Modeling, John Wiley & Sons, New York.

Kiencke, U. (1997). Ereignisdiskrete Systeme, Oldenbourg, München.

Köcher, D., Matt, G., Oertel, C. and Schneeweiß, H. (1972). Einführung in die Simulationstechnik, Deutsche Gesellschaft für Operations Research.

Korn, G. and Wailt, J. (1983). Digitale Simulation kontinuierlicher Systeme, Oldenbourg, München.

Kramer, U. and Neculau, M. (1998). Simulationstechnik, Carl Hanser, München.

Lapidus, L. and Pinder, G. (1982). Numerical Solution of Partial Differential Equations in Science and Engineering, John Wiley & Sons, New York.

Law, A. and Kelton, W. (1991). Simulation Modeling and Analysis, McGraw Hill, New York.

Lehman, R. (1977). Computer Simulation and Modeling: An Introduction, Lawrence Erlbaum, Hillsdale.
Liebl, F. (1992). Simulation - Problemorientierte Einführung, Oldenbourg, München.

Lotka, A. J. (1925). Elements of Physical Biology, Williams and Wilkins, Baltimore.

Margolis, D. (1992). Simulation modelling formalism: Bond graphs, in D. Atherton and P. Borne (eds), Concise Encyclopedia of Modelling & Simulation, Pergamon Press, Oxford, pp. 415-420.

Marquardt, W. (1991). Dynamic process simulation - recent progress and future challenges, in W. Ray and Y. Arkun (eds), Chemical Process Control - CPC IV.

MathWorks (2000). Using MATLAB, Version 6, The MathWorks, Inc., Natick, USA.

MathWorks (2002). Simulink for model-based and system-level design. Product information online available at http://www.mathworks.com/ (viewed 25 March 2002).

Meadows, D. L., Meadows, D. H. and Zahn, E. (2000). Die Grenzen des Wachstums. Bericht des Club of Rome zur Lage der Menschheit, Deutsche Verlags-Anstalt.

Mezencev, R. (1992). Simulation languages, in D. Atherton and P. Borne (eds), Concise Encyclopedia of Modelling & Simulation, Pergamon Press, Oxford, pp. 409-415.

Minsky, M. (1965). Matter, mind and models, in A. Kalenich (ed.), Proceedings International Federation of Information Processing Congress, Spartan Books, Washington, pp. 45-49.

Möller, D. (1992). Modellbildung, Simulation und Identifikation dynamischer Systeme, Springer, Berlin.

Nidumolu, S., Menon, N. and Zeigler, B. (1998). Object-oriented business process modeling and simulation: A discrete event system specification framework, Simulation Practice and Theory 6: 533-571.

Norton, J. (1997). An Introduction to Identification, Academic Press.

Odum, H. and Odum, E. (2000). Modeling for All Scales: An Introduction to System Simulation, Academic Press, San Diego.

Otter, M. (1999a). Objektorientierte Modellierung Physikalischer Systeme, Teil 1, at - Automatisierungstechnik 47: A1-A4.

Otter, M. (1999b). Objektorientierte Modellierung Physikalischer Systeme, Teil 2, at - Automatisierungstechnik 47: A5-A8.

Otter, M. (1999c). Objektorientierte Modellierung Physikalischer Systeme, Teil 3, at - Automatisierungstechnik 47: A9-A12.

Otter, M. (1999d). Objektorientierte Modellierung Physikalischer Systeme, Teil 4, at - Automatisierungstechnik 47: A13-A16.
Otter, M. (2000). Objektorientierte Modellierung Physikalischer Systeme, Teil 17, at - Automatisierungstechnik 48: A65-A68.

Otter, M. and Bachmann, B. (1999a). Objektorientierte Modellierung Physikalischer Systeme, Teil 5, at - Automatisierungstechnik 47: A17-A20.

Otter, M. and Bachmann, B. (1999b). Objektorientierte Modellierung Physikalischer Systeme, Teil 6, at - Automatisierungstechnik 47: A21-A24.

Otter, M., Elmqvist, H. and Mattson, S. (1999). Objektorientierte Modellierung Physikalischer Systeme, Teil 8, at - Automatisierungstechnik 47: A29-A32.

Otter, M., Elmqvist, H. and Mattsson, S. (1999). Objektorientierte Modellierung Physikalischer Systeme, Teil 7, at - Automatisierungstechnik 47: A25-A28.

Otter, M. and Schlegel, M. (1999). Objektorientierte Modellierung Physikalischer Systeme, Teil 9, at - Automatisierungstechnik 47: A33-A36.

Page, B. (1991). Diskrete Simulation. Eine Einführung mit Modula-2, Springer, Berlin.

Piehler, J. and Zschiesche, H.-U. (1976). Simulationsmethoden, BSB B.G. Teubner, Leipzig.

Press, W. H., Flannery, B. P., Teukolsky, S. A. and Vetterling, W. T. (1990). Numerical Recipes - The Art of Scientific Computing, Cambridge University Press, Cambridge.

Roberts, N. et al. (1994). Introduction to Computer Simulation: A System Dynamics Modeling Approach, Productivity Press.

Sadoun, B. (2000). Applied system simulation: A review study, Information Sciences 124: 173-192.

Savic, D. and Savic, D. (1989). BASIC Technical Systems Simulation, Butterworth & Co, London.

Schiesser, W. (1991). The Numerical Method of Lines, Academic Press, San Diego.

Schneider, R. (1999). Untersuchung eines adaptiven prädiktiven Regelungsverfahrens zur Optimierung von bioverfahrenstechnischen Prozessen, Fortschritt-Berichte VDI, Reihe 8, Nr. 855, VDI Verlag, Düsseldorf.

Schöne, A. (1973). Simulation Technischer Systeme, Carl Hanser.

Schumann, R. (1998). CAE von Regelsystemen, atp - Automatisierungstechnische Praxis 40: 48-63.

Schwarze, G. (1990). Digitale Simulation, Akademie-Verlag, Berlin.

Shannon, R. (1975). Systems Simulation: The Art and Science, Prentice Hall, Englewood Cliffs.
Tummescheit, H. (2000a). Objektorientierte Modellierung Physikalischer Systeme, Teil 11, at - Automatisierungstechnik 48: A41-A44.

Tummescheit, H. (2000b). Objektorientierte Modellierung Physikalischer Systeme, Teil 12, at - Automatisierungstechnik 48: A45-A48.

Tummescheit, H. and Tiller, M. (2000). Objektorientierte Modellierung Physikalischer Systeme, Teil 16, at - Automatisierungstechnik 48: A61-A64.

Tzafestas, S. (1992). Simulation modelling formalism: Partial differential equations, in D. Atherton and P. Borne (eds), Concise Encyclopedia of Modelling & Simulation, Pergamon Press, Oxford, pp. 423-428.

Verein Deutscher Ingenieure (1996). VDI-Richtlinie 3633: Simulation von Logistik-, Materialfluss- und Produktionssystemen - Begriffsdefinitionen.

Volterra, V. (1926). Variations and fluctuations of the number of individuals in animal species living together, in R. N. Chapman (ed.), Animal Ecology, McGraw-Hill, New York, pp. 409-448.

Watzdorf, R., Allgöwer, F., Helget, A., Marquardt, W. and Gilles, E. (1994). Dynamische Simulation verfahrenstechnischer Prozesse und Anlagen - Ein Vergleich von Werkzeugen, in G. Kampe and M. Zeitz (eds), Simulationstechnik, 9. ASIM-Symposium in Stuttgart, pp. 83-88.

Wöllhaf, K. (1995). Objektorientierte Modellierung und Simulation verfahrenstechnischer Mehrproduktanlagen, Shaker, Aachen.

Woolfson, M. and Pert, G. (1999). An Introduction to Computer Simulation, Oxford University Press, New York.

Zeigler, B. (1984a). Multifacetted Modelling and Discrete Event Simulation, Academic Press, chapter Multifacetted Modelling: Motivation and Perspective, pp. 3-19.

Zeigler, B. (1984b). Multifacetted Modelling and Discrete Event Simulation, Academic Press, chapter System Architectures for Multifacetted Modelling, pp. 335-371.

Zeitz, M. (1999). Simulationstechnik. Lecture notes, Institut für Systemdynamik und Regelungstechnik, Universität Stuttgart.