
12 Vector Solutions to Ordinary Differential Equations

Last time we learned that vector and matrix representations can make your life easier, specifically
when you implement Euler integration for a complicated system involving many states. But how
do we use vector and matrix representations to find analytic solutions?
Let's say we have a spring-mass-damper system like the one with three masses in lecture that I created by connecting portions of the slinky together.
How many states would we expect? We should have six states total: three for the positions of the masses and three for their velocities. This means that the vector and matrix representation would include a state vector that has six components (the six states) and a matrix that has six rows and six columns. What type of solution to the differential equation ẇ = Aw do we expect? Hopefully, you are expecting an exponential solution!

w(t) = e^{ht} w_0

Keep in mind that this exponential solution is a vector because w_0 is a vector. However, the exponential term itself is still just a number for any given t. If I plug this exponential into ẇ = Aw, I get

ẇ = Aw  ⇒  d/dt (e^{ht} w_0) = A e^{ht} w_0

If I then differentiate, I get

h e^{ht} w_0 = A e^{ht} w_0
Note that h is a number, e^{ht} is a number, w_0 is a vector, A is a matrix, e^{ht} is again a number, and w_0 is again a vector:

   h      e^{ht}    w_0    =    A      e^{ht}    w_0
(number) (number) (vector)   (matrix) (number) (vector)

If I divide both sides by the exponential term (which is nonzero, so I am allowed to do so), I see that

h w_0 = A w_0.

This is a special situation where the matrix A multiplying the vector w_0 is the same as just multiplying that vector by the number h! Such a vector is called an eigenvector of A, and h is called an eigenvalue of A. If there are n states, there will always be n eigenvector-eigenvalue pairs (counting repeated eigenvalues).
The nice thing about this is that you can use MATLAB or Python to calculate eigenvector-eigenvalue pairs easily!
Let's go back to our spring-mass-damper system and assume that the damping is reasonably high, giving us an A matrix of

A = [  0   1 ]
    [ -1  -3 ]

Figure 19: spring-mass-damper system
To use an exponential solution, we need to solve A w_0 = h w_0, so I use MATLAB to compute the eigenvectors and eigenvalues. (See the lecture video for an example of this.) MATLAB returns the eigenvector-eigenvalue pairs

w_{0,1} = [ -0.4; 1 ],  h = -2.6

w_{0,2} = [ -2.6; 1 ],  h = -0.4

(Note that I have scaled these to make the second component equal to one in each case. I can do this because any scalar multiple of an eigenvector is also an eigenvector.)
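As a sketch of that computation in Python (assuming NumPy is available; MATLAB's `eig` behaves the same way), we can compute the pairs and rescale each eigenvector so its second component is one, as in the notes:

```python
import numpy as np

# System matrix for the heavily damped spring-mass-damper example.
A = np.array([[0.0, 1.0],
              [-1.0, -3.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose COLUMNS
# are the corresponding eigenvectors.
eigvals, eigvecs = np.linalg.eig(A)

for h, v in zip(eigvals, eigvecs.T):
    v_scaled = v / v[1]  # rescale so the second component equals 1
    print(f"h = {h:.4f}, w0 = {v_scaled}")
    # Sanity check: an eigenvector satisfies A @ w0 = h * w0.
    assert np.allclose(A @ v_scaled, h * v_scaled)
```

The exact eigenvalues are (-3 ± √5)/2, which round to the -2.6 and -0.4 quoted above.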
What does this tell us? It says that if

w_0 = w_{0,1} = [ -0.4; 1 ]  ⇒  w(t) = e^{-2.6t} [ -0.4; 1 ]

and if

w_0 = w_{0,2} = [ -2.6; 1 ]  ⇒  w(t) = e^{-0.4t} [ -2.6; 1 ]

Moreover, if w_0 is three times one of these vectors, then I would just multiply that exponential solution by three because of superposition.
Lastly, if

w_0 = [ -3; 2 ],

that is, if w_0 is the sum of the two eigenvectors, then the solution is the sum of the two exponential solutions!

w_0 = [ -3; 2 ] = w_{0,1} + w_{0,2}  ⇒  w(t) = e^{-2.6t} [ -0.4; 1 ] + e^{-0.4t} [ -2.6; 1 ]

This example suggests that we should be able to compute any solution of a differential equation
from the exponential solutions for each eigenvector-eigenvalue pair, which is in fact true!
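A small NumPy sketch of that claim, using the example A and the initial condition w_0 = [-3; 2] from above: expand w_0 in the eigenvector basis by solving V c = w_0, then sum the exponential solutions.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, -3.0]])
w0 = np.array([-3.0, 2.0])  # initial condition from the example

eigvals, V = np.linalg.eig(A)   # columns of V are the eigenvectors
c = np.linalg.solve(V, w0)      # coefficients of w0 in the eigenvector basis

def w(t):
    # Superposition of exponential solutions: sum_i c_i * e^{h_i t} * v_i
    return (V * (c * np.exp(eigvals * t))).sum(axis=1)

print(w(0.0))  # reproduces w0
```

At t = 0 the exponentials are all one, so the sum collapses back to w_0; for later t each eigenvector's coefficient simply decays at its own rate.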
What should you remember from today? Remember that vector and matrix representations are useful for computing analytical solutions, as well as Euler integration, for complex systems with many states. And remember that eigenvectors and eigenvalues provide the means by which we compute those solutions as exponential solutions.

Superposition in Vector Linear ODEs with Complex Eigenvalues
On the previous page, and in the lecture video, I talk about superposition of solutions to ODEs when the eigenvalues are real-valued, but I don't say anything about what happens when the eigenvalues are complex. The amazing thing is that nothing changes! Let me do an example to convince you of why this makes sense.
Let's take a spring-mass system as an example with k = m = 1. We know that ẍ = -x and that in first-order form this is ẋ = v, v̇ = -x. So we can write it as

ẇ = Aw,  where  A = [  0  1 ]
                    [ -1  0 ].

The eigenvalue/eigenvector pairs are

r_1 = j,  w_1 = [ -j; 1 ]   and   r_2 = -j,  w_2 = [ j; 1 ].

We know that we therefore get two solutions:

x_1(t) = e^{jt} w_1   and   x_2(t) = e^{-jt} w_2.

Moreover, we also know from Euler's formula that

e^{jt} = cos(t) + j sin(t)   and   e^{-jt} = cos(t) - j sin(t).
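Euler's formula is easy to spot-check numerically with Python's standard-library `cmath` module (any sample t works; t = 0.7 below is arbitrary):

```python
import cmath
import math

t = 0.7  # arbitrary sample time
lhs = cmath.exp(1j * t)                   # e^{jt}
rhs = complex(math.cos(t), math.sin(t))   # cos(t) + j sin(t)
print(abs(lhs - rhs))  # difference is at the level of floating-point roundoff
```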

How can we now use superposition to construct real-valued solutions from x_1(t) and x_2(t)? By superposition, we can add and scale x_1(t) and x_2(t), so consider the following quantities:

(j/2) (x_1(t) - x_2(t))   and   (1/2) (x_1(t) + x_2(t)).

The expression (j/2)(x_1(t) - x_2(t)) simplifies to the vector [ cos(t); -sin(t) ], while the expression (1/2)(x_1(t) + x_2(t)) simplifies to [ sin(t); cos(t) ]. Both of these are real-valued and, again by superposition, are solutions to the ODE. Just as importantly, they are what we should expect from plugging the exponential solution into the second-order system ẍ = -x.
Example: Now assume that w(0) = [ 1; 0 ]. What is the analytic solution for w(t)? It must be of the form

w(t) = a_1 [ cos(t); -sin(t) ] + a_2 [ sin(t); cos(t) ],

but what are the constants a_1 and a_2? At t = 0, this simplifies to a_1 [ 1; 0 ] + a_2 [ 0; 1 ], so [ a_1; a_2 ] = [ 1; 0 ]. Therefore, we get that a_1 = 1 and a_2 = 0, and

w(t) = [ cos(t); -sin(t) ].
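Here is a small NumPy sketch that checks this construction numerically: the two complex exponential solutions are combined exactly as above, and the imaginary parts vanish.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# Eigenpairs from the notes: r = ±j with eigenvectors [∓j; 1].
r1, r2 = 1j, -1j
w1 = np.array([-1j, 1.0])
w2 = np.array([1j, 1.0])

t = np.linspace(0.0, 2 * np.pi, 200)
x1 = np.exp(r1 * t)[:, None] * w1   # complex exponential solutions,
x2 = np.exp(r2 * t)[:, None] * w2   # one row per sample time

real1 = (1j / 2) * (x1 - x2)        # should equal [cos(t); -sin(t)]
real2 = 0.5 * (x1 + x2)             # should equal [sin(t);  cos(t)]

print(np.abs(real1.imag).max(), np.abs(real2.imag).max())  # both ~0
```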

Why We Use Exponential Solutions
A very reasonable question that comes up is: what is so special about e^{rt}? Why not use a different number than e, like 2.5 or π or something else? And why an exponential? Why not arctan(rt) or sin(rt)? There are a couple of ways of understanding answers to this question, but I want to focus on one in particular: the existence and uniqueness of solutions to ordinary differential equations.
Mathematicians can tell us that if we have an ordinary differential equation

ẋ = f(x, t)   and   x(0) = x_0,

the solution both exists and is unique if f satisfies certain properties. (Roughly speaking, these
properties are that the first derivative of f with respect to x and t exists; however, the actual
conditions are more general.)
Our case of ẋ = Ax (and later ẋ = Ax + Bu(t)) with x(0) = x_0 always satisfies these conditions because ∂(Ax)/∂x = A (so the derivative of f with respect to x exists and is A). We therefore can conclude both that a solution to ẋ = Ax exists and that it is unique: no other solution can exist once we have found one. Why does this matter?
We know, from the previous sections, that we can construct solutions to ẋ = Ax with x(0) = x_0
using exponential solutions. In fact, by solving for eigenvalues and eigenvectors, we can get a
solution for any choice of initial condition. And once we have that solution, there cannot be
another solution with the same initial condition. So no other choice of function could work, unless
it was the same as an exponential function.
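A small NumPy sketch makes uniqueness concrete (assuming A is diagonalizable, as in our examples): the eigenvector-built exponential solution and a fine Euler integration, two very different procedures, land on the same trajectory, because there is only one solution for them to find.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
x0 = np.array([1.0, 0.0])
T = 1.0

# Exponential solution at time T, built from eigenvalues/eigenvectors.
eigvals, V = np.linalg.eig(A)
c = np.linalg.solve(V, x0.astype(complex))
x_exact = (V @ (c * np.exp(eigvals * T))).real  # equals [cos(1), -sin(1)]

# Euler integration with a small step converges to the same point.
x = x0.copy()
dt = 1e-5
for _ in range(int(T / dt)):
    x = x + dt * (A @ x)

print(np.abs(x - x_exact).max())  # shrinks toward 0 as dt shrinks
```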

