
Chapter 4

Applications of diagonalisation
Reading
Simon and Blume is particularly good on systems of differential equations, and the discussion in this chapter of the guide follows their approach.

Leon, S.J., Linear Algebra with Applications. Chapter 6, Sections 6.2, 6.3, 6.5 and 6.6.
Simon, C.P. and Blume, L., Mathematics for Economists. Chapter 23, Sections 23.2, 23.8; Chapter 25, Sections 25.1, 25.2, 25.4.
Ostaszewski, A., Mathematics in Economics. Chapter 7, Sections 7.1, 7.3, 7.5, 7.7, 7.8.

Powers of matrices, systems of difference and differential equations, quadratic forms

Introduction
In this chapter we apply the diagonalisation techniques described in the previous chapter. We will look at applications to coupled systems of difference and differential equations, and to quadratic forms.

Powers of matrices
For a positive integer n, the nth power of a matrix A is simply

A^n = A A \cdots A   (n times).

For example, A^2 = AA and A^4 = AAAA. It is often useful, as we shall see in this chapter, to determine A^n for a general integer n. Diagonalisation helps here. If we can write P^{-1}AP = D, then A = PDP^{-1} and so

A^n = (PDP^{-1})(PDP^{-1}) \cdots (PDP^{-1})   (n times)
    = PD(P^{-1}P)D(P^{-1}P) \cdots D(P^{-1}P)DP^{-1}
    = PDIDI \cdots DIDP^{-1}
    = PDD \cdots DP^{-1}   (n factors of D)
    = PD^nP^{-1}.

The product PD^nP^{-1} is easy to compute, since D^n is simply the diagonal matrix whose entries are the nth powers of those of D. We give an illustrative example using a 2 × 2 matrix, but you should be able to carry out the procedure for 3 × 3 matrices as well.

Example: Suppose that

A = \begin{pmatrix} 1 & 4 \\ 1/2 & 0 \end{pmatrix}.

The characteristic polynomial |A − λI| is (check this!) λ^2 − λ − 2 = (λ − 2)(λ + 1). So the eigenvalues are −1 and 2. An eigenvector for −1 is given by

2x_1 + 4x_2 = 0
(1/2)x_1 + x_2 = 0,

so we may take (2, −1)^T. Eigenvectors for 2 are given by

−x_1 + 4x_2 = 0
(1/2)x_1 − 2x_2 = 0,

so we may take (4, 1)^T. Let P be the matrix whose columns are these eigenvectors. Then

P = \begin{pmatrix} 2 & 4 \\ -1 & 1 \end{pmatrix}.

The inverse is

P^{-1} = \frac{1}{6} \begin{pmatrix} 1 & -4 \\ 1 & 2 \end{pmatrix}.

(Check!) We have P^{-1}AP = D = diag(−1, 2). The nth power of the matrix A is given by

A^n = PD^nP^{-1} = \begin{pmatrix} 2 & 4 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} (-1)^n & 0 \\ 0 & 2^n \end{pmatrix} \frac{1}{6} \begin{pmatrix} 1 & -4 \\ 1 & 2 \end{pmatrix}
    = \frac{1}{6} \begin{pmatrix} 2(-1)^n + 4(2^n) & -8(-1)^n + 8(2^n) \\ -(-1)^n + 2^n & 4(-1)^n + 2(2^n) \end{pmatrix}.
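This calculation is easy to check numerically. The following is an illustrative sketch (not part of the original guide) assuming NumPy is available; it compares PD^nP^{-1} with the direct matrix power and with the closed form above.

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [0.5, 0.0]])

# Diagonalise: columns of P are eigenvectors, the array eigvals holds the eigenvalues.
eigvals, P = np.linalg.eig(A)

n = 6
# A^n computed via P D^n P^{-1} ...
An_diag = P @ np.diag(eigvals**n) @ np.linalg.inv(P)
# ... and directly, for comparison.
An_direct = np.linalg.matrix_power(A, n)
print(np.allclose(An_diag, An_direct))   # True

# Closed form from the text.
closed = (1/6) * np.array([[2*(-1)**n + 4*2**n, -8*(-1)**n + 8*2**n],
                           [  -(-1)**n + 2**n,   4*(-1)**n + 2*2**n]])
print(np.allclose(closed, An_direct))    # True
```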

Systems of difference equations


Difference equations: revision
Recall that a difference equation is an equation linking the terms of a sequence to previous terms. For example, x_{t+1} = 5x_t − 1 is a first-order difference equation for the sequence x_t. (It is said to be first-order because the relationship expressing x_{t+1} involves only the previous term.) Difference equations are also often referred to as recurrence equations. One very simple result we will need to recall from previous work on difference equations (such as in the Mathematics for economists subject) is that the solution to the difference equation x_{t+1} = ax_t is simply x_t = a^t x_0, where x_0 is the first term of the sequence. (We assume that the sequence is x_0, x_1, x_2, ... rather than x_1, x_2, ....)

Systems of difference equations


Suppose three sequences x_t, y_t and z_t satisfy x_0 = 12, y_0 = 6, z_0 = 6 and are related, for t ≥ 0, as follows:

x_{t+1} = 5x_t + 4z_t    (4.1)
y_{t+1} = 5y_t + 4z_t    (4.2)
z_{t+1} = 4x_t + 4y_t + 9z_t.    (4.3)

We cannot directly solve equation (4.1) for x_t, since to do so we would need to know z_t. On the other hand, we cannot work out z_t directly from equation (4.3), because to do so we would need to know x_t and y_t! It seems impossible, perhaps, but there are ways to proceed. Note that this (coupled) system of difference equations can be expressed as

\begin{pmatrix} x_{t+1} \\ y_{t+1} \\ z_{t+1} \end{pmatrix} = \begin{pmatrix} 5 & 0 & 4 \\ 0 & 5 & 4 \\ 4 & 4 & 9 \end{pmatrix} \begin{pmatrix} x_t \\ y_t \\ z_t \end{pmatrix}.

That is, X_{t+1} = AX_t, where

X_t = \begin{pmatrix} x_t \\ y_t \\ z_t \end{pmatrix}, \quad A = \begin{pmatrix} 5 & 0 & 4 \\ 0 & 5 & 4 \\ 4 & 4 & 9 \end{pmatrix}.

The general system we shall consider will take the form X_{t+1} = AX_t, where A is an n × n square matrix. We shall concentrate on 3 × 3 and 2 × 2 systems, though the method is applicable to larger values of n.

Solving by change of variable


We can use diagonalisation as the key to a general method for solving systems of difference equations. Given a system X_{t+1} = AX_t, in which A is diagonalisable, we perform a change of variable, as follows. Suppose that P^{-1}AP = D (where D is diagonal) and let X_t = PZ_t (or, equivalently, the new variable vector Z_t is Z_t = P^{-1}X_t). Then the equation X_{t+1} = AX_t becomes PZ_{t+1} = APZ_t,

which means that

Z_{t+1} = P^{-1}APZ_t = DZ_t,

which, since D is diagonal, is very easy to solve for Z_t. To find X_t we then use the fact that X_t = PZ_t.

Example: We find the sequences x_t, y_t, z_t such that

x_{t+1} = 5x_t + 4z_t
y_{t+1} = 5y_t + 4z_t
z_{t+1} = 4x_t + 4y_t + 9z_t

and x_0 = 12, y_0 = 6, z_0 = 6. (This is the problem described above.) In matrix form (as we have seen) this system is X_{t+1} = AX_t where

A = \begin{pmatrix} 5 & 0 & 4 \\ 0 & 5 & 4 \\ 4 & 4 & 9 \end{pmatrix}.

To use the technique, we need to diagonalise A. You should work through this! I'll omit my workings here, but I found that if

P = \begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & 1 \\ -1 & 0 & 2 \end{pmatrix},

then P^{-1}AP = D = diag(1, 5, 13). Now let

Z_t = \begin{pmatrix} u_t \\ v_t \\ w_t \end{pmatrix}

be given by X_t = PZ_t. Then the equation X_{t+1} = AX_t gives rise (as explained above) to Z_{t+1} = DZ_t. That is,

\begin{pmatrix} u_{t+1} \\ v_{t+1} \\ w_{t+1} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 13 \end{pmatrix} \begin{pmatrix} u_t \\ v_t \\ w_t \end{pmatrix},

so we have the following system for the new sequences u_t, v_t, w_t:

u_{t+1} = u_t
v_{t+1} = 5v_t
w_{t+1} = 13w_t.

This is very easy to solve: each equation involves only one sequence, so we have uncoupled the equations. We have, for all t,

u_t = u_0, \quad v_t = 5^t v_0, \quad w_t = 13^t w_0.

We have not yet solved the original problem, however, since we need to find x_t, y_t, z_t. We have

X_t = \begin{pmatrix} x_t \\ y_t \\ z_t \end{pmatrix} = PZ_t = \begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & 1 \\ -1 & 0 & 2 \end{pmatrix} \begin{pmatrix} u_t \\ v_t \\ w_t \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & 1 \\ -1 & 0 & 2 \end{pmatrix} \begin{pmatrix} u_0 \\ 5^t v_0 \\ 13^t w_0 \end{pmatrix}.

But we have also to find out what u_0, v_0, w_0 are. These are not given in the problem, but x_0, y_0, z_0 are, and we know that

\begin{pmatrix} x_0 \\ y_0 \\ z_0 \end{pmatrix} = P \begin{pmatrix} u_0 \\ v_0 \\ w_0 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & 1 \\ -1 & 0 & 2 \end{pmatrix} \begin{pmatrix} u_0 \\ v_0 \\ w_0 \end{pmatrix}.

To find u_0, v_0, w_0 we can either solve the linear system

P \begin{pmatrix} u_0 \\ v_0 \\ w_0 \end{pmatrix} = \begin{pmatrix} x_0 \\ y_0 \\ z_0 \end{pmatrix} = \begin{pmatrix} 12 \\ 6 \\ 6 \end{pmatrix}

using row operations, or we can (though it involves more work) find out what P^{-1} is and use the fact that

\begin{pmatrix} u_0 \\ v_0 \\ w_0 \end{pmatrix} = P^{-1} \begin{pmatrix} x_0 \\ y_0 \\ z_0 \end{pmatrix} = P^{-1} \begin{pmatrix} 12 \\ 6 \\ 6 \end{pmatrix}.

Either way (and the working is again omitted), we find

\begin{pmatrix} u_0 \\ v_0 \\ w_0 \end{pmatrix} = \begin{pmatrix} 4 \\ 3 \\ 5 \end{pmatrix}.

Returning then to the general solution to the system, we obtain

\begin{pmatrix} x_t \\ y_t \\ z_t \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & 1 \\ -1 & 0 & 2 \end{pmatrix} \begin{pmatrix} 4 \\ 3(5^t) \\ 5(13^t) \end{pmatrix} = \begin{pmatrix} 4 + 3(5^t) + 5(13^t) \\ 4 - 3(5^t) + 5(13^t) \\ -4 + 10(13^t) \end{pmatrix}.

So the final answer is that the sequences are:

x_t = 4 + 3(5^t) + 5(13^t), \quad y_t = 4 - 3(5^t) + 5(13^t), \quad z_t = -4 + 10(13^t).

Activity 4.1 Perform the diagonalisation calculations required for the example just given.

As this example demonstrates, solving a system of difference equations involves a lot of work, but the good news is that it is just a matter of going through a definite (if time-consuming) procedure.
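The procedure is also mechanical enough to script. The sketch below is illustrative only (it is not part of the guide and assumes NumPy); it carries out the change of variable for the example just solved and reproduces x_t = 4 + 3(5^t) + 5(13^t), and so on.

```python
import numpy as np

A = np.array([[5, 0, 4],
              [0, 5, 4],
              [4, 4, 9]], dtype=float)
X0 = np.array([12, 6, 6], dtype=float)

# Diagonalise A: columns of P are eigenvectors, lam the eigenvalues.
lam, P = np.linalg.eig(A)

# Change of variable: Z_t = P^{-1} X_t, so Z_0 = P^{-1} X_0 and Z_t = lam^t * Z_0.
Z0 = np.linalg.solve(P, X0)

def X(t):
    """X_t recovered from the uncoupled solution Z_t."""
    return P @ (lam**t * Z0)

for t in range(4):
    closed = np.array([4 + 3*5**t + 5*13**t,
                       4 - 3*5**t + 5*13**t,
                       -4 + 10*13**t])
    print(t, np.allclose(X(t), closed))   # True for each t
```

Note that the scaling of the eigenvectors returned by the solver does not matter: any rescaling of the columns of P is cancelled when Z_0 = P^{-1}X_0 is computed.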

Solving using matrix powers


An alternative approach is to notice that if X_{t+1} = AX_t, then X_t = A^t X_0.

This solution can be determined explicitly if we can find the tth power A^t of the matrix A. As described above, this can be done using diagonalisation of A.

Example: We solve the system of the above example using matrix powers. The system is X_{t+1} = AX_t where

A = \begin{pmatrix} 5 & 0 & 4 \\ 0 & 5 & 4 \\ 4 & 4 & 9 \end{pmatrix}

and where x_0 = 12, y_0 = 6, z_0 = 6. So the solution is X_t = A^t X_0 = A^t (12, 6, 6)^T. We have seen how A can be diagonalised: with

P = \begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & 1 \\ -1 & 0 & 2 \end{pmatrix},

we have P^{-1}AP = D = diag(1, 5, 13). So A^t = PD^tP^{-1}. Now, as you can calculate (the details are omitted here),

P^{-1} = \begin{pmatrix} 1/3 & 1/3 & -1/3 \\ 1/2 & -1/2 & 0 \\ 1/6 & 1/6 & 1/3 \end{pmatrix},

so (doing the multiplication, with the details again omitted),

X_t = A^t X_0 = PD^tP^{-1}X_0 = P \begin{pmatrix} 1 & 0 & 0 \\ 0 & 5^t & 0 \\ 0 & 0 & 13^t \end{pmatrix} P^{-1} \begin{pmatrix} 12 \\ 6 \\ 6 \end{pmatrix} = \begin{pmatrix} 4 + 3(5^t) + 5(13^t) \\ 4 - 3(5^t) + 5(13^t) \\ -4 + 10(13^t) \end{pmatrix},

which is (of course) the same answer as we obtained using the previous method.

Activity 4.2 Check the calculations omitted in this example.
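The omitted calculations are also easy to check numerically. A minimal sketch (not part of the guide, assuming NumPy) that forms A^t directly and applies it to X_0:

```python
import numpy as np

A = np.array([[5, 0, 4],
              [0, 5, 4],
              [4, 4, 9]])
X0 = np.array([12, 6, 6])

# X_t = A^t X_0, with A^t formed by np.linalg.matrix_power.
for t in range(4):
    Xt = np.linalg.matrix_power(A, t) @ X0
    print(t, Xt)   # e.g. t = 1 gives (84, 54, 126), matching the closed form
```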

Systems of differential equations


Population dynamics
Suppose that y(t) is the size of a population of a species of animal at time t. (The time is measured with respect to some reference time, t = 0, and the size need not be an integer, since it may, for example, measure the population in thousands.) Modelling the population as a continuous-time, differentiable, function, we denote the derivative dy/dt by ẏ(t). If the population has a constant growth rate, r (which will equal the difference between the birth and death rates), then we have the Malthus equation ẏ = ry, with solution y(t) = e^{rt} y(0).

This model is over-simplistic. More realistically, as the population increases, growth-inhibiting factors come into action. (These might arise, for instance, from limitations on space and scarcity of natural resources.) So we should modify the above model. We may do this by assuming that the growth rate ẏ/y is not constant, but is a decreasing function of y. The simplest type of decreasing function is the linear one, a − by, for a, b > 0. We have ẏ = (a − by)y, known as the logistic model of population growth. This equation can easily be solved: it is a separable first-order differential equation, and we can write

\frac{dy}{y(a - by)} = dt.

Using partial fractions to integrate the left-hand side, we obtain

t + c = \int \left( \frac{1/a}{y} + \frac{b/a}{a - by} \right) dy = \frac{1}{a} \ln \frac{y}{a - by},

assuming that y(t) ≠ a/b for any t. Simplifying this, we see that either

1. y(t) = a/b for all t, or
2. for some constant k ≠ 0,

y(t) = \frac{a}{b + k e^{-at}}.

Activity 4.3 Show why either 1. or 2. holds.

Note that, in the second case, the constant k is determined by the initial population y(0) but that, whatever this is, y(t) → a/b as t → ∞. We refer to the constant solution y(t) = a/b as a steady-state solution. The fact that every solution tends towards this in time means that it is globally asymptotically stable (more on this later ...).
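The convergence to a/b is easy to see numerically. The following is an illustrative sketch (not part of the guide) assuming NumPy and SciPy; the values a = 2, b = 1 and y(0) = 0.1 are arbitrary choices for the demonstration.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b = 2.0, 1.0        # hypothetical parameters
y0 = 0.1               # hypothetical initial population

# The logistic equation: dy/dt = (a - b*y) * y.
sol = solve_ivp(lambda t, y: (a - b*y) * y, (0, 10), [y0],
                dense_output=True, rtol=1e-8, atol=1e-10)

# Closed form: y(t) = a / (b + k e^{-a t}), with k fixed by y(0).
k = a / y0 - b
t = np.linspace(0, 10, 5)
analytic = a / (b + k * np.exp(-a * t))

print(np.allclose(sol.sol(t)[0], analytic))   # True (to solver accuracy)
print(analytic[-1], a / b)                    # both close to the steady state a/b = 2
```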

Competing species
Now suppose that we have two species of animal and that they compete with each other (for food, for instance). Denote the corresponding populations at time t by y_1(t) and y_2(t). We assume that, in the absence of the other, the population of either species would exhibit logistic growth, as above. But given that they compete with each other, we assume that the presence of each has a negative effect on the growth rate of the other. That is, we assume that for some positive numbers a_1, a_2, b_1, b_2, c_1, c_2,

ẏ_1 / y_1 = a_1 - b_1 y_1 - c_1 y_2
ẏ_2 / y_2 = a_2 - b_2 y_2 - c_2 y_1.

Then we have the coupled system of differential equations

ẏ_1 = a_1 y_1 - b_1 y_1^2 - c_1 y_1 y_2
ẏ_2 = a_2 y_2 - b_2 y_2^2 - c_2 y_1 y_2.

Clearly such a model could be extended to more than two species.
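A numerical experiment makes the competition visible. The sketch below is illustrative only (not part of the guide); it assumes SciPy, and it borrows the parameter values a_1 = 4, b_1 = c_1 = 1, a_2 = 6, b_2 = 1, c_2 = 3 from the worked example later in this chapter, with an arbitrary starting point.

```python
from scipy.integrate import solve_ivp

a1, b1, c1 = 4.0, 1.0, 1.0   # parameters for species 1 (as in the later example)
a2, b2, c2 = 6.0, 1.0, 3.0   # parameters for species 2

def rhs(t, y):
    """Right-hand side of the competing-species system."""
    y1, y2 = y
    return [a1*y1 - b1*y1**2 - c1*y1*y2,
            a2*y2 - b2*y2**2 - c2*y1*y2]

sol = solve_ivp(rhs, (0, 20), [1.0, 1.0], rtol=1e-8)   # arbitrary initial populations
print(sol.y[:, -1])   # the trajectory settles down towards one of the steady states
```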

General systems of differential equations


In general, a (square) system of differential equations for the functions y_1, y_2, ..., y_n is of the form

dy_1/dt = f_1(y_1, y_2, ..., y_n)
dy_2/dt = f_2(y_1, y_2, ..., y_n)
  ⋮
dy_n/dt = f_n(y_1, y_2, ..., y_n).

It is common to use ẏ as a shorthand notation for dy/dt. Then this system can be written as

ẏ_1 = f_1(y_1, y_2, ..., y_n)
ẏ_2 = f_2(y_1, y_2, ..., y_n)
  ⋮
ẏ_n = f_n(y_1, y_2, ..., y_n),

or, more succinctly, ẏ = F(y), where F : R^n → R^n is given by

F(x_1, x_2, ..., x_n) = (f_1(x_1, ..., x_n), ..., f_n(x_1, ..., x_n)).

Systems like this occur often in economics and in social science.

Linear systems of differential equations


A linear system of differential equations is one in which the function F is a linear transformation. But each linear transformation is equivalent to multiplication by a corresponding matrix. So such a system takes the form ẏ = Ay, where A is an n × n matrix whose entries are constants (that is, fixed numbers). If A is diagonal, this is easy to solve. For, suppose A = diag(λ_1, λ_2, ..., λ_n). Then the system is precisely

ẏ_1 = λ_1 y_1, \quad ẏ_2 = λ_2 y_2, \quad ..., \quad ẏ_n = λ_n y_n,

and so

y_1 = y_1(0)e^{λ_1 t}, \quad ..., \quad y_n = y_n(0)e^{λ_n t}.

To reduce a linear system to this simple form, we might be able to use diagonalisation. Suppose that A can indeed be diagonalised. Then P^{-1}AP = D, with

P = (v_1 \ ... \ v_n), \quad D = diag(λ_1, λ_2, ..., λ_n),

where the λ_i are the eigenvalues and the v_i corresponding eigenvectors. Let z = P^{-1}y (or, equivalently, y = Pz). Then

ẏ = \frac{d}{dt}(Pz) = P\frac{dz}{dt} = Pż,

since P has constant entries. Therefore Pż = ẏ = Ay = APz, and

ż = P^{-1}APz = Dz.

We may now easily solve for z and hence y.

Activity 4.4 Convince yourself that \frac{d}{dt}(Pz) = Pż.

Example: Consider

\begin{pmatrix} ẏ_1 \\ ẏ_2 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 2 & 2 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}.

Then (using the standard method) we can diagonalise A as follows: P^{-1}AP = D, where

P = \begin{pmatrix} 1 & 1 \\ -1 & 2 \end{pmatrix}, \quad D = \begin{pmatrix} 0 & 0 \\ 0 & 3 \end{pmatrix}.

Proceeding as suggested above, let z_1, z_2 satisfy z = P^{-1}y. Then ż = Dz; that is,

\begin{pmatrix} ż_1 \\ ż_2 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 3 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 3z_2 \end{pmatrix}.

This is now an uncoupled system, with solution z_1(t) = z_1(0), z_2(t) = e^{3t}z_2(0). Then,

\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = P \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ -1 & 2 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \end{pmatrix},

so

y_1 = z_1 + z_2 = z_1(0) + z_2(0)e^{3t}, \quad y_2 = 2z_2 - z_1 = -z_1(0) + 2z_2(0)e^{3t}.

We'll want the answer in terms of y_1(0), y_2(0). So, we observe that

\begin{pmatrix} z_1(0) \\ z_2(0) \end{pmatrix} = P^{-1} \begin{pmatrix} y_1(0) \\ y_2(0) \end{pmatrix} = \frac{1}{3} \begin{pmatrix} 2 & -1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} y_1(0) \\ y_2(0) \end{pmatrix},

so z_1(0) = (2/3)y_1(0) - (1/3)y_2(0) and z_2(0) = (1/3)y_1(0) + (1/3)y_2(0). Finally, then,

y_1(t) = (2/3)y_1(0) - (1/3)y_2(0) + e^{3t}\left((1/3)y_1(0) + (1/3)y_2(0)\right)
y_2(t) = -(2/3)y_1(0) + (1/3)y_2(0) + e^{3t}\left((2/3)y_1(0) + (2/3)y_2(0)\right).
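This answer can be checked numerically: the solution of ẏ = Ay with initial condition y(0) is e^{At}y(0), and the matrix exponential agrees with the formula above. The sketch below is illustrative only (not part of the guide), assumes NumPy and SciPy, and uses an arbitrary initial condition.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])
y0 = np.array([1.0, 2.0])      # hypothetical initial condition

def y_closed(t):
    """Closed-form solution from the text, written in terms of y1(0), y2(0)."""
    z1 = (2/3)*y0[0] - (1/3)*y0[1]
    z2 = (1/3)*y0[0] + (1/3)*y0[1]
    return np.array([ z1 +   z2*np.exp(3*t),
                     -z1 + 2*z2*np.exp(3*t)])

for t in [0.0, 0.5, 1.0]:
    print(np.allclose(expm(A*t) @ y0, y_closed(t)))   # True
```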


Steady states and stability


The vector y* ∈ R^n is a steady state of the system ẏ = F(y) if F(y*) = 0, the all-zero vector. If a system has initial condition y(0) = y* then it satisfies y(t) = y* for all t. (So, y* is a constant solution of the system.)

A steady state y* is an asymptotically stable equilibrium if every solution y(t) that starts near y* converges to y* as t → ∞; that is, if there is some ε > 0 such that if ẏ = F(y) and ||y(0) − y*|| < ε then y(t) → y* as t → ∞. (Here, ||y(0) − y*|| is the length of the vector y(0) − y*, which is the usual Euclidean distance between y(0) and y*.)

Example: Consider again the linear system we solved above. It is clear that y* = (0, 0)^T is a steady state of the system. However, this is not stable. To see this, suppose that y(0) ≠ (0, 0)^T. Then either y_1(0) ≠ 0 or y_2(0) ≠ 0 (or both). If y_1(0) + y_2(0) ≠ 0 then, because of the e^{3t} terms, y_1(t) and y_2(t) tend either to ∞ or to −∞ as t → ∞ and certainly, therefore, do not converge to y*. On the other hand, if y_1(0) + y_2(0) = 0, so that y_2(0) = −y_1(0) ≠ 0, then

y_1(t) = (2/3)y_1(0) − (1/3)y_2(0) = y_1(0) ≠ 0,

so y_1(t) does not converge to 0 and hence y(t) does not converge to y*.

It is natural to ask how we determine the stability of steady states for general (not necessarily linear) systems of differential equations. We will not answer this question completely, but we will, by examining linear systems, be able to find a sufficient condition for a steady state to be an asymptotically stable equilibrium. That is, we will find a way of showing that a steady state is asymptotically stable. First, we study linear systems.

Stability in linear systems


We now return to the general linear system in which A can be diagonalised. We have ż = P^{-1}APz = Dz, so

z = \begin{pmatrix} c_1 e^{λ_1 t} \\ \vdots \\ c_n e^{λ_n t} \end{pmatrix},

for constants c_i which are the initial values z_i(0), determined by the initial values y_i(0). Therefore,

y = Pz = (v_1 \ ... \ v_n) \begin{pmatrix} c_1 e^{λ_1 t} \\ \vdots \\ c_n e^{λ_n t} \end{pmatrix} = c_1 e^{λ_1 t} v_1 + c_2 e^{λ_2 t} v_2 + \cdots + c_n e^{λ_n t} v_n.

We know that A can be diagonalised if it has n distinct eigenvalues, so we have the following result.

Theorem 4.1 If the n × n matrix A has n distinct eigenvalues λ_1, λ_2, ..., λ_n and corresponding eigenvectors v_1, ..., v_n, then the system ẏ = Ay has general solution

y(t) = c_1 e^{λ_1 t} v_1 + c_2 e^{λ_2 t} v_2 + \cdots + c_n e^{λ_n t} v_n.


If the eigenvalues λ_i are all negative real numbers, then each term e^{λ_i t} tends to 0 as t → ∞. Furthermore, since none of the eigenvalues is zero, the matrix A is nonsingular and the only solution to Ay = 0 is y = 0; in other words, the only steady state is y* = 0 and, since y(t) → 0 as t → ∞, y* is asymptotically stable. We state this as a Theorem.

Theorem 4.2 Suppose A has n distinct negative real eigenvalues. Then the only steady state of the system ẏ = Ay is y* = 0, and this is asymptotically stable.
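A quick numerical illustration of Theorem 4.2 (an illustrative sketch only, assuming NumPy and SciPy; the matrix below is a hypothetical example with distinct negative eigenvalues):

```python
import numpy as np
from scipy.linalg import expm

# A hypothetical matrix with eigenvalues -1 and -3 (distinct and negative).
A = np.array([[-2.0,  1.0],
              [ 1.0, -2.0]])
print(np.linalg.eigvals(A))          # [-1., -3.] (possibly in a different order)

y0 = np.array([5.0, -4.0])           # arbitrary starting point
for t in [0, 1, 5, 10]:
    y = expm(A*t) @ y0
    print(t, np.linalg.norm(y))      # the norm shrinks towards 0: y* = 0 is stable
```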

Stability in general systems: linearization


What about non-linear systems? To get some information about these, we use a version of Taylor's Theorem to relate the system to a linear one. The version of Taylor's theorem that we use is the following. (A function is continuously differentiable if its first-order partial derivatives exist and are continuous.)

Theorem 4.3 (Taylor's Theorem) Let F : R^n → R^n be continuously differentiable and let y* ∈ R^n. Then for h ∈ R^n,

F(y* + h) = F(y*) + DF(y*)h + R(h),

where, if F(y) = (f_1(y), f_2(y), ..., f_n(y)), then DF is the Jacobian

DF = \begin{pmatrix} \partial f_1/\partial y_1 & \partial f_1/\partial y_2 & \cdots & \partial f_1/\partial y_n \\ \partial f_2/\partial y_1 & \partial f_2/\partial y_2 & \cdots & \partial f_2/\partial y_n \\ \vdots & \vdots & \ddots & \vdots \\ \partial f_n/\partial y_1 & \partial f_n/\partial y_2 & \cdots & \partial f_n/\partial y_n \end{pmatrix},

DF(y*) is the Jacobian evaluated at y*, and R(h) has the property that R(h)/||h|| → 0 as h → 0. Loosely speaking, if each entry of h is small, then

F(y* + h) ≈ F(y*) + DF(y*)h,

where ≈ means 'is approximately'.

Suppose now that y* is a steady state of ẏ = F(y), so that F(y*) = 0. Let y(t) be a solution of the system such that y(0) − y* is small in length, and let h(t) = y(t) − y*. Then y(t) = y* + h(t) and the system ẏ = F(y) is

\frac{d}{dt}(y* + h(t)) = F(y* + h(t)).

Using Taylor's theorem, we have

ḣ(t) = \frac{d}{dt}(y* + h(t)) = DF(y*)h(t) + R(h(t)).

For h(t) small in length, we can ignore the R term. Then the behaviour of h(t) for h(0) = y(0) − y* small in length is qualitatively the same as if it were the solution to the linear system ḣ = DF(y*)h. (This argument is not watertight, but can be made so.) This results in the following theorem.

Theorem 4.4 Let y* be a steady state solution of the system ẏ = F(y) (where F : R^n → R^n is continuously differentiable). Let DF(y*) denote the Jacobian matrix evaluated at y*. If DF(y*) has n negative real eigenvalues then y* is asymptotically stable.

In fact, a more complete characterisation of stability is possible (though this is not needed for this subject, and we do not indicate its proof here; see Simon and Blume, Section 25.4 if you are interested). Recall that the real part of a complex number a + bi is the real number a. Then we have the following result.

Theorem 4.5 Let y* be a steady state solution of the system ẏ = F(y) (where F : R^n → R^n is continuously differentiable). Let DF(y*) denote the Jacobian matrix evaluated at y*. If all eigenvalues of DF(y*) are negative or have negative real part, then y* is asymptotically stable. If any eigenvalues of DF(y*) are positive or have positive real part, then y* is not asymptotically stable.

Example: Consider the system

ẏ_1 = 4y_1 - y_1^2 - y_1 y_2
ẏ_2 = 6y_2 - y_2^2 - 3y_1 y_2.

This is of the form ẏ = F(y) where F(y_1, y_2) = (f_1(y_1, y_2), f_2(y_1, y_2)), with

f_1(y_1, y_2) = 4y_1 - y_1^2 - y_1 y_2, \quad f_2(y_1, y_2) = 6y_2 - y_2^2 - 3y_1 y_2.

This system has the steady states (0, 0), (0, 6), (4, 0) and (1, 3). We have

DF = \begin{pmatrix} \partial f_1/\partial y_1 & \partial f_1/\partial y_2 \\ \partial f_2/\partial y_1 & \partial f_2/\partial y_2 \end{pmatrix} = \begin{pmatrix} 4 - 2y_1 - y_2 & -y_1 \\ -3y_2 & 6 - 2y_2 - 3y_1 \end{pmatrix}.

Consider the steady state y* = (0, 6). Here, we have

DF(y*) = \begin{pmatrix} -2 & 0 \\ -18 & -6 \end{pmatrix}.

This has eigenvalues −2 and −6. Since these are both negative real numbers, y* is asymptotically stable.

Activity 4.5 Verify also that y* = (4, 0) is asymptotically stable.
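These checks are easy to automate. The sketch below is illustrative only (not part of the guide, assuming NumPy); it evaluates the Jacobian of the example system at each steady state and reports the eigenvalues, so it can also be used to check the activity above. The steady states (0, 6) and (4, 0) come out with negative eigenvalues, while (0, 0) and (1, 3) do not.

```python
import numpy as np

def DF(y1, y2):
    """Jacobian of f1 = 4y1 - y1^2 - y1*y2 and f2 = 6y2 - y2^2 - 3*y1*y2."""
    return np.array([[4 - 2*y1 - y2, -y1],
                     [-3*y2, 6 - 2*y2 - 3*y1]])

for state in [(0, 0), (0, 6), (4, 0), (1, 3)]:
    eig = np.linalg.eigvals(DF(*state))
    print(state, eig, "stable" if np.all(eig.real < 0) else "not stable")
```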

Quadratic forms
A quadratic form in two variables x and y is an expression of the form q(x, y) = ax^2 + 2cxy + by^2. This can be written as q = x^T A x where

x = \begin{pmatrix} x \\ y \end{pmatrix}, \quad x^T = (x \ y), \quad A = \begin{pmatrix} a & c \\ c & b \end{pmatrix}.


The matrix A is symmetric: A^T = A.

Example: The quadratic form q = x^2 + xy + 2y^2 can be expressed in matrix form as

q = (x \ y) \begin{pmatrix} 1 & 1/2 \\ 1/2 & 2 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}.

More generally, a quadratic form in n ≥ 2 variables is of the form q = x^T A x where A is a symmetric n × n matrix and x ∈ R^n.

Example: The following is a quadratic form in three variables:

q(x_1, x_2, x_3) = 5x_1^2 + 10x_2^2 + 2x_3^2 + 4x_1 x_2 + 2x_1 x_3 + 2x_2 x_3.

In matrix form, it is x^T A x where x = (x_1, x_2, x_3)^T and

A = \begin{pmatrix} 5 & 2 & 1 \\ 2 & 10 & 1 \\ 1 & 1 & 2 \end{pmatrix}.

Consider the quadratic form q_1(x, y) = x^2 + y^2. For any choices of x and y, q_1(x, y) ≥ 0, and q_1(x, y) = 0 only when x = y = 0. On the other hand, the quadratic form q_2(x, y) = x^2 + 3xy + y^2 is not always non-negative: note, for example, that q_2(1, −1) = −1 < 0. An important general question we might ask is whether a quadratic form q is always non-negative. Here, eigenvalue techniques help: specifically, orthogonal diagonalisation is useful. First, we need a definition.

Definition 4.1 Suppose that q(x) is a quadratic form. Then

q(x) is positive definite if q(x) ≥ 0 for all x, and q(x) = 0 only when x = 0, the zero-vector;
q(x) is positive semi-definite if q(x) ≥ 0 for all x;
q(x) is negative definite if q(x) ≤ 0 for all x, and q(x) = 0 only when x = 0, the zero-vector;
q(x) is negative semi-definite if q(x) ≤ 0 for all x;
q(x) is indefinite if it is neither positive definite, nor positive semi-definite, nor negative definite, nor negative semi-definite; in other words, if there are x_1, x_2 such that q(x_1) < 0 and q(x_2) > 0.

Consider the quadratic form q = x^T A x where A is symmetric, and suppose that we have found P which will orthogonally diagonalise A; that is, which is such that P^T = P^{-1} and P^T A P = D, where D is a diagonal matrix. We make a change of variable as follows: define z by x = Pz (or, equivalently, z = P^{-1}x = P^T x). Then

q = x^T A x = (Pz)^T A (Pz) = z^T (P^T A P) z = z^T D z.

Now, the entries of D must be the eigenvalues of A: let us suppose these are (in the order in which they appear in D) λ_1, λ_2, ..., λ_n. Then

q = z^T D z = λ_1 z_1^2 + λ_2 z_2^2 + \cdots + λ_n z_n^2.


Suppose that all the eigenvalues are positive: then, for all z, q ≥ 0, and q = 0 only when z is the zero-vector. But because of the way in which x and z are related (x = Pz and z = P^T x), x = 0 if and only if z = 0, so we have the first part of the following result (the other parts arising from similar reasoning):

Theorem 4.6 Suppose that the quadratic form q(x) has matrix representation q(x) = x^T A x. Then:

If all eigenvalues of A are positive, then q is positive definite.
If all eigenvalues of A are non-negative, then q is positive semi-definite.
If all eigenvalues of A are negative, then q is negative definite.
If all eigenvalues of A are non-positive, then q is negative semi-definite.
If some eigenvalues of A are negative, and some are positive, then q is indefinite.
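Since the matrix of a quadratic form is symmetric, its eigenvalues are real and can be computed with a symmetric eigensolver, which turns Theorem 4.6 into a direct numerical test. The sketch below is illustrative only (not part of the guide, assuming NumPy); it uses the matrices of q_1 and q_2 above and the three-variable example.

```python
import numpy as np

def definiteness(A, tol=1e-12):
    """Classify a symmetric matrix by the signs of its eigenvalues (Theorem 4.6)."""
    eig = np.linalg.eigvalsh(A)          # real eigenvalues of a symmetric matrix
    if np.all(eig > tol):   return "positive definite"
    if np.all(eig >= -tol): return "positive semi-definite"
    if np.all(eig < -tol):  return "negative definite"
    if np.all(eig <= tol):  return "negative semi-definite"
    return "indefinite"

Q1 = np.array([[1.0, 0.0], [0.0, 1.0]])          # q1 = x^2 + y^2
Q2 = np.array([[1.0, 1.5], [1.5, 1.0]])          # q2 = x^2 + 3xy + y^2
A  = np.array([[5.0, 2.0, 1.0],
               [2.0, 10.0, 1.0],
               [1.0, 1.0, 2.0]])                 # the three-variable example

for M in (Q1, Q2, A):
    print(definiteness(M))   # positive definite, indefinite, positive definite
```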

We say that a matrix A is positive definite if the corresponding quadratic form q = x^T A x is (and similarly, we speak of negative definite, positive semi-definite, negative semi-definite, and indefinite matrices).

There is another characterisation of positive definiteness, based on examining the signs of the leading principal subdeterminants (sometimes called the leading principal minors) of the matrix A. These are the determinants of the leading principal submatrices, the square submatrices A_1, A_2, ..., A_n of size 1 × 1, 2 × 2, ..., n × n lying along the main diagonal of the matrix. If, for example,

A = \begin{pmatrix} 5 & 2 & 1 \\ 2 & 10 & 1 \\ 1 & 1 & 2 \end{pmatrix},

then the matrices A_1, A_2, A_3 are

A_1 = (5), \quad A_2 = \begin{pmatrix} 5 & 2 \\ 2 & 10 \end{pmatrix}, \quad A_3 = A.

The leading principal subdeterminants are then |A_1| = 5, |A_2| = 46 and |A_3| = |A| = 81. Notice that all three are positive. In fact, this is enough to show that A is positive definite, as stated in the following result (the proof of which you need not know).

Activity 4.6 Check these determinant calculations.
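For the record, a short sketch (illustrative only, assuming NumPy) that computes the leading principal subdeterminants of this A:

```python
import numpy as np

A = np.array([[5.0, 2.0, 1.0],
              [2.0, 10.0, 1.0],
              [1.0, 1.0, 2.0]])

# Leading principal subdeterminants |A_1|, |A_2|, |A_3|.
minors = [np.linalg.det(A[:k, :k]) for k in range(1, 4)]
print(np.round(minors))    # [ 5. 46. 81.] -- all positive
```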

Theorem 4.7 Suppose that A is an n × n matrix and that A_1, A_2, ..., A_n are its leading principal submatrices. Then: A is positive definite if and only if all its leading principal subdeterminants are positive,

|A_1| > 0, |A_2| > 0, ..., |A_n| > 0.

Now, a matrix A is negative definite if and only if its negative, −A, is positive definite. (You can see this by noting that the quadratic form determined by −A is the negative of that determined by A.) The leading principal submatrices of −A are

−A_1, −A_2, ..., −A_n, where A_1, A_2, ..., A_n are the leading principal submatrices of A. The theorem just presented tells us that −A is positive definite (and hence A is negative definite) if and only if

|−A_1| > 0, |−A_2| > 0, |−A_3| > 0, ..., |−A_n| > 0.

But, if k is even, then |−A_k| = |A_k| and, if k is odd, |−A_k| = −|A_k|. So we have the following characterisation:

Theorem 4.8 Suppose that A is an n × n matrix and that A_1, A_2, ..., A_n are its leading principal submatrices. Then: A is negative definite if and only if its leading principal subdeterminants alternate in sign, with the first negative, as follows:

|A_1| < 0, |A_2| > 0, |A_3| < 0, ...; that is, (−1)^k |A_k| > 0 for k = 1, ..., n.

It should be noted that there is no test quite this simple to check whether a matrix is positive or negative semi-definite.

Learning outcomes
At the end of this chapter and the relevant reading, you should be able to:

calculate the general nth power of a diagonalisable matrix using diagonalisation
solve systems of difference equations in which the underlying matrix is diagonalisable, by using both the matrix powers method and the change of variable method
solve linear systems of differential equations in which the underlying matrix is diagonalisable
know what is meant by a steady state of a system of differential equations, and by an asymptotically stable equilibrium
determine the steady states of a system, and show that certain steady states are asymptotically stable
find the symmetric matrix representing a given quadratic form and, conversely, find the quadratic form given by a particular symmetric matrix
determine the definiteness of a quadratic form by examining the eigenvalues
use the test involving leading principal subdeterminants to show that certain quadratic forms or matrices are positive definite or negative definite.

Sample examination questions


Question 4.1 Solve the following system of difference equations:

x_{t+1} = x_t + 4y_t
y_{t+1} = (1/2)x_t,

given that x_0 = y_0 = 1000.

Question 4.2 Verify that the vector v is an eigenvector for the matrix M, where M and v are as given below:

M = \begin{pmatrix} 2 & 0 & 1 \\ 0 & 5 & 0 \\ 1 & 0 & 2 \end{pmatrix}, \quad v = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}.

What is the corresponding eigenvalue? Find the other eigenvalues of M. Hence find an invertible matrix P and a diagonal matrix D such that P^{-1}MP = D. Sequences x_t, y_t, z_t are defined by x_0 = 6, y_0 = 1, z_0 = 4 and

x_{t+1} = 2x_t + z_t
y_{t+1} = 5y_t
z_{t+1} = x_t + 2z_t.

Using the preceding calculations, find formulae for x_t, y_t, and z_t.

Question 4.3 Given that

\begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}, \quad \begin{pmatrix} 3 \\ 0 \\ -1 \end{pmatrix}, \quad \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}

are eigenvectors of the matrix

A = \begin{pmatrix} 1 & -2 & -6 \\ 2 & 5 & 6 \\ -2 & -2 & -3 \end{pmatrix},

find an invertible matrix P such that P^{-1}AP is diagonal. Hence find functions y_1(t), y_2(t), y_3(t) satisfying the equations

dy_1/dt = y_1 - 2y_2 - 6y_3
dy_2/dt = 2y_1 + 5y_2 + 6y_3
dy_3/dt = -2y_1 - 2y_2 - 3y_3,

and with the property that y1 (0) = y2 (0) = 1 and y3 (0) = 0.

Question 4.4 Find the steady states of the following system of differential equations:

dy_1/dt = y_1(2 - 2y_1 - y_2)
dy_2/dt = y_2(2 - 2y_2 - y_1).

One of the steady states y* has neither coordinate equal to 0. Show that this steady state is asymptotically stable.

Question 4.5 Express the quadratic form 9x^2 + 4xy + 6y^2 in the form x^T A x, where A is a symmetric 2 × 2 matrix, and find the eigenvalues of A. Without doing any more calculations, say whether the quadratic form is positive definite or not.


Question 4.6 Prove that the quadratic form

5x_1^2 + 10x_2^2 + 2x_3^2 + 4x_1 x_2 + 2x_1 x_3 + 2x_2 x_3

is positive definite.

Question 4.7 Prove that the following quadratic form is neither positive definite nor negative definite:

q = 2x_1^2 + 8x_1 x_2 - 12x_1 x_3 + 7x_2^2 - 24x_2 x_3 + 15x_3^2.

Sketch answers or comments on selected questions


Question 4.1 We solve this using matrix powers. We could, of course, use instead a change of variable. Notice that the system can be written as

X_{t+1} = \begin{pmatrix} 1 & 4 \\ 1/2 & 0 \end{pmatrix} X_t, \quad \text{where } X_t = \begin{pmatrix} x_t \\ y_t \end{pmatrix}.

This is X_{t+1} = AX_t, where A is the matrix whose nth power we calculated in the example given earlier in this chapter. The solution (using the nth power result obtained earlier) is

X_t = A^t X_0 = \frac{1}{6} \begin{pmatrix} 2(-1)^t + 4(2^t) & -8(-1)^t + 8(2^t) \\ -(-1)^t + 2^t & 4(-1)^t + 2(2^t) \end{pmatrix} \begin{pmatrix} 1000 \\ 1000 \end{pmatrix} = \begin{pmatrix} -1000(-1)^t + 2000(2^t) \\ 500(-1)^t + 500(2^t) \end{pmatrix}.

That is, x_t = -1000(-1)^t + 2000(2^t) and y_t = 500(-1)^t + 500(2^t).

Question 4.2 Mv = 3v, so the eigenvalue corresponding to v is 3. The characteristic polynomial turns out to be (5 - λ)(3 - λ)(1 - λ) after factorisation, so the eigenvalues of M are 1, 3, 5. Corresponding eigenvectors are (respectively) (1, 0, -1)^T, v, (0, 1, 0)^T. Then P^{-1}MP = D where

P = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \\ -1 & 1 & 0 \end{pmatrix}, \quad D = diag(1, 3, 5).

We use the notation used earlier. The system of difference equations is X_{t+1} = MX_t. Setting X_t = PZ_t, it becomes Z_{t+1} = DZ_t, so u_{t+1} = u_t, v_{t+1} = 3v_t, w_{t+1} = 5w_t, and hence u_t = u_0, v_t = 3^t v_0, w_t = 5^t w_0. Now,

X_0 = \begin{pmatrix} 6 \\ 1 \\ 4 \end{pmatrix} = PZ_0 = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \\ -1 & 1 & 0 \end{pmatrix} \begin{pmatrix} u_0 \\ v_0 \\ w_0 \end{pmatrix} = \begin{pmatrix} u_0 + v_0 \\ w_0 \\ -u_0 + v_0 \end{pmatrix},

so u_0 = 1, v_0 = 5 and w_0 = 1. We therefore obtain

X_t = \begin{pmatrix} x_t \\ y_t \\ z_t \end{pmatrix} = PZ_t = \begin{pmatrix} 1 + 5(3^t) \\ 5^t \\ -1 + 5(3^t) \end{pmatrix}.

Note: the question explicitly says that the diagonalisation result must be used to solve the system of difference equations.

Question 4.3 Eigenvectors are given, so there is no need to determine the characteristic polynomial to find the eigenvalues. Simply multiply A by each of the given eigenvectors in turn. For example, A(1, -1, 1)^T = (-3, 3, -3)^T = -3(1, -1, 1)^T, so -3 is an eigenvalue and this vector is a corresponding eigenvector. The other two are eigenvectors for eigenvalue 3. So if

P = \begin{pmatrix} 1 & 3 & 1 \\ -1 & 0 & -1 \\ 1 & -1 & 0 \end{pmatrix},

then P^{-1}AP = diag(-3, 3, 3) = D. The system of differential equations is ẏ = Ay. Let z be given by y = Pz. Then the system is equivalent to ż = Dz, which is

dz_1/dt = -3z_1, \quad dz_2/dt = 3z_2, \quad dz_3/dt = 3z_3.

This has solutions z_1 = z_1(0)e^{-3t}, z_2 = z_2(0)e^{3t}, z_3 = z_3(0)e^{3t}. We have to find z_1(0), z_2(0), z_3(0). Now, z = P^{-1}y, and (as can be determined by the usual methods)

P^{-1} = \begin{pmatrix} 1/3 & 1/3 & 1 \\ 1/3 & 1/3 & 0 \\ -1/3 & -4/3 & -1 \end{pmatrix},

so

z(0) = P^{-1}y(0) = \begin{pmatrix} 1/3 & 1/3 & 1 \\ 1/3 & 1/3 & 0 \\ -1/3 & -4/3 & -1 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 2/3 \\ 2/3 \\ -5/3 \end{pmatrix}.

So z_1(0) = 2/3, z_2(0) = 2/3, z_3(0) = -5/3. The solution y(t) is therefore

y(t) = Pz(t) = \begin{pmatrix} 1 & 3 & 1 \\ -1 & 0 & -1 \\ 1 & -1 & 0 \end{pmatrix} \begin{pmatrix} (2/3)e^{-3t} \\ (2/3)e^{3t} \\ (-5/3)e^{3t} \end{pmatrix} = \begin{pmatrix} (2/3)e^{-3t} + (1/3)e^{3t} \\ -(2/3)e^{-3t} + (5/3)e^{3t} \\ (2/3)e^{-3t} - (2/3)e^{3t} \end{pmatrix}.

Question 4.4 The steady states are the solutions (y_1, y_2) of

f_1(y_1, y_2) = y_1(2 - 2y_1 - y_2) = 0
f_2(y_1, y_2) = y_2(2 - 2y_2 - y_1) = 0.

We could have y_1 = y_2 = 0; or y_1 = 0 and 2 - 2y_2 = 0, giving (0, 1); or y_2 = 0 and 2 - 2y_1 = 0, giving (1, 0); or 2 - 2y_1 - y_2 = 2 - 2y_2 - y_1 = 0, giving (2/3, 2/3). So these are the four steady states. The Jacobian matrix is

DF = \begin{pmatrix} \partial f_1/\partial y_1 & \partial f_1/\partial y_2 \\ \partial f_2/\partial y_1 & \partial f_2/\partial y_2 \end{pmatrix} = \begin{pmatrix} 2 - 4y_1 - y_2 & -y_1 \\ -y_2 & 2 - 4y_2 - y_1 \end{pmatrix}.


When y_1 = y_2 = 2/3, this is

\begin{pmatrix} -4/3 & -2/3 \\ -2/3 & -4/3 \end{pmatrix}.

By computing the characteristic polynomial, we can find that the eigenvalues are -2 and -2/3, both negative. So that steady state is asymptotically stable.

There is an easier way to see that the eigenvalues are negative. A symmetric matrix has all its eigenvalues negative if and only if its leading principal subdeterminants are alternately negative and positive. Here, the 1 × 1 leading principal subdeterminant is just -4/3 and the 2 × 2 one is the determinant of DF, which is 12/9, which is positive. (This will be familiar as the test for negative definiteness. We need the matrix DF to be negative definite.)

Question 4.5 The matrix A is

A = \begin{pmatrix} 9 & 2 \\ 2 & 6 \end{pmatrix}.

Its eigenvalues are 5 and 10. Since these are both positive, the quadratic form is positive definite.

Question 4.6 The matrix representing the quadratic form is

A = \begin{pmatrix} 5 & 2 & 1 \\ 2 & 10 & 1 \\ 1 & 1 & 2 \end{pmatrix}

and (see the example after Theorem 4.6) this is positive definite.

Question 4.7 The matrix representing the quadratic form is

A = \begin{pmatrix} 2 & 4 & -6 \\ 4 & 7 & -12 \\ -6 & -12 & 15 \end{pmatrix}.

The first two leading principal subdeterminants are

|A_1| = 2 \quad \text{and} \quad |A_2| = \begin{vmatrix} 2 & 4 \\ 4 & 7 \end{vmatrix} = -2.

The first is positive and the second negative. If the matrix (and the quadratic form) were positive definite, both should be positive. If it were negative definite, the first should be negative and the second positive. So it is neither.
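As a check (an illustrative sketch, not part of the guide, assuming NumPy), the leading principal subdeterminants and the eigenvalues of this matrix both confirm that it is indefinite:

```python
import numpy as np

A = np.array([[ 2.0,   4.0,  -6.0],
              [ 4.0,   7.0, -12.0],
              [-6.0, -12.0,  15.0]])

minors = [np.linalg.det(A[:k, :k]) for k in range(1, 4)]
print(np.round(minors, 6))        # roughly [2, -2, 6]: signs fit neither pattern
print(np.linalg.eigvalsh(A))      # eigenvalues of mixed sign, so A is indefinite
```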

