Remark 27 Remember that, in general, the word scalar is not restricted to real numbers. We
are only using real numbers as scalars in this book, but eigenvalues are often complex
numbers.
Definition 9.1 Consider the square matrix A. We say that λ is an eigenvalue of A if there exists a non-zero vector x such that Ax = λx. In this case, x is called an eigenvector (corresponding to λ), and the pair (λ, x) is called an eigenpair for A.
Now that you have seen an eigenvalue and an eigenvector, let's talk a little more about them.
Why did we require that an eigenvector not be zero? If the eigenvector were zero, the equation Ax = λx would yield 0 = 0. Since this equation is always true, it is not an interesting case. Therefore, we define an eigenvector to be a non-zero vector that satisfies Ax = λx. However, as we showed in the previous example, an eigenvalue can be zero without causing a problem.
We usually say that x is an eigenvector corresponding to the eigenvalue λ if they satisfy Ax = λx. Since each eigenvector is associated with an eigenvalue, we often refer to an x and λ that correspond to one another as an eigenpair. Did you notice that we called x "an" eigenvector rather than "the" eigenvector corresponding to λ? This is because any non-zero scalar multiple of an eigenvector is also an eigenvector. If you let c represent a non-zero scalar, then we can prove this fact through the following steps:

A(cx) = c(Ax) = c(λx) = λ(cx).
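The scalar-multiple property above can be checked numerically. The sketch below uses a hypothetical matrix A = [[2, 1], [1, 2]] with the known eigenpair λ = 3, x = (1, 1); the matrix and the helper name matvec are illustrative choices, not from the text.

```python
# Hypothetical 2x2 example: for A = [[2, 1], [1, 2]], lambda = 3 is an
# eigenvalue with eigenvector x = (1, 1), since Ax = (3, 3) = 3x.
A = [[2.0, 1.0], [1.0, 2.0]]
lam = 3.0
x = [1.0, 1.0]

def matvec(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

# Any non-zero scalar multiple c*x is also an eigenvector:
# A(cx) = c(Ax) = c(lambda x) = lambda(cx).
for c in [2.0, -0.5, 10.0]:
    cx = [c*x[0], c*x[1]]
    assert matvec(A, cx) == [lam*cx[0], lam*cx[1]]
```

Each multiple cx points along the same line as x; the eigenvalue does not change.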
You have already computed eigenvectors in this course. When we studied Markov chains, you computed an eigenvector of A^T when you found the matrix to which the probabilities seemed to converge after many steps. Any row of that matrix is an eigenvector for A^T because all the rows of that matrix are the same. We write that row as a column vector when we use it as an eigenvector. The eigenvector that you found is called the dominant eigenvector.
Although we only found one eigenvector, we found a very important one. Many of the "real world" applications are primarily interested in the dominant eigenpair. The method that you used to find this eigenvector is called the power method, which will be explained later in this chapter. An eigenvector corresponding to the transpose of a transition matrix is the transpose of any row of the matrix that A^k converges to as k grows; these rows are all the same. The dominant eigenvalue of a transition matrix is always 1. Let's look at the example that we used in the Markov chain chapter. Consider the matrix
If k is large, A^k is approximately a matrix whose rows are all the same. Therefore, an eigenvector corresponding to the dominant eigenvalue, 1, is the transpose of any of those rows; call it x. Let's see if A^T x = x holds true.
Yes, the equation holds, so we have found an eigenpair corresponding to the transpose of the
transition matrix.
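The claim can be illustrated end to end with a small hypothetical transition matrix (the matrix T below is an assumption for illustration, not the one from the Markov chain chapter): compute a high power of T, take a row, and verify that the row satisfies T^T x = x.

```python
# Hypothetical 2-state transition matrix; each row sums to 1.
T = [[0.9, 0.1],
     [0.2, 0.8]]

def matmul(A, B):
    """Multiply two 2x2 matrices."""
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Compute T^k for large k; the rows of the result agree with each other.
P = T
for _ in range(200):
    P = matmul(P, T)
assert all(abs(P[0][j] - P[1][j]) < 1e-9 for j in range(2))

# That repeated row, written as a column, is an eigenvector of T^T
# with eigenvalue 1: (T^T x)_j = sum_i T[i][j] * x[i] = x[j].
row = P[0]
Tt_row = [T[0][j]*row[0] + T[1][j]*row[1] for j in range(2)]
assert all(abs(Tt_row[j] - row[j]) < 1e-9 for j in range(2))
```

For this particular T the repeated row works out to approximately (2/3, 1/3), the long-run probabilities of the two states.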
If we do not have a transition matrix, can we still use the power method? Yes, we can, but we need to modify the steps a bit because the dominant eigenvalue will not necessarily be the number one. Let us explain how to use the power method. An example follows the remarks to help clarify these steps.
1. Choose a non-zero starting vector and call it x0.
2. Compute xi+1 = Axi.
3. Divide every term in xi+1 by the last element of the vector and call the result the new xi+1.
4. Repeat steps 2 and 3 until xi and xi+1 agree to the desired number of digits.
5. The vector obtained in step 4 is an approximate eigenvector corresponding to the
dominant eigenvalue. We will call it x.
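The steps above can be sketched in code. This is a minimal sketch for 2x2 matrices; the test matrix [[2, 1], [1, 2]] (dominant eigenvalue 3, eigenvector proportional to (1, 1)) is a hypothetical example, and the sketch assumes the last element of the iterate never becomes zero.

```python
def power_method(A, x0, tol=1e-10, max_iter=1000):
    """Power method sketch for a 2x2 matrix A and starting vector x0.

    Repeats steps 2 and 3 (multiply by A, rescale by the last element)
    until successive vectors agree to within tol."""
    x = x0
    for _ in range(max_iter):
        y = [A[0][0]*x[0] + A[0][1]*x[1],
             A[1][0]*x[0] + A[1][1]*x[1]]   # step 2: x_{i+1} = A x_i
        y = [y[0]/y[1], y[1]/y[1]]          # step 3: divide by the last element
        if abs(y[0] - x[0]) < tol and abs(y[1] - x[1]) < tol:
            return y                        # steps 4-5: converged eigenvector
        x = y
    return x

# Hypothetical example: dominant eigenvalue 3, eigenvector (1, 1).
x = power_method([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])
```

Because the vector is rescaled so its last element is 1, the same eigenvector is reported regardless of the starting vector's length.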
Some people prefer to divide by the norm of the vector, which is defined as the length of the vector, so that they don't have to worry about whether or not the last element is zero.
Remark 30 Some calculators will not let you divide a vector by a constant. On those
calculators, you can multiply by the multiplicative inverse (reciprocal) of the constant.
Remark 31 You will probably not be able to directly input the Rayleigh quotient, (x^T A x)/(x^T x), into your calculator. It will consider the numerator and denominator as 1 by 1 matrices. We consider 1 by 1 matrices to be the same as real numbers, but your calculator may not consider them the same. Since you cannot divide matrices, your calculator will probably give you an error message.
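Once the power method has produced an approximate eigenvector x, the Rayleigh quotient (x^T A x)/(x^T x) estimates the corresponding eigenvalue. A minimal sketch for 2x2 matrices, using a hypothetical example matrix:

```python
def rayleigh_quotient(A, x):
    """Rayleigh quotient (x^T A x) / (x^T x) for a 2x2 matrix A.

    For an approximate eigenvector x, this estimates its eigenvalue,
    since if Ax = lambda*x then x.Ax / x.x = lambda."""
    Ax = [A[0][0]*x[0] + A[0][1]*x[1],
          A[1][0]*x[0] + A[1][1]*x[1]]
    return (x[0]*Ax[0] + x[1]*Ax[1]) / (x[0]*x[0] + x[1]*x[1])

# Hypothetical eigenpair lambda = 3, x = (1, 1) of A = [[2, 1], [1, 2]]:
est = rayleigh_quotient([[2.0, 1.0], [1.0, 2.0]], [1.0, 1.0])
```

Doing the two dot products separately and then dividing the resulting numbers avoids the 1 by 1 matrix problem described in Remark 31.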
You have seen the steps of the power method. Let's demonstrate those steps on the matrix
Notice that the iterates repeat. This means that our vectors will just continue in a cycle and never converge. The power method only works if there is one eigenvalue whose absolute value is strictly larger than the absolute values of the other eigenvalues. Even when the power method works, convergence might be slow.
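The cycling failure is easy to reproduce. The matrix B below is a hypothetical example with eigenvalues 1 and -1, so |1| = |-1| and no eigenvalue strictly dominates; the power-method iterates then bounce between two vectors forever.

```python
# Hypothetical failure case: B = [[0, 1], [1, 0]] has eigenvalues 1 and -1,
# so there is no strictly dominant eigenvalue.
B = [[0.0, 1.0], [1.0, 0.0]]

def step(A, x):
    """One power-method step: multiply by A, then divide by the last element."""
    y = [A[0][0]*x[0] + A[0][1]*x[1],
         A[1][0]*x[0] + A[1][1]*x[1]]
    return [y[0]/y[1], y[1]/y[1]]

x0 = [2.0, 1.0]
x1 = step(B, x0)   # a different vector
x2 = step(B, x1)   # back to x0: the iterates cycle and never converge
assert x2 == x0 and x1 != x0
```

No matter how many steps are taken, the sequence alternates between the same two vectors, so steps 2 and 3 never produce agreement.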
The power method found one very important eigenpair, but what should we do if we want to find all the eigenpairs? We know that Ax = λx for all eigenpairs. We can transform this equation into a form that will help us. When we learned about the identity matrix, we learned that Ix = x for any x. Therefore, Ax = λIx. We can use algebra steps from here:

Ax - λIx = 0
(A - λI)x = 0

We know that x = 0 would solve this equation, but we defined an eigenvector to be non-zero. Therefore, A - λI must send some non-zero vector to zero, which happens exactly when det(A - λI) = 0. Solving this equation, called the characteristic equation, gives the eigenvalues. We now know our eigenvalues. Remember that all eigenvalues are paired with an eigenvector. Therefore, we can substitute our eigenvalues, one at a time, into the equation (A - λI)x = 0 and solve for x.
Notice that this system is underdetermined. Therefore, there are an infinite number of solutions. So, any vector that solves the equation x1 - 2x2 = 0 is an eigenvector corresponding to the eigenvalue 1. To have a consistent method for finding an eigenvector, let's choose the solution in which x2 = 1. We can use back-substitution to find that x1 - 2(1) = 0, so x1 = 2. This is the same solution that we found when we used the power method to find the dominant eigenpair.
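For a 2x2 matrix the characteristic equation det(A - λI) = 0 is just a quadratic, λ^2 - (trace)λ + det = 0, so the eigenvalues can be found with the quadratic formula. A minimal sketch, assuming the roots are real; the test matrix is a hypothetical example.

```python
import math

def eigenvalues_2x2(A):
    """Solve the characteristic equation det(A - lambda*I) = 0 for a
    2x2 matrix: lambda^2 - (trace)*lambda + det = 0.

    Assumes the discriminant is non-negative (real eigenvalues)."""
    tr = A[0][0] + A[1][1]
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    disc = math.sqrt(tr*tr - 4*det)
    return (tr + disc) / 2, (tr - disc) / 2

# Hypothetical example: A = [[2, 1], [1, 2]] has trace 4 and determinant 3,
# so the characteristic equation gives eigenvalues 3 and 1.
lam1, lam2 = eigenvalues_2x2([[2.0, 1.0], [1.0, 2.0]])
```

Substituting each root back into (A - λI)x = 0 then produces the corresponding eigenvector, exactly as in the worked example above.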
Therefore, λ = 4 or λ = -4. The power method does not work for this matrix because |4| = |-4|. In other words, there is not a unique dominant eigenvalue, but we can still find the eigenpairs from the characteristic equation. Substituting the first eigenvalue, notice that the resulting system is underdetermined. This will always be true when we are finding an eigenvector using this method. So, any vector that solves the equation x1 + 3x2 = 0 is an eigenvector corresponding to that eigenvalue; choosing x2 = 1 gives x1 = -3. For the other eigenvalue, the system is again underdetermined, so we have an infinite number of solutions. Let's choose the solution in which x2 = 1. We can use back-substitution to find that x1 - 3(1) = 0, which implies x1 = 3.
We can find eigenpairs for larger systems using this method, but the characteristic equation becomes impossible to solve directly when the system gets too large. We could use approximations that get close to solving the characteristic equation, but there are better ways to find eigenpairs that you will study in the future. However, these two methods give you an idea of how to find eigenpairs.
Another matrix for which the power method will not work is the matrix
because the eigenvalues are both the real number 5. The method that we showed you earlier will yield an eigenvector corresponding to the eigenvalue 5. Other methods will be needed to find any remaining eigenvectors.
We said that eigenvalues are often complex numbers. However, if the matrix A is symmetric,
then the eigenvalues will always be real numbers. As you can see from the problems that we
worked, eigenvalues can also be real when the matrix is not symmetric, but keep in mind that
they are not guaranteed to be real.
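Both halves of this claim can be checked with the 2x2 characteristic equation, using cmath so that complex roots are allowed. The two matrices below are hypothetical examples: a symmetric one, and a non-symmetric rotation matrix whose eigenvalues are the imaginary numbers i and -i.

```python
import cmath

def eigenvalues_2x2(A):
    """Eigenvalues of a 2x2 matrix from the characteristic equation
    lambda^2 - (trace)*lambda + det = 0, allowing complex roots."""
    tr = A[0][0] + A[1][1]
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    disc = cmath.sqrt(tr*tr - 4*det)
    return (tr + disc) / 2, (tr - disc) / 2

# Symmetric matrix: the eigenvalues come out real (zero imaginary part).
s1, s2 = eigenvalues_2x2([[2.0, 1.0], [1.0, 2.0]])
assert s1.imag == 0 and s2.imag == 0

# Non-symmetric rotation matrix: the eigenvalues are complex (i and -i).
r1, r2 = eigenvalues_2x2([[0.0, -1.0], [1.0, 0.0]])
assert r1.imag != 0 and r2.imag != 0
```

For a 2x2 symmetric matrix the discriminant (a - d)^2 + 4b^2 is never negative, which is why the square root, and hence the eigenvalues, stay real.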
Did you know that the determinant of a matrix is related to its eigenvalues? The product of the eigenvalues of a square matrix is equal to the determinant of that matrix. You can check this for each of the two matrices that we have been working with: multiply the eigenvalues together and compare the result with the determinant.
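This determinant property follows, in the 2x2 case, from the constant term of the characteristic equation. A quick check on a hypothetical example matrix:

```python
import math

def eigenvalues_2x2(A):
    """Real eigenvalues of a 2x2 matrix via the characteristic equation
    lambda^2 - (trace)*lambda + det = 0 (assumes real roots)."""
    tr = A[0][0] + A[1][1]
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    disc = math.sqrt(tr*tr - 4*det)
    return (tr + disc) / 2, (tr - disc) / 2

# Hypothetical example: eigenvalues 3 and 1, determinant 2*2 - 1*1 = 3.
A = [[2.0, 1.0], [1.0, 2.0]]
lam1, lam2 = eigenvalues_2x2(A)
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]

# Product of the eigenvalues equals the determinant.
assert abs(lam1 * lam2 - det) < 1e-12
```

In the quadratic λ^2 - (trace)λ + det = 0, the product of the two roots is the constant term, which is exactly det(A).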