Shaun Lahert
December 28, 2014
1 Vector Spaces
1.1 Definition
1.2 Inner Product Spaces
An inner product space is a vector space with additional structure. This additional structure associates every pair of vectors with a scalar, $\langle \vec{u}, \vec{v} \rangle = c$. An inner product is any function that satisfies the following axioms.
Symmetry: $\langle \vec{u}, \vec{v} \rangle = \langle \vec{v}, \vec{u} \rangle$
Linearity: $\langle \vec{u} + \vec{w}, \vec{v} \rangle = \langle \vec{u}, \vec{v} \rangle + \langle \vec{w}, \vec{v} \rangle$ and $c\langle \vec{u}, \vec{v} \rangle = \langle c\vec{u}, \vec{v} \rangle = \langle \vec{u}, c\vec{v} \rangle$
Positive Definite: $\langle \vec{u}, \vec{u} \rangle \ge 0$ and $\langle \vec{u}, \vec{u} \rangle = 0$ if and only if $\vec{u} = \vec{0}$
1.3 Norm
Suppose V is an inner product space; the norm or length of a vector $\vec{u}$ is defined as
\[ \|\vec{u}\| = \langle \vec{u}, \vec{u} \rangle^{1/2} \]
1.4 Distance
The distance between two vectors $\vec{u}, \vec{v}$ is defined as $d(\vec{u}, \vec{v}) = \|\vec{u} - \vec{v}\|$, that is, the norm of their difference.
1.5 Orthogonal Vectors
Two vectors $\vec{u}, \vec{v}$ are said to be orthogonal if $\langle \vec{u}, \vec{v} \rangle = 0$.
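As a quick numerical check of these definitions, a minimal NumPy sketch (the library and the sample vectors are my own additions, not part of the notes), using the standard dot product as the inner product:

import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([2.0, 0.0, -1.0])

inner = np.dot(u, v)              # standard inner product <u, v>
norm_u = np.sqrt(np.dot(u, u))    # ||u|| = <u, u>^(1/2)
dist = np.linalg.norm(u - v)      # d(u, v) = ||u - v||

print(inner)                      # 0.0 -> u and v are orthogonal
print(norm_u, np.linalg.norm(u))  # both give 3.0
print(dist)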
1.6 Linear Transformations
Given two vector spaces V, W, a linear transformation is a map from the elements of one vector space to the other which obeys linearity:
\[ T : V \to W \]
with $\vec{u}, \vec{v} \in V$:
\[ T(\vec{u} + \vec{v}) = T(\vec{u}) + T(\vec{v}) \]
\[ T(c\vec{u}) = cT(\vec{u}) \]
2 Coordinate Systems
2.1 Bases
Choosing a basis $\{\vec{v}_1, \vec{v}_2, .., \vec{v}_n\}$ for an n-dimensional vector space V allows one to construct a coordinate system in which every vector is assigned a coordinate vector relative to the basis. The fact that the basis spans V means that every vector has a coordinate vector, and linear independence of the basis vectors implies that the coordinate vector representations are unique. In this way, once a basis has been chosen, addition and scalar multiplication of vectors correspond to addition and scalar multiplication of their coordinate vectors.
Furthermore, if V, W are n- and m-dimensional vector spaces respectively and a basis has been fixed for each, then any linear transformation T can be represented by an $m \times n$ matrix called the transformation matrix of T with respect to these bases.
2.2 Coordinate Vectors
Suppose $\vec{u}$ is a vector in V and that V has basis set $S = \{\vec{v}_1, \vec{v}_2, .., \vec{v}_n\}$; then
\[ \vec{u} = a_1\vec{v}_1 + a_2\vec{v}_2 + .. + a_n\vec{v}_n = [S][\vec{u}]_S \]
where $[S]$ is the matrix with the basis vectors as its columns and the coordinate vector is
\[ [\vec{u}]_S = (a_1, a_2, .., a_n) \]
When S is the standard basis $\{\vec{e}_1, \vec{e}_2, .., \vec{e}_n\}$, $[\vec{u}]_e = \vec{u}$.
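Finding a coordinate vector amounts to solving the linear system $[S]\,[\vec{u}]_S = \vec{u}$. A minimal NumPy sketch (the basis and vector are illustrative assumptions):

import numpy as np

# Basis S = {v1, v2} for R^2, stored as the columns of the matrix [S].
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # v1 = (1, 0), v2 = (1, 1)
u = np.array([3.0, 2.0])

# u = [S][u]_S, so the coordinate vector solves S a = u.
u_S = np.linalg.solve(S, u)
print(u_S)                   # [1. 2.] : u = 1*v1 + 2*v2
print(S @ u_S)               # recovers u = [3. 2.]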
2.4 Transformation Matrix
$T : \mathbb{R}^n \to \mathbb{R}^m$ with $[\vec{u}]_e \in \mathbb{R}^n$:
\[ T(\vec{u}) = A[\vec{u}]_e \]
where A is an $m \times n$ matrix,
\[ A = [T(\vec{e}_1)|T(\vec{e}_2)|..|T(\vec{e}_n)] \]
and $\{\vec{e}_1, \vec{e}_2, .., \vec{e}_n\}$ is the standard basis.
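A minimal NumPy sketch of building A column by column from the images of the standard basis vectors (the rotation map T is an illustrative assumption, not from the notes):

import numpy as np

# T rotates vectors in R^2 by 90 degrees counterclockwise.
def T(x):
    return np.array([-x[1], x[0]])

# A = [T(e1) | T(e2)]: apply T to each standard basis vector, use as columns.
n = 2
A = np.column_stack([T(e) for e in np.eye(n)])
print(A)                 # [[0. -1.], [1. 0.]]

u = np.array([3.0, 1.0])
print(T(u), A @ u)       # both give [-1. 3.]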
2.5 Change of Basis
Let A and B be two bases for V and let P be the change of basis matrix from B to A, so that
\[ [\vec{u}]_A = P[\vec{u}]_B \]
Let T be a linear transformation and C be its matrix with respect to the basis A and D be its matrix with respect to the basis B:
\[ [T(\vec{u})]_A = C[\vec{u}]_A \]
and
\[ [T(\vec{u})]_B = D[\vec{u}]_B \]
Combining these, $[T(\vec{u})]_A = P[T(\vec{u})]_B = PD[\vec{u}]_B = PDP^{-1}[\vec{u}]_A$, so $C = PDP^{-1}$ and $D = P^{-1}CP$.
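A minimal NumPy sketch verifying the relation (the matrices P and C are illustrative assumptions):

import numpy as np

# P: change of basis matrix taking B-coordinates to A-coordinates.
P = np.array([[2.0, 1.0],
              [1.0, 1.0]])
# C: matrix of T with respect to the basis A.
C = np.array([[3.0, 0.0],
              [0.0, 2.0]])

# D: matrix of the same T with respect to the basis B.
D = np.linalg.inv(P) @ C @ P

u_B = np.array([1.0, 4.0])    # some vector in B-coordinates
u_A = P @ u_B                 # [u]_A = P [u]_B
# Both routes give the same answer in A-coordinates:
print(C @ u_A)                # [T(u)]_A directly
print(P @ (D @ u_B))          # [T(u)]_B converted to A-coordinates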
3.1 Invertibility
The following statements are equivalent for an $n \times n$ matrix A:
A is invertible.
If $A\vec{x} = \vec{0}$, then $\vec{x}$ is the zero vector.
A reduces to the identity matrix I.
A is the product of elementary row operation matrices (A = LU). For every row operation there is a corresponding matrix $E_i$ that, when multiplied, does the same thing. Therefore knowing the row operations to reduce A to I is the same as: $E_3E_2E_1A = I$, then $E_3E_2E_1 = A^{-1}$ and $A = E_1^{-1}E_2^{-1}E_3^{-1}$; these inverse matrices correspond to the inverse row operations.
For every $\vec{b}$, $A\vec{x} = \vec{b}$ has exactly one solution $\vec{x}$.
$\det(A) \neq 0$
The dimensions of the null space and left null space are zero.
The rank is n.
The column vectors and the row vectors each form a basis for $\mathbb{R}^n$.
The transformation with matrix A is bijective (an automorphism of $\mathbb{R}^n$).
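A minimal NumPy sketch checking several of these equivalent conditions on one matrix (the matrix and right-hand side are illustrative assumptions):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

print(np.linalg.det(A))          # 1.0 != 0, so A is invertible
print(np.linalg.matrix_rank(A))  # rank n = 2

b = np.array([3.0, 2.0])
x = np.linalg.solve(A, b)        # the unique x with A x = b
print(x, A @ x)

A_inv = np.linalg.inv(A)
print(A_inv @ A)                 # identity matrix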
3.2 Inverse via the Adjugate
\[ A^{-1} = \frac{1}{\det(A)}\,\mathrm{adj}(A) \]
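A minimal NumPy sketch of the adjugate formula for a 2x2 matrix (the sample matrix is an illustrative assumption):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

# For a 2x2 matrix the adjugate swaps the diagonal and negates the rest.
adj = np.array([[ A[1, 1], -A[0, 1]],
                [-A[1, 0],  A[0, 0]]])

A_inv = adj / np.linalg.det(A)
print(A_inv)                 # [[ 1. -1.], [-1.  2.]]
print(np.linalg.inv(A))      # same result from NumPy directly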
3.3 Properties of Inverses
\[ (AB)^{-1} = B^{-1}A^{-1} \]
\[ (A^n)^{-1} = (A^{-1})^n \]
\[ (cA)^{-1} = \frac{1}{c}(A^{-1}) \]
\[ (A^T)^{-1} = (A^{-1})^T \]
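A minimal NumPy sketch verifying these properties numerically (the random well-conditioned matrices are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 3)) + 3 * np.eye(3)   # well-conditioned invertible matrix
B = rng.random((3, 3)) + 3 * np.eye(3)
inv = np.linalg.inv

print(np.allclose(inv(A @ B), inv(B) @ inv(A)))   # (AB)^-1 = B^-1 A^-1
print(np.allclose(inv(A @ A), inv(A) @ inv(A)))   # (A^2)^-1 = (A^-1)^2
print(np.allclose(inv(5 * A), inv(A) / 5))        # (cA)^-1 = (1/c) A^-1
print(np.allclose(inv(A.T), inv(A).T))            # (A^T)^-1 = (A^-1)^T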
4 Determinants
4.1 Cofactor Expansion
\[ \det(A) = \sum_{j=1}^{n} a_{ij}C_{ij} = \sum_{i=1}^{n} a_{ij}C_{ij}, \qquad C_{ij} = (-1)^{i+j}M_{ij} \]
where $M_{ij}$ is the minor obtained by deleting row i and column j of A.
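A minimal recursive Python sketch of cofactor expansion along the first row (the helper name det_cofactor and the sample matrix are assumptions; in practice np.linalg.det is far more efficient):

import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)  # M_1j
        total += (-1) ** j * A[0, j] * det_cofactor(minor)     # a_1j C_1j
    return total

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [1.0, 0.0, 6.0]])
print(det_cofactor(A), np.linalg.det(A))   # both 22.0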
4.2 Properties of Determinants
\[ \det(AB) = \det(A)\det(B) \]
\[ \det(A^{-1}) = \frac{1}{\det(A)} \]
\[ \det(A^T) = \det(A) \]
If A is triangular or diagonal, $\det(A)$ is the product of the entries on the diagonal.
If B is the matrix resulting from multiplying a row or column of A by c, then $\det(B) = c\det(A)$. Furthermore $\det(cA) = c^n\det(A)$.
If B is the matrix resulting from a row or column exchange of A, then $\det(B) = -\det(A)$.
If B is the matrix resulting from adding a multiple of one column or row to another, then $\det(B) = \det(A)$.
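A minimal NumPy sketch verifying these properties on random matrices (the matrices are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(1)
A = rng.random((3, 3))
B = rng.random((3, 3))
det = np.linalg.det

print(np.isclose(det(A @ B), det(A) * det(B)))   # det(AB) = det(A)det(B)
print(np.isclose(det(np.linalg.inv(A)), 1 / det(A)))
print(np.isclose(det(A.T), det(A)))
print(np.isclose(det(3 * A), 3 ** 3 * det(A)))   # det(cA) = c^n det(A)

A_swap = A[[1, 0, 2], :]                         # exchange two rows of A
print(np.isclose(det(A_swap), -det(A)))          # determinant changes sign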
4.3 Properties of the Transpose/Hermitian
\[ (A \pm B)^{T/H} = A^{T/H} \pm B^{T/H} \]
\[ (AB)^{T/H} = B^{T/H}A^{T/H} \]
4.4 Cramer's Rule
For $A\vec{x} = \vec{b}$, let $A_i$ be the matrix A with column i replaced by $\vec{b}$. Then
\[ x_1 = \frac{\det(A_1)}{\det(A)}, \quad x_2 = \frac{\det(A_2)}{\det(A)}, \quad \dots, \quad x_n = \frac{\det(A_n)}{\det(A)} \]
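A minimal NumPy sketch of Cramer's rule (the helper name cramer and the sample system are assumptions):

import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's rule: x_i = det(A_i) / det(A)."""
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        A_i = A.copy()
        A_i[:, i] = b            # replace column i of A with b
        x[i] = np.linalg.det(A_i) / det_A
    return x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(cramer(A, b))              # [0.8 1.4]
print(np.linalg.solve(A, b))     # same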
5 Orthogonality
5.1 Fundamental Subspaces
For an $m \times n$ matrix A:
Row Space: R(A), the space spanned by the matrix's rows.
Column Space: C(A), the space spanned by the matrix's columns.
Null Space: N(A), the space of all solutions to $A\vec{x} = \vec{0}$.
Left Null Space: $N(A^T)$, the space of all solutions to $A^T\vec{x} = \vec{0}$.
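A minimal NumPy sketch computing bases for these subspaces, using NumPy's SVD routine purely as a computational device (the rank-1 sample matrix is an illustrative assumption):

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])      # rank 1, so N(A) is 2-dimensional

r = np.linalg.matrix_rank(A)
U, s, Vt = np.linalg.svd(A)

row_space_basis  = Vt[:r]      # first r right singular vectors span R(A)
null_space_basis = Vt[r:]      # the rest span N(A)
col_space_basis  = U[:, :r]    # first r left singular vectors span C(A)
left_null_basis  = U[:, r:]    # the rest span N(A^T)

print(r)                               # 1
print(A @ null_space_basis.T)          # ~0: A x = 0 on the null space
print(A.T @ left_null_basis)           # ~0: A^T x = 0 on the left null space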
5.3 Vector Projection
The projection of $\vec{u}$ onto $\vec{v}$ is
\[ \mathrm{proj}_{\vec{v}}\,\vec{u} = \frac{\vec{u} \cdot \vec{v}}{\|\vec{v}\|^2}\,\vec{v} \]
5.4 Subspace Projection
Let $\vec{x}$ be a vector in $\mathbb{R}^n$ and let V be a subspace of $\mathbb{R}^n$ with basis $\{\vec{v}_1, \vec{v}_2, .., \vec{v}_m\}$. Any vector in V can be represented as a linear combination of the basis vectors $a_1\vec{v}_1 + a_2\vec{v}_2 + .. + a_m\vec{v}_m$.
Therefore the projection of $\vec{x}$ onto V can be represented as such a linear combination:
\[ \mathrm{proj}_V\,\vec{x} = a_1\vec{v}_1 + a_2\vec{v}_2 + .. + a_m\vec{v}_m = A\vec{a} \]
where A is the matrix with the basis vectors as its columns, so that
\[ C(A) = V \]
The orthogonal complement of the column space is the left null space, $C(A)^\perp = N(A^T)$. Therefore:
\[ V^\perp = N(A^T) \]
\[ (\vec{x} - \mathrm{proj}_V\,\vec{x}) \in N(A^T) \]
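A minimal NumPy sketch of projecting onto a subspace and checking that the residual lies in $N(A^T)$ (the plane and vector are illustrative assumptions; the formula $A(A^TA)^{-1}A^T$ is derived in the next section):

import numpy as np

# V = C(A): the plane in R^3 spanned by the columns of A.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
x = np.array([1.0, 2.0, 4.0])

# proj_V x = A (A^T A)^-1 A^T x
P = A @ np.linalg.inv(A.T @ A) @ A.T
proj = P @ x
residual = x - proj

print(proj)
print(A.T @ residual)    # ~0: the residual lies in N(A^T)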
5.5 Least Squares
For $A\vec{x} = \vec{b}$ where $\vec{b}$ lies outside the column space, the best possible approximate solution $\hat{x}$ solves $A\hat{x} = \mathrm{proj}_{C(A)}\,\vec{b}$. Projecting $\vec{b}$ onto C(A):
\[ \mathrm{proj}_{C(A)}\,\vec{b} = A(A^TA)^{-1}A^T\vec{b} \]
\[ A\hat{x} = A(A^TA)^{-1}A^T\vec{b} \]
\[ \hat{x} = (A^TA)^{-1}A^T\vec{b} \]
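A minimal NumPy sketch of the normal equations (the sample system is an illustrative assumption; np.linalg.lstsq is NumPy's built-in least squares routine):

import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 2.0, 4.0])    # b is not in C(A)

# Normal equations: x_hat = (A^T A)^-1 A^T b
x_hat = np.linalg.inv(A.T @ A) @ A.T @ b
print(x_hat)                     # [0.8333... 1.5]

# NumPy's least squares routine agrees:
print(np.linalg.lstsq(A, b, rcond=None)[0])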
5.6 Orthogonal Matrices
For an orthogonal matrix Q,
\[ Q^T = Q^{-1} \]
With respect to an orthogonal basis $\{\vec{q}_1, \vec{q}_2, ...\}$, a vector decomposes into its projections onto the basis vectors:
\[ \vec{u} = \frac{\vec{u} \cdot \vec{q}_1}{\|\vec{q}_1\|^2}\,\vec{q}_1 + \frac{\vec{u} \cdot \vec{q}_2}{\|\vec{q}_2\|^2}\,\vec{q}_2 + ... \]
If orthonormal:
\[ \vec{u} = (\vec{u} \cdot \vec{q}_1)\,\vec{q}_1 + (\vec{u} \cdot \vec{q}_2)\,\vec{q}_2 + ... \]
The projection formula with orthonormal columns in Q (a subspace with an orthonormal basis) takes the form:
\[ \mathrm{proj}_Q\,\vec{x} = QQ^T\vec{x} \]
This can also be done through vector projection:
\[ \mathrm{proj}_Q\,\vec{x} = \frac{\vec{x} \cdot \vec{q}_1}{\|\vec{q}_1\|^2}\,\vec{q}_1 + \frac{\vec{x} \cdot \vec{q}_2}{\|\vec{q}_2\|^2}\,\vec{q}_2 + ... \]
If orthonormal:
\[ \mathrm{proj}_Q\,\vec{x} = (\vec{x} \cdot \vec{q}_1)\,\vec{q}_1 + (\vec{x} \cdot \vec{q}_2)\,\vec{q}_2 + ... \]
The least squares formula with orthonormal columns takes the form:
\[ \hat{x} = Q^T\vec{b} \]
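A minimal NumPy sketch showing how orthonormal columns simplify projection and least squares (the matrix Q and vector b are illustrative assumptions):

import numpy as np

# Q: orthonormal basis for a plane in R^3 (columns are orthonormal).
Q = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
b = np.array([3.0, 4.0, 5.0])

print(Q.T @ Q)            # identity: columns are orthonormal
print(Q @ Q.T @ b)        # proj_Q b = Q Q^T b = [3. 4. 0.]
print(Q.T @ b)            # least squares solution x_hat = Q^T b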
5.7 Gram-Schmidt
To construct an orthogonal basis $\{\vec{q}_1, \vec{q}_2, .., \vec{q}_n\}$ given an arbitrary basis $\{\vec{v}_1, \vec{v}_2, .., \vec{v}_n\}$:
First let
\[ \vec{q}_1 = \vec{v}_1 \]
Now let each $\vec{q}_i$ equal its corresponding $\vec{v}_i$ minus the projections of $\vec{v}_i$ onto the previously orthogonalised vectors:
\[ \vec{q}_2 = \vec{v}_2 - \mathrm{proj}_{\vec{q}_1}\vec{v}_2 = \vec{v}_2 - \frac{\vec{v}_2 \cdot \vec{q}_1}{\|\vec{q}_1\|^2}\,\vec{q}_1 \]
\[ \vec{q}_3 = \vec{v}_3 - (\mathrm{proj}_{\vec{q}_2}\vec{v}_3 + \mathrm{proj}_{\vec{q}_1}\vec{v}_3) = \vec{v}_3 - \frac{\vec{v}_3 \cdot \vec{q}_2}{\|\vec{q}_2\|^2}\,\vec{q}_2 - \frac{\vec{v}_3 \cdot \vec{q}_1}{\|\vec{q}_1\|^2}\,\vec{q}_1 \]
To normalize the vectors, simply divide each by its length.
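A minimal NumPy sketch of the procedure (the function name gram_schmidt and the sample basis are assumptions; it subtracts the projections column by column, which agrees with the formulas above in exact arithmetic):

import numpy as np

def gram_schmidt(V):
    """Orthogonalize the columns of V; returns Q with orthogonal columns."""
    Q = V.astype(float).copy()
    for i in range(V.shape[1]):
        for j in range(i):
            q = Q[:, j]
            # subtract the projection onto the earlier orthogonalised q_j
            Q[:, i] -= (Q[:, i] @ q) / (q @ q) * q
    return Q

V = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
Q = gram_schmidt(V)
print(Q.T @ Q)                         # off-diagonal entries 0: orthogonal
Q_hat = Q / np.linalg.norm(Q, axis=0)  # normalize each column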
5.8 A = QR Factorization
Applying Gram-Schmidt to the columns of A and normalizing gives
\[ A = QR \]
where Q has orthonormal columns and $R = Q^TA$ is upper triangular.
5.9 Unitary Matrices
\[ U^H = U^{-1} \]
6.1 Eigenvalues and Eigenvectors
Eigenvalues are the values $\lambda$ corresponding to an $n \times n$ matrix A where $A\vec{v} = \lambda\vec{v}$ for some non-zero vector $\vec{v}$.
To find the $\lambda_i$:
\[ A\vec{v} = \lambda\vec{v} \]
\[ A\vec{v} - \lambda\vec{v} = 0 \]
\[ (A - \lambda I)\vec{v} = 0 \]
This is only true for non-zero $\vec{v}$ if the nullspace of $(A - \lambda I)$ is non-trivial, i.e.:
\[ \det(A - \lambda I) = 0 \]
This equation gives a characteristic polynomial of degree n which can be solved for the $\lambda_i$. Then, using $(A - \lambda_i I)\vec{v}_i = 0$, you can solve for the $\vec{v}_i$.
Properties
The eigenvalues of $A^2$ are $\lambda_i^2$ with the same eigenvectors.
The eigenvalues of $A^{-1}$ are $\frac{1}{\lambda_i}$ with the same eigenvectors.
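A minimal NumPy sketch computing eigenvalues and checking both properties (the symmetric sample matrix is an illustrative assumption):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, V = np.linalg.eig(A)      # eigenvalues and eigenvectors
print(lam)                     # [3. 1.]
v = V[:, 0]
print(A @ v, lam[0] * v)       # A v = lambda v

# Eigenvalues of A^2 are lambda^2; of A^-1 are 1/lambda.
print(np.linalg.eigvals(A @ A))             # [9. 1.]
print(np.linalg.eigvals(np.linalg.inv(A)))  # 1/3 and 1, up to ordering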
6.2 Diagonalization
\[ A\vec{v} = \lambda\vec{v} \]
Let $S = [\vec{v}_1|\vec{v}_2|..|\vec{v}_n]$ be the matrix with the eigenvectors as columns and let $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, .., \lambda_n)$. Then
\[ AS = S\Lambda \]
\[ A = S\Lambda S^{-1} \]
A needs n independent eigenvectors to be diagonalizable.
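A minimal NumPy sketch of diagonalization, including the classic payoff that powers of A become cheap (the sample matrix is an illustrative assumption):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, S = np.linalg.eig(A)        # S holds the eigenvectors as columns
Lam = np.diag(lam)               # Lambda: eigenvalues on the diagonal

print(np.allclose(A @ S, S @ Lam))                 # A S = S Lambda
print(np.allclose(A, S @ Lam @ np.linalg.inv(S)))  # A = S Lambda S^-1

# Diagonalization makes powers cheap: A^5 = S Lambda^5 S^-1
print(S @ np.diag(lam ** 5) @ np.linalg.inv(S))
print(np.linalg.matrix_power(A, 5))                # same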
6.4 Positive Definite Matrices
When a symmetric or Hermitian matrix has positive eigenvalues we call it positive definite. Equivalently, to test for positive definiteness: $\vec{v}^TA\vec{v} > 0$ for all non-zero $\vec{v}$.
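A minimal NumPy sketch of both tests (the sample matrix is an illustrative assumption; checking a few random vectors only illustrates the quadratic form condition, it does not prove it):

import numpy as np

A = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])       # symmetric

print(np.linalg.eigvalsh(A))       # [1. 3.]: all positive

# Quadratic form v^T A v > 0 on random vectors (almost surely non-zero):
rng = np.random.default_rng(2)
for _ in range(3):
    v = rng.standard_normal(2)
    print(v @ A @ v > 0)           # True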
6.5 Similar Matrices
Two matrices A and B are similar if $B = M^{-1}AM$ for some invertible matrix M. Similar matrices represent the same linear transformation in different bases and share the same eigenvalues.
6.6 SVD Factorization
Any real or complex matrix can be factorized into the form $A = U\Sigma V^{T/H}$ where U is an $m \times m$ unitary matrix, $\Sigma$ is an $m \times n$ diagonal matrix and $V^{T/H}$ is an $n \times n$ unitary matrix.
To find $V^{T/H}$ and $\Sigma$:
\[ A^{T/H} = V\Sigma U^{T/H} \]
\[ A^{T/H}A = V\Sigma U^{T/H}U\Sigma V^{T/H} \]
\[ A^{T/H}A = V\Sigma^2V^{T/H} \]
This is the eigendecomposition of $A^{T/H}A$, which means that the diagonal entries $\sigma_i$ of $\Sigma$ are the square roots of the eigenvalues of $A^{T/H}A$, and V is just the matrix of eigenvectors of $A^{T/H}A$. Similarly, multiplying on the right by $A^{T/H}$ will give you U.
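A minimal NumPy sketch of the factorization and the link between the singular values and the eigenvalues of $A^TA$ (the sample matrix is an illustrative assumption):

import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

U, s, Vt = np.linalg.svd(A)
Sigma = np.diag(s)
print(np.allclose(A, U @ Sigma @ Vt))        # A = U Sigma V^T

# The sigma_i are the square roots of the eigenvalues of A^T A.
print(np.sort(s ** 2))
print(np.sort(np.linalg.eigvalsh(A.T @ A)))  # same values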