Karianne Bergen
kbergen@stanford.edu
A = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix}
  = \begin{bmatrix}
      a_{11} & a_{12} & \cdots & a_{1n} \\
      a_{21} & a_{22} & \cdots & a_{2n} \\
      \vdots & \vdots & \ddots & \vdots \\
      a_{m1} & a_{m2} & \cdots & a_{mn}
    \end{bmatrix},
\qquad
A^T = \begin{bmatrix} a_1^T \\ a_2^T \\ \vdots \\ a_n^T \end{bmatrix}
Solutions:
1-norm:
\|v\|_1 = \sum_{i=1}^{3} |v_i| = 3 + 6 + 2 = 11

2-norm:
\|v\|_2 = \left( \sum_{i=1}^{3} v_i^2 \right)^{1/2} = \sqrt{3^2 + 6^2 + 2^2} = \sqrt{9 + 36 + 4} = \sqrt{49} = 7

\infty-norm:
\|v\|_\infty = \max_i |v_i| = \max\{3, 6, 2\} = 6
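The three norms above can be checked numerically; a minimal NumPy sketch, assuming v = (3, 6, 2) as in the exercise:

```python
import numpy as np

v = np.array([3.0, 6.0, 2.0])

norm1 = np.linalg.norm(v, 1)          # |3| + |6| + |2| = 11
norm2 = np.linalg.norm(v, 2)          # sqrt(9 + 36 + 4) = 7
norm_inf = np.linalg.norm(v, np.inf)  # max(3, 6, 2) = 6

print(norm1, norm2, norm_inf)  # 11.0 7.0 6.0
```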
u^T v = 0 \iff u \perp v

\mathrm{proj}_u v = \frac{u^T v}{u^T u} \, u
Solution:
\|x + y\|^2 = (x + y)^T (x + y)
            = x^T x + 2 x^T y + y^T y
            = \|x\|^2 + \|y\|^2 + 2 x^T y
Scalar multiplication:
(\alpha A)_{ij} = \alpha A_{ij}, \qquad
\alpha \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix}
= \begin{bmatrix} \alpha a_{11} & \cdots & \alpha a_{1n} \\ \vdots & \ddots & \vdots \\ \alpha a_{m1} & \cdots & \alpha a_{mn} \end{bmatrix}
Matrix addition:
(A + B)_{ij} = A_{ij} + B_{ij}

A + B = \begin{bmatrix} a_{11} + b_{11} & \cdots & a_{1n} + b_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} + b_{m1} & \cdots & a_{mn} + b_{mn} \end{bmatrix}, \qquad A, B \in \mathbb{R}^{m \times n}
Matrix product:
(AB)_{ij} = \sum_{k=1}^{p} A_{ik} B_{kj}, \qquad A \in \mathbb{R}^{m \times p}, \; B \in \mathbb{R}^{p \times n}

(AB)_{ij} = a_i^T b_j, \qquad A = \begin{bmatrix} a_1^T \\ a_2^T \\ \vdots \\ a_m^T \end{bmatrix}, \; B = \begin{bmatrix} b_1 & b_2 & \cdots & b_n \end{bmatrix}

Since inner products are only defined for vectors of the same length, the matrix product requires that a_i, b_j \in \mathbb{R}^p for all i, j.
AB = \begin{bmatrix} a_1^T \\ a_2^T \\ \vdots \\ a_m^T \end{bmatrix} B = \begin{bmatrix} a_1^T B \\ a_2^T B \\ \vdots \\ a_m^T B \end{bmatrix}
Matrix-matrix multiplication can be expressed in terms of the
concatenation of matrix-vector products.
For A \in \mathbb{R}^{k \times l} and B \in \mathbb{R}^{m \times n}, the product AB is only defined if l = m, and the product BA is only defined if k = n.
A(BC) = (AB)C
A(B + C) = AB + AC
AB \neq BA

For example, let A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} and B = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}; then

AB = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix} \neq \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix} = BA.
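The non-commutativity example translates directly into code:

```python
import numpy as np

# The 2x2 example above: AB != BA in general.
A = np.array([[1, 1],
              [0, 1]])
B = np.array([[1, 0],
              [1, 1]])

AB = A @ B  # [[2, 1], [1, 1]]
BA = B @ A  # [[1, 1], [1, 2]]

print(np.array_equal(AB, BA))  # False
```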
\left( xy^T \right)_{ij} = x_i y_j, \qquad x \in \mathbb{R}^m, \; y \in \mathbb{R}^n, \; xy^T \in \mathbb{R}^{m \times n}
The outer product xy T of vectors x and y is a matrix, and is
defined even if the vectors are not the same length.
Determine whether each expression below is defined:

1. \alpha x \quad 2. \alpha A \quad 3. xy^T \quad 4. x^T y \quad 5. x^T z \quad 6. zx^T \quad 7. Az \quad 8. z^T A y

\alpha = 5, \quad x = \begin{bmatrix} 3 \\ 5 \\ 2 \end{bmatrix}, \quad y = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \quad z = \begin{bmatrix} 4 \\ 0 \end{bmatrix}, \quad A = \begin{bmatrix} 4 & 0 & 2 \\ 1 & 5 & 3 \end{bmatrix}

Solutions:
1. \alpha x: Yes \quad 2. \alpha A: Yes \quad 3. xy^T: Yes \quad 4. x^T y: Yes
5. x^T z: No \quad 6. zx^T: Yes \quad 7. Az: No \quad 8. z^T A y: Yes
Properties:
(A^T)^T = A
(A + B)^T = A^T + B^T
(AB)^T = B^T A^T
(\alpha B)^T = \alpha (B^T)
Frobenius norm:
\|A\|_F = \sqrt{ \sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2 } = \sqrt{ \mathrm{trace}(A^T A) }
(diagram from Trefethen and Bau)
K. Bergen (ICME) Applied Linear Algebra 30 / 140
Properties of Matrix Norms
TRUE
Use the property that tr(XY ) = tr(Y X):
To show that tr(ABC) = tr(CAB), let X = AB, and Y = C.
To show that tr(ABC) = tr(BCA), let X = A and Y = BC.
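The cyclic property can be sanity-checked numerically on random matrices; a sketch (sizes chosen so all three products are defined), not part of the proof:

```python
import numpy as np

# tr(ABC) = tr(CAB) = tr(BCA) for compatible shapes.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 5))
C = rng.standard_normal((5, 3))

t1 = np.trace(A @ B @ C)  # trace of a 3x3 product
t2 = np.trace(C @ A @ B)  # trace of a 5x5 product
t3 = np.trace(B @ C @ A)  # trace of a 4x4 product

print(np.isclose(t1, t2) and np.isclose(t1, t3))  # True
```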
\|A\|_{\infty,1} = \max_{\|x\|_1 = 1} \|Ax\|_\infty = \max_{\|x\|_1 = 1} \max_i |a_i^T x| = \max_{i,j} |a_{ij}|.
L = \begin{bmatrix} l_{11} & & & \\ l_{21} & l_{22} & & \\ \vdots & \vdots & \ddots & \\ l_{m1} & l_{m2} & \cdots & l_{mm} \end{bmatrix}, \qquad
U = \begin{bmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ & u_{22} & \cdots & u_{2n} \\ & & \ddots & \vdots \\ & & & u_{nn} \end{bmatrix}
U T U = U U T = I.
\|Qx\|_2 = \|x\|_2

\|QA\|_2^2 = \lambda_{\max}(A^T Q^T Q A) = \lambda_{\max}(A^T A) = \|A\|_2^2

\|QA\|_F^2 = \mathrm{tr}(A^T Q^T Q A) = \mathrm{tr}(A^T A) = \|A\|_F^2
Solution (Q5):
Determine whether each statement given below is True or False:
Let D = \mathrm{diag}(d_1, \ldots, d_n) \in \mathbb{R}^{n \times n} be any diagonal matrix; then DA = AD for any matrix A \in \mathbb{R}^{n \times n}.
FALSE
DA rescales the rows of A while AD rescales the columns of A:
DA = \begin{bmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & d_n \end{bmatrix}
\begin{bmatrix} a_1^T \\ a_2^T \\ \vdots \\ a_n^T \end{bmatrix}
= \begin{bmatrix} d_1 a_1^T \\ d_2 a_2^T \\ \vdots \\ d_n a_n^T \end{bmatrix},
\qquad
AD = \begin{bmatrix} a_1 & \cdots & a_n \end{bmatrix}
\begin{bmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & d_n \end{bmatrix}
= \begin{bmatrix} d_1 a_1 & \cdots & d_n a_n \end{bmatrix}
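The row-versus-column rescaling is easy to see numerically; a small sketch with an assumed D = diag(1, 2, 3):

```python
import numpy as np

# DA rescales the rows of A; AD rescales the columns.
D = np.diag([1.0, 2.0, 3.0])
A = np.ones((3, 3))

print(D @ A)  # row i is multiplied by d_i
print(A @ D)  # column j is multiplied by d_j
print(np.array_equal(D @ A, A @ D))  # False for this A
```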
v = \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_r v_r, \qquad \alpha_i \in \mathbb{R}, \; i = 1, \ldots, r

\alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_r v_r = 0

\alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_r v_r = 0 \;\Longrightarrow\; \alpha_1 = \cdots = \alpha_r = 0
N(A) := \{ z \in \mathbb{R}^n \mid Az = 0 \}, \qquad N(A) \subseteq \mathbb{R}^n
The range and nullspace of AT are called the row space and left
nullspace of A.
▶ These four subspaces of A are intrinsic to A and do not depend on the choice of basis.
Corollary (Rank-nullity):
Ax = Ay \;\Longrightarrow\; x = y.
AX = A \begin{bmatrix} x_1 & \cdots & x_n \end{bmatrix} = \begin{bmatrix} Ax_1 & \cdots & Ax_n \end{bmatrix} = \begin{bmatrix} e_1 & \cdots & e_n \end{bmatrix} = I
(A + B)^2 = A^2 + 2AB + B^2.

(ABA^{-1})^3 = AB^3A^{-1}.

A^{-1}B is invertible and (A^{-1}B)^{-1} = B^{-1}A.
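The second and third identities hold, while the first fails unless A and B commute; a numerical sketch on random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
Ainv = np.linalg.inv(A)

# (A B A^{-1})^3 = A B^3 A^{-1}: the inner A^{-1} A factors cancel.
lhs = np.linalg.matrix_power(A @ B @ Ainv, 3)
rhs = A @ np.linalg.matrix_power(B, 3) @ Ainv
print(np.allclose(lhs, rhs))  # True

# (A + B)^2 = A^2 + AB + BA + B^2, which is NOT A^2 + 2AB + B^2
# unless AB = BA.
lhs2 = (A + B) @ (A + B)
rhs2 = A @ A + 2 * (A @ B) + B @ B
print(np.allclose(lhs2, rhs2))  # False for generic A, B
```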
2x_1 + x_2 = 7 \quad \text{and} \quad x_1 - 3x_2 = 7

can be written as Ax = b.
(diagram from http://oak.ucc.nau.edu/jws8/3equations3unknowns.html)
Gaussian Elimination (row reduction)
L_3 L_2 L_1 such that L_3 L_2 L_1 A = A_3:

\begin{bmatrix} 1/2 & 0 & 0 \\ -1 & 1 & 0 \\ -2 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 2 & 8 & 4 \\ 2 & 5 & 1 \\ 4 & 10 & -1 \end{bmatrix}
= \begin{bmatrix} 1 & 4 & 2 \\ 0 & -3 & -3 \\ 0 & -6 & -9 \end{bmatrix}
L_{n-1} \cdots L_1 A = U \quad \text{or} \quad L^{-1} A = U, \qquad L = L_1^{-1} \cdots L_{n-1}^{-1}

A = LU.
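The elimination steps above can be sketched as a no-pivot LU factorization (the example matrix entries are assumed from context; production codes pivot rows for numerical stability):

```python
import numpy as np

def lu_nopivot(A):
    """Gaussian elimination without pivoting: returns L, U with A = L @ U."""
    A = A.astype(float)
    n = A.shape[0]
    L = np.eye(n)
    U = A.copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]     # elimination multiplier
            U[i, k:] -= L[i, k] * U[k, k:]  # zero out entry (i, k)
    return L, U

A = np.array([[2.0,  8.0,  4.0],
              [2.0,  5.0,  1.0],
              [4.0, 10.0, -1.0]])
L, U = lu_nopivot(A)
print(np.allclose(A, L @ U))  # True
```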
(see Trefethen and Bau, Chapter 21, for details)
Cholesky and Positive Definiteness
Positive definite matrix: a symmetric matrix A \in \mathbb{R}^{n \times n} is positive definite if x^T A x > 0 for all x \neq 0.
Example:
A = \begin{bmatrix} 4 & -2 \\ -2 & 10 \end{bmatrix}

x^T A x = \begin{bmatrix} x_1 & x_2 \end{bmatrix} \begin{bmatrix} 4 & -2 \\ -2 & 10 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= 4x_1^2 - 4x_1x_2 + 10x_2^2
= (2x_1 - x_2)^2 + 9x_2^2 > 0 \quad \forall x \neq 0
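A successful Cholesky factorization is a practical test of positive definiteness; a sketch using the example matrix above:

```python
import numpy as np

A = np.array([[ 4.0, -2.0],
              [-2.0, 10.0]])

# Cholesky succeeds exactly when A is symmetric positive definite.
R = np.linalg.cholesky(A)       # lower triangular, A = R @ R.T
print(np.allclose(A, R @ R.T))  # True

# Spot-check x^T A x > 0 on a few random nonzero vectors.
rng = np.random.default_rng(2)
x = rng.standard_normal((2, 5))
vals = [x[:, k] @ A @ x[:, k] for k in range(5)]
print(all(v > 0 for v in vals))  # True
```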
Q^T Q = \begin{bmatrix} q_1^T \\ \vdots \\ q_n^T \end{bmatrix} \begin{bmatrix} q_1 & \cdots & q_n \end{bmatrix} = I_n
P_A = AA^T
A = QR
Ax = b \;\Longrightarrow\; QRx = b \;\Longrightarrow\; Rx = Q^T b
Hint: All three columns are normalized to length 1. The first and
second columns are orthogonal, and the second and third columns
are orthogonal. The projection operator for normalized vectors is
proju v = (uT v)u.
q_1 = a_1, \qquad q_2 = a_2

v_3 = a_3 - \mathrm{proj}_{q_1} a_3 - \mathrm{proj}_{q_2} a_3 = a_3 - (a_1^T a_3)\,a_1 - (a_2^T a_3)\,a_2

q_3 = \frac{v_3}{\|v_3\|_2}
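The same project-and-normalize steps generalize to classical Gram-Schmidt; a sketch (assumes full column rank; classical Gram-Schmidt is known to be less numerically stable than modified Gram-Schmidt or Householder QR):

```python
import numpy as np

def gram_schmidt(A):
    """Classical Gram-Schmidt on the columns of A.

    Returns Q with orthonormal columns spanning R(A);
    assumes A has full column rank.
    """
    m, n = A.shape
    Q = np.zeros((m, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            v -= (Q[:, i] @ A[:, j]) * Q[:, i]  # subtract proj_{q_i} a_j
        Q[:, j] = v / np.linalg.norm(v)         # normalize
    return Q

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3))
Q = gram_schmidt(A)
print(np.allclose(Q.T @ Q, np.eye(3)))  # True
```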
\|r\|_2^2 = \|b - Ax\|_2^2.

A^T A x = A^T b

P_A = A (A^T A)^{-1} A^T

by solving Ax = b_1 or A^T A x = A^T b
Hints:
R(A) = \left\{ \alpha \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + \beta \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} : \alpha, \beta \in \mathbb{R} \right\}, \qquad
N(A^T) = \left\{ \gamma \begin{bmatrix} 0 \\ 1 \\ -1 \end{bmatrix} : \gamma \in \mathbb{R} \right\},

b = 1 \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + 1.5 \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} + 0.5 \begin{bmatrix} 0 \\ 1 \\ -1 \end{bmatrix}
Solution:
b = b_1 + b_2 = \begin{bmatrix} 1 \\ 1.5 \\ 1.5 \end{bmatrix} + \begin{bmatrix} 0 \\ 0.5 \\ -0.5 \end{bmatrix}, \qquad
A^T A = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}, \qquad
A^T b = \begin{bmatrix} 1 \\ 3 \end{bmatrix}

Ax = b_1: \quad \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 1.5 \\ 1.5 \end{bmatrix} \;\Longrightarrow\; \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 1.5 \end{bmatrix}

A^T A x = A^T b: \quad \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 3 \end{bmatrix} \;\Longrightarrow\; \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 1.5 \end{bmatrix}
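The worked example can be reproduced numerically; a sketch, assuming A = [[1,0],[0,1],[0,1]] and b = (1, 2, 1) as in the hints above:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0]])
b = np.array([1.0, 2.0, 1.0])

# Normal equations: A^T A x = A^T b
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Library least-squares solver for comparison
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(x_normal)                           # close to [1, 1.5]
print(np.allclose(x_normal, x_lstsq))     # True
```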
Least Squares via QR
AT Ax = AT b
RT QT QRx = RT QT b
RT Rx = RT QT b
Rx = QT b
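The reduction to Rx = Q^T b can be checked against a library solver; a sketch on random data:

```python
import numpy as np

# Least squares via QR: with A = QR, the normal equations
# reduce to the triangular system R x = Q^T b.
rng = np.random.default_rng(4)
A = rng.standard_normal((6, 3))
b = rng.standard_normal(6)

Q, R = np.linalg.qr(A)              # reduced QR: Q is 6x3, R is 3x3
x_qr = np.linalg.solve(R, Q.T @ b)  # solve R x = Q^T b (R is triangular)

x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_qr, x_ref))  # True
```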
Vector-by-vector derivatives:
y \in \mathbb{R}^m, \; x \in \mathbb{R}^n, \qquad
\frac{\partial y}{\partial x} =
\begin{bmatrix}
\frac{\partial y_1}{\partial x_1} & \frac{\partial y_2}{\partial x_1} & \cdots & \frac{\partial y_m}{\partial x_1} \\
\frac{\partial y_1}{\partial x_2} & \frac{\partial y_2}{\partial x_2} & \cdots & \frac{\partial y_m}{\partial x_2} \\
\vdots & \vdots & & \vdots \\
\frac{\partial y_1}{\partial x_n} & \frac{\partial y_2}{\partial x_n} & \cdots & \frac{\partial y_m}{\partial x_n}
\end{bmatrix}
Vector-by-vector identities:
(see http://en.wikipedia.org/wiki/Matrix_calculus for more properties)
Eigenvalues and Eigenvectors
For any square matrix A \in \mathbb{R}^{n \times n}, there is at least one scalar \lambda (possibly complex) and a corresponding vector v \neq 0 such that:

Av = \lambda v \quad \text{or equivalently} \quad (A - \lambda I)v = 0.
Figure: Under the transformation matrix A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}, the directions of vectors parallel to v_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix} (blue) and v_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix} (purple) are preserved.
(diagram from wikipedia.org/wiki/Eigenvalues_and_eigenvectors)
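The eigenpairs of the 2x2 example can be confirmed numerically:

```python
import numpy as np

# The matrix from the figure: eigenvectors (1, 1) and (1, -1).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, V = np.linalg.eig(A)

print(np.allclose(sorted(w), [1.0, 3.0]))  # True

# Each column of V is an eigenvector: A v = lambda v.
for lam, v in zip(w, V.T):
    print(np.allclose(A @ v, lam * v))  # True, True
```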
... Eigenvalues and Eigenvectors
\Lambda(A) := \{ \lambda \in \mathbb{C} \mid A - \lambda I \text{ is singular} \}

\rho(A) = \max_{\lambda_i \in \Lambda(A)} |\lambda_i|.
P^{-1} A P = D, where D is diagonal.
▶ Note: this decomposition does not exist for every square matrix A;
e.g. A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} cannot be diagonalized.
Av = \lambda v
A^{-1}(Av) = A^{-1}(\lambda v)
v = \lambda (A^{-1} v)
\tfrac{1}{\lambda} v = A^{-1} v
\det(A - \lambda I) = (2 - \lambda)(2 - \lambda) - 1 \cdot 1 = \lambda^2 - 4\lambda + 3
Therefore \quad Az = A(\alpha_1 v_1 + \cdots + \alpha_n v_n)
= \alpha_1 A v_1 + \cdots + \alpha_n A v_n
= \alpha_1 \lambda_1 v_1 + \cdots + \alpha_n \lambda_n v_n
(diagram: http://people.sc.fsu.edu/~jburkardt/latex/fsu_2006/svd.png)
Singular Values
Frobenius norm:
\|A\|_F = \sqrt{ \sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2 } = \sqrt{ \mathrm{trace}(A^T A) } = \sqrt{ \sum_{i=1}^{r} \sigma_i^2 }
A^T A = (V \Sigma U^T)(U \Sigma V^T) = V \Sigma^2 V^T, \quad \text{and similarly} \quad AA^T = U \Sigma^2 U^T.

\sigma_i = \sigma_i(A) = \sqrt{ \lambda_i(A^T A) } = \sqrt{ \lambda_i(AA^T) }, \qquad i = 1, \ldots, r
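The relation between singular values and the eigenvalues of A^T A can be verified directly; a sketch on a random matrix:

```python
import numpy as np

# sigma_i(A) = sqrt(lambda_i(A^T A)): compare SVD to eig of A^T A.
rng = np.random.default_rng(5)
A = rng.standard_normal((5, 3))

sigma = np.linalg.svd(A, compute_uv=False)        # singular values, descending
lam = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]  # eigenvalues of A^T A, descending

print(np.allclose(sigma, np.sqrt(lam)))  # True
```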
where the minimum is attained by B^\star = A_k = \sum_{i=1}^{k} \sigma_i u_i v_i^T.
▶ Application: image compression
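A truncated-SVD sketch of the rank-k approximation; by the Eckart-Young theorem, the 2-norm error of the best rank-k approximation is the first discarded singular value:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((8, 6))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Keep the k largest singular triplets: A_k = sum_{i<=k} sigma_i u_i v_i^T.
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The spectral-norm error equals sigma_{k+1}.
err = np.linalg.norm(A - A_k, 2)
print(np.isclose(err, s[k]))  # True
```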
Hint: Use the SVD and unitary invariance of the Euclidean vector norm (\|Ux\|_2 = \|x\|_2).
\|A\|_2 = \max_{\|x\|_2 = 1, \, \|y\|_2 = 1} y^T A x
        = \max_{\|V^T x\|_2 = 1, \, \|U^T y\|_2 = 1} (y^T U) \, \Sigma \, (V^T x)
        = \max_{\|\tilde{x}\|_2 = 1, \, \|\tilde{y}\|_2 = 1} \tilde{y}^T \Sigma \tilde{x}
        = \max_i \sigma_{ii}
        = \sigma_{\max}(A)
Polar decomposition:
Show how any square matrix A = U \Sigma V^T \in \mathbb{R}^{n \times n} can be written as
A = QS, \quad Q \text{ orthogonal}, \; S \text{ symmetric positive semi-definite}.
Solution:
A = U \Sigma V^T = U I_n \Sigma V^T = U (V^T V) \Sigma V^T = (U V^T)(V \Sigma V^T)
A = QS, \text{ where } Q = U V^T \text{ and } S = V \Sigma V^T.
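The construction Q = UV^T, S = VΣV^T can be carried out numerically; a sketch on a random square matrix:

```python
import numpy as np

# Polar decomposition via the SVD: A = (U V^T)(V Sigma V^T) = Q S.
rng = np.random.default_rng(7)
A = rng.standard_normal((4, 4))

U, s, Vt = np.linalg.svd(A)
Q = U @ Vt                  # orthogonal factor
S = Vt.T @ np.diag(s) @ Vt  # symmetric positive semi-definite factor

print(np.allclose(A, Q @ S))                        # True
print(np.allclose(Q.T @ Q, np.eye(4)))              # True
print(np.allclose(S, S.T))                          # True
print(bool(np.all(np.linalg.eigvalsh(S) >= -1e-10)))  # True
```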
The End!