
1 Matrix Fundamentals

$$A = (a_{ij}) = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix} \in \mathbb{R}^{m \times n}, \qquad A = (a_1, \ldots, a_n) \text{ (column partition)}$$

1.1 Kronecker Product

For $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{p \times q}$,

$$A \otimes B = (a_{ij} B) \in \mathbb{R}^{mp \times nq}$$

1.2 Hadamard Product

For $A, B \in \mathbb{R}^{m \times n}$,

$$A \circ B = (a_{ij} b_{ij}) \in \mathbb{R}^{m \times n}$$
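A quick numerical check of both products (a minimal numpy sketch; the matrices are arbitrary examples):

```python
import numpy as np

# A is 2x3, B is 2x2, so A (x) B is (2*2) x (3*2) = 4x6
A = np.array([[1., 2., 3.],
              [4., 5., 6.]])
B = np.array([[0., 1.],
              [1., 0.]])

K = np.kron(A, B)          # Kronecker product: block (i, j) equals a_ij * B
print(K.shape)             # (4, 6)

# Hadamard product requires matching shapes: elementwise a_ij * b_ij
C = np.array([[1., 0., 2.],
              [0., 3., 0.]])
H = A * C                  # numpy's * on arrays is exactly the Hadamard product
print(H)
```

Note that `np.kron` returns the $mp \times nq$ block matrix directly, while `*` on two same-shape arrays gives the Hadamard product.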

1.3 Range and Nullspace

For $A \in \mathbb{R}^{m \times n}$:

$$\mathrm{range}(A) = \{y \mid y = Ax \text{ for some } x \in \mathbb{R}^n\}$$

$$\mathrm{null}(A) = \{x \mid Ax = 0\}$$

Theorem 1.1 $\mathrm{range}(A) = \mathrm{span}\{a_1, \ldots, a_n\}$
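Orthonormal bases for both subspaces can be read off the SVD; a sketch on a deliberately rank-deficient example (the matrix and tolerance are illustrative choices):

```python
import numpy as np

# Rank-deficient 3x3 example: third column = col1 + col2
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.]])

U, s, Vt = np.linalg.svd(A)
tol = 1e-10
r = int((s > tol).sum())          # numerical rank

range_basis = U[:, :r]            # orthonormal basis of range(A)
null_basis = Vt[r:].T             # orthonormal basis of null(A)

# every null-space vector x satisfies A x = 0
print(r, np.linalg.norm(A @ null_basis))
```

Theorem 1.1 shows up here as `range_basis` spanning the same subspace as the columns of `A`.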

2 Orthogonal Vectors and Matrices

2.1 Inner Product

Inner product of vectors $x, y \in \mathbb{R}^n$: $x^T y = \sum_{i=1}^{n} x_i y_i$

The Euclidean length of $x$ may be written as $\|x\|_2$, where $\|x\|_2 = \sqrt{x^T x} = \left(\sum_{i=1}^{n} x_i^2\right)^{1/2}$

The cosine of the angle between $x$ and $y$ is $\cos\theta = \dfrac{x^T y}{\|x\|_2 \, \|y\|_2}$

2.2 Orthogonal Vectors

If $x^T y = 0$, then $x$ and $y$ are orthogonal.
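The definitions above in numpy (the two vectors are a made-up orthogonal pair):

```python
import numpy as np

x = np.array([1., 2., 2.])
y = np.array([2., -2., 1.])

inner = x @ y                                   # x^T y = sum_i x_i y_i
cos_theta = inner / (np.linalg.norm(x) * np.linalg.norm(y))

print(inner, cos_theta)   # 0.0 0.0 -> x and y are orthogonal
```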

2.3 Orthogonal Sets

Two sets $X, Y$ are orthogonal if for every $x \in X$ and every $y \in Y$, $x^T y = 0$.

A set of nonzero vectors $S$ is orthogonal if its elements are pairwise orthogonal: for all $x, y \in S$ with $x \neq y$, $x^T y = 0$.

2.4 Orthonormal Sets

A set is orthonormal if it is orthogonal and every vector has unit length: $x^T y = 0$ for $x \neq y$, and $x^T x = 1$.

Theorem 2.1 The vectors in an orthogonal set $S$ are linearly independent.

2.5 Orthogonal Matrix

$Q \in \mathbb{R}^{m \times m}$ is orthogonal if $Q^T Q = Q Q^T = I_m$.

$Q \in \mathbb{C}^{m \times m}$ is unitary if $Q^H Q = Q Q^H = I_m$, where $Q^H$ is the conjugate transpose of $Q$.

$Q^{-1} = Q^T$

$|\det(Q)| = 1$

$\|Qx\|_2 = \|x\|_2$
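All three properties can be checked numerically; a sketch that obtains an orthogonal $Q$ from the QR factorization of a random matrix (the seed and size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
# QR of a random square matrix yields an orthogonal factor Q
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))

I = np.eye(4)
print(np.allclose(Q.T @ Q, I))                 # Q^T Q = I
print(np.allclose(Q @ Q.T, I))                 # Q Q^T = I
print(np.isclose(abs(np.linalg.det(Q)), 1.0))  # |det Q| = 1

x = rng.standard_normal(4)
print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))  # lengths preserved
```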

2.6 Column Orthogonal Matrix

$Q_1 \in \mathbb{R}^{m \times n}$ with $n < m$ is column orthogonal if $Q_1^T Q_1 = I_n$; note $Q_1 Q_1^T \neq I_m$ (since $\mathrm{rank}(Q_1) = n < m$).

However, we can construct a square orthogonal matrix from a column orthogonal matrix by adding columns: take $Q_2 \in \mathbb{R}^{m \times (m-n)}$ with $Q_2^T Q_2 = I_{m-n}$ and $Q_1^T Q_2 = 0$; then $Q = [Q_1 \; Q_2]$ is orthogonal.
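One way to build the completion $Q_2$ is the full ("complete") QR factorization; a sketch with an arbitrary $5 \times 2$ example:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 2))          # m = 5, n = 2

# mode='complete' returns a full 5x5 orthogonal Q whose first n columns span range(A)
Q, _ = np.linalg.qr(A, mode='complete')
Q1, Q2 = Q[:, :2], Q[:, 2:]              # Q1: 5x2 column orthogonal, Q2: 5x3 completion

print(np.allclose(Q1.T @ Q1, np.eye(2)))   # Q1^T Q1 = I_n
print(np.allclose(Q2.T @ Q2, np.eye(3)))   # Q2^T Q2 = I_{m-n}
print(np.allclose(Q1.T @ Q2, 0))           # Q1^T Q2 = 0
print(np.allclose(Q1 @ Q1.T, np.eye(5)))   # False: Q1 Q1^T is a projection, not I_m
```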

3 Rank

column rank = row rank

Proposition 3.1 Let $A$ and $B$ be $m \times n$ and $n \times p$ matrices. Then

$\mathrm{rank}(A) = \mathrm{rank}(A^T) = \mathrm{rank}(A^T A) = \mathrm{rank}(A A^T)$

$\mathrm{rank}(AB) \le \min\{\mathrm{rank}(A), \mathrm{rank}(B)\}$

$\mathrm{rank}(AB) = \mathrm{rank}(A)$ if $B$ is a square matrix of full rank

$\mathrm{rank}(A + B) \le \mathrm{rank}(A) + \mathrm{rank}(B)$
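Proposition 3.1 can be spot-checked with `np.linalg.matrix_rank` (random matrices of the stated shapes; a numeric sketch, not a proof):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))   # m x n
B = rng.standard_normal((3, 5))   # n x p

rank = np.linalg.matrix_rank
print(rank(A), rank(A.T), rank(A.T @ A), rank(A @ A.T))   # all equal
print(rank(A @ B) <= min(rank(A), rank(B)))               # True
print(rank(A + A) <= rank(A) + rank(A))                   # True
```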
Theorem 3.1 $\mathrm{rank}\begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} = \mathrm{rank}(A) + \mathrm{rank}(B)$, and swapping block rows or block columns leaves the rank unchanged:

$$\mathrm{rank}\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \mathrm{rank}\begin{pmatrix} B & A \\ D & C \end{pmatrix} = \mathrm{rank}\begin{pmatrix} C & D \\ A & B \end{pmatrix} = \mathrm{rank}\begin{pmatrix} D & C \\ B & A \end{pmatrix}$$
Theorem 3.2 (Rank factorization) Let $A$ be an $m \times n$ nonnull matrix of rank $r$. Then there exist an $m \times r$ matrix $B$ and an $r \times n$ matrix $T$ such that $A = BT$. Moreover, for any $m \times r$ matrix $B$ and $r \times n$ matrix $T$ such that $A = BT$, $\mathrm{rank}(B) = \mathrm{rank}(T) = r$.
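One concrete way to realize the factorization (a sketch using the truncated SVD; any full-rank factorization would do):

```python
import numpy as np

# A 4x3 matrix of rank 2: third column = sum of the first two
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.],
              [2., 0., 2.]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = int((s > 1e-10).sum())

B = U[:, :r] * s[:r]        # m x r: columns of U scaled by the singular values
T = Vt[:r]                  # r x n

print(r, np.allclose(A, B @ T))
```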

4 Inverse

Let $A = (a_1, \ldots, a_m)$ be an $n \times m$ matrix with $\mathrm{rank}(A) = m$, whose columns $\{a_1, \ldots, a_m\}$ form a basis of $\mathbb{R}^n$ (so $m = n$). Any vector in $\mathbb{R}^n$ is then a linear combination of these basis vectors; in particular, each standard basis vector $e_i = (0, \ldots, 0, 1, 0, \ldots, 0)^T$ (1 in position $i$) can be so expanded:

$$I = [e_1, \ldots, e_m] = (a_1, \ldots, a_m) \begin{pmatrix} b_{11} & \cdots & b_{1m} \\ \vdots & \ddots & \vdots \\ b_{m1} & \cdots & b_{mm} \end{pmatrix} = AB$$

$B$ is the right inverse of $A$. Similarly, $I = CA$, and $C$ is called the left inverse of $A$. Since $C = CI = CAB = B$, the left inverse of a matrix equals its right inverse, and we write $AA^{-1} = A^{-1}A = I$.
Proposition 4.1 For $A \in \mathbb{R}^{m \times m}$, the following conditions are equivalent:

- $A$ has an inverse $A^{-1}$
- $\mathrm{rank}(A) = m$
- $\mathrm{range}(A) = \mathbb{R}^m$
- $\mathrm{null}(A) = \{0\}$
- $0$ is not an eigenvalue of $A$
- $0$ is not a singular value of $A$
- $\det(A) \neq 0$
Proposition 4.2 (The Sherman-Morrison-Woodbury formula) Let $A \in \mathbb{R}^{m \times m}$ and $D \in \mathbb{R}^{n \times n}$ be invertible, with $B \in \mathbb{R}^{m \times n}$ and $C \in \mathbb{R}^{n \times m}$. Then

$$(A - BD^{-1}C)^{-1} = A^{-1} + A^{-1}B\,(D - CA^{-1}B)^{-1}\,CA^{-1}$$
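A numeric sanity check of the identity (the particular $A$, $B$, $C$, $D$ below are arbitrary invertible choices):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 5, 2
A = np.eye(m) * 3.0                      # m x m, invertible
B = rng.standard_normal((m, n))
C = rng.standard_normal((n, m))
D = np.eye(n) * 2.0                      # n x n, invertible

Ai = np.linalg.inv(A)
lhs = np.linalg.inv(A - B @ np.linalg.inv(D) @ C)
rhs = Ai + Ai @ B @ np.linalg.inv(D - C @ Ai @ B) @ C @ Ai

print(np.allclose(lhs, rhs))
```

The practical point: when $m \gg n$, the right-hand side only inverts the small $n \times n$ matrix $D - CA^{-1}B$ (given a cheap $A^{-1}$).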

Proposition 4.3 (Block matrix inverses) Let $A$, $B$, $C$, $D$ be matrices of compatible sizes.

$$\begin{pmatrix} A & 0 \\ 0 & D \end{pmatrix}^{-1} = \begin{pmatrix} A^{-1} & 0 \\ 0 & D^{-1} \end{pmatrix}$$

$$\begin{pmatrix} I_n & 0 \\ V & I_m \end{pmatrix}^{-1} = \begin{pmatrix} I_n & 0 \\ -V & I_m \end{pmatrix}$$

$$\begin{pmatrix} A & 0 \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} A^{-1} & 0 \\ -D^{-1}CA^{-1} & D^{-1} \end{pmatrix}$$

$$\begin{pmatrix} A & B \\ 0 & D \end{pmatrix}^{-1} = \begin{pmatrix} A^{-1} & -A^{-1}BD^{-1} \\ 0 & D^{-1} \end{pmatrix}$$

$$\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} A^{-1} + A^{-1}BQ^{-1}CA^{-1} & -A^{-1}BQ^{-1} \\ -Q^{-1}CA^{-1} & Q^{-1} \end{pmatrix}, \qquad Q = D - CA^{-1}B \text{ (the Schur complement)}$$

$\begin{pmatrix} A & B \\ C & D \end{pmatrix}$ is invertible iff $A$ and $Q$ are invertible.
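The last block-inverse formula can be verified directly against `np.linalg.inv` (random well-conditioned blocks; a sketch, not a general-purpose solver):

```python
import numpy as np

rng = np.random.default_rng(4)
A = np.eye(3) + 0.1 * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 3))
D = np.eye(2) + 0.1 * rng.standard_normal((2, 2))

M = np.block([[A, B], [C, D]])
Ai = np.linalg.inv(A)
Q = D - C @ Ai @ B                        # Schur complement of A
Qi = np.linalg.inv(Q)

# Assemble the inverse blockwise, per Proposition 4.3
Minv = np.block([[Ai + Ai @ B @ Qi @ C @ Ai, -Ai @ B @ Qi],
                 [-Qi @ C @ Ai,              Qi]])

print(np.allclose(Minv, np.linalg.inv(M)))
```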



The Schur complement arises from block elimination:

$$S = \begin{pmatrix} I_m & -BD^{-1} \\ 0 & I_n \end{pmatrix}, \qquad P = S \begin{pmatrix} A & B \\ C & D \end{pmatrix} = \begin{pmatrix} A - BD^{-1}C & 0 \\ C & D \end{pmatrix}$$

$$T = \begin{pmatrix} I_m & 0 \\ -D^{-1}C & I_n \end{pmatrix}, \qquad PT = \begin{pmatrix} A - BD^{-1}C & 0 \\ 0 & D \end{pmatrix}$$

$$S \begin{pmatrix} A & B \\ C & D \end{pmatrix} T = \begin{pmatrix} A - BD^{-1}C & 0 \\ 0 & D \end{pmatrix}$$

 



Writing $\Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}$ and $\Lambda = \begin{pmatrix} \Lambda_{11} & \Lambda_{12} \\ \Lambda_{21} & \Lambda_{22} \end{pmatrix}$ with $\Lambda = \Sigma^{-1}$, the block-inverse formulas give

$$\Lambda_{11}^{-1} = \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}, \qquad \Lambda_{11}^{-1}\Lambda_{12} = -\Sigma_{12}\Sigma_{22}^{-1}, \qquad \Lambda_{21}\Lambda_{11}^{-1} = -\Sigma_{22}^{-1}\Sigma_{21}$$

Probabilistic graphical model: let $X = (X_1, \ldots, X_n)^T \sim N(0, \Sigma)$. Then $X_i \perp X_j \mid X_{\text{rest}}$ iff $\Lambda_{ij} = 0$, and $X_i \perp X_j$ iff $\Sigma_{ij} = 0$.
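A small numeric illustration of the precision-matrix reading (the tridiagonal $\Lambda$ is a made-up example encoding the chain $X_1 - X_2 - X_3$):

```python
import numpy as np

# Precision matrix Lambda for X = (X1, X2, X3) ~ N(0, Sigma), Sigma = Lambda^{-1}.
# Lambda[0, 2] = 0 encodes X1 _||_ X3 | X2 in the Gaussian graphical model.
Lam = np.array([[ 2., -1.,  0.],
                [-1.,  2., -1.],
                [ 0., -1.,  2.]])

Sigma = np.linalg.inv(Lam)
print(Lam[0, 2])      # 0.0: X1 and X3 are conditionally independent given X2
print(Sigma[0, 2])    # nonzero: X1 and X3 are still marginally dependent
```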

5 Trace

The trace of an $m \times m$ matrix $A = [a_{ij}]$ is defined as $\mathrm{tr}(A) = \sum_{i=1}^{m} a_{ii}$

Proposition 5.1

$\mathrm{tr}(A) = \mathrm{tr}(A^T) = \sum_{i=1}^{m} \lambda_i$, where $\lambda_1, \ldots, \lambda_m$ are the eigenvalues of $A$

$\mathrm{tr}(\alpha_1 A + \alpha_2 B) = \alpha_1 \mathrm{tr}(A) + \alpha_2 \mathrm{tr}(B)$

$\mathrm{tr}(AB) = \mathrm{tr}(BA)$, and $\mathrm{tr}(xy^T) = \sum_{i=1}^{m} x_i y_i = x^T y$

For $A = (a_1, \ldots, a_n)$, let $\mathrm{vec}(A) = \begin{pmatrix} a_1 \\ \vdots \\ a_n \end{pmatrix}$ (the columns stacked into one vector). Then

$$\mathrm{tr}(AB) = \mathrm{vec}(A^T)^T \mathrm{vec}(B) = \mathrm{vec}(B^T)^T \mathrm{vec}(A)$$

$$\mathrm{tr}(ABCD) = \mathrm{vec}(D^T)^T (C^T \otimes A)\,\mathrm{vec}(B) = \mathrm{vec}(A^T)^T (D^T \otimes B)\,\mathrm{vec}(C) = \mathrm{vec}(B^T)^T (A^T \otimes C)\,\mathrm{vec}(D) = \mathrm{vec}(C^T)^T (B^T \otimes D)\,\mathrm{vec}(A)$$

$$\mathrm{tr}(A \otimes B) = \mathrm{tr}(A)\,\mathrm{tr}(B)$$
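Three of these identities checked numerically (random matrices; `vec` is column stacking, i.e. Fortran order in numpy):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 3))

# cyclic property
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))

# tr(AB) = vec(A^T)^T vec(B), with vec = column stacking
vec = lambda M: M.flatten(order='F')
print(np.isclose(np.trace(A @ B), vec(A.T) @ vec(B)))

# tr(A (x) B) = tr(A) tr(B) for square matrices
S = rng.standard_normal((3, 3))
T = rng.standard_normal((2, 2))
print(np.isclose(np.trace(np.kron(S, T)), np.trace(S) * np.trace(T)))
```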

6 Determinant

For $A = [a_{ij}]_{m \times m}$, write $|A| = \det(A)$.

$\det(A) = \prod_{i=1}^{m} \lambda_i$, the product of the eigenvalues of $A$

Proposition 6.1

$\det(\alpha A) = \alpha^m \det(A)$

$\det(AB) = \det(A)\det(B)$

$\det(A^T) = \det(A)$

$\det(A^{-1}) = \dfrac{1}{\det(A)}$

$\det(I_m + AA^T) = \det(I_n + A^T A)$ for $A \in \mathbb{R}^{m \times n}$




$\det\begin{pmatrix} A & B \\ 0 & C \end{pmatrix} = \det(A)\det(C)$

$\det\begin{pmatrix} A & B \\ C & 0 \end{pmatrix} = (-1)^{mn}\det(B)\det(C)$ for $B \in \mathbb{R}^{m \times m}$, $C \in \mathbb{R}^{n \times n}$

$A$ nonsingular $\Rightarrow \det\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(A)\det(D - CA^{-1}B)$, with $D \in \mathbb{R}^{n \times n}$

$D$ nonsingular $\Rightarrow \det\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(D)\det(A - BD^{-1}C)$

$\det\begin{pmatrix} A & a \\ b^T & c \end{pmatrix} = \det(A)\det(c - b^T A^{-1} a) = \det(A)\,(c - b^T A^{-1} a)$ for $A \in \mathbb{R}^{m \times m}$, $a, b \in \mathbb{R}^m$, scalar $c$

$\det(A \otimes B) = (\det A)^n (\det B)^m$ for $A \in \mathbb{R}^{m \times m}$, $B \in \mathbb{R}^{n \times n}$


det(A + BC) = det(A)det(I + A1 BC) = detAdet(I + CA1 B)
proof: det(A + BC) = det(A(I + A1 BC)) = det(A)det(I + A1 BC) = detAdet(I +
CA1 B)
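Two of the determinant identities checked numerically (random matrices; the shapes below are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 4, 2
A = rng.standard_normal((m, n))

# det(I_m + A A^T) = det(I_n + A^T A): the cheaper n x n determinant suffices
lhs = np.linalg.det(np.eye(m) + A @ A.T)
rhs = np.linalg.det(np.eye(n) + A.T @ A)
print(np.isclose(lhs, rhs))

# det(M + BC) = det(M) det(I + M^{-1} B C) for nonsingular M
M = np.eye(3) * 2.0
B = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 3))
lhs2 = np.linalg.det(M + B @ C)
rhs2 = np.linalg.det(M) * np.linalg.det(np.eye(3) + np.linalg.inv(M) @ B @ C)
print(np.isclose(lhs2, rhs2))
```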
