
Background Material

A computer vision "encyclopedia": CVonline.


http://homepages.inf.ed.ac.uk/rbf/CVonline/

Linear Algebra Review and Matlab Tutorial


Assigned Reading:
Eero Simoncelli, "A Geometric View of Linear Algebra"
http://www.cns.nyu.edu/~eero/NOTES/geomLinAlg.pdf

Linear Algebra:

Eero Simoncelli, "A Geometric View of Linear Algebra"


http://www.cns.nyu.edu/~eero/NOTES/geomLinAlg.pdf

Michael Jordan, a slightly more in-depth linear algebra review


http://www.cs.brown.edu/courses/cs143/Materials/linalg_jordan_86.pdf

Online Introductory Linear Algebra Book by Jim Hefferon.


http://joshua.smcvt.edu/linearalgebra/

Notation

Standard math textbook notation:
- Scalars are italic Times Roman: n, N
- Vectors are bold lowercase: x
- Row vectors are denoted with a transpose: x^T
- Matrices are bold uppercase
- Tensors are calligraphic letters

Overview

- Vectors in R^2
- Scalar product
- Outer product
- Bases and transformations
- Inverse transformations
- Eigendecomposition
- Singular Value Decomposition

Warm-up: Vectors in R^n

We can think of vectors in two ways:
- as points in a multidimensional space with respect to some coordinate system;
- as a translation of a point in a multidimensional space, e.g., a translation of the origin (0,0).

Length of a vector:

\|x\| = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2} = \sqrt{\sum_{i=1}^{n} x_i^2}
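A quick numerical check of this definition (a minimal NumPy sketch with illustrative values; the course tutorial itself uses Matlab):

```python
import numpy as np

x = np.array([3.0, 4.0])

# Length from the definition: square root of the sum of squared components
length = np.sqrt(np.sum(x ** 2))

# The built-in Euclidean norm gives the same result
print(length, np.linalg.norm(x))  # 5.0 5.0
```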

Scalar Product

The dot product (or scalar product) of two vectors yields a scalar. Example:

x \cdot y = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \cdot \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = x_1 y_1 + x_2 y_2 = s

Notation:

\langle x, y \rangle, \qquad x \cdot y, \qquad x^T y = [x_1 \; \cdots \; x_n] \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix}

We will use the last two notations (x \cdot y and x^T y) to denote the dot product.

It is the projection of one vector onto another:

x \cdot y = \|x\| \|y\| \cos\theta
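A small NumPy sketch of these equivalent notations (illustrative values, not from the slides):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# Three equivalent ways to compute the dot product
s1 = np.dot(x, y)    # x . y
s2 = x @ y           # x^T y, via the matrix-product operator
s3 = np.sum(x * y)   # sum of elementwise products
print(s1, s2, s3)    # 32.0 32.0 32.0

# Geometric form: x . y = ||x|| ||y|| cos(theta)
cos_theta = s1 / (np.linalg.norm(x) * np.linalg.norm(y))
print(np.degrees(np.arccos(cos_theta)))  # angle between x and y
```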

Scalar Product: Properties

Commutative: x \cdot y = y \cdot x

Distributive: (x + y) \cdot z = x \cdot z + y \cdot z

Linearity: (cx) \cdot y = c(x \cdot y), \quad x \cdot (cy) = c(x \cdot y), \quad (c_1 x) \cdot (c_2 y) = (c_1 c_2)(x \cdot y)

Non-negativity: x \cdot x \geq 0

Orthogonality: for x \neq 0 and y \neq 0, \quad x \cdot y = 0 \iff x \perp y

Norms in R^n

Euclidean norm (sometimes called the 2-norm):

\|x\| = \|x\|_2 = \sqrt{x \cdot x} = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2} = \sqrt{\sum_{i=1}^{n} x_i^2}

The length of a vector is defined to be its (Euclidean) norm. A unit vector is of length 1. The non-negativity property also holds for the norm: \|x\| \geq 0.
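These properties can be verified numerically; here is a minimal NumPy sketch (random test vectors are an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y, z = rng.standard_normal((3, 4))  # three random vectors in R^4
c = 2.5

# Commutativity, distributivity, linearity
assert np.isclose(x @ y, y @ x)
assert np.isclose((x + y) @ z, x @ z + y @ z)
assert np.isclose((c * x) @ y, c * (x @ y))

# The norm comes from the dot product: ||x|| = sqrt(x . x)
assert np.isclose(np.sqrt(x @ x), np.linalg.norm(x))

# Orthogonality: perpendicular vectors have zero dot product
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(e1 @ e2)  # 0.0
```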

Bases and Transformations

We will look at:
- Linear independence
- Bases
- Orthogonality
- Change of basis (linear transformation)
- Matrices and matrix operations

Linear Dependence

A linear combination of the vectors x_1, x_2, \ldots, x_n is

c_1 x_1 + c_2 x_2 + \cdots + c_n x_n

A set of vectors X = \{x_1, x_2, \ldots, x_n\} is linearly dependent if there exists a vector x_i \in X that is a linear combination of the rest of the vectors.

In R^n:
- sets of n+1 vectors are always dependent;
- there can be at most n linearly independent vectors (a quick numerical check follows below).
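To make the dependence test concrete, here is a minimal NumPy sketch (the example vectors are illustrative) that checks independence via the rank of the matrix whose columns are the vectors:

```python
import numpy as np

# Columns are the vectors x1, x2, x3 in R^3
X = np.column_stack([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0],   # x3 = x1 + x2, so the set is dependent
])

# The set is linearly independent iff the rank equals the number of vectors
rank = np.linalg.matrix_rank(X)
print(rank, X.shape[1])    # 2 3
print(rank == X.shape[1])  # False -> linearly dependent
```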

Bases (Examples in R^2)

[Figure: examples of bases in R^2.]

Bases

A basis is a linearly independent set of vectors that spans the whole space, i.e., we can write every vector in our space as a linear combination of vectors in that set. Every set of n linearly independent vectors in R^n is a basis of R^n.

A basis is called:
- orthogonal, if every basis vector is orthogonal to all other basis vectors;
- orthonormal, if additionally all basis vectors have length 1.

The standard basis in R^n is made up of a set of unit vectors:

\hat{e}_1, \hat{e}_2, \ldots, \hat{e}_n

We can write a vector in terms of its standard basis, e.g., in R^3:

x = x_1 \hat{e}_1 + x_2 \hat{e}_2 + x_3 \hat{e}_3

Observation: to find the coefficient for a particular basis vector, we project our vector onto it:

x_i = \hat{e}_i \cdot x
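A short NumPy sketch of this observation (illustrative values; not from the slides):

```python
import numpy as np

x = np.array([2.0, -1.0, 3.0])

# Standard basis in R^3: rows of the identity matrix
e = np.eye(3)

# Each coefficient is the projection of x onto a basis vector
coeffs = np.array([e[i] @ x for i in range(3)])
print(coeffs)  # [ 2. -1.  3.]

# Reassemble x as a linear combination of the basis vectors
x_rebuilt = sum(c * e_i for c, e_i in zip(coeffs, e))
assert np.allclose(x, x_rebuilt)
```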

Change of Basis

Suppose we have a new basis B = [b_1 \cdots b_n], b_i \in R^n, and a vector x \in R^n that we would like to represent in terms of B.

[Figure: x has components (x_1, x_2) in the standard basis and (\tilde{x}_1, \tilde{x}_2) in the new basis (b_1, b_2).]

Compute the new components:

\tilde{x} = B^{-1} x

When B is orthonormal:

\tilde{x} = \begin{bmatrix} b_1^T x \\ \vdots \\ b_n^T x \end{bmatrix}

i.e., \tilde{x}_i is a projection of x onto b_i. Note the use of a dot product.

Outer Product

x \circ y = x y^T = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} [y_1 \cdots y_m] = M

A matrix M that is the outer product of two vectors is a matrix of rank 1.
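A minimal NumPy sketch of both ideas, assuming a rotated orthonormal basis of R^2 as the example (all values illustrative):

```python
import numpy as np

# An orthonormal basis of R^2: the standard basis rotated by 45 degrees
theta = np.pi / 4
B = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # columns are b1, b2

x = np.array([1.0, 2.0])

# General change of basis: solve B @ x_tilde = x, i.e., x_tilde = B^{-1} x
x_tilde = np.linalg.solve(B, x)

# Orthonormal case: the inverse is just the transpose (dot products with b_i)
assert np.allclose(x_tilde, B.T @ x)

# The outer product of two vectors is a rank-1 matrix
M = np.outer(x, np.array([3.0, 4.0, 5.0]))
print(np.linalg.matrix_rank(M))  # 1
```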

Matrix Multiplication: Dot Product

Matrix multiplication can be expressed using dot products: entry (i, j) of BA is the dot product of row i of B with column j of A.

BA = \begin{bmatrix} b_1^T \\ \vdots \\ b_m^T \end{bmatrix} [a_1 \cdots a_n]
   = \begin{bmatrix} b_1 \cdot a_1 & \cdots & b_1 \cdot a_n \\ \vdots & \ddots & \vdots \\ b_m \cdot a_1 & \cdots & b_m \cdot a_n \end{bmatrix}

Matrix Multiplication: Outer Product

Matrix multiplication can also be expressed as a sum of outer products of the columns of B with the rows of A:

BA = [b_1 \cdots b_n] \begin{bmatrix} a_1^T \\ \vdots \\ a_n^T \end{bmatrix}
   = b_1 a_1^T + b_2 a_2^T + \cdots + b_n a_n^T = \sum_{i=1}^{n} b_i \circ a_i
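Both views can be checked against the ordinary matrix product; a minimal NumPy sketch with random matrices (sizes chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((2, 3))
A = rng.standard_normal((3, 4))

# Dot-product view: entry (i, j) is row i of B dotted with column j of A
C_dot = np.array([[B[i] @ A[:, j] for j in range(A.shape[1])]
                  for i in range(B.shape[0])])

# Outer-product view: sum over columns of B and matching rows of A
C_outer = sum(np.outer(B[:, k], A[k]) for k in range(B.shape[1]))

assert np.allclose(C_dot, B @ A)
assert np.allclose(C_outer, B @ A)
```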

Rank of a Matrix / Singular Value Decomposition

A matrix D \in R^{I_1 \times I_2} has a column space and a row space. The SVD orthogonalizes these spaces and decomposes D:

D = U S V^T

U contains the left singular vectors (the eigenvectors of D D^T); V contains the right singular vectors (the eigenvectors of D^T D).

This rewrites D as a sum of a minimum number of rank-1 matrices:

D = \sum_{r=1}^{R} \sigma_r \, u_r \circ v_r

Matrix SVD Properties

SVD: D = U S V^T

Rank decomposition (sum of a minimum number of rank-1 matrices):

D = \sum_{r=1}^{R} \sigma_r \, u_r \circ v_r = \sigma_1 u_1 v_1^T + \cdots + \sigma_R u_R v_R^T

Multilinear rank decomposition:

D = \sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} \sigma_{r_1 r_2} \, u_{r_1} \circ v_{r_2}
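A minimal NumPy sketch (illustrative, not from the slides) confirming the rank-1 sum form of the SVD:

```python
import numpy as np

rng = np.random.default_rng(2)
D = rng.standard_normal((4, 3))

# Full SVD: D = U S V^T (s holds the singular values sigma_r)
U, s, Vt = np.linalg.svd(D, full_matrices=False)

# Rebuild D as a sum of rank-1 outer products sigma_r * (u_r o v_r)
D_sum = sum(s[r] * np.outer(U[:, r], Vt[r]) for r in range(len(s)))
assert np.allclose(D, D_sum)

# Keeping only the leading term gives a rank-1 approximation of D
D_1 = s[0] * np.outer(U[:, 0], Vt[0])
print(np.linalg.matrix_rank(D_1))  # 1
```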

Matrix Inverse

Some matrix properties

Matlab Tutorial
