
Singular Value Decomposition

So far, we have seen how to decompose symmetric matrices into a product of orthogonal matrices (the eigenvector matrix) and a diagonal matrix (the eigenvalue matrix). What about non-symmetric matrices? The key insight behind SVD is to find two orthogonal basis representations $U$ and $V$ such that for any matrix $A$, the following decomposition holds:

$$A = U \Sigma V^T$$

where $U$ and $V$ are orthogonal matrices (so $U U^T = I$ and $V V^T = I$) and $\Sigma$ is a diagonal matrix whose entries are called singular values (hence the meaning behind the term SVD).
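
A minimal NumPy sketch (not from the original slides) checking this decomposition on an arbitrary non-square, non-symmetric matrix:

```python
import numpy as np

# Verify A = U Sigma V^T for an arbitrary non-symmetric matrix,
# using NumPy's built-in SVD routine.
A = np.array([[3.0, 1.0],
              [1.0, 2.0],
              [0.0, 1.0]])          # 3x2, non-symmetric, non-square

U, s, Vt = np.linalg.svd(A, full_matrices=False)
Sigma = np.diag(s)                   # singular values on the diagonal

print(np.allclose(U.T @ U, np.eye(2)))    # columns of U are orthonormal
print(np.allclose(Vt @ Vt.T, np.eye(2)))  # rows of V^T are orthonormal
print(np.allclose(U @ Sigma @ Vt, A))     # A = U Sigma V^T
```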


Singular Value Decomposition


We want to choose the orthogonal matrices in such a way that not only are they orthogonal, but the result of applying $A$ to them is also orthogonal. That is, the $v_i$ are an orthonormal basis set (unit vectors), and moreover, the $A v_i$ are also orthogonal to each other. We can then construct

$$u_i = \frac{A v_i}{\|A v_i\|}$$

to give us the second orthonormal basis set.

[Figure: the orthonormal vectors $v_1, v_2$ are mapped by $A$ to the orthogonal vectors $\sigma_1 u_1, \sigma_2 u_2$; the illustration uses the diagonal matrix $\begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}$.]

So, we get the following relationships:

$$AV = U \Sigma \quad\text{or}\quad U^{-1} A V = \Sigma \quad\text{or}\quad U^T A V = \Sigma$$

$$AV = U \Sigma \quad\text{or}\quad A V V^{-1} = U \Sigma V^{-1} \quad\text{or}\quad A = U \Sigma V^T$$
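
A short sketch (illustrative matrix chosen here, not from the slides) checking these relationships numerically: the images $A v_i$ are mutually orthogonal, and $AV = U\Sigma$:

```python
import numpy as np

# Check that the images A v_i of the right-singular vectors are
# mutually orthogonal, and that AV = U Sigma.
A = np.array([[1.0, 2.0],
              [0.0, 2.0]])                 # arbitrary illustrative matrix

U, s, Vt = np.linalg.svd(A)
V = Vt.T

print(np.isclose((A @ V[:, 0]) @ (A @ V[:, 1]), 0.0))  # A v_1 is orthogonal to A v_2
print(np.allclose(A @ V, U @ np.diag(s)))              # AV = U Sigma
```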


Finding Orthonormal Basis Representations for SVD


How do we go about finding $U$ and $V$? Here is the trick. We can eliminate $U$ from the equation $AV = U\Sigma$ by premultiplying $A$ by its transpose:

$$A^T A = (U \Sigma V^T)^T (U \Sigma V^T) = (V \Sigma^T U^T)(U \Sigma V^T) = V \Sigma^T \Sigma V^T$$

Since $A^T A$ is always symmetric (why?), the above expression gives us exactly the familiar spectral decomposition we have seen before (namely, $Q \Lambda Q^T$), except that now $V$ represents the orthonormal eigenvector set of $A^T A$. In a similar fashion, we can eliminate $V$ from the equation $AV = U\Sigma$ by postmultiplying $A$ by its transpose:

$$A A^T = (U \Sigma V^T)(U \Sigma V^T)^T = (U \Sigma V^T)(V \Sigma^T U^T) = U \Sigma \Sigma^T U^T$$

The diagonal matrix $\Sigma \Sigma^T$ is now the eigenvalue matrix of $A A^T$, which has the same nonzero eigenvalues as $A^T A$.
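
A sketch of this trick on a small example (same illustrative matrix as above): $V$ and the squared singular values come from the spectral decomposition of $A^T A$, and $U$ is then recovered via the slides' construction $u_i = A v_i / \|A v_i\|$:

```python
import numpy as np

# V and Sigma^2 from the spectral decomposition of A^T A;
# U recovered as u_i = A v_i / sigma_i, since ||A v_i|| = sigma_i.
A = np.array([[1.0, 2.0],
              [0.0, 2.0]])

evals, V = np.linalg.eigh(A.T @ A)   # A^T A is symmetric, so use eigh
order = np.argsort(evals)[::-1]      # eigh sorts ascending; SVD wants descending
sigmas = np.sqrt(evals[order])       # singular values = sqrt of eigenvalues
V = V[:, order]

U = (A @ V) / sigmas                 # valid here since all sigma_i > 0

print(np.allclose(U @ np.diag(sigmas) @ V.T, A))   # recovers A = U Sigma V^T
print(sigmas, np.linalg.svd(A, compute_uv=False))  # matches np.linalg.svd
```

Note that computing $U$ independently from the eigenvectors of $A A^T$ would leave per-column sign ambiguities; deriving $U$ from $A V$ pairs the signs correctly.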


Examples of SVD

Find the SVD of the following matrices:

$$A = \begin{bmatrix} 2 & 2 \\ -1 & 1 \end{bmatrix} \qquad A = \begin{bmatrix} 2 & 1 \\ 2 & 1 \end{bmatrix}$$
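
A hedged numerical check of the exercises (the matrix entries above are reconstructed from a garbled slide, so treat them as illustrative):

```python
import numpy as np

# Singular values of the two exercise matrices; the second has rank 1.
for A in (np.array([[ 2.0, 2.0],
                    [-1.0, 1.0]]),
          np.array([[ 2.0, 1.0],
                    [ 2.0, 1.0]])):
    print(np.linalg.svd(A, compute_uv=False))
# first:  [2.828, 1.414]  i.e. 2*sqrt(2) and sqrt(2)
# second: [3.162, ~0.0]   i.e. sqrt(10) and 0 (rank one)
```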


SVD Technology: How Google™ works



Google™ uses SVD to accelerate finding relevant web pages. Here is a brief explanation of the general idea. Define a web site as an authority if many sites link to it. Define a web site as a hub if it links to many sites. We want to compute a ranking $x_1, \ldots, x_N$ of authorities and $y_1, \ldots, y_M$ of hubs. As a first pass, we can compute the ranking scores as follows: $x_i^0$ is the number of links pointing to $i$, and $y_i^0$ is the number of links going out of $i$. But not all links should be weighted equally. For example, links from authorities (or hubs) should count more. So, we can revise the rankings as follows:

$$x_i^1 = \sum_{j \text{ links to } i} y_j^0 \quad\text{or}\quad x^1 = A^T y^0 \qquad\text{and}\qquad y_i^1 = \sum_{i \text{ links to } j} x_j^0 \quad\text{or}\quad y^1 = A x^0$$

Here, $A$ is the adjacency matrix where $a_{ij} = 1$ if $i$ links to $j$.
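
A sketch of one refinement step on a toy link graph (the graph itself is invented for illustration):

```python
import numpy as np

# a[i, j] = 1 if page i links to page j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

x0 = A.sum(axis=0)   # in-degree:  first-pass authority scores x^0
y0 = A.sum(axis=1)   # out-degree: first-pass hub scores y^0

x1 = A.T @ y0        # x^1 = A^T y^0: authorities weighted by hub scores
y1 = A @ x0          # y^1 = A x^0 :  hubs weighted by authority scores
print(x1, y1)
```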


SVD Technology: How Google™ works

Clearly, we can keep iterating, and compute


$$x^k = (A^T A)\, x^{k-1} \qquad\text{and}\qquad y^k = (A A^T)\, y^{k-1}$$

This is basically doing an iterative SVD computation, and as $k$ increases, the largest eigenvalue $\sigma_1^2$ dominates. Google™ is doing an SVD over a matrix $A$ of size $3 \times 10^9$ by $3 \times 10^9$!
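
This iteration is the classical power method. A small sketch (random matrix chosen for illustration) showing that iterating $x^k = (A^T A)\, x^{k-1}$ converges to the top right-singular vector of $A$:

```python
import numpy as np

# Power iteration on A^T A converges to the top eigenvector of A^T A,
# i.e. the top right-singular vector of A.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))

x = rng.standard_normal(50)
for _ in range(100):
    x = A.T @ (A @ x)           # one step: x <- (A^T A) x
    x /= np.linalg.norm(x)      # normalize to avoid overflow

v1 = np.linalg.svd(A)[2][0]     # top right-singular vector from np.linalg.svd
print(np.abs(x @ v1))           # ~1.0: same direction up to sign
```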

