
Numerical Linear Algebra

Chris Rambicure Guojin Chen Christopher Cprek

WHY USE LINEAR ALGEBRA?


1) Because it is applicable to many problems. 2) And it's usually easier than calculus.

TRUE
"Linear algebra has become as basic and as applicable as calculus, and fortunately it is easier."
-Gilbert Strang


HERE COME THE BASICS

SCALARS
What you're used to dealing with. Scalars have magnitude, but no direction.

VECTORS
Represent both a magnitude and a direction. You can add or subtract them, multiply them by scalars, or take dot or cross products.

THE MATRIX
It's an m×n array holding a set of numerical values. Especially useful in solving certain types of equations. Operations: transpose, scalar multiply, matrix add, matrix multiply (sketched below).
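Not part of the original slides: a minimal numpy sketch (numpy assumed) of the four operations just listed.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])          # a 2x3 matrix
B = np.ones((2, 3), dtype=int)

At = A.T                           # transpose: 3x2
S = 2 * A                          # scalar multiply
C = A + B                          # matrix add (shapes must match)
P = A @ A.T                        # matrix multiply: (2x3)(3x2) -> 2x2
print(P)                           # [[14 32], [32 77]]
```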

EIGENVALUES
You can choose a matrix A, a vector x, and a scalar s so that Ax = sx, meaning the matrix just scales the vector. Here x is called an eigenvector, and s is its eigenvalue.

CHARACTERISTIC EQUATION
det(M - tI) = 0, where M is the matrix, I is the identity, and the roots t are the eigenvalues (checked numerically below).
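Not in the slides: a minimal numpy check of both facts (numpy assumed): eigenpairs satisfy Mx = sx, and the eigenvalues are exactly the roots of det(M - tI) = 0.

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigenpairs: M x = s x for each eigenvalue s and eigenvector x.
vals, vecs = np.linalg.eig(M)
for s, x in zip(vals, vecs.T):
    assert np.allclose(M @ x, s * x)

# The eigenvalues are the roots t of det(M - tI) = 0;
# np.poly returns the characteristic polynomial's coefficients.
coeffs = np.poly(M)                # here: t^2 - 4t + 3
assert np.allclose(np.sort(np.roots(coeffs)), np.sort(vals))
```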

CAYLEY-HAMILTON THEOREM
If A is a square matrix and p(t) = det(A - tI) is its characteristic polynomial,

then p(A) = 0, meaning A satisfies its own characteristic equation (a quick numerical check follows).
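Not on the slides: a small numpy demonstration (numpy assumed) of the theorem on a 2×2 matrix.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Characteristic polynomial p(t) = det(A - tI) = t^2 - 5t - 2.
c = np.poly(A)                     # coefficients [1, -5, -2]

# Evaluate p at the matrix itself: A^2 - 5A - 2I.
p_of_A = c[0] * (A @ A) + c[1] * A + c[2] * np.eye(2)
assert np.allclose(p_of_A, 0)      # Cayley-Hamilton: p(A) = 0
```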

A Couple Names, A Couple Algorithms

IN THE BEGINNING
(Grassmann's Linear Algebra)

Grassmann is considered to be the father of linear algebra. He developed the idea of a linear algebra in which the symbols representing geometric objects can be manipulated. Several of his operations: the interior product, the exterior product, and the multiproduct.

What Does a Multiproduct Equation Look Like?


H1H2 + H2H1 = 0. The multiproduct has many uses: scientific, mathematical, and industrial. It was later updated by William Clifford.

CLIFFORD'S MODIFICATION TO GRASSMANN'S EQUATION


HiHj + HjHi = 2δij. The δij is what's referred to as Kronecker's symbol (the Kronecker delta). Both of these equations are used in the mathematics of quantum theory (a quick check with Pauli matrices follows).
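Not in the original slides: the Pauli matrices are a standard concrete matrix representation of Clifford's relation (with the identity matrix implicit on the right-hand side), and numpy (assumed) can verify it directly.

```python
import numpy as np

# Pauli matrices: a matrix representation of Clifford's relation.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [s1, s2, s3]

I = np.eye(2)
for i in range(3):
    for j in range(3):
        delta = 1.0 if i == j else 0.0
        # e_i e_j + e_j e_i = 2 * delta_ij (times the identity)
        assert np.allclose(sigma[i] @ sigma[j] + sigma[j] @ sigma[i],
                           2 * delta * I)
```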

VECTOR SPACE
Another idea tied to Grassmann. A vector space is a set of vectors, closed under addition and scalar multiplication, that contains the origin. It is usually infinite. A subspace is a subset of a vector space that is itself also a vector space.

Cholesky Decomposition
Algorithm developed by André-Louis Cholesky. Takes a symmetric positive definite matrix and factors it into a triangular matrix times its transpose: A = RᵀR. Useful in many matrix applications, and becomes even more worthwhile in parallel.
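Not on the slides: a minimal numpy check (numpy assumed). numpy returns the lower-triangular factor L with A = LLᵀ, so R = Lᵀ in the slide's notation.

```python
import numpy as np

# A must be symmetric positive definite for the factorization to exist.
A = np.array([[4.0, 2.0, 2.0],
              [2.0, 5.0, 3.0],
              [2.0, 3.0, 6.0]])

L = np.linalg.cholesky(A)          # lower triangular, A = L @ L.T
assert np.allclose(L @ L.T, A)     # equivalently A = R^T R with R = L.T
```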

HOW TO USE LINEAR ALGEBRA FOR PDEs


You can use matrices and vectors to solve partial differential equations. For equations with lots of variables, you'll wind up with really sparse matrices. Hence, the project we've been working on all year (a small sketch follows).
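Not on the slides: a sketch (scipy assumed) of the kind of sparse system a PDE discretization produces, here the 1-D Poisson equation -u'' = f on a uniform grid.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# Discretizing -u''(x) = f(x) gives the classic tridiagonal system.
n = 100
h = 1.0 / (n + 1)
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = diags([off, main, off], [-1, 0, 1], format="csr") / h**2

f = np.ones(n)                     # right-hand side f(x) = 1
u = spsolve(A, f)                  # solution with u(0) = u(1) = 0

# Almost all of A is zeros: only ~3n of the n^2 entries are stored.
print(A.nnz, "nonzeros out of", n * n, "entries")
```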

BIBLIOGRAPHY
Hermann Grassmann. Online. http://members.fortunecity.com/johnhays/grassmann.htm
Abstract Linear Spaces. Online. http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Abstract_linear_spaces.html
Liberman, M. Linear Algebra Review. Online. http://www.ling.upenn.edu/courses/ling525/linear_algebra_review.html
Cholesky Factorization. Online. http://www.netlib.org/utk/papers/factor/node9.html

Numerical Linear Algebra


Guojin Chen Christopher Cprek Chris Rambicure

Johann Carl Friedrich Gauss

Born: April 30, 1777 (Germany) Died: Feb 23, 1855 (Germany)

Gaussian Elimination
LU Factorization
Operation Count
Instability of Gaussian Elimination without Pivoting
Gaussian Elimination with Partial Pivoting

Linear systems
A linear system of equations (n equations with n unknowns) can be written:

a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
an1 x1 + an2 x2 + ... + ann xn = bn

Using matrices, the above system of linear equations can be written Ax = b, where A is the n×n coefficient matrix, x the vector of unknowns, and b the right-hand side.

Gauss Elimination and Back Substitution

Convert the system to upper-triangular form.

Then solve the system by back substitution (a minimal sketch follows).
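Not on the original slides: a sketch of back substitution in Python/numpy (assumed), solving Ux = b from the last row upward.

```python
import numpy as np

def back_substitute(U, b):
    """Solve U x = b for upper-triangular U, working from the last row up."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0, 1.0],
              [0.0, 3.0, 2.0],
              [0.0, 0.0, 4.0]])
b = np.array([5.0, 7.0, 8.0])
assert np.allclose(back_substitute(U, b), np.linalg.solve(U, b))
```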

LU Factorization
Gaussian elimination transforms a full linear system into an upper-triangular one by applying simple linear transformations on the left. Let A be a square matrix. The idea is to transform A into an upper-triangular matrix U by introducing zeros below the diagonal.

LU Factorization
This elimination process is equivalent to multiplying by a sequence of lower-triangular matrices Lk on the left: Lm-1 ... L2 L1 A = U

LU Factorization
Setting L = (L1)^-1 (L2)^-1 ... (Lm-1)^-1, we obtain an LU factorization of A: A = LU. (Note the inverses apply in reverse order, since (Lm-1 ... L1)^-1 = (L1)^-1 ... (Lm-1)^-1.)

In order to find a general solution of a system of equations, it is helpful to simplify the system as much as possible. Gauss elimination is a standard method (which has the advantage of being easy to implement on a computer) for doing this. Gauss elimination uses elementary operations. We can:
- interchange any two equations
- multiply an equation by a (nonzero) constant
- add a multiple of one equation to any other one
and aim to reduce the system to triangular form. The system obtained after each operation is equivalent to the original one, meaning that they have the same solutions.

Algorithm of Gaussian Elimination without Pivoting

U = A, L = I
for k = 1 to m-1
    for j = k+1 to m
        l[j,k] = u[j,k] / u[k,k]
        u[j,k:m] = u[j,k:m] - l[j,k] * u[k,k:m]
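A direct Python transcription of the pseudocode above (numpy assumed, 0-based indices). This is a sketch, not production code: it breaks on a zero pivot, which is exactly the instability discussed next.

```python
import numpy as np

def lu_no_pivot(A):
    """LU factorization without pivoting, following the pseudocode above."""
    m = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(m)
    for k in range(m - 1):
        for j in range(k + 1, m):
            L[j, k] = U[j, k] / U[k, k]      # multiplier l[j,k]
            U[j, k:] -= L[j, k] * U[k, k:]   # eliminate row j
    return L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, U = lu_no_pivot(A)
assert np.allclose(L @ U, A)
```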

Operation Count
There are 3 nested loops in the previous algorithm. There are 2 flops per entry. For each value of k, the inner loop is repeated for rows k+1, ..., m. The total work for Gaussian elimination is ~ (2/3) m^3 flops.

Instability of Gaussian Elimination without Pivoting


Consider the following matrices:

A1 = [ 0  1 ]      A2 = [ 10^-20  1 ]
     [ 1  1 ]           [ 1       1 ]

Elimination without pivoting fails outright on A1 (the first pivot is zero), and on A2 the tiny pivot produces a huge multiplier that wipes out the answer in floating point.
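Not on the original slides: a small numpy demonstration of the A2 failure, assuming IEEE double precision.

```python
import numpy as np

# A1 breaks elimination outright (zero pivot); A2 merely has a tiny pivot,
# but that is enough to destroy the answer in floating point.
A2 = np.array([[1e-20, 1.0],
               [1.0,   1.0]])
b = np.array([1.0, 0.0])

# One elimination step without pivoting: the multiplier is 1e20.
m = A2[1, 0] / A2[0, 0]                 # 1e20
u22 = A2[1, 1] - m * A2[0, 1]           # 1 - 1e20 rounds to -1e20
x2 = (b[1] - m * b[0]) / u22
x1 = (b[0] - A2[0, 1] * x2) / A2[0, 0]

print(np.array([x1, x2]))               # [0., 1.] -- first component badly wrong
print(np.linalg.solve(A2, b))           # pivoted solve: approx [-1., 1.]
```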

Pivoting
Pivots
Partial Pivoting
Example
Complete Pivoting

Pivot

Partial Pivoting

Example
This is the worked example from Trefethen and Bau. Start with

A = [ 2  1  1  0 ]
    [ 4  3  3  1 ]
    [ 8  7  9  5 ]
    [ 6  7  9  8 ]

Partial pivoting selects the largest entry in the first column (the 8), so P1 interchanges rows 1 and 3:

P1 A = [ 8  7  9  5 ]
       [ 4  3  3  1 ]
       [ 2  1  1  0 ]
       [ 6  7  9  8 ]

The first elimination step uses multipliers 1/2, 1/4, 3/4:

L1 = [  1              ]
     [ -1/2  1         ]
     [ -1/4  0  1      ]
     [ -3/4  0  0  1   ]

L1 P1 A = [ 8    7     9     5    ]
          [ 0  -1/2  -3/2  -3/2   ]
          [ 0  -3/4  -5/4  -5/4   ]
          [ 0   7/4   9/4   17/4  ]
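Not part of the slides: a short check of this example with scipy (assumed available); scipy.linalg.lu performs exactly this kind of row-pivoted factorization.

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2.0, 1.0, 1.0, 0.0],
              [4.0, 3.0, 3.0, 1.0],
              [8.0, 7.0, 9.0, 5.0],
              [6.0, 7.0, 9.0, 8.0]])

P, L, U = lu(A)                    # partial pivoting: A = P @ L @ U
assert np.allclose(P @ L @ U, A)
print(U[0])                        # first pivot row is [8, 7, 9, 5], as above
```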

References:
http://www.maths.soton.ac.uk/teaching/units/ma273/node8.html
http://www.maths.soton.ac.uk/teaching/units/ma273/node9.html
Numerical Linear Algebra by Lloyd Trefethen and David Bau, III
http://www.sosmath.com/matrix/system1/system1.html

Numerical Linear Algebra: The Computer Age


Christopher Cprek Chris Rambicure Guojin Chen

What I'll Be Covering
How computers made numerical linear algebra relevant
LAPACK
Solving dense matrices on parallel computers

Why All the Sudden Interest?


Gregory Moore regards the axiomatization of abstract vector spaces as having been completed in the 1920s. Linear algebra wasn't offered as a separate mathematics course at major universities until the 1950s and '60s. Then interest in linear algebra skyrocketed.

Computers Made it Practical


Before computers, solving a system of 100 equations with 100 unknowns was unheard of. The brute computational force of computers made large linear systems practical for all kinds of applications.

Computers and Linear Algebra


The software package MATLAB provides a good example: it is among the most popular tools in engineering, and at its core it treats every problem as a linear algebra problem. The need for more advanced large-matrix operations resulted in LAPACK.

What is LAPACK?
Linear Algebra PACKage: a software library designed specifically for linear algebra applications. The original goal of the LAPACK project was to make the widely used EISPACK and LINPACK libraries run efficiently on shared-memory vector and parallel processors.

LAPACK continued
LAPACK is written in Fortran 77 and provides routines for solving systems of simultaneous linear equations, least-squares solutions of linear systems of equations, eigenvalue problems, and singular value problems. Dense and banded matrices are handled, but not general sparse matrices. In all areas, similar functionality is provided for real and complex matrices, in both single and double precision. (A small usage sketch via SciPy follows.)
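Not part of the slides: SciPy (assumed available) wraps LAPACK, so high-level calls like scipy.linalg.solve dispatch to LAPACK drivers such as *gesv under the hood.

```python
import numpy as np
from scipy.linalg import solve, eigh

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = solve(A, b)                    # general solve via a LAPACK *gesv driver
w = eigh(A, eigvals_only=True)     # symmetric eigenproblem via LAPACK

assert np.allclose(A @ x, b)
print(w)                           # the two eigenvalues of A
```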

Parallel Dense Matrix Partitioning


Parallel computers are well suited for processing large matrices. In order to process a matrix in parallel, it is necessary to partition the matrix so that the different partitions can be mapped to different processors.

Partitioning Dense Matrices


Striped Partitioning
- Block-Striped
- Cyclic-Striped
- Block-Cyclic-Striped

Checkerboard Partitioning
- Block-Checkerboard
- Cyclic-Checkerboard
- Block-Cyclic-Checkerboard

Striped Partitioning
Matrix is divided into groups of complete rows or columns, and each processor is assigned one such group.

Striped Partitioning cont


Block-striped partitioning assigns contiguous rows or columns to each processor together. Cyclic-striped partitioning assigns rows or columns to processors sequentially in a wraparound manner. Block-cyclic-striped partitioning is a combination of the two.

Striped Partitioning cont


In column-wise block striping of an n×n matrix on p processors (labeled P(0), P(1), ..., P(p-1)), P(i) contains the columns with indices (n/p)i, (n/p)i + 1, ..., (n/p)(i+1) - 1.

In row-wise cyclic striping, P(i) contains the rows with indices i, i+p, i+2p, ..., i+n-p. (A sketch of both index schemes follows.)
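A minimal sketch (not in the slides) of the two index schemes, assuming p divides n.

```python
def block_striped(n, p, i):
    """Columns held by processor i under column-wise block striping."""
    return list(range((n // p) * i, (n // p) * (i + 1)))

def cyclic_striped(n, p, i):
    """Rows held by processor i under row-wise cyclic striping."""
    return list(range(i, n, p))

# n = 8 columns/rows on p = 4 processors:
print(block_striped(8, 4, 1))      # [2, 3]
print(cyclic_striped(8, 4, 1))     # [1, 5]
```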


Checkerboard Partitioning
The matrix is divided into smaller square or rectangular blocks or submatrices that are distributed among processors.

Checkerboard Partitioning cont


Much like striped partitioning, checkerboard partitioning may use block, cyclic, or a combination of assignments. A checkerboard-partitioned square matrix maps naturally onto a two-dimensional square mesh of processors. Mapping an n×n matrix onto a mesh of p processors divides it into blocks of size (n/√p) × (n/√p) (see the sketch below).
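A tiny numpy sketch (not in the slides) of the checkerboard block owned by mesh processor (i, j), assuming p is a perfect square that divides n evenly.

```python
import numpy as np

n, p = 8, 16                       # p processors in a sqrt(p) x sqrt(p) mesh
q = int(np.sqrt(p))                # 4 x 4 mesh
b = n // q                         # each block is (n/sqrt(p)) x (n/sqrt(p)) = 2 x 2

A = np.arange(n * n).reshape(n, n)
block = lambda i, j: A[i*b:(i+1)*b, j*b:(j+1)*b]   # block for processor (i, j)
print(block(1, 2))
```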

Matrix Transposition on a Mesh


Assume that an n×n matrix is stored in an n×n mesh of processors, so each processor holds a single element. The diagonal runs down the mesh. An element above the diagonal moves down to the diagonal and then left to its destination processor; an element below the diagonal moves up to the diagonal and then right to its destination processor.

Matrix Transposition cont.

[Figure: element routing on the processor mesh]

An element initially at p8 moves to p4, p0, p1, and finally to p2. If p < n×n, then the transpose can be computed in two phases:


Square matrix blocks are treated as indivisible units, and whole blocks are communicated instead of individual elements. Then a local rearrangement is done within the blocks (a numpy sketch of the two phases follows).
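Not in the slides: a numpy sketch of the two-phase idea, simulating the block exchange and the local rearrangement on a single machine.

```python
import numpy as np

n, q = 8, 4                        # n x n matrix on a q x q mesh (p = q*q blocks)
b = n // q                         # block size
A = np.arange(n * n).reshape(n, n)

# View A as a q x q grid of b x b blocks: blocks[i, j] is one block.
blocks = A.reshape(q, b, q, b).swapaxes(1, 2)

# Phase 1 (communication): whole blocks swap across the diagonal.
comm = blocks.transpose(1, 0, 2, 3)

# Phase 2 (local rearrangement): each block is transposed in place.
local = comm.transpose(0, 1, 3, 2)

T = local.swapaxes(1, 2).reshape(n, n)
assert np.array_equal(T, A.T)
```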

Matrix Transposition cont.

[Figure: communication and the local rearrangement]

Matrix Transposition cont


The total parallel run-time of the procedure for transposing a matrix on a parallel computer: [run-time formula shown on the original slide]

Parallelization of Linear Algebra


Transposition is just one example of how numerical linear algebra can be easily and effectively parallelized. The same techniques and principles can be applied to operations like multiplication, addition, and solving systems. This explains the current popularity of these methods.

Conclusion
Linear algebra is flourishing in an age of computers, where there are limitless applications. LAPACK exists as an efficient code library for processing large systems of equations on parallel computers. Parallel computers are very well suited to these kinds of problems.

Useful Links
http://www.crpc.rice.edu/CRPC/brochure/res_la.html
http://citeseer.nj.nec.com/26050.html
http://www.maa.org/features/cowen.html
http://www.nersc.gov/~dhbailey/cs267/Lectures/Lect_10_2000.pdf
http://www.cacr.caltech.edu/ASAP/news/specialevents/tutorialnla.htm
http://www.netlib.org/scalapack/
http://citeseer.nj.nec.com/125513.html
http://discolab.rutgers.edu/classes/cs528/lectures/lecture7/
http://www.cse.uiuc.edu/cse302/lec20/lec-matrix/lec-matrix.html
