
Approximating Area under a Curve: Δx = (b − a)/n, where n is the number of subintervals.

Trapezoid Approx.: Area ≈ (Δx/2)(y0 + 2y1 + 2y2 + ... + 2y(n−1) + yn) ; Error: |ET| ≤ M(b − a)^3/(12n^2), where M is the max of |f^(2)| on [a,b]
Simpson's Rule (n must be even): Area ≈ (Δx/3)(y0 + 4y1 + 2y2 + 4y3 + ... + 2y(n−2) + 4y(n−1) + yn) ; Error: |ES| ≤ M(b − a)^5/(180n^4), where M is the max of |f^(4)| on [a,b]
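The two rules above can be sketched in a few lines of Python. This is an illustrative implementation (function names and the test integrand x^2 on [0,1] are chosen here, not from the notes); Simpson's rule is exact for quadratics, while the trapezoid error stays within the M(b − a)^3/(12n^2) bound.

```python
def trapezoid(f, a, b, n):
    # Area ~= (dx/2)(y0 + 2y1 + ... + 2y_{n-1} + yn)
    dx = (b - a) / n
    ys = [f(a + i * dx) for i in range(n + 1)]
    return dx / 2 * (ys[0] + 2 * sum(ys[1:-1]) + ys[-1])

def simpson(f, a, b, n):
    # Area ~= (dx/3)(y0 + 4y1 + 2y2 + ... + 4y_{n-1} + yn); n must be even
    dx = (b - a) / n
    ys = [f(a + i * dx) for i in range(n + 1)]
    return dx / 3 * (ys[0] + ys[-1]
                     + 4 * sum(ys[1:-1:2]) + 2 * sum(ys[2:-1:2]))

# Integral of x^2 from 0 to 1 is exactly 1/3.
t = trapezoid(lambda x: x * x, 0.0, 1.0, 100)
s = simpson(lambda x: x * x, 0.0, 1.0, 100)
```

For f(x) = x^2, M = max |f''| = 2, so the trapezoid bound is 2·1/(12·100^2) ≈ 1.7e−5; Simpson is exact up to rounding.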
Differential Equations
Separable: put variables on different sides and integrate both sides
Population growth/decay: dy/dt = ky ⟹ y = y0·e^(kt) ... k is negative if decay. Half-life = ln(2)/|k|.
First Order Linear Equations:
dy/dx + P(x)y = Q(x) : multiply both sides by v(x)
v(x)·dy/dx + v(x)P(x)y = v(x)Q(x) ; the left side is d/dx[v(x)·y], so integrating both sides gives
v(x)·y = ∫v(x)Q(x)dx
v is the integrating factor: v(x) = e^(∫P(x)dx)
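A worked example may make the steps concrete. The equation below is illustrative (chosen here, not from the notes): solve dy/dx + (1/x)y = 3x for x > 0, so P(x) = 1/x and Q(x) = 3x.

```latex
% Integrating factor: v(x) = e^{\int (1/x)\,dx} = e^{\ln x} = x.
\frac{dy}{dx} + \frac{1}{x}y = 3x
\;\Longrightarrow\;
x\frac{dy}{dx} + y = 3x^2
\;\Longrightarrow\;
\frac{d}{dx}\bigl(x\,y\bigr) = 3x^2
\;\Longrightarrow\;
x\,y = x^3 + C
\;\Longrightarrow\;
y = x^2 + \frac{C}{x}
```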
Indeterminate Forms: 0/0, ∞/∞, ∞·0, ∞ − ∞, 0^0, 1^∞, ∞^0 ; if indeterminate, use L'Hopital's Rule: lim(a/b) = lim((da/dx)/(db/dx)). Only if indeterminate!
Improper Integrals: if a limit of integration is infinite, or the integrand is undefined at an endpoint, replace it with a variable and take the limit as the variable approaches that value.
SEQUENCES: list of numbers. Converges if limit goes to a constant L. Otherwise, diverges.
Given sequence {an}, sn = a1 + ... + an = nth partial sum
Geometric series: a + ar + ar^2 + ... = Σ a·r^(n−1) = a/(1 − r) if |r| < 1. If |r| ≥ 1, it diverges.
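A quick numerical check of the geometric-series sum (the values a = 3, r = 0.5 are illustrative):

```python
# Partial sums of a + ar + ar^2 + ... approach a/(1 - r) when |r| < 1.
a, r = 3.0, 0.5
partial = sum(a * r**k for k in range(50))  # 50-term partial sum
limit = a / (1 - r)                         # closed form: 6.0
```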
nth-term test: a series can converge only if lim(n→∞) an = 0. It diverges if the limit does not exist or is different from zero. (an → 0 alone does not guarantee convergence — see the harmonic series.)
Reindexing: To raise the starting value, replace n with n − h; h is the difference. To lower the starting value, replace n with n + h.
INTEGRAL TEST: compare the series with its integral version, e.g. Σ 1/n^2 vs. ∫ 1/x^2 dx. Both converge or both diverge, as long as the series has
positive terms and the function is continuous, positive, and decreasing.
p-series: Σ 1/n^p converges for p > 1 but diverges for p ≤ 1. The harmonic series Σ 1/n (p = 1) diverges.
COMPARISON TEST: let Σan, Σcn, Σdn be series with nonnegative terms. Suppose dn ≤ an ≤ cn for all n > N, N an integer
(a) if Σcn converges, then Σan converges
(b) if Σdn diverges, then Σan diverges
LIMIT COMPARISON TEST: let an > 0 and bn > 0 for all n ≥ N (N an integer).
(1) if lim(n→∞)(an/bn) = c with 0 < c < ∞, then Σan and Σbn both converge or both diverge.
(2) if lim(n→∞)(an/bn) = 0 and Σbn converges, then Σan converges.
(3) if lim(n→∞)(an/bn) = ∞ and Σbn diverges, then Σan diverges.
Absolute Convergence: A series Σan converges absolutely if Σ|an| converges.
RATIO TEST: let Σan be any series. If lim(n→∞) |an+1/an| = ρ, then
(a) the series converges absolutely if ρ < 1
(b) the series diverges if ρ > 1 or ρ = ∞
(c) the test is inconclusive if ρ = 1
ROOT TEST: let Σan be any series. If lim(n→∞) nth-root(|an|) = ρ, then see above (Ratio Test) for the conclusions from ρ
ALTERNATING SERIES TEST: an alternating series Σ(−1)^(n+1)·un = u1 − u2 + u3 − u4 + ... converges if all three are true:
(1) un is always positive (2) the un's are eventually nonincreasing (3) un → 0
Alternating series estimation: the error is smaller in magnitude than the first unused term, the sum lies between any two successive partial sums, and the
remainder has the same sign as the first unused term.
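The estimation theorem can be checked numerically on the alternating harmonic series Σ(−1)^(n+1)/n = ln 2 (the cutoff N = 1000 is illustrative):

```python
import math

# Partial sum of the alternating harmonic series through n = N.
N = 1000
s = sum((-1) ** (n + 1) / n for n in range(1, N + 1))

err = abs(s - math.log(2))   # actual remainder magnitude
next_term = 1 / (N + 1)      # first unused term: the error bound
```

With N even, the first unused term is positive, so the remainder ln 2 − s should also be positive, matching the sign rule.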
Rearranging the terms of an absolutely convergent series does not change its sum. Rearranging the terms of a conditionally convergent series can change the sum (or make it diverge).
SUMMARY OF TESTS:
1. nth-term test: if it is not true that an → 0, the series diverges
2. geometric series: Σ a·r^n converges if |r| < 1; otherwise it diverges
3. p-series: Σ 1/n^p converges if p > 1; otherwise it diverges
4. series with nonnegative terms: integral test or compare with known series in (limit) comparison test. Try ratio or root test.
5. Series with some negative terms: does it converge by any of the above? ABSOLUTE convergence implies convergence.
6. Alternating series: an converges if it satisfies Alternating Series Test.
Power Series: Σ cn(x − a)^n is a power series, where cn is a sequence of constants and a is the center (a constant).
If a power series centered at 0 converges at x = k ≠ 0, then it converges absolutely for all x with |x| < |k|. If it diverges at x = d, then it diverges for all |x| > |d|. (For center a, replace |x| with |x − a|.)
Radius of Convergence R: a nonnegative real number, or infinity.
(1) If ∞ > R > 0, the power series diverges for |x − a| > R but converges absolutely for |x − a| < R. The endpoints are UNKNOWN! Use other tests!
(2) the series converges absolutely for all x (R = ∞)
(3) the series converges only at x = a and diverges everywhere else (R = 0)
Test Power Series for Convergence: (1) use the Ratio or Root test to find the radius of convergence. (2) If R is finite, test the endpoints using the comparison, integral, or
alternating series test. (3) If the interval of absolute convergence is a − R < x < a + R, the series diverges for |x − a| > R.
You can integrate or differentiate a power series term-by-term inside its interval of convergence!
TAYLOR SERIES: Σ f^(k)(a)/k! · (x − a)^k = f(a) + f′(a)(x − a) + f″(a)/2!·(x − a)^2 + ... + f^(n)(a)/n!·(x − a)^n + ... ; the Maclaurin series is the case a = 0
Linearization is the first two terms of the Taylor series. The Taylor polynomial of order n consists of the terms of the Taylor series through (x − a)^n.
Remainder: Rn(x) = f^(n+1)(c)/(n+1)! · (x − a)^(n+1), for some c between a and x [the next term in the series, with c in place of a!]
Remainder Estimation: |Rn(x)| ≤ M·|x − a|^(n+1)/(n+1)!, where |f^(n+1)(t)| ≤ M for all t between x and a, inclusive.
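The remainder bound can be verified on a standard example (chosen here for illustration): the order-5 Maclaurin polynomial of e^x at x = 1, where every derivative is e^t, so M = e works on [0, 1].

```python
import math

x, n = 1.0, 5
# Order-5 Maclaurin polynomial of e^x: sum of x^k / k! for k = 0..5.
poly = sum(x**k / math.factorial(k) for k in range(n + 1))

actual_err = abs(math.exp(x) - poly)
# Remainder estimation: |R_n(x)| <= M * |x - a|^(n+1) / (n+1)!, M = e.
bound = math.e * abs(x) ** (n + 1) / math.factorial(n + 1)
```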
Elementary Row Operations: (1) Replace one row by a sum of itself and a multiple of another row. (2) interchange 2 rows.
(3) multiply a row by a nonzero constant.
Properties of Matrix-Vector Product Ax: If A is an m×n matrix, u and v are vectors in R^n, and c is a scalar, then:
(1) A(u+v) = Au+Av ; (2) A(cu) = c(Au)
Parametric Vector Form: x = x1[...] + x2[...] + ..., with the free variables x1, x2, ... as weights on fixed vectors.
Homogeneous equations: when it can be written as Ax=0 it is homogeneous and always has at least one solution (trivial solution, x=0)
it has a nontrivial solution only if the equation has at least one free variable.
Linearly Independent: the set of vectors has only the trivial solution when set equal to 0. Linearly dependent if it has a nontrivial solution (and hence infinitely many).
Transformation, mapping, or function: Ax=b; A transforms x into b, mapping R^n --> R^m. A transformation is linear if:
(1) T(u+v) = T(u) + T(v) for all u,v in domain of T ; (2) T(cu) = cT(u) for all scalars c and all u in the domain of T.
Find the columns of A: they are T(e1), T(e2), etc., where en is the nth column of the identity matrix.

A mapping T:Rn-->Rm is onto if each b in Rm is the image of at least one x in Rn. Domain --T--> Range
ONTO: if T(x)=b has at least one solution for all b, the transformation is onto.
A mapping T:Rn-->Rm is one-to-one if each b in Rm is the image of at most one x in Rn.
ONE-TO-ONE: if T(x) = b has either a unique solution or none at all, it is one-to-one (if T(x)=0 has only the trivial solution; lin. Indep.)
Transpose Properties: (1) (AT)T = A ; (2) (A+B)T = AT + BT ; (3) for a scalar r, (rA)T = rAT ; (4) (AB)T = BTAT
Inverse of a Matrix: A^-1·A = A·A^-1 = I ; invertible = nonsingular; not invertible = singular matrix.
For a 2×2: A = [a b ; c d]; if ad − bc ≠ 0, then A is invertible and A^-1 = 1/(ad − bc)·[d −b ; −c a]
det(A) [2×2] = ad − bc
if A is invertible nxn matrix, for each b in Rn the equation Ax=b has the unique solution x=A-1b
Inverse Properties: (A^-1)^-1 = A ; (AB)^-1 = B^-1·A^-1 ; (A^T)^-1 = (A^-1)^T
TO FIND THE INVERSE: Put the identity matrix on one side and apply row operations to [A | I] until it is [I | A^-1]
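The [A | I] → [I | A^-1] procedure can be sketched in pure Python using exactly the three elementary row operations listed earlier (the function name and the 2×2 sample matrix are illustrative; partial pivoting is added for numerical safety):

```python
def inverse(A):
    n = len(A)
    # Build the augmented matrix [A | I].
    M = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for col in range(n):
        # Row op (2): swap in the row with the largest pivot (partial pivoting).
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        # Row op (3): scale the pivot row so the pivot becomes 1.
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        # Row op (1): add multiples of the pivot row to clear the column.
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    # The right half is now A^-1.
    return [row[n:] for row in M]

Ainv = inverse([[4.0, 7.0], [2.0, 6.0]])  # det = 10, so A is invertible
```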
The following are equivalent: (1) A is an invertible matrix ; (2) A is row equivalent to the nxn identity matrix ; (3) A has n pivot positions ; (4) the
equation Ax=0 has only the trivial solution; (5) columns of A form a linearly independent set; (6) The linear transformation x-->Ax is one-to-one;
(7) the equation Ax=b has at least one solution for each b in R^n; (8) the columns of A span R^n ; (9) the linear transformation x-->Ax maps R^n onto R^n
;(10) there is an n×n matrix C such that CA=I; (11) there is an n×n matrix D such that AD=I; (12) A^T is an invertible matrix.
Let A and B be square matrices. If AB=I, then A and B are both invertible, with B=A-1 and A=B-1
LU Factorization: Ax=b ; A=LU ; L has 1's in the diagonal and is lower triangular; U is upper triangular. To find Ax=b, use L(Ux)=b and let y=Ux
(1) Solve for y in Ly=b ; (2) Solve for x in Ux=y
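The two-step LU solve can be sketched as follows (Doolittle factorization without pivoting, illustrative only; the 2×2 system is chosen here):

```python
def lu_solve(A, b):
    n = len(A)
    # L has 1's on the diagonal (lower triangular); U is upper triangular.
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    # Step (1): solve L y = b by forward substitution.
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    # Step (2): solve U x = y by back substitution.
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

x = lu_solve([[2.0, 1.0], [4.0, 5.0]], [5.0, 13.0])  # 2x+y=5, 4x+5y=13
```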
SUBSPACES: Subspace of Rn is any set H in Rn that has all: (1) zero vector is in H ; (2) for each u and v in H, u+v is in H ; (3) for each u in H and
each scalar c, the vector cu is in H.
Col(A) = column space; all linear combinations of the columns of A. Pivot cols of A form a basis for Col(A)
Nul(A) = null space; all solutions of Ax=0
dim Col(A) + dim Nul(A) = n, the number of columns of A (rank-nullity)
BASIS: A basis is a linearly independent set in H that spans H. Any linearly independent set of exactly p elements in H, a p-dimensional subspace,
is a basis for H. Any set of p elements of H that spans H is a basis for H.
Coordinates: x = c1·b1 + ... + cp·bp, where (c1, ..., cp) are the coordinates of x relative to the basis {b1, ..., bp}.
dim(H) = number of vectors in any basis for H. dim(0)=0
rank(A) = dim(Col(A)), the number of pivot columns in A
rank(A)+dim(Nul(A)) = n, the number of columns of A
Invertible Matrix Theorem: All are equivalent!: (1) the columns of A form a basis of Rn ; (2) Col(A)=Rn ;(3) dim(Col(A))=n ; (4) rank(A)=n ;
(5) Nul(A) ={0} ; (6) dim(Nul(A))=0
Determinant (cofactor expansion): go down the row or column with the most zeros, adding the determinants of each part with the given row/column removed. Multiply
each minor determinant by (−1)^(i+j) and by the entry where the removed row and column intersect: det(A) = Σ (−1)^(i+j)·aij·det(Aij)
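Cofactor expansion translates directly into a recursive function (illustrative sketch; this O(n!) method is for small hand-checked matrices, not serious computation):

```python
def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    # Expand along row 0: sign (-1)^(0+j), entry a_0j, minor A_0j.
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]  # delete row 0, col j
        total += (-1) ** j * A[0][j] * det(minor)
    return total

d = det([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 10.0]])
```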
Basis of Col(A) = pivot columns of Matrix A
Eigenvectors: An eigenvector of an n×n matrix A is a nonzero vector x such that Ax = λx for some scalar λ. A scalar λ is an eigenvalue of A if there is a
nontrivial solution x of Ax = λx; this x is called an eigenvector corresponding to λ. Subtract λI from the matrix and solve (A − λI)x = 0 to find the eigenvectors.
Check if Eigenvector: a matrix A multiplied by an eigenvector gives a scalar multiple of that eigenvector (the scalar is the eigenvalue).
Check if Eigenvalue: Ax = λx has a nontrivial solution iff λ is an eigenvalue of A (the solution is the eigenvector).
In a triangular matrix, the entries on its main diagonal are the eigenvalues.
Characteristic equation: det(A − λI) = 0; its solutions λ are the eigenvalues.
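The "check if eigenvector" step above is a one-liner numerically. The matrix A = [[2, 1], [1, 2]] is an illustrative choice; its eigenvalues are 3 and 1, with [1, 1] an eigenvector for λ = 3:

```python
A = [[2.0, 1.0], [1.0, 2.0]]
v = [1.0, 1.0]  # candidate eigenvector for lambda = 3

# Compute Av; for an eigenvector it must be a scalar multiple of v.
Av = [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]
lam = Av[0] / v[0]  # the scalar multiple is the eigenvalue
```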
A is invertible iff zero is not an eigenvalue of A, iff det(A) ≠ 0.
More Determinant Properties: (1) A is invertible iff det(A) ≠ 0 ; (2) det(AB) = (det A)(det B) ; (3) det(A^T) = det(A) ; (4) if A is triangular, det(A) =
product of the main diagonal entries ; (5) Row replacement on A does not change the determinant. Row interchange changes the sign of the det. Row scaling also
scales the det by the same scalar factor: det(B) = k·det(A), if B is A with one row scaled by k.
DIAGONALIZATION: A = PDP^-1, where D is a diagonal matrix. Useful for finding A^k, as the inner P^-1·P factors cancel: A^k = P·D^k·P^-1.
The diagonal entries of D are the eigenvalues of A, matching column-for-column the eigenvectors of A in P.
Inner Product = Dot Product = u^T·v ; Length(v) = norm(v) = sqrt(v·v) ; vectors are perpendicular if u·v = 0
Perpendicular vectors obey the Pythagorean theorem: ||u + v||^2 = ||u||^2 + ||v||^2
ORTHOGONAL COMPLEMENT: W⊥, all vectors perpendicular to a subspace W. A vector is in the orthogonal complement iff it is orthogonal to each
vector in a basis of W. (Row A)⊥ = Nul A and (Col A)⊥ = Nul(A^T)
If S is an orthogonal set of nonzero vectors, then S is linearly independent (and a basis for the subspace spanned by S).
If you have an orthogonal basis for a subspace W, then you can find the weights in the linear combination easily:
y = Σj ((y·uj)/(uj·uj))·uj , where uj is the jth vector in the orthogonal basis.
ORTHOGONAL PROJECTION: ((y·u)/(u·u))·u = projection of y onto u
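The projection formula in code (vectors chosen here for illustration); the residual y − proj is perpendicular to u, which is what makes the projection "orthogonal":

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

y = [3.0, 4.0]
u = [1.0, 0.0]

c = dot(y, u) / dot(u, u)           # the weight (y.u)/(u.u)
proj = [c * x for x in u]           # projection of y onto u
resid = [a - b for a, b in zip(y, proj)]  # y - proj, perpendicular to u
```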
Orthonormal Columns: if and only if UTU=I, then U has orthonormal columns.
GRAM-SCHMIDT PROCESS: Given a basis {x1,x2,x3,...}, find an orthogonal basis {v1,v2,v3,...}:
v1 = x1
v2 = x2 − ((x2·v1)/(v1·v1))·v1
v3 = x3 − ((x3·v1)/(v1·v1))·v1 − ((x3·v2)/(v2·v2))·v2 ; v4 = x4 − ((x4·v1)/(v1·v1))·v1 − ((x4·v2)/(v2·v2))·v2 − ((x4·v3)/(v3·v3))·v3 [continue the pattern]
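The pattern above loops naturally: each new vector subtracts its projection onto every previously produced v. A minimal sketch (the input basis of R^2 is illustrative):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def gram_schmidt(xs):
    vs = []
    for x in xs:
        v = x[:]
        # Subtract the projection of x onto each earlier orthogonal vector.
        for u in vs:
            c = dot(x, u) / dot(u, u)
            v = [vi - c * ui for vi, ui in zip(v, u)]
        vs.append(v)
    return vs

v1, v2 = gram_schmidt([[3.0, 1.0], [2.0, 2.0]])
```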
QR FACTORIZATION: A is mxn matrix with lin. indep. columns. A=QR, where Q is mxn and columns form orthonormal basis for Col(A) and R
is an nxn upper triangular invertible matrix with positive entries on the diagonal. R = QTA.
DIAGONALIZE SYMMETRIC MATRIX: see diagonalization, but P is orthogonal, so P^-1 = P^T. If A is symmetric, any two eigenvectors from
different eigenspaces are orthogonal. An n×n matrix A is orthogonally diagonalizable if and only if A is a symmetric matrix.
