
Math Workbook v1.

Completely Voluntary Office Hours
7:30-9:30pm @ Steele 214, during the first week of classes

FREE FOOD

Monday 9/26/2011: Calculus
Tuesday 9/27/2011: Linear Algebra I
Wednesday 9/28/2011: Linear Algebra II
Thursday 9/29/2011: ODEs
Friday 9/30/2011: Complex Analysis

FREE DRINKS ON FRIDAY

MISCELLANEOUS

Trigonometry
- Double angle: $\sin(2\theta) = 2\sin\theta\cos\theta$, $\cos(2\theta) = \cos^2\theta - \sin^2\theta$
- Half angle: $\sin^2\!\left(\frac{\theta}{2}\right) = \frac{1-\cos\theta}{2}$, $\cos^2\!\left(\frac{\theta}{2}\right) = \frac{1+\cos\theta}{2}$
- Angle sum/difference: $\sin(\alpha \pm \beta) = \sin\alpha\cos\beta \pm \cos\alpha\sin\beta$, $\cos(\alpha \pm \beta) = \cos\alpha\cos\beta \mp \sin\alpha\sin\beta$, $\tan(\alpha \pm \beta) = \frac{\tan\alpha \pm \tan\beta}{1 \mp \tan\alpha\tan\beta}$
- Special right triangles: the 30-60-90 triangle has sides $x$, $\sqrt{3}\,x$, $2x$; the 45-45-90 triangle has sides $x$, $x$, $\sqrt{2}\,x$
- Interior angle of a regular polygon ($n$ sides): $\frac{(n-2)\pi}{n}$

Quadratic formula
The two roots of a quadratic equation $ax^2 + bx + c = 0$ are given by:
$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$
From the discriminant $\Delta = b^2 - 4ac$: two distinct real roots if $\Delta > 0$, one real double root if $\Delta = 0$, and two distinct complex roots if $\Delta < 0$.

CALCULUS

Differentiation
- Product rule: $(fg)' = f'g + fg'$
- Quotient rule: $\left(\frac{f}{g}\right)' = \frac{f'g - fg'}{g^2}$
- Chain rule: $\frac{d}{dx}\left[f(g(x))\right] = f'(g(x))\,g'(x)$

Integration by parts
$$\int u\,dv = uv - \int v\,du, \qquad \int uv\,dw = uvw - \int uw\,dv - \int vw\,du$$

Integration by substitution
$$\int_a^b f(g(t))\,g'(t)\,dt = \int_{g(a)}^{g(b)} f(x)\,dx$$
ex: with $u = x^2 + 1$,
$$\int_0^2 x\cos(x^2+1)\,dx = \frac{1}{2}\int_{u=1}^{u=5} \cos(u)\,du$$

L'Hopital's Rule
If $\lim_{x\to c} f(x) = \lim_{x\to c} g(x) = 0$ or $\pm\infty$, and $\lim_{x\to c}\frac{f'(x)}{g'(x)}$ exists, then
$$\lim_{x\to c}\frac{f(x)}{g(x)} = \lim_{x\to c}\frac{f'(x)}{g'(x)}$$

Fourier transform
$$F(k) = \int_{-\infty}^{\infty} f(x)\,e^{-2\pi i k x}\,dx$$

Laplace transform
$$F(s) = \int_0^{\infty} f(t)\,e^{-st}\,dt$$

Cylindrical coordinates: $x = r\cos\theta$, $y = r\sin\theta$, $z = z$
Spherical coordinates: $x = r\sin\theta\cos\phi$, $y = r\sin\theta\sin\phi$, $z = r\cos\theta$; $dS_r = r^2\sin\theta\,d\theta\,d\phi$, $dV = r^2\sin\theta\,dr\,d\theta\,d\phi$

Taylor series expansion (Maclaurin series if $a = 0$):
$$f(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \dots + \frac{f^{(n)}(a)}{n!}(x-a)^n + \dots$$
Converts a function into a polynomial approximation near a point $a$.

Partial Fractions
Decomposition by the factors of the denominator:
$$\frac{\sim}{(ax+b)^k} = \frac{A_1}{ax+b} + \frac{A_2}{(ax+b)^2} + \dots + \frac{A_k}{(ax+b)^k}$$
$$\frac{\sim}{(ax^2+bx+c)^k} = \frac{A_1 x + B_1}{ax^2+bx+c} + \dots + \frac{A_k x + B_k}{(ax^2+bx+c)^k}$$
Heaviside cover-up method: cover the associated factor and evaluate. For
$$F(s) = \frac{s+3}{s(s+2)^2(s+5)} = \frac{A_1}{s} + \frac{A_2}{s+2} + \frac{A_3}{(s+2)^2} + \frac{A_4}{s+5},$$
$$A_1 = s\,F(s)\Big|_{s=0} = \frac{s+3}{(s+2)^2(s+5)}\bigg|_{s=0} = \frac{3}{4\cdot 5} = \frac{3}{20}, \qquad A_3 = (s+2)^2 F(s)\Big|_{s=-2} = \frac{s+3}{s(s+5)}\bigg|_{s=-2} = -\frac{1}{6},$$
and for the repeated root:
$$A_2 = \frac{d}{ds}\left[(s+2)^2 F(s)\right]_{s=-2} = \frac{d}{ds}\left[\frac{s+3}{s(s+5)}\right]_{s=-2} = -\frac{7}{36}.$$

Common Series
- Power series: $\sum_{n=0}^{\infty} a_n (z-z_0)^n$ has radius of convergence $R = \lim_{n\to\infty}\left|\frac{a_n}{a_{n+1}}\right|$
- Geometric series: $\sum_{n=1}^{\infty} a\,r^{n-1}$ converges if $|r| < 1$ to $\frac{a}{1-r}$
- Laurent series: $\sum_{n=-\infty}^{\infty} a_n (z-z_0)^n$ has an annulus of convergence
- p-series: $\sum_{n=1}^{\infty} \frac{1}{n^p}$ converges if $p > 1$; harmonic series if $p = 1$

Common Tests for Convergence for $\sum_{n=1}^{\infty} a_n$
- Divergence test: If $\lim_{n\to\infty} a_n \neq 0$, or does not exist, then the series diverges.
- Ratio test: If $\lim_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right| < 1$, then the series converges (absolutely).
- Root test: If $\lim_{n\to\infty}\sqrt[n]{|a_n|} < 1$, then the series converges.

COMPLEX VARIABLES
$$z = x + iy = re^{i\theta}, \qquad r = |z| = \sqrt{x^2+y^2}, \qquad \theta = \arg(z)$$
$$x = \operatorname{Re}(z) = r\cos(\theta) \text{ (real axis)}, \qquad y = \operatorname{Im}(z) = r\sin(\theta) \text{ (imaginary axis)}$$
- Euler's formula: $e^{i\theta} = \cos(\theta) + i\sin(\theta)$
- de Moivre's formula: $z^n = r^n\left[\cos(n\theta) + i\sin(n\theta)\right]$
- Complex exponential: $z^w = e^{w\log z} = e^{w(\log r + i\theta)}$
- Complex logarithm: $\log(z) = \ln|z| + i\arg(z)$
$\log z$ and $z^w$ are multivalued functions! Define branch cuts!
$$\sin\theta = \frac{e^{i\theta} - e^{-i\theta}}{2i}, \qquad \cos\theta = \frac{e^{i\theta} + e^{-i\theta}}{2}$$
Conjugation: $\bar{z} = x - iy$, $z\bar{z} = |z|^2$, $\overline{z_1 + z_2} = \bar{z}_1 + \bar{z}_2$, $\overline{z_1 z_2} = \bar{z}_1\bar{z}_2$
Triangle inequality: $|z_1 + z_2 + \dots + z_n| \le |z_1| + |z_2| + \dots + |z_n|$

LINEAR ALGEBRA

Useful definitions
- magnitude of a vector: $|\mathbf{a}| = \sqrt{\mathbf{a}\cdot\mathbf{a}} = \sqrt{a_1^2 + a_2^2 + \dots + a_n^2}$
- unit (normalized) vector: $\hat{\mathbf{a}} = \mathbf{a}/|\mathbf{a}|$
- determinant of a matrix:
$$\det A = |A| = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31})$$

Dot (inner) product: $\mathbf{a}\cdot\mathbf{b} = a_1 b_1 + a_2 b_2 + \dots = |\mathbf{a}||\mathbf{b}|\cos\theta$ (projection of $\mathbf{a}$ onto $\mathbf{b}$)

Cross product (magnitude = area of parallelogram):
$$\mathbf{a}\times\mathbf{b} = \begin{vmatrix} \hat{x} & \hat{y} & \hat{z} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix}$$

Triple product (volume of parallelepiped):
$$\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c}) = \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix}$$

Eigenvalue equation: $A\mathbf{v} = \lambda\mathbf{v}$, with eigenvalues $\lambda$ and eigenvectors $\mathbf{v}$. Find eigenvalues by solving $\det(A - \lambda I) = 0$; find the eigenvector for each eigenvalue by solving $(A - \lambda_i I)\mathbf{v}_i = 0$.

Invertible matrices (2x2 ex):
$$A^{-1} = \frac{1}{\det A}\begin{pmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{pmatrix}$$
Equivalent conditions: $\det A \neq 0$; $A\mathbf{x} = \mathbf{b}$ always has a unique solution; zero is not an eigenvalue; the matrix is square and has full rank.

Worked example problem: Markov (stochastic) matrix
Each year, 6% of the urban population (node 1, self-loop 0.94) move to rural areas while 9% of the rural population (node 2, self-loop 0.91) move to urban areas. Each $a_{ij}$ describes the interaction between nodes $i$ and $j$.

system of equations:
$$u_{k+1} = 0.94\,u_k + 0.09\,r_k, \qquad r_{k+1} = 0.06\,u_k + 0.91\,r_k$$

matrix form:
$$\begin{pmatrix} u_{k+1} \\ r_{k+1} \end{pmatrix} = \begin{pmatrix} 0.94 & 0.09 \\ 0.06 & 0.91 \end{pmatrix}\begin{pmatrix} u_k \\ r_k \end{pmatrix}$$

find the eigenvalues:
$$\begin{vmatrix} 0.94-\lambda & 0.09 \\ 0.06 & 0.91-\lambda \end{vmatrix} = (0.94-\lambda)(0.91-\lambda) - (0.09)(0.06) = (\lambda - 1)(\lambda - 0.85),$$
so $\lambda_1 = 1$ and $\lambda_2 = 0.85$.

find the eigenvector for $\lambda_1$:
$$\begin{pmatrix} 0.94-1 & 0.09 \\ 0.06 & 0.91-1 \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \;\Rightarrow\; -0.06\,v_1 + 0.09\,v_2 = 0 \;\Rightarrow\; v_2 = \tfrac{2}{3}v_1$$
steady state: $\begin{pmatrix} 1 \\ 2/3 \end{pmatrix}$, or normalized, $\begin{pmatrix} 0.6 \\ 0.4 \end{pmatrix}$

FIELDS AND FLOWS
- Div: $\nabla\cdot\mathbf{v} = \frac{\partial v_x}{\partial x} + \frac{\partial v_y}{\partial y} + \frac{\partial v_z}{\partial z} = \frac{\partial v_i}{\partial x_i}$ : source/sink strength of a vector field
- Grad: $\nabla f = \frac{\partial f}{\partial x}\hat{x} + \frac{\partial f}{\partial y}\hat{y} + \frac{\partial f}{\partial z}\hat{z} = \frac{\partial f}{\partial x_i}$ : rate and direction of change in a scalar field $f$
- Curl: $\nabla\times\mathbf{v} = \begin{vmatrix} \hat{x} & \hat{y} & \hat{z} \\ \partial/\partial x & \partial/\partial y & \partial/\partial z \\ v_x & v_y & v_z \end{vmatrix} = \epsilon_{ijk}\frac{\partial v_k}{\partial x_j}$ : rotation of a vector field $\mathbf{v}$
- Laplacian: $\nabla^2 f = \nabla\cdot\nabla f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2} = \frac{\partial^2 f}{\partial x_i\,\partial x_i}$ : flux density of the gradient flow of a scalar field $f$

Divergence (Gauss's) theorem:
$$\int_V \nabla\cdot\mathbf{a}\,dV = \oint_S \mathbf{a}\cdot\hat{\mathbf{n}}\,dS$$
Net sources/sinks in a volume $V$ == net flow across a surface $S$; net fluid outflow in a volume == total outflow through an enclosing surface area.

Green's theorem:
$$\oint_C (L\,dx + M\,dy) = \iint_D \left(\frac{\partial M}{\partial x} - \frac{\partial L}{\partial y}\right)dx\,dy$$

Continuity equation:
$$\frac{\partial f}{\partial t} + \nabla\cdot\mathbf{v} = s$$
Rate of change of fluid density $f$ plus divergence of flux $\mathbf{v}$ == net sources/sinks $s$; $s = 0$ if $f$ is conserved.

PROBABILITY

Permutations (order matters):
- repeats allowed: $n^k$
- repeats not allowed: $\frac{n!}{(n-k)!}$, or $\frac{n!}{\prod_i n_i!}$ for repeated items

Combinations (order doesn't matter):
- repeats allowed: $\binom{n+k-1}{k} = \binom{n+k-1}{n-1} = \frac{(n+k-1)!}{k!\,(n-1)!}$
- repeats not allowed: $\binom{n}{k} = \frac{n!}{k!\,(n-k)!}$

Gaussian (normal) distribution: bell curve about mean $\mu$ with variance $\sigma^2$
$$P(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

Binomial distribution: probability of $k$ occurrences for $n$ trials, each with probability of success $p$
$$P(k; n, p) = \binom{n}{k}\,p^k (1-p)^{n-k}$$

Binomial theorem:
$$(x+y)^n = \sum_{k=0}^{n}\binom{n}{k} x^k y^{n-k} = \sum_{k=0}^{n}\binom{n}{k} x^{n-k} y^k$$

Multinomial theorem:
$$(x_1 + x_2 + \dots + x_m)^n = \sum_{k_1+k_2+\dots+k_m = n} \binom{n}{k_1, k_2, \dots, k_m}\, x_1^{k_1} x_2^{k_2} \cdots x_m^{k_m}$$

GSC BOOTCAMP!
More information on the wiki: www.bootcamp.caltech.edu
Join our mailing list: bootcamp@caltech.edu
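As a quick numerical cross-check of the Markov worked example above, here is a short Python sketch (NumPy is an outside assumption; it is not part of the workbook) that recovers the eigenvalues and the steady-state split:

```python
import numpy as np

# Transition matrix from the worked example:
# 6% of urban -> rural, 9% of rural -> urban each year.
A = np.array([[0.94, 0.09],
              [0.06, 0.91]])

lam, V = np.linalg.eig(A)
print(lam)  # eigenvalues 1.0 and 0.85 (order may vary)

# Steady state: the eigenvector for lambda = 1, scaled so its entries sum to 1
i = np.argmin(np.abs(lam - 1.0))
v = V[:, i]
print(v / v.sum())  # [0.6, 0.4]
```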

Bootcamp Workbook: Calculus and Differential Equations


September 6, 2011

1 Introduction

Hey! This is in case it's been a while since you learned the basics of calculus and differential equations. We assume you've seen most of this stuff before and that some of it is laughably easy for you. Other parts of it may jog your memory so that when the first few problem sets hit, you'll have the basic tools ready to hand. A few things may have been unaccountably left out of your education so far, in which case, don't worry, just look it up on Wikipedia or something. None of this is compulsory. It's here for students who think they might have forgotten some of the basics, and/or students who want to reassure themselves prior to their upcoming classes. Welcome to Caltech!

2 Calculus

2.1 Differentiation Techniques

1. Use the chain rule to evaluate
$$\frac{d}{dx}\exp(\sin(x))$$
2. Use the product rule to evaluate
$$\frac{d}{dx}\left[x^3\exp(x)\right]$$
3. Use the quotient rule to evaluate
$$\frac{d}{dx}\left[\frac{x^3+3}{\cos(x)}\right]$$

2.2 Integration Techniques

1. Use integration by parts to evaluate the following indefinite integral:
$$\int x e^x\,dx$$
2. Use the substitution $x = \cos(\theta)$ to evaluate the following integral:
$$\int_{-1}^{1} \frac{1}{\sqrt{1-x^2}}\,dx$$
3. Use partial fractions to evaluate the following integral:
$$\int \frac{1}{x^2+3x+2}\,dx$$
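If you want to check your answers, a short sketch using SymPy (an assumption; the workbook itself doesn't use it) evaluates all three symbolically:

```python
import sympy as sp

x = sp.symbols('x')

print(sp.integrate(x * sp.exp(x), x))                   # (x - 1)*exp(x)
print(sp.integrate(1 / sp.sqrt(1 - x**2), (x, -1, 1)))  # pi
print(sp.apart(1 / (x**2 + 3*x + 2)))                   # 1/(x+1) - 1/(x+2)
```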

2.3 Taylor Series

1. Find the Taylor series for $f(x) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2}$ about zero up to $O(x^5)$. Integrate it to estimate the probability that a given standard-normally-distributed event falls within 0.05 of the mean.
2. Give the Taylor series for $\ln(x)$ about $x = 1$ up to $O((x-1)^3)$.
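These expansions are easy to check with SymPy (again an outside-the-workbook assumption):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(-x**2 / 2) / sp.sqrt(2 * sp.pi)

print(sp.series(f, x, 0, 5))             # standard normal density about 0
print(sp.series(sp.log(x), x, 1, 3))     # ln(x) about x = 1

# Integrate the truncated series to estimate P(|X| < 0.05)
poly = sp.series(f, x, 0, 5).removeO()
print(float(sp.integrate(poly, (x, -0.05, 0.05))))  # ~ 0.0399
```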

2.4 Partial Derivatives

1. If $f(x, y) = x^2 y + e^{xy}$, calculate $\frac{\partial f}{\partial x}$.
2. Let $z = x + y$ and suppose that $u = zx$. First calculate the partial derivative of $u$ with respect to $x$ with $y$ held constant, $\left(\frac{\partial u}{\partial x}\right)_y$. Then calculate the partial derivative of $u$ with respect to $x$ with $z$ held constant, $\left(\frac{\partial u}{\partial x}\right)_z$. Are these two quantities the same? Why, or why not?

2.5 The Multivariable Chain Rule

1. Let $u = x^2 + xy$, with $\frac{dx}{dt} = 1$ and $\frac{dy}{dt} = 2$. Write $\frac{du}{dt}$ in terms of $x$ and $y$.
2. Suppose that
$$\frac{\partial F}{\partial x} = y \qquad\text{and}\qquad \frac{\partial F}{\partial y} = x + 2.$$
Use the chain rule to rewrite these equations in terms of the new variables $u = x + y$ and $v = x^2$.

2.6 Line Integrals

1. A unit mass moves through a gravitational field. The gravitational field exerts a force $F$ of 10 N on this mass, in the $-y$ direction. The position of the mass at a time $t$ is given by $s = (x, y) = (t, 5 - t^2)$. Find the work done on the mass by the gravitational field between $t = 0$ and $t = 2$, using the line integral
$$\int \mathbf{F}\cdot d\mathbf{s}$$
2. A straight wire stretches between the points $(0, 0)$ and $(1, 1)$. The mass per unit length of the wire is given by $\rho(x, y) = y e^x + x$. Find the total mass of the wire.

2.7 Lagrange Multipliers

A positively charged particle is constrained to lie on the circle $x^2 + y^2 = 1$. It reacts to the electric potential generated by another positively charged particle fixed at $(2, 2)$, which is given by
$$\frac{1}{4\pi\epsilon_0}\,\frac{1}{\sqrt{(x-2)^2 + (y-2)^2}}.$$
Use Lagrange multipliers to find the point of lowest potential energy on the unit circle, towards which the constrained particle will travel. Check that the answer accords with your physical intuition.
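If the method itself is hazy, recall the standard setup (here $f$ is the potential above and $g$ the constraint):
$$\nabla f(x, y) = \lambda\,\nabla g(x, y), \qquad g(x, y) = x^2 + y^2 - 1 = 0,$$
solved simultaneously for $x$, $y$, and the multiplier $\lambda$.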

3 Differential Equations

3.1 Separation of Variables

Find the general solution to the following differential equation using separation of variables:
$$\frac{dy}{dx} = x^2 y + y$$

3.2 Integrating Factors

Consider a circuit involving a resistance $R$ and an inductance $L$. After the switch is closed at time $t = 0$, the current $i$ in this circuit obeys the following differential equation:
$$L\frac{di}{dt} + Ri = E(t)$$
where $E(t) = t$ is the time-dependent voltage, $L = 1$ and $R = 2$. Solve for the current as a function of time, using an integrating factor. If you've forgotten, an integrating factor is that thing where you take $e$ to the power of a particular function...
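In case that hint is too cryptic, here is the standard recipe for a linear first-order ODE $y' + p(t)\,y = q(t)$:
$$\mu(t) = e^{\int p(t)\,dt}, \qquad \frac{d}{dt}\left[\mu(t)\,y\right] = \mu(t)\,q(t) \;\Longrightarrow\; y(t) = \frac{1}{\mu(t)}\left(\int \mu(t)\,q(t)\,dt + C\right).$$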

3.3 Second-Order Linear Homogeneous ODEs

A unit mass is attached to a spring. Let $y$ be the displacement of the mass from its equilibrium position. The force of the spring acting on the mass is given by $-ky$, where $k$ is the spring constant. Assume that the drag force acting on the mass is given by $F_D = -4\dot{y}$. At time $t = 0$, the mass is released from where it has been held steady at $y = 1$. Calculate the resulting motion of the mass in the following cases:
1. $k = 3$
2. $k = 4$
3. $k = 5$
Graph the behaviour of the resulting solutions. (You can use software like Mathematica if you like. Obviously. I mean, all of this is voluntary to begin with.)

3.4 Systems of Differential Equations

1. Write the following second-order differential equation as a pair of coupled first-order differential equations:
$$\ddot{y} + 6\dot{y} + 9 = 0$$
Hint: Start by setting $\dot{y} = v$.
2. Solve the following pair of coupled first-order differential equations:
$$\dot{x} = x + 2y, \qquad \dot{y} = -x + 4y$$
Hint: Eigenvalues and eigenvectors may be useful here.
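A sketch for checking problem 2 with SymPy's ODE solver (assuming a recent SymPy, and the signs as reconstructed above):

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')
y = sp.Function('y')

eqs = [sp.Eq(x(t).diff(t), x(t) + 2*y(t)),
       sp.Eq(y(t).diff(t), -x(t) + 4*y(t))]
print(sp.dsolve(eqs))  # solutions built from exp(2*t) and exp(3*t)
```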

ACM 100 Bootcamp: Vectors
Adapted from Appendix A of Analysis of Transport Phenomena by William Deen
September 1, 2011

Vector Representation

Components

You are probably already familiar with the concepts of scalars and vectors. A scalar is a quantity with solely a magnitude, while a vector is a quantity with magnitude and direction. For example, the speedometer in your car tells you your speed (a scalar), while your velocity (a vector) is the speed you are traveling in a certain direction.

A vector in 2-dimensional space, i.e. in the $xy$-plane, can be represented by an ordered pair of scalars called components. There are several different ways to represent a vector. One may use boldface, as follows:
$$\mathbf{v} = v_x \mathbf{1}_x + v_y \mathbf{1}_y. \tag{1}$$
In Equation 1, $\mathbf{v}$ is a vector in 2-D space, and $v_x$ & $v_y$ are the scalar components in $x$ and $y$, respectively. The terms $\mathbf{1}_x$ and $\mathbf{1}_y$ are unit vectors in the $x$ and $y$ directions. Unit vectors have a magnitude of 1. Other representations of unit vectors in 2-D space include $\hat{x}$ & $\hat{y}$, $\mathbf{e}_x$ & $\mathbf{e}_y$, $\mathbf{e}_1$ & $\mathbf{e}_2$, and $\mathbf{i}$ & $\mathbf{j}$. We could also arrange the scalar components into row or column arrays, which is useful when performing operations between vectors and matrices (two-dimensional arrays):
$$\mathbf{v} = \begin{pmatrix} v_x & v_y \end{pmatrix} \text{ (row form)}, \tag{2}$$
$$\mathbf{v} = \begin{pmatrix} v_x \\ v_y \end{pmatrix} \text{ (column form)}. \tag{3}$$
This notation can be extended to 3-D space by adding a component in the third direction:
$$\mathbf{v} = v_x \mathbf{1}_x + v_y \mathbf{1}_y + v_z \mathbf{1}_z, \tag{4}$$
or to $N$-D space by having a component for each of the $N$ directions:
$$\mathbf{v} = v_1 \mathbf{e}_1 + v_2 \mathbf{e}_2 + \dots + v_N \mathbf{e}_N = \sum_{i=1}^{N} v_i \mathbf{e}_i. \tag{5}$$

Magnitude and Unit Vectors

The length, magnitude, or norm of a vector $\mathbf{v}$ is denoted by $\|\mathbf{v}\|$. We can take the Euclidean norm of the vector to find the length. For example, the norm of a vector in 3-D space is:
$$\|\mathbf{v}\| = \sqrt{v_1^2 + v_2^2 + v_3^2}. \tag{6}$$
As mentioned above, unit vectors have a magnitude of one. To make any vector into a unit vector, just divide each component by the magnitude:
$$\hat{\mathbf{v}} = \mathbf{v}/\|\mathbf{v}\|. \tag{7}$$

Tensors

Each component in a vector is associated with one direction. For example, the vector $\mathbf{v}$ in 3-D space has three components:
$$\mathbf{v} = v_1 \mathbf{e}_1 + v_2 \mathbf{e}_2 + v_3 \mathbf{e}_3. \tag{8}$$
The component $v_1$ is associated with the $\mathbf{e}_1$ direction, etc. Vectors are the appropriate representation for quantities whose magnitudes are associated with one direction, but what if we have a quantity associated with two or more directions? For example, each component of stress is associated with two directions: one direction describes the unit normal of a surface or plane, while the second direction describes the direction of the force acting on that plane. With three possible orientations of a plane and three directions of force, there are nine total components of the stress tensor. Much like each component of a vector is associated with a unit vector, each component of a second-order tensor is associated with a unit dyad:
$$\boldsymbol{\sigma} = \sigma_{11}\mathbf{e}_1\mathbf{e}_1 + \sigma_{12}\mathbf{e}_1\mathbf{e}_2 + \sigma_{13}\mathbf{e}_1\mathbf{e}_3 + \sigma_{21}\mathbf{e}_2\mathbf{e}_1 + \sigma_{22}\mathbf{e}_2\mathbf{e}_2 + \sigma_{23}\mathbf{e}_2\mathbf{e}_3 + \sigma_{31}\mathbf{e}_3\mathbf{e}_1 + \sigma_{32}\mathbf{e}_3\mathbf{e}_2 + \sigma_{33}\mathbf{e}_3\mathbf{e}_3. \tag{9}$$

Tensors can be even higher order than second-order! For example, the most general linear relation between two second-order tensors (e.g. the stress and strain in a continuum) is a fourth-order tensor. The number of components required to fully specify an $N$th-order tensor in $D$-dimensional space is $D^N$. The arrow-in-space representation that works so well for vectors ceases to be useful for tensorial order higher than one. Fortunately, there are other ways to represent tensors. Two-dimensional arrays can faithfully represent the components of a second-order tensor:
$$\boldsymbol{\sigma} = \begin{bmatrix} \sigma_{11} & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_{22} & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_{33} \end{bmatrix}. \tag{10}$$

Index Notation

A convenient way to represent tensorial quantities compactly is using index notation. To explain index notation, we will use the orthonormal basis $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$. The ortho- prefix in orthonormal represents the fact that the basis vectors are orthogonal (mutually perpendicular) to one another. The -normal suffix represents the fact that the vectors in the basis are normalized, or are unit vectors. Here are the steps to convert a vector from Gibbs (boldface) notation to index notation:
$$\mathbf{v} \overset{(1)}{=} v_1\mathbf{e}_1 + v_2\mathbf{e}_2 + v_3\mathbf{e}_3 \overset{(2)}{=} \sum_{i=1}^{3} v_i\mathbf{e}_i \overset{(3)}{=} v_i.$$
1. Write the vector $\mathbf{v}$ in terms of its components.
2. Sum over a dummy index (in this case, $i$) that iterates through all components.
3. Summation symbols and unit vectors are made implicit.

You don't need to go through all these steps every time you convert from Gibbs notation to index notation; just recognize that a quantity with one subscript index is a vector. We may do the same for a second-order tensor:
$$\boldsymbol{\sigma} \overset{(1)}{=} \sigma_{11}\mathbf{e}_1\mathbf{e}_1 + \sigma_{12}\mathbf{e}_1\mathbf{e}_2 + \dots + \sigma_{33}\mathbf{e}_3\mathbf{e}_3 \overset{(2)}{=} \sum_{i=1}^{3}\sum_{j=1}^{3} \sigma_{ij}\mathbf{e}_i\mathbf{e}_j \overset{(3)}{=} \sigma_{ij}.$$
1. Write the tensor $\boldsymbol{\sigma}$ in terms of its components.
2. Sum over dummy indices (in this case, $i$ and $j$) that iterate through all components.
3. Summation symbols and unit dyads are made implicit.

Note that we needed two distinct dummy indices for a second-order tensor. For an $N$th-order tensor, we would need $N$ distinct dummy indices. Later, we will show conventions for repeated indices in index notation.

Vector Operations

Vector Addition

Vectors are added in a componentwise fashion. For example, to add the 2-D vectors $\mathbf{a}$ and $\mathbf{b}$ in the $xy$-plane, we simply sum their $x$ components together and sum their $y$ components together:
$$\mathbf{a} + \mathbf{b} = (a_x + b_x)\mathbf{1}_x + (a_y + b_y)\mathbf{1}_y. \tag{11}$$
The addition of vectors is a commutative operation, i.e.
$$\mathbf{a} + \mathbf{b} = \mathbf{b} + \mathbf{a} \quad \text{(addition is commutative)}. \tag{12}$$
Vector addition is also associative, i.e. the order of operations for successive vector addition does not matter:
$$(\mathbf{a} + \mathbf{b}) + \mathbf{c} = \mathbf{a} + (\mathbf{b} + \mathbf{c}) \quad \text{(addition is associative)}. \tag{13}$$
The additive identity for vector addition is the zero vector, or the vector with all scalar components equal to zero. In 2-D, the zero vector is:
$$\mathbf{0} = 0\,\mathbf{1}_x + 0\,\mathbf{1}_y \quad \text{(zero vector)}. \tag{14}$$
The additive inverse for a vector $\mathbf{v}$ is the vector where each component is the negative of that in $\mathbf{v}$:
$$-\mathbf{v} = (-v_x)\mathbf{1}_x + (-v_y)\mathbf{1}_y \quad \text{(additive inverse)}. \tag{15}$$
Using the arrow-in-space representation of vectors, addition of two vectors $\mathbf{a}$ and $\mathbf{b}$ is possible by translating $\mathbf{b}$ such that the tail of vector $\mathbf{b}$ is coincident with the tip of vector $\mathbf{a}$. The vector sum $\mathbf{a} + \mathbf{b}$ is simply the vector from the tail of vector $\mathbf{a}$ to the tip of the translated vector $\mathbf{b}$ (see Figure 1). Note that the same resultant vector is achieved from $\mathbf{b} + \mathbf{a}$ by translating $\mathbf{a}$ such that the tail of vector $\mathbf{a}$ is coincident with the tip of vector $\mathbf{b}$.

Figure 1: Vectors $\mathbf{a} = (2, 1)$ and $\mathbf{b} = (1, 3)$ are added to form the vector sum $\mathbf{a} + \mathbf{b} = (3, 4)$.

Scalar Multiplication

Multiplying a vector $\mathbf{v}$ by a scalar $s$ is done by multiplying each scalar component of $\mathbf{v}$ by the scalar $s$. Multiplying a vector by a scalar returns a vector:
$$s\mathbf{v} = s v_x \mathbf{1}_x + s v_y \mathbf{1}_y \quad \text{(scalar multiplication)}. \tag{16}$$
Scalar multiplication is distributive over addition of vectors:
$$s(\mathbf{a} + \mathbf{b}) = s\mathbf{a} + s\mathbf{b} \quad \text{(distributive)}, \tag{17}$$
and over addition of scalars:
$$(s + t)\mathbf{a} = s\mathbf{a} + t\mathbf{a} \quad \text{(distributive)}. \tag{18}$$
The multiplicative identity for scalar multiplication is simply the scalar 1:
$$1\mathbf{v} = \mathbf{v} \quad \text{(multiplicative identity)}. \tag{19}$$
Using the arrow-in-space representation of vectors, scalar multiplication by $s$ is a dilation, lengthening (or shortening, for $0 < |s| < 1$) the vector $\mathbf{v}$ by a factor $s$. Negative scalars reflect the arrow through the origin.

Figure 2: Scalar multiplication of the vector $\mathbf{a} = (2, 1)$ by 2, 1/2, and $-1$.

Dot Product

The dot product (also known as the inner product or scalar product) of two vectors is given by the formula:
$$\mathbf{a}\cdot\mathbf{b} = \|\mathbf{a}\|\,\|\mathbf{b}\|\cos\theta_{ab} \quad \text{(dot product)}, \tag{20}$$
where $\theta_{ab}$ is the angle ($\le 180°$) between vectors $\mathbf{a}$ and $\mathbf{b}$. We see right away that the dot product for orthogonal (perpendicular) vectors must be identically zero, as the cosine of 90° is zero. We also see that the dot product of a vector with itself is simply the magnitude squared. Thus, another way to define the norm of a vector uses the dot product:
$$\|\mathbf{a}\| = \sqrt{\mathbf{a}\cdot\mathbf{a}} \quad \text{(norm of a vector)}. \tag{21}$$
The dot product of any two different orthonormal basis vectors $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ is zero, as they are mutually perpendicular. The dot product of an orthonormal basis vector with itself is one. We can write these two facts compactly using the Kronecker delta:
$$\mathbf{e}_i \cdot \mathbf{e}_j = \delta_{ij}, \tag{22}$$
where the Kronecker delta is defined as:
$$\delta_{ij} = \begin{cases} 1, & i = j, \\ 0, & i \neq j \end{cases} \quad \text{(Kronecker delta)}. \tag{23}$$
The dot product of vectors is commutative and distributive:
$$\mathbf{a}\cdot\mathbf{b} = \mathbf{b}\cdot\mathbf{a}; \qquad \mathbf{a}\cdot(\mathbf{b} + \mathbf{c}) = \mathbf{a}\cdot\mathbf{b} + \mathbf{a}\cdot\mathbf{c}. \tag{24}$$

Dot Product as Projection

Suppose we have some vector $\mathbf{a}$ that we would like to project onto another vector $\mathbf{b}$. If we think of the vector $\mathbf{a}$ as the hypotenuse of a right triangle, the length of the projection of $\mathbf{a}$ onto $\mathbf{b}$ is given by the leg of the right triangle parallel to $\mathbf{b}$. This projection has length $\|\mathbf{a}\|\cos\theta_{ab}$, which is equal to the dot product $\mathbf{a}\cdot\hat{\mathbf{b}}$ (see Figure 3). We now have a geometric interpretation of the dot product of vectors $\mathbf{a}$ and $\mathbf{b}$: the product of the magnitudes of $\mathbf{b}$ and the projection of $\mathbf{a}$ onto $\mathbf{b}$. Conversely, $\mathbf{a}\cdot\mathbf{b}$ is also equal to the product of the magnitudes of $\mathbf{a}$ and the projection of $\mathbf{b}$ onto $\mathbf{a}$.

Figure 3: Projecting a vector $\mathbf{a}$ onto another vector $\mathbf{b}$ is easily visualized with $\mathbf{a}$ as the hypotenuse of a right triangle. The projection is the leg parallel to $\mathbf{b}$. The length of the projection is equal to $\mathbf{a}\cdot\hat{\mathbf{b}}$.

Cross Product

The cross product (or vector product) of two vectors in 3 dimensions is defined as:
$$\mathbf{a}\times\mathbf{b} \equiv \|\mathbf{a}\|\,\|\mathbf{b}\|\sin\theta_{ab}\;\mathbf{e}_{ab} \quad \text{(cross product)}, \tag{25}$$
where $\mathbf{e}_{ab}$ is a unit vector perpendicular to the plane passing through vectors $\mathbf{a}$ and $\mathbf{b}$. Note that if the vectors $\mathbf{a}$ and $\mathbf{b}$ are parallel or antiparallel, there are infinitely many planes passing through $\mathbf{a}$ and $\mathbf{b}$. However, the $\sin\theta_{ab}$ results in parallel or antiparallel vectors having a zero cross product. The convention for the cross product is for the set of vectors $\{\mathbf{a}, \mathbf{b}, \mathbf{a}\times\mathbf{b}\}$ to be a right-handed set: when you curl the fingers of your right hand from the vector $\mathbf{a}$ to the vector $\mathbf{b}$, the thumb of your right hand will point in the direction of $\mathbf{a}\times\mathbf{b}$. Figure 4 depicts this right-hand rule.

Figure 4: For the above example, $\mathbf{a}\times\mathbf{b}$ points in the $+z$-direction (out of the page), while $\mathbf{b}\times\mathbf{a}$ points in the $-z$-direction (into the page).

Using our orthonormal basis $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ for 3-dimensional space, we see that, as long as we prescribe the basis to be a right-handed set of unit vectors, the cross product of any two basis vectors is given by the compact expression:
$$\mathbf{e}_i \times \mathbf{e}_j = \sum_{k=1}^{3} \epsilon_{ijk}\,\mathbf{e}_k. \tag{26}$$

In Equation 26, we introduced the permutation symbol $\epsilon_{ijk}$, defined as follows:
$$\epsilon_{ijk} \equiv \begin{cases} 1, & ijk = 123,\ 231,\ \text{or } 312, \\ -1, & ijk = 132,\ 213,\ \text{or } 321, \\ 0, & \text{otherwise} \end{cases} \quad \text{(permutation symbol)}. \tag{27}$$
The cross product of two vectors is anticommutative. Switching the order of the vectors negates the result:
$$\mathbf{a}\times\mathbf{b} = -\,\mathbf{b}\times\mathbf{a}. \tag{28}$$
The cross product is distributive over vector addition:
$$\mathbf{a}\times(\mathbf{b} + \mathbf{c}) = \mathbf{a}\times\mathbf{b} + \mathbf{a}\times\mathbf{c}. \tag{29}$$
Using the fact that the cross product is distributive over vector addition, we can write the cross product out in terms of vector components:
$$\mathbf{a}\times\mathbf{b} = (a_2 b_3 - a_3 b_2)\mathbf{e}_1 + (a_3 b_1 - a_1 b_3)\mathbf{e}_2 + (a_1 b_2 - a_2 b_1)\mathbf{e}_3. \tag{30}$$
One way to remember the cross product is to take the determinant of a $3\times 3$ matrix with rows that are the components of $\mathbf{a}$, the components of $\mathbf{b}$, and the unit basis vectors, respectively:
$$\mathbf{a}\times\mathbf{b} = \det\begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \end{bmatrix}. \tag{31}$$

Cross Product as Area

Consider a parallelogram whose two sides are defined by the vectors $\mathbf{a}$ and $\mathbf{b}$ (see Figure 5). The area of this parallelogram is given by $\|\mathbf{a}\|\,\|\mathbf{b}\|\sin\theta_{ab}$, which is the magnitude of the cross product $\mathbf{a}\times\mathbf{b}$. Thus, we have a geometric interpretation of the cross product.

Figure 5: A parallelogram formed by two vectors ($\mathbf{a}$ and $\mathbf{b}$) has an area equal to $\|\mathbf{a}\|\,\|\mathbf{b}\|\sin\theta_{ab}$, which is precisely the magnitude of $\mathbf{a}\times\mathbf{b}$.

Scalar Triple Product as Volume

Suppose we wanted to find the volume of a parallelepiped, or a 3-dimensional figure with six parallelogram faces. We can use the cross and dot products of the vectors $\mathbf{a}$, $\mathbf{b}$, and $\mathbf{c}$ defining three of the edges (see Figure 6). The absolute value of the scalar triple product will give us the volume of the parallelepiped:
$$(\mathbf{a}\times\mathbf{b})\cdot\mathbf{c} = \|\mathbf{a}\|\,\|\mathbf{b}\|\sin\theta_{ab}\;\|\mathbf{c}\|\cos\theta_{a\times b,\,c} \quad \text{(scalar triple product)}. \tag{32}$$
Cyclic rearrangements of the arguments do not change the value of the product:
$$(\mathbf{a}\times\mathbf{b})\cdot\mathbf{c} = (\mathbf{b}\times\mathbf{c})\cdot\mathbf{a} = (\mathbf{c}\times\mathbf{a})\cdot\mathbf{b} = \mathbf{c}\cdot(\mathbf{a}\times\mathbf{b}) = \mathbf{a}\cdot(\mathbf{b}\times\mathbf{c}) = \mathbf{b}\cdot(\mathbf{c}\times\mathbf{a}). \tag{33}$$

Figure 6: The scalar triple product gives the volume of the parallelepiped formed by $\mathbf{a}$, $\mathbf{b}$, and $\mathbf{c}$.
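A small numerical illustration of the area and volume interpretations (a sketch using NumPy, which the notes themselves don't use):

```python
import numpy as np

a = np.array([2.0, 1.0, 0.0])
b = np.array([1.0, 3.0, 0.0])
c = np.array([0.0, 0.0, 4.0])

# Area of the parallelogram with sides a and b: |a x b|
print(np.linalg.norm(np.cross(a, b)))  # 5.0

# Volume of the parallelepiped with edges a, b, c: |(a x b) . c|
print(abs(np.dot(np.cross(a, b), c)))  # 20.0

# Cyclic rearrangements give the same scalar triple product
print(np.dot(np.cross(b, c), a), np.dot(np.cross(c, a), b))  # 20.0 20.0
```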

Dyadic Product

A third method of vector multiplication, the dyadic product, results in a second-order tensor. The three components of both vectors are multiplied together in all nine possible combinations, and each product is associated with a unit dyad:
$$\mathbf{a}\mathbf{b} = a_1 b_1\,\mathbf{e}_1\mathbf{e}_1 + a_1 b_2\,\mathbf{e}_1\mathbf{e}_2 + \dots + a_3 b_2\,\mathbf{e}_3\mathbf{e}_2 + a_3 b_3\,\mathbf{e}_3\mathbf{e}_3 \quad \text{(dyadic product)}. \tag{34}$$
The product is easy to visualize using row and column arrays:
$$\mathbf{a}\mathbf{b} = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}\begin{bmatrix} b_1 & b_2 & b_3 \end{bmatrix} = \begin{bmatrix} a_1 b_1 & a_1 b_2 & a_1 b_3 \\ a_2 b_1 & a_2 b_2 & a_2 b_3 \\ a_3 b_1 & a_3 b_2 & a_3 b_3 \end{bmatrix}. \tag{35}$$
The dyadic product is not necessarily commutative. The dyadic product commutes if and only if $\mathbf{b}$ is a scalar multiple of $\mathbf{a}$, and the resulting dyad is then symmetric:
$$\mathbf{a}\mathbf{b} = \mathbf{b}\mathbf{a} \iff \mathbf{b} = c\mathbf{a}. \tag{36}$$
The dyadic product is distributive over vector addition:
$$\mathbf{c}(\mathbf{a} + \mathbf{b}) = \mathbf{c}\mathbf{a} + \mathbf{c}\mathbf{b}. \tag{37}$$

Extensions to Tensors

The definition of the dot product can be extended to cases where either or both arguments are dyads. Consider the unit vectors $\mathbf{e}_i$, $\mathbf{e}_j$, $\mathbf{e}_k$, and $\mathbf{e}_l$ selected from the orthonormal basis $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$. The following operations hold:
$$\mathbf{e}_i\mathbf{e}_j \cdot \mathbf{e}_k = \delta_{jk}\,\mathbf{e}_i, \tag{38}$$
$$\mathbf{e}_j \cdot \mathbf{e}_k\mathbf{e}_l = \delta_{jk}\,\mathbf{e}_l, \tag{39}$$
$$\mathbf{e}_i\mathbf{e}_j \cdot \mathbf{e}_k\mathbf{e}_l = \delta_{jk}\,\mathbf{e}_i\mathbf{e}_l, \tag{40}$$
$$\mathbf{e}_i\mathbf{e}_j : \mathbf{e}_k\mathbf{e}_l = \delta_{jk}\,\delta_{il}. \tag{41}$$

Index Notation of Dot, Cross, and Dyadic Products

Using the Kronecker delta, the index notation representation of the dot product can be determined:
$$\mathbf{a}\cdot\mathbf{b} \overset{(1)}{=} \left(\sum_{i=1}^{3} a_i\mathbf{e}_i\right)\cdot\left(\sum_{j=1}^{3} b_j\mathbf{e}_j\right) \overset{(2)}{=} \sum_{i=1}^{3}\sum_{j=1}^{3} a_i b_j\,\mathbf{e}_i\cdot\mathbf{e}_j \overset{(3)}{=} \sum_{i=1}^{3}\sum_{j=1}^{3} a_i b_j\,\delta_{ij} \overset{(4)}{=} \sum_{i=1}^{3}\sum_{j=i} a_i b_j \overset{(5)}{=} \sum_{i=1}^{3} a_i b_i \overset{(6)}{=} a_i b_i. \tag{42}$$
1. The vectors $\mathbf{a}$ and $\mathbf{b}$ are written as sums over distinct dummy indices.
2. The distributive properties of scalar multiplication and dot products are used to bring all terms inside the summations.
3. The definition of the dot product of orthonormal basis vectors is applied.
4. The definition of the Kronecker delta is applied to the innermost summation.
5. Summing over $j = i$ is equivalent to changing every instance of $j$ to $i$.
6. Summation symbols are made implicit.

The index notation for $\mathbf{a}\cdot\mathbf{b}$ is thus $a_i b_i$. We may do the same for the cross product:
$$\mathbf{a}\times\mathbf{b} \overset{(1)}{=} \left(\sum_{i=1}^{3} a_i\mathbf{e}_i\right)\times\left(\sum_{j=1}^{3} b_j\mathbf{e}_j\right) \overset{(2)}{=} \sum_{i=1}^{3}\sum_{j=1}^{3} a_i b_j\,\mathbf{e}_i\times\mathbf{e}_j \overset{(3)}{=} \sum_{i=1}^{3}\sum_{j=1}^{3} a_i b_j \sum_{k=1}^{3}\epsilon_{ijk}\,\mathbf{e}_k \overset{(4)}{=} \sum_{i=1}^{3}\sum_{j=1}^{3}\sum_{k=1}^{3} a_i b_j\,\epsilon_{ijk}\,\mathbf{e}_k \overset{(5)}{=} a_i b_j\,\epsilon_{ijk}. \tag{43}$$
1. The vectors $\mathbf{a}$ and $\mathbf{b}$ are written as sums over distinct dummy indices.
2. The distributive properties of scalar multiplication and cross products are used to bring all terms inside the summations.
3. The definition of the cross product of orthonormal basis vectors is applied.
4. The distributive property of scalar multiplication is used again to bring all terms inside the summation over $k$.
5. Summation symbols and unit vectors are made implicit.

The dyadic product also has a simple representation in index notation:
$$\mathbf{a}\mathbf{b} \overset{(1)}{=} \left(\sum_{i=1}^{3} a_i\mathbf{e}_i\right)\left(\sum_{j=1}^{3} b_j\mathbf{e}_j\right) \overset{(2)}{=} \sum_{i=1}^{3}\sum_{j=1}^{3} a_i b_j\,\mathbf{e}_i\mathbf{e}_j \overset{(3)}{=} a_i b_j. \tag{44}$$
1. The vectors $\mathbf{a}$ and $\mathbf{b}$ are written as sums over distinct dummy indices.
2. The distributive property of the dyadic product is used to bring all terms inside the summations.
3. Summation symbols and the unit dyad are made implicit.

This brings up important points about dummy indices:
- A dummy index can be repeated no more than twice in an index notation expression. For example, the expression $a_i b_i c_i$ is ill-defined in index notation.
- Summation symbols are implicit for each distinct dummy index.
- Unit basis vectors are implicit for non-repeated, or free, indices. You can determine the tensorial order of an expression by counting the number of free indices.
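To make the index gymnastics concrete, here is a short Python sketch (not part of the original notes) that evaluates $a_i b_i$ and $a_i b_j \epsilon_{ijk}$ by brute-force summation and checks them against NumPy:

```python
import numpy as np

def eps(i, j, k):
    # Permutation symbol: +1 for cyclic (0,1,2), -1 for anticyclic, 0 otherwise
    if (i, j, k) in {(0, 1, 2), (1, 2, 0), (2, 0, 1)}:
        return 1
    if (i, j, k) in {(0, 2, 1), (2, 1, 0), (1, 0, 2)}:
        return -1
    return 0

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# a_i b_i: the repeated index i is summed
dot = sum(a[i] * b[i] for i in range(3))

# a_i b_j eps_ijk: i and j are summed, k is free, so the result is a vector
cross = np.array([sum(a[i] * b[j] * eps(i, j, k)
                      for i in range(3) for j in range(3))
                  for k in range(3)])

print(dot, np.dot(a, b))      # 32.0 32.0
print(cross, np.cross(a, b))  # [-3.  6. -3.] [-3.  6. -3.]
```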

Vector Rotation

Rotation in a plane

Consider a vector $\mathbf{a}$ in the $xy$-plane. The $x$ and $y$ components of $\mathbf{a}$ can be expressed in terms of the magnitude $\|\mathbf{a}\|$ and the angle $\theta$ from the $x$-axis:
$$a_x = \|\mathbf{a}\|\cos\theta; \qquad a_y = \|\mathbf{a}\|\sin\theta. \tag{45}$$
Suppose we want to rotate the vector $\mathbf{a}$ about the origin by some angular increment $\Delta\theta$ (see Figure 7). Let's call the rotated vector $\mathbf{a}'$. Because we're rotating and not dilating the vector, the magnitude of the vector remains unchanged:
$$\|\mathbf{a}'\| = \|\mathbf{a}\|. \tag{46}$$
The new angle from the $x$-axis is incremented by $\Delta\theta$:
$$\theta' = \theta + \Delta\theta. \tag{47}$$
We can use Equation 45 to write the components of the rotated vector $\mathbf{a}'$ in terms of its magnitude and direction:
$$a'_x = \|\mathbf{a}'\|\cos\theta'; \qquad a'_y = \|\mathbf{a}'\|\sin\theta'. \tag{48}$$
Using Equations 46 and 47, and using trig sum identities, the rotated components may be expressed as:
$$a'_x = \|\mathbf{a}\|\left(\cos\theta\cos(\Delta\theta) - \sin\theta\sin(\Delta\theta)\right); \qquad a'_y = \|\mathbf{a}\|\left(\sin\theta\cos(\Delta\theta) + \cos\theta\sin(\Delta\theta)\right). \tag{49}$$
Recognizing that we can use Equation 45 to factor out the components from the reference frame, the rotated components may be expressed as:
$$a'_x = a_x\cos(\Delta\theta) - a_y\sin(\Delta\theta); \qquad a'_y = a_x\sin(\Delta\theta) + a_y\cos(\Delta\theta). \tag{50}$$
We can write the previous pair of equations in vector form:
$$\mathbf{a}' = \mathbf{R}(\Delta\theta)\,\mathbf{a},$$
where the tensor $\mathbf{R}$ is the rotation tensor:
$$\mathbf{R} \equiv \begin{bmatrix} \cos(\Delta\theta) & -\sin(\Delta\theta) \\ \sin(\Delta\theta) & \cos(\Delta\theta) \end{bmatrix}.$$

Figure 7: Rotating the vector $\mathbf{a}$ from the angle $\theta$ to the angle $\theta' = \theta + \Delta\theta$.

Rotation in 3-D space

Suppose the vector $\mathbf{a}$ is now in three-dimensional space and we want to rotate it by the same angular increment $\Delta\theta$ about the $z$-axis. If we project this problem of rotation onto the $xy$-plane, we recover exactly what was just shown in the previous section. The $z$-component of $\mathbf{a}$ is unchanged by the rotation. Thus, in 3-D, the rotation tensor is given by:
$$\mathbf{R} \equiv \begin{bmatrix} \cos(\Delta\theta) & -\sin(\Delta\theta) & 0 \\ \sin(\Delta\theta) & \cos(\Delta\theta) & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad \text{(rotation about the } z\text{-axis)}.$$
Not all rotations must occur about a coordinate axis. What if we wanted to rotate $\mathbf{a}$ by the angular increment $\Delta\theta$ about some arbitrary vector $\mathbf{b}$? It would help for us to transform the vector $\mathbf{a}$ from components in the orthonormal Cartesian basis $B \equiv \{\mathbf{1}_x, \mathbf{1}_y, \mathbf{1}_z\}$ to components in a new orthonormal basis $B' \equiv \{\mathbf{1}_{x'}, \mathbf{1}_{y'}, \mathbf{1}_{z'}\}$. This transformation of bases is given by the matrix $\mathbf{B}$, defined as:
$$\mathbf{B} \equiv \begin{bmatrix} \mathbf{1}_x\cdot\mathbf{1}_{x'} & \mathbf{1}_y\cdot\mathbf{1}_{x'} & \mathbf{1}_z\cdot\mathbf{1}_{x'} \\ \mathbf{1}_x\cdot\mathbf{1}_{y'} & \mathbf{1}_y\cdot\mathbf{1}_{y'} & \mathbf{1}_z\cdot\mathbf{1}_{y'} \\ \mathbf{1}_x\cdot\mathbf{1}_{z'} & \mathbf{1}_y\cdot\mathbf{1}_{z'} & \mathbf{1}_z\cdot\mathbf{1}_{z'} \end{bmatrix}.$$
The transpose of this tensor, $\mathbf{B}^T$, transforms a vector from the new orthonormal basis $B'$ to the original basis $B$. If we select the new orthonormal basis $B'$ such that the vector about which to rotate ($\mathbf{b}$) is coincident with $\mathbf{1}_{z'}$, then we can use the rotation matrix $\mathbf{R}$ to rotate vectors about $\mathbf{b}$ when they are expressed in the basis $B'$! Thus, the rotation transformation is:
$$\mathbf{a}' = \mathbf{B}^T \mathbf{R}\,\mathbf{B}\,\mathbf{a}.$$
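A short numerical sanity check of the 2-D rotation tensor (a NumPy sketch, not part of the original notes):

```python
import numpy as np

def rotation(dtheta):
    """2-D rotation tensor R(dtheta) from Equation 50."""
    c, s = np.cos(dtheta), np.sin(dtheta)
    return np.array([[c, -s],
                     [s,  c]])

a = np.array([1.0, 0.0])
a_rot = rotation(np.pi / 2) @ a
print(a_rot)  # ~ [0., 1.]: a quarter turn sends 1_x to 1_y

# Rotation preserves magnitude
print(np.linalg.norm(a), np.linalg.norm(a_rot))  # both 1.0
```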

Vector Differential Operators

Gradient

The gradient operator is denoted by the symbol $\nabla$ (nabla). The gradient of a scalar quantity returns a vector. In Cartesian coordinates, we can write the gradient of some scalar field $\phi(x, y, z)$ as:
$$\nabla\phi = \frac{\partial\phi}{\partial x}\mathbf{1}_x + \frac{\partial\phi}{\partial y}\mathbf{1}_y + \frac{\partial\phi}{\partial z}\mathbf{1}_z.$$
An example of a gradient with which you may already be familiar is elevation. Suppose $h(x, y)$ gives us the elevation of an area with latitude and longitude specified by $x$ and $y$. We can find the gradient of the elevation as:
$$\nabla h \equiv \frac{\partial h}{\partial x}\mathbf{1}_x + \frac{\partial h}{\partial y}\mathbf{1}_y.$$
This gradient vector $\nabla h$ tells us the vector direction of the steepest ascent at any point. The gradient vector is larger in regions of steeper ascent. Lastly, note that the gradient is everywhere normal to the iso-elevation lines. Thus, if you travel perpendicular to the gradient of a scalar field, you should stay on a surface of constant scalar.

The gradient has a representation in index notation, as shown:
$$\nabla\phi \overset{(1)}{=} \frac{\partial\phi}{\partial x_1}\mathbf{e}_1 + \frac{\partial\phi}{\partial x_2}\mathbf{e}_2 + \frac{\partial\phi}{\partial x_3}\mathbf{e}_3 \overset{(2)}{=} \sum_{i=1}^{3}\frac{\partial\phi}{\partial x_i}\mathbf{e}_i \overset{(3)}{=} \frac{\partial\phi}{\partial x_i}.$$
In step (1), we write out the gradient component by component. In step (2), we condense the three terms into a sum over a dummy index $i$. In step (3), the summation symbol and the unit vector are made implicit.

Not only can we take the gradient of a scalar, but we can also take the gradient of a vector:
$$\nabla\mathbf{v} = \frac{\partial v_1}{\partial x_1}\mathbf{e}_1\mathbf{e}_1 + \frac{\partial v_2}{\partial x_1}\mathbf{e}_1\mathbf{e}_2 + \frac{\partial v_3}{\partial x_1}\mathbf{e}_1\mathbf{e}_3 + \frac{\partial v_1}{\partial x_2}\mathbf{e}_2\mathbf{e}_1 + \dots + \frac{\partial v_2}{\partial x_3}\mathbf{e}_3\mathbf{e}_2 + \frac{\partial v_3}{\partial x_3}\mathbf{e}_3\mathbf{e}_3.$$
Taking the gradient of a vector quantity returns a second-order tensor. We see now that the gradient operator increases the tensorial order of a quantity by 1. The index notation for the gradient of a vector is:
$$\nabla\mathbf{v} \overset{(1)}{=} \frac{\partial v_1}{\partial x_1}\mathbf{e}_1\mathbf{e}_1 + \dots + \frac{\partial v_3}{\partial x_3}\mathbf{e}_3\mathbf{e}_3 \overset{(2)}{=} \sum_{i=1}^{3}\sum_{j=1}^{3}\frac{\partial v_j}{\partial x_i}\mathbf{e}_i\mathbf{e}_j \overset{(3)}{=} \frac{\partial v_j}{\partial x_i}.$$
In step (1), we write out the gradient component by component. In step (2), we condense the nine terms into a double sum over the dummy indices $i$ and $j$. In step (3), the summation symbols and unit dyads are made implicit.

Divergence

The divergence operator is denoted by $\nabla\cdot$ or $\operatorname{div}()$. It is somewhat like a dot product between a gradient operator and a tensor. It reduces the tensorial order of a quantity by one. For example, we can take the divergence of a vector to yield a scalar:
$$\nabla\cdot\mathbf{v} = \frac{\partial v_x}{\partial x} + \frac{\partial v_y}{\partial y} + \frac{\partial v_z}{\partial z}.$$
The divergence of a second-order tensor results in a vector:
$$\nabla\cdot\boldsymbol{\tau} = \left(\frac{\partial\tau_{xx}}{\partial x} + \frac{\partial\tau_{yx}}{\partial y} + \frac{\partial\tau_{zx}}{\partial z}\right)\mathbf{1}_x + \left(\frac{\partial\tau_{xy}}{\partial x} + \frac{\partial\tau_{yy}}{\partial y} + \frac{\partial\tau_{zy}}{\partial z}\right)\mathbf{1}_y + \left(\frac{\partial\tau_{xz}}{\partial x} + \frac{\partial\tau_{yz}}{\partial y} + \frac{\partial\tau_{zz}}{\partial z}\right)\mathbf{1}_z.$$
The index notation form of the divergence is as follows:
$$\nabla\cdot\mathbf{v} \overset{(1)}{=} \left(\frac{\partial}{\partial x_1}\mathbf{e}_1 + \frac{\partial}{\partial x_2}\mathbf{e}_2 + \frac{\partial}{\partial x_3}\mathbf{e}_3\right)\cdot\left(v_1\mathbf{e}_1 + v_2\mathbf{e}_2 + v_3\mathbf{e}_3\right) \overset{(2)}{=} \sum_{i=1}^{3}\sum_{j=1}^{3}\frac{\partial v_j}{\partial x_i}\,\mathbf{e}_i\cdot\mathbf{e}_j \overset{(3)}{=} \sum_{i=1}^{3}\sum_{j=1}^{3}\frac{\partial v_j}{\partial x_i}\,\delta_{ij} \overset{(4)}{=} \sum_{i=1}^{3}\sum_{j=i}\frac{\partial v_j}{\partial x_i} \overset{(5)}{=} \sum_{i=1}^{3}\frac{\partial v_i}{\partial x_i} \overset{(6)}{=} \frac{\partial v_i}{\partial x_i}.$$
In step (1), we write out $\nabla$ and $\mathbf{v}$ component by component. In step (2), we condense the terms into a double sum over the dummy indices $i$ and $j$. In step (3), we apply the definition of the dot product of orthonormal unit vectors, resulting in the Kronecker delta. In step (4), we apply the definition of the Kronecker delta. In step (5), we remove the redundant summation over $j = i$ and change each instance of $j$ to $i$ within the summation. In step (6), the summation over the dummy index $i$ is made implicit. Note that we have a repeated dummy index $i$, so the summation is implicit but there is no implicit unit vector.

The index notation form for the divergence of a second-order tensor can be found in the same manner:
$$\nabla\cdot\boldsymbol{\tau} \overset{(1)}{=} \left(\frac{\partial}{\partial x_1}\mathbf{e}_1 + \frac{\partial}{\partial x_2}\mathbf{e}_2 + \frac{\partial}{\partial x_3}\mathbf{e}_3\right)\cdot\left(\tau_{11}\mathbf{e}_1\mathbf{e}_1 + \dots + \tau_{33}\mathbf{e}_3\mathbf{e}_3\right) \overset{(2)}{=} \sum_{i=1}^{3}\sum_{j=1}^{3}\sum_{k=1}^{3}\frac{\partial\tau_{jk}}{\partial x_i}\,\mathbf{e}_i\cdot\mathbf{e}_j\mathbf{e}_k \overset{(3)}{=} \sum_{i=1}^{3}\sum_{j=1}^{3}\sum_{k=1}^{3}\frac{\partial\tau_{jk}}{\partial x_i}\,\delta_{ij}\,\mathbf{e}_k \overset{(4)}{=} \sum_{i=1}^{3}\sum_{j=i}\sum_{k=1}^{3}\frac{\partial\tau_{jk}}{\partial x_i}\,\mathbf{e}_k \overset{(5)}{=} \sum_{i=1}^{3}\sum_{k=1}^{3}\frac{\partial\tau_{ik}}{\partial x_i}\,\mathbf{e}_k \overset{(6)}{=} \frac{\partial\tau_{ik}}{\partial x_i}.$$
In step (1), we write out $\nabla$ and $\boldsymbol{\tau}$ component by component. In step (2), we condense the terms into a triple sum over the dummy indices $i$, $j$, and $k$. In step (3), we apply the definition of the dot product of an orthonormal unit vector and a unit dyad, resulting in the Kronecker delta. In step (4), we apply the definition of the Kronecker delta. In step (5), we remove the redundant summation over $j = i$ and change each instance of $j$ to $i$ within the summation. In step (6), the summations over the dummy indices $i$ & $k$ and the unit vector $\mathbf{e}_k$ are made implicit.

Curl

The curl operator is denoted by $\nabla\times$ or $\operatorname{curl}()$. It is somewhat like a cross product between a gradient operator and a vector. It maintains the tensorial order of a quantity. For example, the curl of a vector is as follows:
$$\nabla\times\mathbf{v} = \left(\frac{\partial v_z}{\partial y} - \frac{\partial v_y}{\partial z}\right)\mathbf{1}_x + \left(\frac{\partial v_x}{\partial z} - \frac{\partial v_z}{\partial x}\right)\mathbf{1}_y + \left(\frac{\partial v_y}{\partial x} - \frac{\partial v_x}{\partial y}\right)\mathbf{1}_z.$$
Cross products introduce a chirality or handedness to a quantity; thus, the curl of a true vector is a pseudovector and the curl of a pseudovector is a true vector. The representation of the curl in index notation is:
$$\nabla\times\mathbf{v} \overset{(1)}{=} \left(\sum_{i=1}^{3}\frac{\partial}{\partial x_i}\mathbf{e}_i\right)\times\left(\sum_{j=1}^{3} v_j\mathbf{e}_j\right) \overset{(2)}{=} \sum_{i=1}^{3}\sum_{j=1}^{3}\frac{\partial v_j}{\partial x_i}\,\mathbf{e}_i\times\mathbf{e}_j \overset{(3)}{=} \sum_{i=1}^{3}\sum_{j=1}^{3}\frac{\partial v_j}{\partial x_i}\sum_{k=1}^{3}\epsilon_{ijk}\,\mathbf{e}_k \overset{(4)}{=} \sum_{i=1}^{3}\sum_{j=1}^{3}\sum_{k=1}^{3}\frac{\partial v_j}{\partial x_i}\,\epsilon_{ijk}\,\mathbf{e}_k \overset{(5)}{=} \frac{\partial v_j}{\partial x_i}\,\epsilon_{ijk}.$$
Steps (1) and (2) are the same as for the dot product. In step (3), we apply the definition of the cross product to obtain the summation over a third dummy index $k$ of the permutation symbol. In step (4), we bring all terms inside the three summations over $i$, $j$, and $k$. This is the same principle used in step (2). Lastly, the summation symbols and the unit vector $\mathbf{e}_k$ are made implicit in step (5).

The curl of the fluid velocity vector is known as the vorticity, a measure of the local angular rate of rotation in the fluid. Flows with zero vorticity are known as irrotational.

Position Vector

The position vector is often denoted by $\mathbf{x}$. It denotes the position of a point in space with respect to some reference point or origin. In Cartesian component representation, the position vector is:
$$\mathbf{x} = x\,\mathbf{1}_x + y\,\mathbf{1}_y + z\,\mathbf{1}_z.$$
We may also write the position vector in index notation:
$$\mathbf{x} = x_1\mathbf{e}_1 + x_2\mathbf{e}_2 + x_3\mathbf{e}_3 = \sum_{i=1}^{3} x_i\mathbf{e}_i = x_i.$$
The position vector is first expressed component by component, then the terms are combined into a summation over all components using the dummy index $i$. The unit vectors and summation symbol are lastly made implicit.

If we take the gradient of the position vector, we find:
$$\nabla\mathbf{x} = \frac{\partial x_1}{\partial x_1}\mathbf{e}_1\mathbf{e}_1 + \frac{\partial x_2}{\partial x_1}\mathbf{e}_1\mathbf{e}_2 + \frac{\partial x_3}{\partial x_1}\mathbf{e}_1\mathbf{e}_3 + \frac{\partial x_1}{\partial x_2}\mathbf{e}_2\mathbf{e}_1 + \dots + \frac{\partial x_2}{\partial x_3}\mathbf{e}_3\mathbf{e}_2 + \frac{\partial x_3}{\partial x_3}\mathbf{e}_3\mathbf{e}_3.$$
The three coordinates $x_1$, $x_2$, and $x_3$ vary independently of one another. Thus, the derivatives $\partial x_\beta/\partial x_\alpha$ are equal to unity when $\alpha = \beta$ and are otherwise equal to zero:
$$\nabla\mathbf{x} = 1\,\mathbf{e}_1\mathbf{e}_1 + 0\,\mathbf{e}_1\mathbf{e}_2 + 0\,\mathbf{e}_1\mathbf{e}_3 + 0\,\mathbf{e}_2\mathbf{e}_1 + \dots + 0\,\mathbf{e}_3\mathbf{e}_2 + 1\,\mathbf{e}_3\mathbf{e}_3.$$
We already have a method of representing this quantity; it is the second-order identity tensor:
$$\nabla\mathbf{x} = \mathbf{I}.$$
In index notation, we can write the gradient of the position vector as:
$$\frac{\partial x_j}{\partial x_i} = \delta_{ij}.$$
The divergence of the position vector may be expressed as:
$$\nabla\cdot\mathbf{x} = \frac{\partial x_1}{\partial x_1} + \frac{\partial x_2}{\partial x_2} + \frac{\partial x_3}{\partial x_3} = 3.$$
This result may make more sense if we think about it in index notation. First, we write the divergence in terms of a dummy index:
$$\nabla\cdot\mathbf{x} = \frac{\partial x_i}{\partial x_i} = \delta_{ii}.$$
The quantity $\delta_{ii}$ has an implicit summation over all three coordinates, i.e. we want the trace of the identity tensor, which is three in three-dimensional space.

Intuitively, one can guess that the curl of the position vector will be zero, since it always radiates outward from the origin:
$$\nabla\times\mathbf{x} = \left(\frac{\partial x_3}{\partial x_2} - \frac{\partial x_2}{\partial x_3}\right)\mathbf{e}_1 + \left(\frac{\partial x_1}{\partial x_3} - \frac{\partial x_3}{\partial x_1}\right)\mathbf{e}_2 + \left(\frac{\partial x_2}{\partial x_1} - \frac{\partial x_1}{\partial x_2}\right)\mathbf{e}_3 = \mathbf{0}.$$
We can see this also using index notation:
$$\nabla\times\mathbf{x} = \frac{\partial x_j}{\partial x_i}\,\epsilon_{ijk} = \delta_{ij}\,\epsilon_{ijk} = \epsilon_{iik} = 0.$$
In the first step, we write the curl in index notation. We apply the definition of $\partial x_j/\partial x_i$ in the next step, introducing the Kronecker delta. Next, we apply the definition of the Kronecker delta, setting each instance of the dummy index $j$ to $i$. However, the permutation symbol is equal to zero should any of the indices be equal. Thus, the curl of the position vector is zero.

The magnitude of the position vector is represented by $r = \sqrt{\mathbf{x}\cdot\mathbf{x}} = \sqrt{x^2 + y^2 + z^2}$, or the radial distance from the origin to the end of the position vector. Thus, if we want to normalize the position vector, we divide it by $r$:
$$\mathbf{n} = \mathbf{x}/|\mathbf{x}| = \mathbf{x}/r.$$
The unit normal $\mathbf{n}$ represents the outward-facing unit normal vector to the surface of a sphere centered at the origin. How would we find the gradient of a function which depends only on the magnitude $r$?
$$\nabla f(r) = \frac{\partial f(r)}{\partial x_i} = \frac{\partial r}{\partial x_i}\frac{df}{dr}.$$
We used the chain rule of differentiation to break up the gradient, but how do we evaluate the quantity $\partial r/\partial x_i$? Remember that $r$ represents the magnitude of the position vector:
$$\frac{\partial r}{\partial x_i} = \frac{\partial}{\partial x_i}(x_j x_j)^{1/2} = \frac{1}{2\,(x_j x_j)^{1/2}}\frac{\partial}{\partial x_i}(x_j x_j) = \frac{1}{2r}\left(\delta_{ij} x_j + x_j \delta_{ij}\right) = \frac{x_i}{r}.$$
Thus, we have two important formulas for taking gradients of functions which contain the position vector and the magnitude of the position vector:
$$\nabla\mathbf{x} = \mathbf{I}; \qquad \nabla f(r) = \frac{\mathbf{x}}{r}\frac{df}{dr}.$$
In index notation, these two formulas are written as:
$$\frac{\partial x_j}{\partial x_i} = \delta_{ij}; \qquad \frac{\partial f(r)}{\partial x_i} = \frac{x_i}{r}\frac{df}{dr}.$$
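The two position-vector formulas are easy to verify symbolically; here is a short SymPy sketch (SymPy is an assumption, not something the notes use):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
r = sp.sqrt(x1**2 + x2**2 + x3**2)

# dr/dx_i should equal x_i / r
for xi in (x1, x2, x3):
    print(sp.simplify(sp.diff(r, xi) - xi / r))  # prints 0 three times

# Divergence of the position vector: sum of d(x_i)/d(x_i) is 3
print(sum(sp.diff(xi, xi) for xi in (x1, x2, x3)))  # 3
```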

Sample Problems: Exercises

Intro to Index Notation. (2-4 in Leal) Write the following expressions using index notation:
1. $A = \nabla\cdot\mathbf{u}$ ($A$ is a scalar, $\mathbf{u}$ is a vector)
2. $\mathbf{A} = \nabla\mathbf{u}$ ($\mathbf{A}$ is a second-order tensor, $\mathbf{u}$ is a vector)
3. $\boldsymbol{\omega} = \nabla\times\mathbf{u}$ ($\boldsymbol{\omega}$ and $\mathbf{u}$ are vectors)
4. $\mathbf{C} = (\mathbf{x}\times\mathbf{y})\times\mathbf{z}$ ($\mathbf{C}$, $\mathbf{x}$, $\mathbf{y}$, and $\mathbf{z}$ are vectors)
5. $\mathbf{A}^T\mathbf{A}\,\mathbf{x} = \mathbf{A}^T\mathbf{b}$ ($\mathbf{A}$ is a second-order tensor, $\mathbf{x}$ and $\mathbf{b}$ are vectors)

Vector Differential Operator Identities. (2-1 in Leal) Prove the following identities, where $\phi$ is a scalar, $\mathbf{u}$ and $\mathbf{v}$ are vectors:
1. $\nabla\cdot(\phi\mathbf{v}) = \phi\,\nabla\cdot\mathbf{v} + \mathbf{v}\cdot\nabla\phi$
2. $\nabla\times(\phi\mathbf{v}) = \phi\,\nabla\times\mathbf{v} + \nabla\phi\times\mathbf{v}$
3. $\nabla\cdot(\mathbf{u}\times\mathbf{v}) = \mathbf{v}\cdot\nabla\times\mathbf{u} - \mathbf{u}\cdot\nabla\times\mathbf{v}$
4. $\nabla\times(\mathbf{u}\times\mathbf{v}) = \mathbf{v}\cdot\nabla\mathbf{u} - \mathbf{u}\cdot\nabla\mathbf{v} + \mathbf{u}\,\nabla\cdot\mathbf{v} - \mathbf{v}\,\nabla\cdot\mathbf{u}$

Note: The following permutation symbol identity will prove useful:
$$\epsilon_{jki}\,\epsilon_{jmn} = \delta_{km}\delta_{in} - \delta_{kn}\delta_{im}. \tag{51}$$

Validity of Expressions in Index Notation. Are the following index notation expressions valid? If so, what is the corresponding Gibbs notation form? If not, what's wrong?
1. $d = a_i C_{ij} b_j$
2. $x_i = \epsilon_{ijk}\,a_j b_k + a_i b_j c_k d_k$
3. $C_{im} = \epsilon_{ijk}\,\epsilon_{jkl}\,\epsilon_{klm}$
4. $A_{il} = C_{ij}D_{jk}E_{kl} + C_{lm}E_{mn}D_{ni}$

Sample Problems: Applications

Hinged Gate

A static fluid of density $\rho$ is contained behind a gate of height $h$ and width $W$ (in the $z$ direction). The fluid depth is also $h$. The free surface of the fluid is open to the atmosphere. We wish to determine the torque per unit width that the fluid exerts on the hinge (see Figure 8).

Figure 8: Fluid of density $\rho$ is contained by a hinged gate, with gravity $\mathbf{g} = -g\,\mathbf{1}_y$. The depth of fluid and the height of the gate are $h$. The origin of the coordinate system is at the hinge.

1. Determine the pressure profile $p(x, y, z)$ in the fluid. For hydrostatics, the governing equation for the pressure is:
$$\mathbf{0} = -\nabla p + \rho\,\mathbf{g}. \tag{52}$$

2. Determine the net traction vector $\mathbf{t}(\mathbf{n})$ at any point on the gate. The traction vector is the force per unit area, and is given by:
$$\mathbf{t}(\mathbf{n}) \equiv \mathbf{n}\cdot\boldsymbol{\sigma}. \tag{53}$$
In Equation 53, $\mathbf{n}$ is the outward-facing unit normal on the object, and $\boldsymbol{\sigma}$ is the fluid stress tensor. For a static fluid, the fluid stress tensor is given by:
$$\boldsymbol{\sigma} = -p\,\mathbf{I}, \tag{54}$$
where $p$ is the fluid pressure and $\mathbf{I}$ is the second-order identity tensor.

3. The torque $\mathbf{L}$ on a body is given by:
$$\mathbf{L} \equiv \int \mathbf{x}\times\mathbf{t}(\mathbf{n})\,dS, \tag{55}$$
where $\mathbf{x}$ is the moment arm measured from the point of rotation. What is the torque per unit width on the hinge?

4. Suppose the gate were hinged from the top rather than the bottom (see Figure 9). Before doing any work, would you expect the torque per unit width to be larger or smaller than in the previous case? What is the torque per unit width in this case?

Figure 9: Gate is hinged from the top rather than the bottom. All other parameters are the same.

Crash Course in Linear Algebra

Jim Fakonas
August 28, 2011

1 Definitions

The following terms and concepts are fundamental to linear algebra. They are arranged roughly in the order in which they would be introduced in a typical introductory course.

Matrix Product The matrix product of an $m\times n$ matrix $A$ and an $n\times p$ matrix $B$ is the $m\times p$ matrix $AB$ whose $(i, j)$th element is the dot product of the $i$th row of $A$ with the $j$th column of $B$:
$$\begin{pmatrix} A_1 \\ A_2 \\ \vdots \\ A_m \end{pmatrix}\begin{pmatrix} (B^T)_1 & (B^T)_2 & \cdots & (B^T)_p \end{pmatrix} = \begin{pmatrix} A_1\cdot(B^T)_1 & A_1\cdot(B^T)_2 & \cdots & A_1\cdot(B^T)_p \\ A_2\cdot(B^T)_1 & A_2\cdot(B^T)_2 & \cdots & A_2\cdot(B^T)_p \\ \vdots & \vdots & \ddots & \vdots \\ A_m\cdot(B^T)_1 & A_m\cdot(B^T)_2 & \cdots & A_m\cdot(B^T)_p \end{pmatrix}$$
Here, the first factor is $m\times n$, the second is $n\times p$, and the product is $m\times p$; $A_i$ is the $i$th row of $A$ and $(B^T)_j$ is the $j$th column of $B$ (i.e., the $j$th row of $B$ transpose). More compactly,
$$(AB)_{ij} = \sum_{k=1}^{n} A_{ik}B_{kj}.$$
A special case of matrix multiplication is the matrix-vector product. The product of an $m\times n$ matrix $A$ with an $n$-element column vector $x$ is the $m$-element column vector $Ax$ whose $i$th entry is the dot product of the $i$th row of $A$ with $x$.
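The compact index formula translates directly into code. A small Python sketch (the helper `matmul` is hypothetical, and NumPy appears only as a cross-check; neither is part of the original notes):

```python
import numpy as np

def matmul(A, B):
    """Matrix product via (AB)_ij = sum_k A_ik * B_kj."""
    m, n = len(A), len(A[0])
    n2, p = len(B), len(B[0])
    assert n == n2, "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4], [5, 6]]   # 3 x 2
B = [[7, 8, 9], [10, 11, 12]]  # 2 x 3
print(matmul(A, B))
print(np.array(A) @ np.array(B))  # same 3 x 3 result
```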

Vector Space A vector space is any set $V$ for which two operations are defined:

Vector Addition: Any two vectors $v_1$ and $v_2$ in $V$ can be added to produce a third vector $v_1 + v_2$ which is also in $V$.
Scalar Multiplication: Any vector $v$ in $V$ can be multiplied (scaled) by a real number¹ $c \in \mathbb{R}$ to produce a second vector $cv$ which is also in $V$.
and which satisfies the following axioms:
1. Vector addition is commutative: $v_1 + v_2 = v_2 + v_1$.
2. Vector addition is associative: $(v_1 + v_2) + v_3 = v_1 + (v_2 + v_3)$.
3. There exists an additive identity element $0$ in $V$ such that, for any $v \in V$, $v + 0 = v$.
4. There exists for each $v \in V$ an additive inverse $-v$ such that $v + (-v) = 0$.
5. Scalar multiplication is associative: $c(dv) = (cd)v$ for $c, d \in \mathbb{R}$ and $v \in V$.
6. Scalar multiplication distributes over vector and scalar addition: for $c, d \in \mathbb{R}$ and $v_1, v_2 \in V$, $c(v_1 + v_2) = cv_1 + cv_2$ and $(c + d)v_1 = cv_1 + dv_1$.
7. Scalar multiplication is defined such that $1v = v$ for all $v \in V$.

Any element of such a set is called a vector; this is the rigorous definition of the term.

Function Space A function space is a set of functions that satisfy the above axioms and hence form a vector space. That is, each vector in the space is a function, and vector addition is typically defined in the obvious way: for any functions $f$ and $g$ in the space, their sum $(f + g)$ is defined as $(f + g)(x) \equiv f(x) + g(x)$. Common function spaces are $P_n$, the space of $n$th-degree polynomials, and $C^n$, the space of $n$-times continuously differentiable functions.

Inner Product An inner product $\langle\cdot,\cdot\rangle$ on a real vector space $V$ is a map that takes any two elements of $V$ to a real number. Additionally, it satisfies the following axioms:
1. It is symmetric: for any $v_1, v_2 \in V$, $\langle v_1, v_2\rangle = \langle v_2, v_1\rangle$.
2. It is bilinear: for any $v_1, v_2, v_3 \in V$ and $a, b \in \mathbb{R}$, $\langle av_1 + bv_2, v_3\rangle = a\langle v_1, v_3\rangle + b\langle v_2, v_3\rangle$ and $\langle v_1, av_2 + bv_3\rangle = a\langle v_1, v_2\rangle + b\langle v_1, v_3\rangle$.
3. It is positive definite: for any $v \in V$, $\langle v, v\rangle \ge 0$, where the equality holds only for the case $v = 0$.
These axioms can be generalized slightly to include complex vector spaces. An inner product on such a space satisfies the following axioms:

¹More general definitions of a vector space are possible by allowing scalar multiplication to be defined with respect to any arbitrary field, but the most common vector spaces take scalars to be real or complex numbers.

1. It is conjugate symmetric: for any $v_1, v_2 \in V$, $\langle v_1, v_2\rangle = \overline{\langle v_2, v_1\rangle}$, where the overbar denotes the complex conjugate of the expression below it.
2. It is sesquilinear (linear in the first argument and conjugate-linear in the second): for any $v_1, v_2, v_3 \in V$ and $a, b \in \mathbb{C}$, $\langle av_1 + bv_2, v_3\rangle = a\langle v_1, v_3\rangle + b\langle v_2, v_3\rangle$ and $\langle v_1, av_2 + bv_3\rangle = \bar{a}\langle v_1, v_2\rangle + \bar{b}\langle v_1, v_3\rangle$.
3. It is positive definite, as defined above.

The most common inner product on $\mathbb{R}^n$ is the dot product, defined as
$$\langle u, v\rangle \equiv \sum_{i=1}^{n} u_i v_i \quad\text{for}\quad u, v \in \mathbb{R}^n.$$
Similarly, a common inner product on $\mathbb{C}^n$ is defined as
$$\langle u, v\rangle \equiv \sum_{i=1}^{n} u_i \bar{v}_i \quad\text{for}\quad u, v \in \mathbb{C}^n.$$
Note, however, that in physics it is often the first vector in the angled brackets whose complex conjugate is used, and the second axiom above is modified accordingly. In function spaces, the most common inner products are integrals. For example, the $L^2$ inner product on the space $C^n[a, b]$ of $n$-times continuously differentiable functions on the interval $[a, b]$ is defined as
$$\langle f, g\rangle = \int_a^b f(x)\,g(x)\,dx$$
for all $f, g \in C^n[a, b]$.

Linear Combination A linear combination of $k$ vectors $v_1, v_2, \dots, v_k$ is the vector sum $S = c_1 v_1 + c_2 v_2 + \dots + c_k v_k$ for some set of scalars $\{c_i\}$.

Linear Independence A set of vectors is linearly independent if no vector in it can be written as a linear combination of the others. For example, the vectors $(1, 1)$ and $(1, -1)$ are linearly independent, since there is no way to write $(1, 1)$ as a scalar multiple of $(1, -1)$. The vectors $(1, 1)$, $(1, -1)$, and $(1, 0)$ are not linearly independent, however, since any one can be written as a linear combination of the other two, as in $(1, 1) = 2(1, 0) - (1, -1)$.

Span The span of a set of vectors $\{v_1, v_2, \dots, v_k\}$, denoted $\operatorname{span}(v_1, v_2, \dots, v_k)$, is the set of all possible linear combinations of the vectors. Intuitively, the span of a collection of vectors is the set of all points that are reachable by some linear combination of the vectors. For example, the vectors $v_1 = (1, 0, 0)$, $v_2 = (0, 2, 0)$, and $v_3 = (0, 5, 5)$ span $\mathbb{R}^3$, since any three-component vector can be written as a sum of these three vectors using the proper coefficients. In contrast, the vectors $v_1$, $v_2$, and $v_4 = (1, 1, 0)$ span only the $x$-$y$ plane and not all of $\mathbb{R}^3$, since none of these vectors has a component along the $z$ direction.

Basis and Dimension A basis of a vector space is any set of vectors which are linearly independent and which span the space. The vectors $v_1$, $v_2$, and $v_3$ in the previous example form a basis for $\mathbb{R}^3$, while the vectors $v_1$, $v_2$, and $v_4$ do not form a basis of either $\mathbb{R}^3$ (they do not span the space) or $\mathbb{R}^2$ (they are not linearly independent). There are infinitely many bases for any given space (except for the trivial space, consisting only of the number 0), and all of them contain the same number of basis vectors. This number is called the dimension of the space.

Range/Image/Column Space The range of a matrix, also known as its image or column space, is the space spanned by its columns. Equivalently, it is the set of all possible linear combinations of the columns of the matrix. For example, if one views an $m\times n$ matrix as a linear transformation operating on $\mathbb{R}^n$, then the range of the matrix is the subspace of $\mathbb{R}^m$ into which it maps $\mathbb{R}^n$.

Rank The rank of a matrix is the dimension of its column space (or that of its row space, since it turns out that these dimensions are equal). Equivalently, it is the number of linearly independent columns (or rows) in the matrix.

Kernel/Null Space The kernel or null space of an $m\times n$ matrix $A$, denoted $\ker(A)$, is the set of all vectors that the matrix maps to the zero vector. In symbols,
$$\ker(A) = \{x \in \mathbb{R}^n \mid Ax = 0\}.$$

The following terms relate only to square matrices. They are arranged roughly in order of conceptual importance.

Eigenvalues and Eigenvectors An eigenvector of a matrix $A$ is a vector $v$ which satisfies the equation $Av = \lambda v$ for some complex number $\lambda$, which is called the corresponding eigenvalue. Note that $\lambda$ might have a nonzero imaginary part even if $A$ is real. (See below for more information about eigenvalues and eigenvectors.)

Invertible An invertible matrix is any matrix $A$ for which there exists a matrix $B$ such that $AB = BA = I$, where $I$ is the identity matrix of the same dimension as $A$ and $B$. The matrix $B$ is said to be the inverse of $A$, and is usually denoted $A^{-1}$: $AA^{-1} = A^{-1}A = I$. A matrix which is not invertible is called singular.

Diagonalizable A diagonalizable matrix $A$ is any matrix for which there exists an invertible matrix $S$ such that $S^{-1}AS = D$, where $D$ is a diagonal matrix (i.e. all of its off-diagonal elements are zero). A square matrix which is not diagonalizable is called defective.

Orthogonal (Unitary) An orthogonal matrix is any square matrix $A$ with real elements that satisfies $A^T = A^{-1}$, so $A^TA = AA^T = I$. Equivalently, a real, square matrix is orthogonal if its columns are orthonormal with respect to the dot product. A unitary matrix is the complex equivalent; a complex, square matrix is unitary if it satisfies $A^\dagger = A^{-1}$ (where $A^\dagger \equiv \bar{A}^T$), so $A^\dagger A = AA^\dagger = I$.

Symmetric (Hermitian) A symmetric matrix is any real matrix that satisfies $A^T = A$. Similarly, a Hermitian matrix is any complex matrix that satisfies $A^\dagger = A$.

Normal A normal matrix is any matrix for which $A^TA = AA^T$ (or $A^\dagger A = AA^\dagger$ for complex matrices). For example, all symmetric and orthogonal matrices are normal.

Positive and Negative Definite and Semidefinite A positive definite matrix is any symmetric or Hermitian matrix $A$ for which the quantity $v^TAv \ge 0$ for all $v$, with the equality holding only for the case $v = 0$. If the inequality holds but there is at least one nonzero vector $v$ for which $v^TAv = 0$, then the matrix is called positive semidefinite. Likewise, a negative definite matrix is any symmetric or Hermitian matrix $A$ for which $v^TAv \le 0$ for all $v$, with the equality holding only for $v = 0$. If $v^TAv \le 0$ for all $v$ but $v^TAv = 0$ for at least one nonzero $v$, the matrix $A$ is called negative semidefinite.

2 Important Concepts

This section contains brief explanations of several essential concepts in linear algebra. It is intended for students who are familiar with the mechanics of linear algebra but who might not have had a full course in the subject.

2.1 The Meaning of Ax = b

One of the most important equations in linear algebra is the simple matrix equation $Ax = b$. In this equation, $A$ is a given $m\times n$ matrix, $b$ is a given vector in $\mathbb{R}^m$, and the problem is to solve for the unknown vector $x$ in $\mathbb{R}^n$. This equation arises wherever one must solve $m$ linear equations for $n$ unknowns. Notice that the matrix-vector product $Ax$ on the left-hand side is nothing other than a linear combination of the columns of $A$ with coefficients given by the elements of $x$:
$$\begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & & \ddots & \vdots \\ A_{m1} & A_{m2} & \cdots & A_{mn} \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} A_{11}x_1 + A_{12}x_2 + \dots + A_{1n}x_n \\ A_{21}x_1 + A_{22}x_2 + \dots + A_{2n}x_n \\ \vdots \\ A_{m1}x_1 + A_{m2}x_2 + \dots + A_{mn}x_n \end{pmatrix} = x_1\begin{pmatrix} A_{11} \\ A_{21} \\ \vdots \\ A_{m1} \end{pmatrix} + x_2\begin{pmatrix} A_{12} \\ A_{22} \\ \vdots \\ A_{m2} \end{pmatrix} + \dots + x_n\begin{pmatrix} A_{1n} \\ A_{2n} \\ \vdots \\ A_{mn} \end{pmatrix}.$$
That is, the question of whether or not $Ax = b$ has a solution is essentially the question of whether or not the vector $b$ lies in the space spanned by the columns of $A$. There are three possibilities: 1) it does not, in which case there is no $x$ for which $Ax = b$; 2) it does, and there is one and only one $x$ that solves $Ax = b$; or 3) it does, and there are infinitely many vectors that solve $Ax = b$. In case (1), no matter how the columns of $A$ are weighted, they cannot sum to give the vector $b$. In case (2), the columns of $A$ are linearly independent and $b$ lies in their span. Finally, in case (3), the columns of $A$ are linearly dependent and $b$ lies in their span, so there are infinitely many linear combinations of them which sum to give $b$.

The conceptual power of this interpretation is that it lends a geometric significance to the algebraic equation $Ax = b$. One can picture the $n$ columns of $A$ as vectors in $\mathbb{R}^m$. Together, they span some space, which could be all of $\mathbb{R}^m$ or only some proper subspace. In the former case, the vectors can be combined, with an appropriate choice of scalar coefficients, to produce any vector in $\mathbb{R}^m$. In the latter case, in contrast, there are some vectors in $\mathbb{R}^m$ that lie outside of the span of the columns of $A$. If the vector $b$ in $Ax = b$ happens to be one of those vectors, then no possible linear combination of the columns of $A$ can reach it, and the equation $Ax = b$ has no solution.

2.2 The Eigenvalue Equation, Av = λv

Another ubiquitous and fundamentally important equation in linear algebra is the relation $Av = \lambda v$, where $A$ is an $n\times n$ matrix (notice that it must be square in order for the left- and right-hand sides to have the same dimension), $v$ is an $n$-element vector, and $\lambda$ is a constant. The task is to solve for all eigenvectors $v$ and corresponding eigenvalues $\lambda$ that satisfy this relation. Rearranging the eigenvalue equation gives
$$(A - \lambda I)v = 0, \tag{1}$$
where $I$ is the $n\times n$ identity matrix. The only way for this equation to have a nontrivial solution is for the matrix $(A - \lambda I)$ to be singular, in which case its determinant is zero. This fact provides a useful method for finding the eigenvalues of small matrices. For example, to find the eigenvalues of
$$A = \begin{pmatrix} 1 & 3 \\ 3 & 1 \end{pmatrix},$$
first construct the matrix $(A - \lambda I)$ and then require that its determinant vanish:
$$\det(A - \lambda I) = \det\begin{pmatrix} 1-\lambda & 3 \\ 3 & 1-\lambda \end{pmatrix} = (1-\lambda)^2 - 9 = 0 \;\Rightarrow\; (\lambda + 2)(\lambda - 4) = 0.$$
The eigenvalues of $A$ are therefore $\lambda_1 = -2$ and $\lambda_2 = 4$. To find the eigenvectors, substitute these numbers into (1). For example,
$$\begin{pmatrix} 1-\lambda_1 & 3 \\ 3 & 1-\lambda_1 \end{pmatrix}v_1 = 0 \;\Rightarrow\; \begin{pmatrix} 3 & 3 \\ 3 & 3 \end{pmatrix}\begin{pmatrix} v_1^{(1)} \\ v_1^{(2)} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \;\Rightarrow\; v_1^{(2)} = -v_1^{(1)}.$$
Thus, $v_1^{(2)} = -v_1^{(1)}$, and therefore the eigenvector $v_1$ corresponding to $\lambda_1$ is
$$v_1 = v_1^{(1)}\begin{pmatrix} 1 \\ -1 \end{pmatrix},$$
where $v_1^{(1)}$ is an arbitrary scalar (note that $A(cv_1) = \lambda_1(cv_1)$, so eigenvectors are only unique up to a multiplicative constant).

But what does it mean for a vector to be an eigenvector of a matrix? Is there a geometric interpretation of this relationship similar to that given for $Ax = b$ in the previous section? It turns out that there is. It is helpful to think of square matrices as operators on vector spaces, as any linear transformation on $\mathbb{R}^n$ (or $\mathbb{C}^n$) can be represented by an $n\times n$ matrix. Examples of such transformations include rotations, reflections, shear deformations, and inversion through the origin. The eigenvectors of such a matrix, then, are special directions in that space that are unchanged by the transformation that the matrix encodes. A vector that points in one of these directions is scaled by a constant factor (the eigenvalue, $\lambda$) upon multiplication by the matrix, but it points along the same axis as it did before the matrix operated on it.

This geometric interpretation is especially useful when zero is an eigenvalue of a matrix. In that case, there is at least one vector (there can be more) that, when multiplied by the matrix, maps to the zero vector. Let $A$ be such a matrix, and let $v_0$ be an eigenvector of $A$ whose eigenvalue is zero. Then, $Av_0 = 0v_0 = 0$. Likewise, any scalar multiple $cv_0$ also maps to zero: $A(cv_0) = cAv_0 = c0 = 0$. As a result, an entire axis through $\mathbb{R}^n$ (here $A$ is taken to be $n\times n$ and real, as an example) collapses to the zero vector under the transformation represented by $A$, so the image of $A$ is a lower-dimensional subspace of $\mathbb{R}^n$. For example, if $A$ is a $3\times 3$ real matrix with one eigenvector whose eigenvalue is zero, then there is a line in $\mathbb{R}^3$ that collapses to zero under multiplication by $A$, and the image of $A$ is therefore a two-dimensional plane in $\mathbb{R}^3$. As a result, any vector $b$ that lies outside of this plane lies outside the span of $A$, so there is no vector $x$ that solves $Ax = b$. This is why a matrix is singular if one of its eigenvalues is zero; if it were invertible (i.e. not singular), there would always exist such an $x$ (equal to $A^{-1}b$) for every $b$.
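A quick numerical check of the worked example above (a NumPy sketch; the notes themselves don't use NumPy):

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [3.0, 1.0]])

lam, V = np.linalg.eig(A)
print(lam)  # eigenvalues 4 and -2 (order may vary)
print(V)    # columns proportional to (1, 1) and (1, -1)
```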

Exercises
1. Which of the following sets are vector spaces?
(a) C^3, the set of all ordered triples of complex numbers, with scalar multiplication defined over the complex numbers
(b) C^3, with scalar multiplication defined over the real numbers
(c) R^3, the set of all ordered triples of real numbers, with scalar multiplication defined over the complex numbers
(d) The subset of R^2 enclosed by the unit circle, with scalar multiplication defined over the real numbers
(e) The line y = 4x (i.e. the subset of R^2 comprising the points on this line), with scalar multiplication over R
(f) The line y = 4x + 2, with scalar multiplication over R
(g) The subset of R^3 bounded above by the plane z = 10, with scalar multiplication over R
(h) The functions $f(x) = x^3$, $g(x) = \cos(\sqrt{x})$, and $h(x) = 1$, where $x \in [0, 1]$, and all linear combinations of these functions with real coefficients, with function addition and scalar multiplication defined as usual

2. Consider the vectors
$$v_1 = \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}, \qquad v_2 = \begin{pmatrix} 3 \\ 0 \\ 1 \end{pmatrix}, \qquad \text{and} \qquad v_3 = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}.$$
Are these vectors linearly independent? What is the dimension of the space they span? Use two methods to answer these questions:
(a) Recall that if these vectors are linearly independent, then none can be written as a linear combination of the others. As a direct result, the only solution to the equation
$$c_1 v_1 + c_2 v_2 + c_3 v_3 = 0 \tag{2}$$
is the trivial solution, $c_1 = c_2 = c_3 = 0$. Write (2) in matrix form and solve it using substitution, row reduction, or MATLAB (make sure you are at least familiar with the other methods before you use MATLAB, though). Is the trivial solution the only solution?
(b) Notice that the matrix that you constructed in (2a) is square. Find its eigenvalues. What do they tell you about the vectors $v_1$, $v_2$, and $v_3$? (Hint: What is the null space of this matrix?)

3. Give an example of a basis for each of the following vector spaces. What is the dimension of each?
(a) C^4, the space of all ordered quadruples of complex numbers
(b) P_3, the space of all polynomials of degree less than or equal to three that have real coefficients
(c) M_{2×2}, the space of all 2 × 2 matrices with real elements
(d) C^2[0, 1], the space of all twice-differentiable functions defined on the interval [0, 1]

4. Find the eigenvalues and eigenvectors of the following matrices.
(a) $\begin{pmatrix} 3 & 0 & 2 \\ 0 & 2 & 1 \\ 0 & 1 & 2 \end{pmatrix}$
(b) $\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$

5. Put the following system of equations into matrix form and solve for x, y, and z.
$$2x + 3y + z = 4$$
$$x + 3y + z = 1$$
$$x + 2z = 2$$
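For the MATLAB-style check that exercise 2 alludes to, a minimal NumPy sketch of exercise 5 looks like this (the coefficients below are transcribed from the system exactly as printed above):

import numpy as np

# Matrix form A x = b of the system above; each row is one equation
A = np.array([[2.0, 3.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 0.0, 2.0]])
b = np.array([4.0, 1.0, 2.0])

solution = np.linalg.solve(A, b)     # the vector (x, y, z)
print(solution)
print(np.allclose(A @ solution, b))  # True: the solution really satisfies the system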

Appendix: Venn Diagram of Matrix Types

1. The following statements about a matrix A are equivalent:
(a) A is invertible.
(b) The equation Ax = b always has a unique solution.
(c) Zero is not an eigenvalue of A.
(d) det A ≠ 0.
(e) A is square and has full rank.
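The equivalences in item 1 are easy to check numerically. A small NumPy sketch with an arbitrary invertible example (swap in a singular matrix: the first three checks fail and solve raises numpy.linalg.LinAlgError):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

print(np.linalg.det(A) != 0)                         # (d) nonzero determinant
print(np.linalg.matrix_rank(A) == A.shape[0])        # (e) full rank
print(np.all(np.abs(np.linalg.eigvals(A)) > 1e-12))  # (c) zero is not an eigenvalue
print(np.linalg.solve(A, np.array([1.0, 0.0])))      # (b) the unique solution of A x = b
# (Floating-point caveat: in real code, compare the determinant against a tolerance.)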

2. PSD: Positive Semidefinite; PD: Positive Definite; NSD: Negative Semidefinite; ND: Negative Definite


COMPLEX ANALYSIS EXAMPLES

1. Introduction

Complex analysis is the study of functions f such that f(z) is a complex number for every complex number z. We will generally restrict our attention to functions f that are complex differentiable, also called analytic functions or holomorphic functions. Such functions have many nice properties, some of which are listed in Section 3. The power of complex analysis as a tool in the physical sciences comes from using the methods of complex analysis to calculate definite integrals (see Section 6).

As another motivation, consider the problem of the harmonic oscillator. In the underdamped case, the motion can be described as
$$x(t) = Ae^{-\gamma t}\cos(\omega t + \phi) \qquad\text{or}\qquad x(t) = Ae^{-\gamma t}\sin(\omega t + \phi)$$
for an appropriate choice of constants. It is often more helpful to think about this as the real or imaginary part of the expression
$$Ae^{(-\gamma + i\omega)t}$$
for an appropriate choice of constants. Indeed, the way we solve the harmonic oscillator is by guessing a solution of the form $Ae^{at}$ and adjusting the constants A and a so that the differential equation is solved. This is impossible without complex numbers. We see then that even though the solution is real valued, we must use complex functions to arrive at the solution.

As another example, in quantum mechanics, one solves a differential equation for a wave function in either position space or momentum space. To go between the two, one uses the Fourier Transform, often written as
$$\hat{f}(x) = \int_{-\infty}^{\infty} e^{-2\pi i x t} f(t)\,dt.$$
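The complex-exponential rewriting of the underdamped solution above is easy to verify numerically. A minimal sketch (the constants are arbitrary sample values, and the phase is absorbed into a complex amplitude):

import numpy as np

A, gamma, omega, phi = 2.0, 0.3, 5.0, 0.7
t = np.linspace(0.0, 10.0, 1001)

x_real = A * np.exp(-gamma * t) * np.cos(omega * t + phi)
# The complex amplitude A e^{i phi} carries the phase;
# the exponent carries both the decay and the oscillation
x_cplx = A * np.exp(1j * phi) * np.exp((-gamma + 1j * omega) * t)

print(np.allclose(x_real, x_cplx.real))  # True: the real part is the damped cosine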

Integrals like the Fourier transform above are often difficult to compute, and the hardest examples require moving the path of integration from the real axis to another curve in the plane. Why can we get away with this? Unfortunately the answer is not simple, but it can be understood once we know something about analytic functions. A challenging example is given in the exercises section.

As a final motivating example, consider the problem of finding the equilibrium state of an insulated heated metal plate. More specifically, suppose we are given a metal plate with a reasonable boundary (i.e. no fractal boundaries) as in the picture. Suppose also we have set up a very elaborate system of heaters and coolers so that we can control the temperature on the boundary of the plate. If we assume the plate is insulated (i.e. no heat is lost), then as time tends to infinity, the temperature distribution inside the plate will approach an equilibrium state. Thinking of the plate as a region in R^2 and the equilibrium distribution of temperature as a function f(x, y), then f satisfies the steady-state Heat Equation (Laplace's equation)
$$\frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} = 0.$$


Figure 1. A metal plate with smooth boundary.

Figure 2. A metal plate with non-smooth (but still reasonable) boundary.

Of course the function f depends on how we set the temperature at the boundary. The function f is known as the solution to the Dirichlet Problem with given boundary data. Solving this problem when the plate is the shape of a disk with radius 1 is easy. If the boundary temperature is a fixed function $g(e^{i\theta})$, then the equilibrium temperature at a point z inside the disk is
$$f(z) = \int_0^{2\pi} g(e^{i\theta})\,\frac{1 - |z|^2}{|e^{i\theta} - z|^2}\,\frac{d\theta}{2\pi}.$$
Notice that the function g does not even have to be continuous for this integral to make sense.

Now the obvious question arises: what if the plate is not in the shape of a disk with radius 1? It turns out that complex analysis provides the right tool for this problem. We must appeal to the Riemann Mapping Theorem. This result essentially asserts the existence of a function that allows us to start with our plate, deform it until it is a disk of radius 1, solve the problem on the disk of radius 1 as above, and then deform the disk back to the shape of the original plate. Such a function is called a conformal map from the plate to the disk. Proving this function exists is not easy, but is a standard part of a first course in complex analysis. Actually finding a formula for the map is very difficult (for a general plate), though there are methods available to estimate it.

The remainder of these notes is meant to serve as a reminder of some of the basic terminology, notation, and background necessary to hit the ground running when you first encounter complex analysis at Caltech (whether in a course or in research). This set of notes is not a prerequisite for any course at Caltech; it is meant only to serve as a reminder of what you might have forgotten or as a primer for what you are about to learn. The fourth section of the Practice Problems requires more than the material presented here to solve; those problems are included as examples of the kinds of things one can calculate with complex analysis tools.

2. Useful Formulas

Let us begin with a quick review of some basic concepts and notation. Remember that the complex number i is defined so that $i^2 = -1$. Every complex number can be written uniquely as $z = x + iy$ where x and y are real numbers. One can also express z in polar co-ordinates as $z = re^{i\theta}$, where $r = \sqrt{x^2 + y^2}$ is the norm or modulus of z and θ is the argument or phase of z.


The expression of z in polar co-ordinates is not unique! Indeed,
$$1 = e^{2\pi i} = e^{4\pi i} = e^{6\pi i} = e^{8\pi i} = \cdots.$$
This comes from the fact that the argument of a complex number is not a well-defined quantity. We will discuss this more in Section 4. However, one can always define $\arg(x + iy) = \arctan(y/x)$ when x and y are both positive. To go the other way, when one is given a complex number in polar co-ordinates, one can write it as $x + iy$ by setting $x = r\cos(\theta)$ and $y = r\sin(\theta)$. This is a result of the famous Euler (pronounced Oiler) formula
$$e^{ix} = \cos(x) + i\sin(x).$$

The complex conjugate of $x + iy$ is $x - iy$. The complex conjugate of $re^{i\theta}$ is $re^{-i\theta}$. In general, when taking the complex conjugate of an expression (no matter how complicated), one simply changes every instance of i to −i. To multiply two complex numbers, one writes
$$(x + iy)(u + iv) = xu - yv + i(yu + xv), \qquad re^{it}\,Re^{is} = rRe^{i(t+s)}.$$
To divide complex numbers, one has to make the denominator real by multiplying the top and bottom by the conjugate of the denominator. We get
$$\frac{u + iv}{x + iy} = \frac{ux + vy}{x^2 + y^2} + i\,\frac{vx - uy}{x^2 + y^2}.$$

3. Analyticity

An analytic function can be characterized in many different ways. One way is to say that a function f is analytic in a region Ω if every point $w \in \Omega$ is the center of an open disk of (possibly infinite) radius R > 0 completely contained in Ω such that for every z in this disk one has
$$f(z) = \sum_{n=0}^{\infty} a_n (z - w)^n.$$

The main idea is that insisting that a function of a complex variable be analytic is actually a strong condition, and so we can make some very strong conclusions. Here are some consequences of analyticity: if f is analytic in a region Ω, then...

(1) f is infinitely differentiable;
(2) if γ is a smooth curve whose interior is inside Ω, then
$$\oint_{\gamma} f(z)\,dz = 0;$$
(3) if C is a circle whose interior is in Ω, then
$$f^{(n)}(z) = \frac{n!}{2\pi i}\oint_C \frac{f(\zeta)}{(\zeta - z)^{n+1}}\,d\zeta$$
for z inside C, where $f^{(n)}$ denotes the nth derivative of f;
(4) if D is a closed disk inside Ω, then
$$\sup_{z \in D} |f(z)| = \sup_{z \in \partial D} |f(z)|.$$
This property is called the Maximum Modulus Principle;
(5) $f(\Omega) := \{w : w = f(z) \text{ for some } z \in \Omega\}$ is an open set;


(6) if the closed disk centered at z of radius R is contained inside Ω, then
$$f(z) = \int_0^{2\pi} f(z + Re^{i\theta})\,\frac{d\theta}{2\pi}.$$
This is called the Mean Value Property for analytic functions;
(7) as a consequence of (6), we know that $u(x, y) = \mathrm{Re}[f(x + iy)]$ and $v(x, y) = \mathrm{Im}[f(x + iy)]$ are both harmonic functions, i.e.
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0,$$
and similarly for v.

Example. An example of a seemingly nice function that is not analytic is the complex conjugate function $f(z) = \bar{z}$. If we think of this as a function of two real variables x and y and think of f as $f(x, y) = (x, -y)$, then the function f is continuous and in fact infinitely differentiable in both coordinates and has smooth gradient. However, f is not an analytic function of a complex variable.

Example. Consider the function $f(z) = (1 - z)^{-1}$. This function is analytic in the entire complex plane except at the single point 1. We can therefore set $\Omega = \mathbb{C} \setminus \{1\}$, so that 0 is the center of an open disk of radius 1 that is completely contained inside Ω. In that disk, we write
$$\frac{1}{1 - z} = \sum_{n=0}^{\infty} z^n.$$
It is important to note that this series does not converge when z is not inside the disk of radius 1 centered at 0. Let us examine how to find the series representation for f(z) around a point other than 0; for example, around the point −3. In this case, we have
$$\frac{1}{1 - z} = \frac{1}{1 - (z + 3) + 3} = \frac{1}{4 - (z + 3)} = \frac{1}{4}\cdot\frac{1}{1 - (z + 3)/4} = \frac{1}{4}\sum_{n=0}^{\infty}\left(\frac{z + 3}{4}\right)^n,$$
which is a Taylor Series that converges for all z in an open disk centered at −3 and of radius 4.

4. Complex Logarithm

The logarithm of a positive number x can be easily defined by
$$\ln(x) = \int_1^x \frac{1}{t}\,dt. \tag{4.1}$$
This is a simple definition that requires no knowledge of the exponential function or the number e. However, for complex numbers we can define a logarithm, but it will not be as easy. Consider the following line of reasoning: Let z be a complex number. We want to define log(z) to be a complex number w so that $e^w = z$. However, such a choice of w is not unique. For example, one can set log(1) = 0 or log(1) = 2πi. Furthermore, we want our logarithm to be an inverse to the exponential function. It makes sense to want to define $\log(e^z) = z$ for all complex numbers z. However, in this case we would define $\log(e^{2\pi i}) = 2\pi i$ and $\log(e^0) = 0$, but you will notice that we have just defined log(1) in two different ways. Obviously we have some difficulties to overcome.


To get around this, we make what is called a branch cut. A branch cut is a curve that has one end at 0, never intersects itself, and tends to infinity. This curve helps us define the logarithm by telling us where we will introduce the discontinuity for the logarithm function. We then define
$$\log(z) = \ln(|z|) + i\arg(z),$$
where the ln(|z|) term is to be interpreted as in equation (4.1). The argument function obviously has a discontinuity, though we have much flexibility as to where we introduce this discontinuity. The set of points at which we let the arg function be discontinuous is what we call our branch cut. A common choice of branch cut is along the positive real axis. In this case arg(z) will always lie between 0 and 2π. Another common choice is the negative real axis, where the arg function takes values between −π and π.
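Python's cmath module implements exactly this second convention (branch cut on the negative real axis, with the argument in (−π, π]), so the jump across the cut can be seen in a few lines. A quick sketch:

import cmath

# Approach the branch cut on the negative real axis from above and from below
above = cmath.log(complex(-1.0, 1e-12))
below = cmath.log(complex(-1.0, -1e-12))
print(above)  # approximately  pi*i
print(below)  # approximately -pi*i: the imaginary part jumps by 2*pi across the cut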

Figure 3. A graph of the multi-valued function arg(z).

Figure 4. The graph of the single-valued function with a discontinuity along the branch cut.

In order to make the arg function continuous in the whole plane, we would need to use some very advanced complex analysis that is beyond the scope of these notes. Figure 3, though, shows what is happening. As we travel (counterclockwise) around 0, the arg function increases. In order to make it continuous, we would need to let it keep increasing as we move around and around the origin. This would lead to the arg function being multi-valued, which causes problems. To get around this, we take a section of this spiral on which the arg function is single-valued. The price we pay is that it is no longer a continuous function. Figure 4 shows such a section where the branch cut is a straight line (i.e. the set of points along which the graph is discontinuous is a straight line), but a branch cut can in general be any curve that does not intersect itself, starts at 0, and goes out to infinity. Another possible choice of branch cut is shown in Figure 5.

COMPLEX ANALYSIS EXAMPLES

Figure 5. A possible branch cut for the logarithm.

With the choice of branch cut from Figure 5, we would write
$$\log(1) = 0, \qquad \log(10i) = \ln(10) + \frac{\pi}{2}i, \qquad \log(-10) = \ln(10) + \pi i,$$
$$\log(-50i) = \ln(50) + \frac{7\pi}{2}i, \qquad \log(100) = \ln(100) + 6\pi i, \qquad \log(100i) = \ln(100) + \frac{13\pi}{2}i,$$
and so on. You can see that as we work our way around the spiral, the argument function is increasing, but if we cross the spiral we see a discontinuity in the argument function. For example, on the segment of the positive real axis that includes 1 and has endpoints lying on the spiral, the argument function takes the value 0. On the segment of the positive real axis that includes 25 and has endpoints lying on the spiral, the argument function takes the value 2π. On the segment of the positive real axis that includes 50 and has endpoints lying on the spiral, the argument function takes the value 4π. Therefore, if we define Ω to be the open set that is the complex plane C with the pictured spiral removed, then log(z) is an analytic function of z on Ω.

Now we can define fractional powers of a variable z by using the logarithm.

Example. It is obvious that $1^2 = (-1)^2 = 1$, so how do we define $\sqrt{1}$? In general, we define
$$\sqrt{z} = e^{\log(z)/2}.$$
Therefore, we need to choose a branch cut for the logarithm to define the square root function. If we choose the branch cut along the negative real axis, then from our discussion above it becomes clear that $\sqrt{1} = 1$. With this same branch cut, one has $\sqrt{-i} = e^{-i\pi/4}$. However, if we choose our branch cut to be along the positive real axis, then $\sqrt{-i} = e^{3i\pi/4}$.

As a final word of warning: because of this branch cut phenomenon, when dealing with the complex logarithm, IT IS NO LONGER TRUE THAT log(zw) = log(z) + log(w).
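This warning is easy to see concretely with the principal branch (cut along the negative real axis), which is the one Python's cmath uses. A small sketch:

import cmath

z = w = complex(-1.0, 0.5)   # arg(z) + arg(w) exceeds pi, forcing a wrap-around
lhs = cmath.log(z * w)
rhs = cmath.log(z) + cmath.log(w)
print(lhs)
print(rhs)   # the imaginary parts differ by 2*pi: log(zw) != log(z) + log(w)

# With the same branch cut, the square root example above checks out:
print(cmath.sqrt(-1j))            # approximately e^{-i pi/4}
print(cmath.exp(-1j * cmath.pi / 4))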


5. Contour Integrals

We want to use complex analysis as a tool for evaluating difficult integrals. In order to do this we need to first review how to calculate a contour integral. A contour is a closed curve that does not intersect itself (also called a simple closed curve). A contour divides the complex plane into two regions; one of them is bounded and the other is unbounded. We call the bounded region the interior of the contour. Here are some important points to keep in mind:

(1) If f is analytic in an open set containing a contour γ and its interior, then
$$\oint_{\gamma} f(z)\,dz = 0.$$
(2) If $\gamma_1$ and $\gamma_2$ are two contours with $\gamma_1$ contained in the interior of $\gamma_2$ (with perhaps some overlap with $\gamma_2$) and f is analytic in the region between $\gamma_1$ and $\gamma_2$, then
$$\oint_{\gamma_1} f(z)\,dz = \oint_{\gamma_2} f(z)\,dz.$$
(3) If $-\gamma$ is the contour γ but traversed in the opposite direction, then
$$\oint_{-\gamma} f(z)\,dz = -\oint_{\gamma} f(z)\,dz.$$

If no orientation is specified on a contour, it is assumed that the contour is given the counterclockwise orientation. This means that if you were standing on the contour and walking in the direction of the orientation, then the interior of the contour would always be on your left-hand side.

Example. Let us calculate $\oint_{\gamma} \frac{1}{z}\,dz$ where γ is the circle of radius 1 centered at 0. We write $z = e^{i\theta}$ where $\theta \in [0, 2\pi]$. We calculate $dz = ie^{i\theta}\,d\theta$. We can therefore write our integral as
$$\int_0^{2\pi} \frac{1}{e^{i\theta}}\,ie^{i\theta}\,d\theta = i\int_0^{2\pi} d\theta = 2\pi i.$$
Of course this does not depend on our parametrization, but let's just check to make sure. If we instead parametrize the contour by $z = e^{2\pi i\theta}$ for $\theta \in [0, 1]$, then $dz = 2\pi i e^{2\pi i\theta}\,d\theta$, and so our integral becomes
$$\int_0^1 \frac{1}{e^{2\pi i\theta}}\,2\pi i e^{2\pi i\theta}\,d\theta = 2\pi i\int_0^1 d\theta = 2\pi i.$$
If we wanted to calculate the same integral around the same contour but with the opposite orientation (i.e. clockwise), then (3) above tells us that the answer would be $-2\pi i$.
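The parametrized calculation is also easy to reproduce numerically by discretizing $z = e^{i\theta}$. A minimal NumPy sketch:

import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 20001)
z = np.exp(1j * theta)   # the unit circle, traversed counterclockwise
dz = 1j * z              # dz/dtheta for this parametrization

integral = np.trapz((1.0 / z) * dz, theta)
print(integral)          # approximately 2*pi*i, i.e. 0 + 6.2832j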

Sometimes we are presented with a contour integral that is too hard to calculate, so we need to estimate it. One useful estimate is sometimes called the ML-bound. The ML-bound allows us to put an upper bound on the magnitude of a contour integral in terms of the function being integrated and the length of the contour. More precisely, suppose we are asked to integrate $\oint_{\gamma} f(z)\,dz$ where f is some function that is analytic in a neighborhood of the contour γ. Let
$$L = \operatorname{length}(\gamma), \qquad M = \max_{z \in \gamma} |f(z)|.$$
Then
$$\left|\oint_{\gamma} f(z)\,dz\right| \le ML.$$
Applying this to the above example, we see that L is the circumference of a circle of radius 1, so L = 2π. The function 1/z has absolute value 1 whenever z is a point on the unit circle, so in this case M = 1. Therefore, the ML-bound implies
$$\left|\oint_{\{|z|=1\}} \frac{1}{z}\,dz\right| \le 2\pi,$$
which the example calculation showed is indeed the case.

6. Further Topics

Ultimately we study complex analysis as a computational tool. The goal is to be able to calculate things like $\int_{-\infty}^{\infty} \frac{dx}{1 + x^4}$ (cross-checked numerically in the sketch after the list below). One can also use complex analysis to find roots of analytic functions. Here are some key words and theorems that you might keep an eye out for as you learn more complex analysis:

(1) The Residue Theorem: The key to calculating hard definite integrals.
(2) Rouché's Theorem: Finds zeros of functions close to roots of other functions.
(3) The Argument Principle: Useful for counting zeros and poles.
(4) The Riemann Mapping Theorem: Useful for mapping arbitrary simply connected regions to a disk.
(5) Cauchy Estimates: Restrict how fast an analytic function can grow as $|z| \to \infty$.
(6) Paley-Wiener Theorem: Relates analytic functions to their Fourier Transform.
(7) Schwarz Reflection Principle: One very common method of analytic continuation.
(8) Runge's Theorem: Describes when an analytic function can be uniformly approximated by polynomials.
(9) Montel's Theorem: Describes when a sequence of holomorphic functions has a uniformly convergent subsequence.
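As promised above, the motivating integral can be cross-checked numerically against the closed form the Residue Theorem produces, $\pi/\sqrt{2}$. A sketch assuming SciPy is available:

import numpy as np
from scipy.integrate import quad

value, error = quad(lambda x: 1.0 / (1.0 + x**4), -np.inf, np.inf)
print(value)                 # approximately 2.2214
print(np.pi / np.sqrt(2.0))  # the residue-theorem answer, pi/sqrt(2)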


7. Practice Problems

7.1. Analyticity. A complex analytic function can be expressed in many different ways. However, some formulas are only valid in certain regions of the plane. Find the radius of convergence of the following Taylor series:
(1) $\sum_{n=0}^{\infty} 2^n z^n$
(2) $\sum_{n=0}^{\infty} n z^n$
(3) $\sum_{n=0}^{\infty} \frac{z^n}{n!}$
(4) $\sum_{n=0}^{\infty} 2^{n!} z^{n!}$

7.2. Complex Logarithm. The logarithm is the most common example of a function that can be locally defined as analytic but requires a branch cut to be defined globally. Furthermore, we need the logarithm to define fractional powers of a variable z. Express the following as complex numbers, where we assume the logarithm is defined with branch cut along the negative real axis:
(1) $\log(1)$
(2) $\log(e^{2\pi i})$
(3) $\log(2i)$
(4) $\sqrt{i}$

7.3. Contour Integrals. Contour integrals arise frequently when complex analytic functions are involved. Calculate the following integrals, where γ is the curve indicated:
(1) $\oint_{\gamma} z^2\,dz$ where γ is the circle with center 0 and radius 1 traversed counterclockwise.
(2) $\oint_{\gamma} \frac{1}{z}\,dz$ where γ is the circle with center 0 and radius 1 traversed counterclockwise.
(3) $\oint_{\gamma} \frac{1}{z^2 + 1}\,dz$ where γ is the circle with center 0 and radius 2 traversed counterclockwise.
(4) $\oint_{\gamma} \frac{1}{z^2 + 1}\,dz$ where γ is the circle with center 0 and radius 1/2 traversed counterclockwise.

7.4. Definite Integrals (Optional). Our primary motivation for studying complex analysis is its utility as a computational tool. In particular, one can use the Residue Theorem to calculate some otherwise very difficult integrals of common functions. After a full course in complex analysis, you should be able to calculate the following definite integrals. If you can't, you should just remember that they can be calculated using tools from complex analysis, so that if you ever need to calculate such an integral, just consult your nearest complex analysis textbook or expert.



(1) $\int_{-\infty}^{\infty} \frac{dx}{1 + x^4}$
(2) $\int_{-\infty}^{\infty} \frac{e^{-2\pi i x t}}{\cosh(\pi x)}\,dx$ (i.e. calculate the Fourier Transform of $1/\cosh(\pi x)$)
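Both integrals can be spot-checked numerically if you want to verify a residue calculation later. The sketch below (SciPy assumed) evaluates integral (2) at a sample value of t; note that it quietly reveals the classical answer, namely that $1/\cosh(\pi x)$ is its own Fourier transform under this convention:

import numpy as np
from scipy.integrate import quad

t = 0.7
# The integrand's imaginary (sine) part is odd and integrates to zero,
# so the real (cosine) part carries the whole Fourier transform
value, error = quad(lambda x: np.cos(2.0 * np.pi * x * t) / np.cosh(np.pi * x),
                    -np.inf, np.inf)
print(value)                     # matches the closed form below
print(1.0 / np.cosh(np.pi * t))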

References
[1] L. Ahlfors, Complex Analysis, Third Edition, McGraw-Hill, 1979.
[2] M. Hoffman and J. Marsden, Basic Complex Analysis, Third Edition, W. H. Freeman, 1998.
[3] S. Lang, Complex Analysis, Fourth Edition, Springer Graduate Texts in Mathematics, 1998.
[4] E. B. Saff and A. Snider, Fundamentals of Complex Analysis with Applications to Engineering, Science, and Mathematics, Third Edition, Prentice-Hall, 2003.
[5] R. Shakarchi and E. M. Stein, Complex Analysis, Princeton University Press, Princeton, NJ, 2003.
