
A Brief Bit About Taylor Series

John Haga

It was mentioned in class that Taylor series are possibly the most useful consequence of calculus. In the real world, exactness isn't always important (talk to any physicist and they'll tell you that things get very, very blurry when you look on a smaller and smaller scale... the weird thing is that the blurriness isn't just a consequence of our poor ability to see, but is in fact the nature of the universe itself). Because of this, we can get quite far just by knowing how to make approximate calculations. While we have this freedom, it's also important to know exactly how close an approximation we need to make. The theory behind Taylor series allows us to make controlled approximations of actual calculations (controlled in the sense that we can calculate, with some certainty, how close our approximation is to the real value). This gives us tremendous power: we can take nasty, smelly functions like e^x and ln(x) and express them in terms of basic arithmetic operations that can be handled by a computer. In fact, when you plug ln(2.3) into your calculator and 0.832909122935 pops out, your calculator is NOT actually evaluating ln(2.3); instead it is finding a close approximation using algorithms built on the fundamental ideas behind Taylor series. This is almost always adequate for government work. Now that you know how powerful this tool is, let's see how it works by working through an example. Recall that the Taylor series for a given function f(x) is given by the following formula:

f(x), expanded around x = c, is

    \sum_{k=0}^{\infty} \frac{f^{(k)}(c)}{k!}\,(x - c)^k

What does this gobbledygook mean? It means that if we're extremely lucky, we can find a power series representation for our function that EQUALS our function (at certain places)! This turns f(x) into a_0 + a_1 x + a_2 x^2 + ... + a_i x^i + ... So what? Well... if this infinite sum converges to our function, then we have that f(x) ≈ a_0 + a_1 x + a_2 x^2 + ... + a_n x^n, which is great. Why is it great? Because the right-hand side can be calculated with a finite number of arithmetic operations (i.e. so easy a computer can do it). Of course, this is only beneficial if we can USE it somehow. We can. To see this, let's calculate the Taylor series for the function f(x) = cos(x), find the interval of convergence of the series, and then show that on that interval, the series is in fact equal to cos(x).
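To make the "finite number of arithmetic operations" point concrete, here is a small Python sketch (my own illustration, not something from the original handout; the helper name taylor_partial_sum and the choice of e^x are arbitrary). It evaluates a truncated series a_0 + a_1 x + ... + a_n x^n using nothing but additions and multiplications:

    import math

    # Hypothetical helper (not from any library): evaluate the partial sum
    # a_0 + a_1*x + ... + a_n*x^n with Horner's rule, so the whole calculation
    # is just n multiplications and n additions.
    def taylor_partial_sum(coeffs, x):
        total = 0.0
        for a in reversed(coeffs):  # (((a_n)x + a_{n-1})x + ...) + a_0
            total = total * x + a
        return total

    # Example: the Maclaurin coefficients of e^x are a_k = 1/k!, so a handful
    # of terms already gives a good approximation of e^0.5.
    coeffs = [1 / math.factorial(k) for k in range(8)]
    print(taylor_partial_sum(coeffs, 0.5), math.exp(0.5))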

Step 1: Calculate the Taylor Series
To calculate the Taylor series, we need to know the point about which we're going to expand (the point c). This is either given information or is left up to you to choose. In the event that you have to choose, pick a value for c at which the function behaves well (i.e. don't pick c = 0 for f(x) = ln(x)). Since f(x) = cos(x) behaves well everywhere, we can pick any c we like. For simplicity, let's expand cos(x) about c = 0 (recall that in this case the series we obtain is given the special name Maclaurin series). As always, the ugliest part of the calculation is finding a general expression for f^{(k)}(c). Letting c = 0, let's make a table of values in hopes of figuring out what such an expression would be:

    n     f^{(n)}(x)     f^{(n)}(0)
    0      cos(x)          1
    1     -sin(x)          0
    2     -cos(x)         -1
    3      sin(x)          0
    4      cos(x)          1
    5     -sin(x)          0
    6     -cos(x)         -1

So it appears that f^{(n)}(0) = 1 or -1 when n is even, and 0 otherwise. Writing out the series for cos(x), we get a sum that looks something like this:
    \sum_{k=0}^{\infty} \frac{f^{(k)}(c)}{k!}\,(x - c)^k
      = \frac{1}{0!}x^0 + \frac{0}{1!}x^1 + \frac{-1}{2!}x^2 + \frac{0}{3!}x^3 + \frac{1}{4!}x^4 + \frac{0}{5!}x^5 + \frac{-1}{6!}x^6 + \cdots
      = 1 - \frac{1}{2!}x^2 + \frac{1}{4!}x^4 - \frac{1}{6!}x^6 + \cdots
      = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k)!}\,x^{2k}

And this is our Taylor series for f(x) = cos(x), taken about the point x = 0. Excellent.
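As a quick numerical sanity check on this derivation (a sketch of mine, not part of the handout), we can sum the first few terms of \sum_{k} \frac{(-1)^k}{(2k)!} x^{2k} in Python and watch the partial sums approach math.cos:

    import math

    # Partial sum of the Maclaurin series we just derived for cos(x),
    # using the first num_terms terms (k = 0, ..., num_terms - 1).
    def cos_partial_sum(x, num_terms):
        return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
                   for k in range(num_terms))

    for terms in (1, 2, 3, 4, 5):
        print(terms, cos_partial_sum(1.0, terms), math.cos(1.0))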

Step 2: Finding the Interval of Convergence
Now that we have our Taylor series, it's important to know exactly where it realistically represents our function. The first thing we must do is see where the series itself actually converges (since if it diverges someplace, it can't possibly represent our function there). To get this information, we use the Ratio Test:

    \lim_{k\to\infty}\left|\frac{a_{k+1}}{a_k}\right|
      = \lim_{k\to\infty}\left|\frac{(-1)^{k+1}\,\frac{1}{(2(k+1))!}\,x^{2(k+1)}}{(-1)^{k}\,\frac{1}{(2k)!}\,x^{2k}}\right|
      = \lim_{k\to\infty}\frac{(2k)!}{(2k+2)!}\,x^{2}
      = |x^{2}|\lim_{k\to\infty}\frac{1}{(2k+2)(2k+1)}
      = |x^{2}|\cdot 0 = 0 < 1

This gives us that the Taylor series centered at c = 0, evaluated at some point x, is given by

    S(x) = 1 - \frac{1}{2!}x^2 + \frac{1}{4!}x^4 - \frac{1}{6!}x^6 + \cdots.

Keep in mind that S(x) is a function. We have that the Taylor series for cos(x) converges absolutely for all x (i.e. the radius of convergence is ∞). Wonderful. So what? Is S(x) = cos(x)? Who knows. Let's figure it out.
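To illustrate the infinite radius of convergence numerically (again, my own sketch rather than anything in the handout), one can watch the partial sums of S(x) settle down even at a point like x = 10, far from the center c = 0:

    import math

    # Partial sums of S(x) = sum_k (-1)^k x^(2k) / (2k)! at x = 10.
    # Early partial sums swing wildly, but they stabilize once the factorial
    # in the denominator overwhelms the power of x, as the ratio test predicts.
    def S_partial(x, num_terms):
        return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
                   for k in range(num_terms))

    for terms in (5, 10, 15, 20, 30):
        print(terms, S_partial(10.0, terms))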

Step 3: Demonstrating that the Taylor series for cos(x) converges to cos(x)
Before we start, let's restate Taylor's theorem:

THEOREM: Suppose that f has (n + 1) derivatives on the interval (c - r, c + r) for some r > 0. Then, for x in (c - r, c + r), f(x) ≈ P_n(x) (here P_n denotes the nth-degree Taylor polynomial of f about c), the error in using P_n to approximate f(x) is R_n(x) = f(x) - P_n(x), and moreover

    R_n(x) = \frac{f^{(n+1)}(z)}{(n+1)!}\,(x - c)^{n+1}

for some number z between x and c.
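As a concrete (and purely illustrative, my own) check of the theorem's error bound for f(x) = cos(x) with c = 0: since every derivative of cos is bounded by 1 in absolute value, the actual error |cos(x) - P_n(x)| should never exceed |x|^{n+1}/(n+1)!:

    import math

    # Compare the actual error of the degree-n Maclaurin polynomial of cos(x)
    # with the bound |x|^(n+1)/(n+1)! from Taylor's theorem
    # (using |f^(n+1)(z)| <= 1 for every derivative of cos).
    def P(x, n):
        # Degree-n Maclaurin polynomial of cos(x): keep the terms x^(2k) with 2k <= n.
        return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
                   for k in range(n // 2 + 1))

    x = 0.7
    for n in (1, 3, 5, 7):
        actual = abs(math.cos(x) - P(x, n))
        bound = abs(x) ** (n + 1) / math.factorial(n + 1)
        print(n, actual, bound)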

The proof of Taylor's Theorem is given in your text, and you can use it freely in your homework and on quizzes without proving it. If lim_{n→∞} R_n(x) = 0 for all values of x, then we have that our series function S(x) = cos(x). Now we calculate R_n(x). Every derivative of cos(x) is either ±cos(x) or ±sin(x), and for all z we have that -1 ≤ cos(z), sin(z) ≤ 1. Recalling that in this case c = 0, we can write

    -\frac{1}{(n+1)!}\,|x|^{n+1} \le \frac{f^{(n+1)}(z)}{(n+1)!}\,x^{n+1} \le \frac{1}{(n+1)!}\,|x|^{n+1},

so in particular |R_n(x)| \le \frac{1}{(n+1)!}\,|x|^{n+1}.

Just as in class, we will indirectly calculate the limit of interest. Consider the series

    \sum_{n=0}^{\infty} \frac{1}{(n+1)!}\,|x|^{n+1}.

We can use the ratio test to test the convergence of this series:

    \lim_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right|
      = \lim_{n\to\infty}\left|\frac{\frac{1}{((n+1)+1)!}\,x^{(n+1)+1}}{\frac{1}{(n+1)!}\,x^{n+1}}\right|
      = \lim_{n\to\infty}\frac{1}{n+2}\,|x|
      = |x|\lim_{n\to\infty}\frac{1}{n+2}
      = |x|\cdot 0 = 0 < 1,

which gives us that the series converges for all x. In fact, this means that

    \lim_{n\to\infty}\frac{1}{(n+1)!}\,|x|^{n+1} = 0

for all x, because if it weren't 0, the series would diverge by the kth-term (divergence) test.

Thus R_n(x) → 0 as n → ∞ for all x, as desired. This means that the Taylor series function for cos(x), the function we called S(x), is equal to cos(x) everywhere on the real line.

Part 4: Using the Taylor Expansion to Approximate cos(x) at some point of interest, to a given degree of accuracy
Let's pretend we're a calculator: someone presses cos(0.1) and we're expected to show 8 significant figures of accuracy (i.e. as many as will show in our little window). What this means is that we should use the Taylor expansion of cos(x) to obtain an estimate of cos(0.1) accurate to within 10^{-7}. How far do we have to sum? Let's see.
Since we've expanded cos(x) around x = 0, by Taylor's Theorem we have that

    R_n(0.1) = \frac{f^{(n+1)}(z)}{(n+1)!}\,(0.1 - 0)^{n+1}

for some z between 0 and 0.1. Since -1 ≤ f^{(n+1)}(z) ≤ 1, we have that

    \left|\frac{f^{(n+1)}(z)}{(n+1)!}\,(0.1 - 0)^{n+1}\right| \le \frac{1}{(n+1)!}\,(0.1)^{n+1},

and as long as the right-hand side is less than 10^{-7} we have the requisite accuracy. We can solve this inequality by trial and error. It turns out that as long as n > 9 the inequality holds (you may groan when I write "trial and error," but it only takes about a minute with a calculator to find that n larger than 9 will work). Using Excel to calculate partial sums of this expansion, one can obtain the following table:

    n     a_n = \frac{(-1)^n}{(2n)!}(0.1)^{2n}     S_n = \sum_{k=0}^{n} \frac{(-1)^k}{(2k)!}(0.1)^{2k}
    0      1                                        1
    1     -0.005                                    0.995
    2      4.16667 \times 10^{-6}                   0.99500417
    3     -1.38889 \times 10^{-9}                   0.99500417

Notice that we didn't have to go out as far as n = 9. The error bound that we used before merely tells us that if we go out as far as n = 9, we are guaranteed the accuracy we need (but in some situations it overshoots by a lot, as you can see). Plugging cos(0.1) into a calculator, we obtain cos(0.1) ≈ 0.995004165, which agrees with our calculated approximation.
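If you would rather not do the trial and error by hand (or open Excel), the following Python sketch, mine and not part of the handout, automates both pieces: it searches for an n at which the remainder bound (0.1)^{n+1}/(n+1)! drops below 10^{-7}, and it reproduces the partial-sum table above.

    import math

    x, tol = 0.1, 1e-7

    # Trial and error on Taylor's remainder bound: find an n with
    # (0.1)^(n+1) / (n+1)! < 10^-7.  (The bound is conservative, so the
    # partial sums themselves reach the target accuracy even sooner.)
    n = 0
    while x ** (n + 1) / math.factorial(n + 1) >= tol:
        n += 1
    print("remainder bound drops below", tol, "once n =", n)

    # Reproduce the table: terms a_n and partial sums S_n of the cos series.
    s = 0.0
    for k in range(4):
        a_k = (-1) ** k * x ** (2 * k) / math.factorial(2 * k)
        s += a_k
        print(k, a_k, s)

    print("math.cos(0.1) =", math.cos(0.1))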