
Taylor’s Formula

Christian Jelo R. Artoza


May 12, 2020

1 Discussion
Imagine you are studying the mechanics of a pendulum. You want to give a mathematical description of the vertical distance between any point on the trajectory of the pendulum and the lowest point of that trajectory by writing it as a function.

It turns out that this function is dependent on the cosine of the angle θ that the pendulum makes as it
moves. Let f be this function, and R be the length of the string of the pendulum. It turns out that

f (θ) = R(1 − cos θ).

That is a pretty good formula, since the cosine is easy to evaluate when θ = π/6, or π/3, etc. The problem is that the cosine is not easy to compute for many other values of θ which come up when studying pendulums. For example, cos(1) is harder to compute by hand than the cosine of the angles mentioned above, and such values are the ones we are more likely to encounter.

Now, instead of using a calculator or a table of values to compute the cosine, what if there were a function which gives us the same values but is easier to deal with, such as a polynomial function? In other words, is there a polynomial whose graph is exactly the same as the cosine curve, one that intersects it at every point? And if not, what could be a way to find a polynomial function which is at least a good approximation to the cosine?

The focus of this chapter is the method of finding polynomial functions, whose values are easy to compute, that are exactly or approximately equal to non-polynomial functions such as the sine, the cosine, the exponential, etc., and of using them to compute the values of those functions. This includes estimating how good an approximation these polynomials are.

THE POLYNOMIAL.
Let us say we want to find a polynomial that is a good approximation for the value of the cosine at points
close to 0. Let f (x) = cos x. Our goal is to find a polynomial P (x) such that

f (x) ≈ P (x)

on the interval which we desire. Our first goal is to build intuition on this method so we will not yet be too
general or too specific about some things such as intervals.

Let us start with any polynomial we can think of and then find better polynomials from there. Imagine the graph of the cosine. At x = 0 we know that f is equal to 1, and this is the maximum for x close to zero. Additionally, the graph of f over the interval we chose is always bending down. So we want the polynomial P to behave the same way as f: its maximum value must also be at x = 0 and equal to 1, and it must bend downward. Since we are only interested in values close to 0, we will give little attention to the behavior of f and P for large values of x.

The polynomial P can be written as

P(x) = c0 + c1 x + c2 x^2 + ... + cn x^n,

for any positive integer n. So if we want P (0) to be equal to 1, then

1 = P (0) = c0 .

In other words, the zeroth coefficient must be equal to 1. Therefore, our new polynomial P is

P(x) = 1 + c1 x + c2 x^2 + ... + cn x^n.

The next thing is to make P'(0) equal to zero, so that P(0) is the maximum. We have

0 = P'(0) = c1.

So, in order to obtain a polynomial whose maximum on the interval we chose is P (0) = 1, the first coefficient
c1 must be equal to 0. Hence, we have our new and better polynomial

P(x) = 1 + c2 x^2 + ... + cn x^n.

Finally, we want a polynomial that bends downward over the interval we chose. This means that the second derivative of P must be negative there. We also want P''(0) to be equal to f''(0), so that they bend or curve at the same rate; otherwise the curve of P, although bending downward, may be wider or narrower than that of f. We have

P''(x) = 2c2 + ... + n(n − 1)cn x^(n−2).

Since f''(0) = − cos(0) = −1,

−1 = P''(0) = 2c2.
So, if we want the graph of P to bend downward and curve the same way as that of f, then the second coefficient must be

c2 = −1/2.

Thus, we have our new and improved polynomial

P(x) = 1 − (1/2) x^2 + ... + cn x^n.
We could continue and find c3 up to cn for P(x), or we could stop here and check whether the approximation is already good enough for values close to zero. The wise answer is to stop and check, so let us ignore the terms of the polynomial whose coefficients are c3, ..., cn. Thus, we have our current polynomial

P(x) = 1 − (1/2) x^2.    (1)

Let x = 0.1. Then,

P(0.1) = 1 − (1/2)(0.1)^2 = 0.995.
Now, by some method (such as using a table of values), the value of f at this point turns out to be

f(0.1) = cos(0.1) = 0.995004 ≈ P(0.1).

So our approximation is so close to the exact value of cos x that we can already use P(x) as a substitute for it in the pendulum formula:

f(θ) ≈ R(1 − P(θ)) = (R/2) θ^2.
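As a quick sanity check, the comparison above is easy to reproduce numerically. The following sketch (the string length R = 1 is an assumed value for illustration) evaluates the quadratic approximation P(x) = 1 − x^2/2 against the built-in cosine:

```python
import math

def P(x):
    # Second-degree polynomial approximation of cos x derived above
    return 1 - x**2 / 2

x = 0.1
print(P(x))                      # ≈ 0.995
print(math.cos(x))               # ≈ 0.9950042
print(abs(P(x) - math.cos(x)))   # the two agree to about 4e-6

# Pendulum height for a string of length R (R = 1 is an assumed value)
R = 1.0
theta = 0.1
h_exact = R * (1 - math.cos(theta))
h_approx = R * theta**2 / 2      # R(1 - P(theta)) = (R/2) theta^2
print(h_exact, h_approx)
```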

However, this approximation is only good over the interval we have chosen. If we check the approximation at, say, x = π, we will see that P(π) is not even close to the value of cos π. So, if we want a polynomial that approximates the function better and for larger values of x, then we must add more terms to the current P(x) by finding c3 up to cn. We will soon find out why more terms give a better approximation.

Basically, the gist of this method is to find the right coefficients c1 , ..., cn in order to obtain the polyno-
mial that is equal to or approximates the cosine function (or any non polynomial function we want to find an
approximation to). Once we have found such a polynomial, it is now easier to compute the values of these
non polynomial functions than before.

In the next section, we will find a general way to express the n-th coefficient of a polynomial in terms of the polynomial's n-th derivative at x = 0, written P^(n)(0).

EXPRESSING c0 , ..., cn IN TERMS OF THE DERIVATIVE OF P AT x = 0.


Suppose we have a polynomial P which has derivatives (up to the n-th order) on an interval. It can then be written as

P(x) = c0 + c1 x + c2 x^2 + ... + cn x^n,

for any positive integer n. Let k be a positive integer with k ≤ n. Then we know, from our study of derivatives, that the k-th derivative of P is

P^(k)(x) = k! ck + an expression containing x as a factor.

This is because if we differentiate the polynomial k times, the terms

c0, c1 x, c2 x^2, ..., c_(k−1) x^(k−1),

that is, all the terms before ck x^k, are reduced to zero by the repeated differentiation. On the other hand, the terms with x^j as a factor, with j > k, survive the k differentiations, and these make up the expression to the right of k! ck.

What the expression for P^(k)(x) means is that the k-th derivative of P is a polynomial whose constant term is the k-th coefficient of P times k!, the factor produced by repeatedly differentiating P. Hence, if we evaluate P^(k) at x = 0, we are left with just the constant term. That is,

P^(k)(0) = k! ck,

which gives us the expression for the coefficient ck (for any integer k ≤ n) that we desire:

ck = P^(k)(0)/k!.    (2)
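Equation (2) can be checked mechanically: differentiating a polynomial k times multiplies its k-th coefficient by k! and pushes it into the constant term. A small sketch, using an arbitrary illustrative coefficient list:

```python
from math import factorial

# An arbitrary illustrative polynomial, stored as its coefficient list:
# P(x) = 1 + 0*x - 0.5*x^2 + 2*x^3 + 7*x^4
coeffs = [1, 0, -0.5, 2, 7]

def kth_derivative_at_zero(c, k):
    # Differentiating k times kills every term of degree < k; the term
    # c_k x^k becomes the constant k! * c_k; higher-degree terms keep a
    # factor of x and so vanish at x = 0.
    return factorial(k) * c[k] if k < len(c) else 0

# Recover every coefficient via equation (2): c_k = P^(k)(0) / k!
recovered = [kth_derivative_at_zero(coeffs, k) / factorial(k)
             for k in range(len(coeffs))]
print(recovered)   # the same numbers we started with
```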

This formula is very useful, especially in the study of polynomials that approximate functions. Notice that we can obtain the coefficient of each term of equation (1) using this formula. Previously, we found the coefficients because we wanted to satisfy:

1. P(0) = f(0), so that f and P have the same y-intercept;

2. P^(1)(0) = f^(1)(0), so that they have the same maximum and the same rate of change; and

3. P^(2)(0) = f^(2)(0), so that they bend in the same direction and curve the same way.

This is also the case if we want to find coefficients of P up to the n-th degree. We can find cn (for any integer n ≥ 0) by letting P^(n)(0) = f^(n)(0), and now that we have a formula, we can simply use equation (2) instead of trial and error. Thus, if we want to find the coefficients of the polynomial that approximates the cosine function up to the n-th degree (the same as what we did above) using formula (2), we find that:

c0 = f^(0)(0)/0! = 1,

c1 = f^(1)(0)/1! = 0,

c2 = f^(2)(0)/2! = −1/2,

...

cn = f^(n)(0)/n!.
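Formula (2) turns the trial-and-error search above into a one-line computation. A sketch that generates the cosine coefficients from the cycle of derivatives of cos at 0:

```python
from math import cos, factorial

def cos_deriv_at_zero(n):
    # Derivatives of cos cycle cos, -sin, -cos, sin; at 0: 1, 0, -1, 0
    return (1, 0, -1, 0)[n % 4]

# Coefficients c_n = f^(n)(0) / n! up to degree 6
coeffs = [cos_deriv_at_zero(n) / factorial(n) for n in range(7)]
print(coeffs)   # [1.0, 0.0, -0.5, 0.0, 0.0416..., 0.0, -0.00138...]

# Evaluate the resulting polynomial at x = 0.1 and compare with cos
x = 0.1
approx = sum(c * x**n for n, c in enumerate(coeffs))
print(approx, cos(x))
```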
So far, we have tried to build intuition by using trial and error to find a polynomial that approximates the cosine, a non-polynomial function. We have also derived an important formula, an expression for any coefficient of a polynomial. We did all this in order to compute the values of certain functions, such as the sine, the cosine, and the exponential, in an easier way.

Now, the next thing we are going to do is combine everything we did into one simple formula which gives a general expression for the polynomial that approximates a function f (as polynomial (1) approximates the cosine). We will call this approximation the Taylor polynomial.

DEFINITION. The Taylor Polynomial


The Taylor polynomial of degree ≤ n for the function f is the polynomial

Pn(x) = f(0) + f^(1)(0) x + (f^(2)(0)/2!) x^2 + ... + (f^(n)(0)/n!) x^n.

Notice that this is just the same expression as the definition of a polynomial, but instead of using c0, ..., cn we have written each coefficient of the Taylor polynomial in terms of f^(k)(0).

EXAMPLE. Let f(x) = e^x. Then f^(n)(0) = e^0 = 1 for any integer n ≥ 0. Hence, the Taylor polynomial for this function is

Pn(x) = 1 + x + x^2/2! + ... + x^n/n!.

The polynomial for this specific function lets us compute numbers such as e^0.1, e^2, and other values of f. We can even compute the value of the number e itself!
Let n = 6 (it could be any n). Then,

Pn(x) = 1 + x + x^2/2! + x^3/3! + x^4/4! + x^5/5! + x^6/6!.
Since we are computing e, we take the value of this polynomial approximation for e^x at x = 1. This gives us

Pn(1) = 2 + 1/2! + 1/3! + 1/4! + 1/5! + 1/6!
      = 2 + 1/2 + 1/6 + 1/24 + 1/120 + 1/720
      = 2.718055556,

which is a good approximation for the number e.
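The arithmetic above is easy to verify: summing 1/k! for k = 0, ..., 6 reproduces the approximation to e.

```python
from math import e, factorial

# P_6(1) = 1/0! + 1/1! + 1/2! + ... + 1/6!
approx_e = sum(1 / factorial(k) for k in range(7))
print(approx_e)           # ≈ 2.7180556
print(abs(e - approx_e))  # error of roughly 2e-4
```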

THE REMAINDER TERM


We will now move on to something equally important to the Taylor polynomial, which we derived by finding an expression for each coefficient of a polynomial that approximates a given non-polynomial function. This piece is the other part of the equation in studying the behavior of such an approximation. Observe the graph below.

The red curve is the cosine function, the function we want to approximate, and the blue curve is the Taylor polynomial. Notice that the Taylor polynomial is exactly equal to the function at x = 0. This is why we let the derivatives of our function and of its Taylor polynomial agree at that point, i.e., f^(n)(0) = P^(n)(0) for every n up to the degree. Their 0-th derivatives are the same, and their values change in the same way, which is what we want the Taylor polynomial to do.

Now notice another thing which is relevant to this discussion: the Taylor polynomial only gives a good approximation for values near x = 0. In other words, the graphs only look the same on an interval near x = 0, and even on that interval we cannot tell at a glance whether the blue and the red curves coincide exactly. So we have to look closer. We want to know how close these curves are, that is, how good the Taylor polynomial is at approximating the function, and at which points it is good enough (in the case of that graph, it is good for values near x = 0). We want to measure exactly how good the approximation is by knowing how near the value of the Taylor polynomial is to the exact value of the function at any point x. The key word is near, and so what we are looking for is the distance between the y-values of these curves. Therefore, we write

R(x) = f (x) − P (x)

where P is the Taylor polynomial, f is the function we are approximating, and R measures how good P is at approximating f (how near P is to f, the difference between the curve of P and that of f). We pick the letter R for this function and call it the remainder term, or the remaining term, whichever you like. It is actually easier to write this equation as

f(x) = P(x) + R(x)

and say that R is the term remaining to give us the exact value of f at any point x, the shortcoming of P. It is also written as

f(x) = P_(n−1)(x) + Rn(x),

which means that P is the polynomial of degree n − 1 and Rn is the term of order n. We may say that the remainder term is a part of this Taylor polynomial, but we will find that it is expressed differently from the other terms. In fact, the expression I am referring to is actually just an estimate of the remainder, and we will talk about it in the next section of the chapter.

SOME THINGS TO PROVE ABOUT THE REMAINDER TERM


In the next section we will be dealing with the Taylor polynomial (or Taylor's formula) theoretically, and in our attempt to prove the formula we will find that the remainder term is the integral

Rn = ∫_a^b [(b − t)^(n−1)/(n − 1)!] f^(n)(t) dt,

which, I know, looks a bit messy. But in finding an estimate of this remainder we will see that the integral can be expressed in a slightly less complicated way, in the same form as the terms of the Taylor polynomial. That is, we will find that Rn can be expressed as

Rn(x) = (f^(n)(c)/n!) x^n

for some number c between 0 and x. The thing that makes it different from the rest of the terms of the Taylor polynomial is that the n-th derivative of f is taken at an intermediate point of the interval, which we denote by c, and not at x = 0.
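Since every derivative of the cosine is bounded by 1 in absolute value, this form of the remainder implies |Rn(x)| ≤ |x|^n/n!. A sketch checking that the actual error of the partial sum stays within this bound (x = 1 and n = 6 are assumed test values):

```python
from math import cos, factorial

def cos_partial_sum(x, n):
    # Taylor terms of cos about 0, up to degree n - 1
    signs = (1, 0, -1, 0)
    return sum(signs[k % 4] * x**k / factorial(k) for k in range(n))

x, n = 1.0, 6
error = abs(cos(x) - cos_partial_sum(x, n))
bound = abs(x)**n / factorial(n)   # since |f^(n)(c)| <= 1 for the cosine
print(error, bound)                # the error sits below the bound
```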

We will clear everything up in the next section of this chapter when we finally study the remainder term we have introduced. For now we are only previewing what will show up as we deal with the theory. With all these messy formulas and integrals, just remember, while looking at them, that the remainder term is the distance between the value of the function and that of its Taylor polynomial at any point x. If we let x = π, then the remainder term simply gives us the length of the green vertical broken line in the graph below:

And while you are at it, notice that the length which the remainder term gives us becomes larger as we pick points farther from x = 0. Hence, what we said earlier about the quality of the approximation is confirmed: the polynomial gives a good approximation on an interval near x = 0, and a worse one at points farther from it.

TAYLOR’S FORMULA
As promised, this section will be all about treating Taylor's formula theoretically. But before that, we will change some letters in the definition to help us when studying the special and important cases of Taylor's formula. Instead of using x = 0 as the point at which the function and its Taylor polynomial are exactly equal, we will use an arbitrary point a; and instead of x we will use the letter b. Hence, the definition

f(x) = f(0) + f^(1)(0) x + (f^(2)(0)/2!) x^2 + ... + (f^(n)(0)/n!) x^n

will be the special case when a = 0 and b = x.

THEOREM 1.1. Let f be a continuous function on the interval between two numbers a and b. Assume that f has n derivatives on this interval, and that all of them are continuous. Then

f(b) = f^(0)(a) + f^(1)(a)(b − a) + (f^(2)(a)/2!)(b − a)^2 + ... + (f^(n−1)(a)/(n − 1)!)(b − a)^(n−1) + Rn,

where Rn (which is called the remainder term) is the integral

Rn = ∫_a^b [(b − t)^(n−1)/(n − 1)!] f^(n)(t) dt.

As said before, this messy-looking integral, the remainder term, will be simplified into an expression which is easier to memorize because it has almost, but not exactly, the same form as the rest of the terms of the polynomial. Now let us explore some cases and examples using this powerful theorem.
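Theorem 1.1 can also be tested numerically: approximate the remainder integral with a simple midpoint rule and check that the Taylor terms plus Rn reproduce f(b). A sketch for f = sin, with assumed values a = 0, b = 1, n = 5:

```python
from math import sin, cos, factorial

def sin_deriv(k, t):
    # k-th derivative of sin: the cycle sin, cos, -sin, -cos
    r = k % 4
    if r == 0: return sin(t)
    if r == 1: return cos(t)
    if r == 2: return -sin(t)
    return -cos(t)

a, b, n = 0.0, 1.0, 5   # assumed test values

# Taylor part: sum of f^(k)(a) (b - a)^k / k! for k = 0, ..., n - 1
taylor = sum(sin_deriv(k, a) * (b - a)**k / factorial(k) for k in range(n))

# Remainder integral int_a^b (b-t)^(n-1)/(n-1)! f^(n)(t) dt, midpoint rule
N = 100_000
h = (b - a) / N
Rn = sum((b - (a + (i + 0.5) * h))**(n - 1) / factorial(n - 1)
         * sin_deriv(n, a + (i + 0.5) * h) * h for i in range(N))

print(taylor + Rn, sin(b))   # the two sides of Theorem 1.1 agree
```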

Example: Computing the sine. Say we want to know the value of sin x quickly, without any help from tables or calculators, in order to express a piece of a derivation in a simpler way. So we call on Theorem 1.1. The theorem tells us that the sine is approximated by the polynomial whose coefficients are of the form

f^(k)(a)/k!

for k = 0, ..., n − 1. Thus, we find that

f^(n−1)(a) = sin a,   if n − 1 = 2m, where m ≥ 0 is even,
f^(n−1)(a) = −sin a,  if n − 1 = 2m, where m ≥ 0 is odd,
f^(n−1)(a) = cos a,   if n − 1 = 2m + 1, where m ≥ 0 is even,
f^(n−1)(a) = −cos a,  if n − 1 = 2m + 1, where m ≥ 0 is odd.

The coefficients of the Taylor polynomial are settled at this point. The only thing left is to choose the arbitrary number a. In computing, we also choose the degree of our polynomial, i.e., n; note that the higher the degree, the better the approximation. Suppose I let a = π/4 and n = 6. Then our polynomial will be

sin b = √2/2 + (√2/2)(b − π/4) − (√2/(2·2!))(b − π/4)^2 − (√2/(2·3!))(b − π/4)^3 + (√2/(2·4!))(b − π/4)^4 + (√2/(2·5!))(b − π/4)^5 + R6.

We can plug any number b into our polynomial to compute sin b, with the interval between π/4 and b understood. Equivalently, we can take the interval to be [π/4, x] and write x in place of b. So, instead of the one above, we write

sin x = √2/2 + (√2/2)(x − π/4) − (√2/(2·2!))(x − π/4)^2 − (√2/(2·3!))(x − π/4)^3 + (√2/(2·4!))(x − π/4)^4 + (√2/(2·5!))(x − π/4)^5 + R6.
The following shows the graph of the sine function (red) and its Taylor polynomial (blue) which we have
obtained using Theorem 1.1.

As expected, the curves intersect on an interval near π/4. Had we chosen a = 0, the curves would be closest at points near, you guessed it, 0. Furthermore, as we add more terms to the polynomial, up to the largest degree we could think of, the curve of the polynomial keeps changing, but the values of the coefficients of the terms 0 to n − 1, which were already settled, stay the same.

As you can see, this theorem is a powerful mathematical tool which we can use both for computing any elementary function we can think of that cannot easily be computed by hand, and for imitating their graphs. The latter may not be as useful as the former, but as we study this formula further we shall learn more applications. And if we look at the problems of other fields, such as physics and engineering, we will be able to apply it even more often, since in those fields mathematics is the language for cracking the secrets of the universe.

THE MOST IMPORTANT CASE.


What is said to be the most important case of this theorem is when a = 0, probably because this choice is the most convenient one in most problems involving computation of functions. It simplifies the Taylor polynomials of most functions, such as the trigonometric ones and the exponential function. In the previous example, if I let a = 0, finding the coefficients becomes easier.

Basically, if n − 1 is even, then the coefficient is just 0, since that case gives us a sine function and sin 0 = 0. Hence only the terms of odd degree occur in our polynomial, which is easier to write.
We are then down to two cases, because cos 0 = 1:

f^(n−1)(0) = 1,   if n − 1 = 2m + 1, where m ≥ 0 is even,
f^(n−1)(0) = −1,  if n − 1 = 2m + 1, where m ≥ 0 is odd.

We can then collapse this into one expression. Thus, we write

f^(n−1)(0) = (−1)^m,

and so the Taylor polynomial of the sine function in this case is

x − x^3/3! + x^5/5! − ... + ((−1)^m/(2m + 1)!) x^(2m+1) + Rn.

Letting n = 6, we have

sin x = x − x^3/3! + x^5/5! + R6,

which, when graphed, looks like the figure below.

Now, as you and I can see, this case looks like it produced a polynomial giving a better approximation than the previous example. So, if we choose an interval near 0, the approximation may remain good over a larger interval than in other cases such as a = π/4. That is a good hypothesis to keep in our pockets for future use. Maybe we will learn more about the difference between this important case and the others as we go deeper into our study of Taylor's formula.
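For reference, the a = 0 polynomial sin x ≈ x − x^3/3! + x^5/5! is easy to tabulate against the true sine; a sketch:

```python
from math import sin, factorial

def P(x):
    # Degree-5 Taylor polynomial of sin about 0: only odd terms survive
    return x - x**3 / factorial(3) + x**5 / factorial(5)

for x in (0.1, 0.5, 1.0):
    print(x, P(x), sin(x), abs(P(x) - sin(x)))
```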

We will now prove Theorem 1.1 with what we know so far. This will be different from the derivation of the polynomial, and a bit trickier, since we will use the basic properties of the integral to show where the formula comes from. The integral form of the remainder term will also come out naturally in the proof. To prove such a tricky theorem, we will first work through special cases and generalize from there. We could try to discover the approach ourselves, but we should not spend too much time on that: Taylor already proved it for us, so we can take advantage of the formula and learn the technique used to prove it for future use. Not to discourage you, but it took mathematicians years to develop calculus to this point. If we spent most of our time pretending to invent these theorems on our own, we would be dead before completing half of this course. However, if you really do work one out yourself, be sure to pass it on to your grandchildren or students. Let us now get right into it.

PROOF OF THEOREM 1.1


We will first go over special cases until we get a grip on the technique, then move to the general case. Before everything else, note that a function is the integral of its derivative; this is why distance is the integral of velocity. This fact will be crucial in our proof, together with a technique of integration.

Special Case. Let f be a function of t which is continuous on the interval between a and b and has n derivatives, with n a positive integer. Then

f(b) − f(a) = ∫_a^b f^(1)(t) dt.

Let us apply integration by parts to the right-hand side of the equation above. Let u = f^(1)(t) and dv = dt. Then du = f^(2)(t) dt, but instead of the usual v = t we choose the antiderivative v = t − b = −(b − t), with b as the constant of integration. Differentiating it still gives dv = dt, since b is a constant; the reason for this choice will become clear in a moment, as it makes the boundary term at t = b vanish. Hence, we have

∫_a^b f^(1)(t) dt = −(b − t) f^(1)(t) |_a^b + ∫_a^b (b − t) f^(2)(t) dt

                  = −(b − b) f^(1)(b) + (b − a) f^(1)(a) + ∫_a^b (b − t) f^(2)(t) dt

                  = (b − a) f^(1)(a) + ∫_a^b (b − t) f^(2)(t) dt.

See how the n = 1 Taylor term showed up? We can produce each term of the polynomial by repeatedly applying integration by parts. Let u = f^(2)(t) and dv = (b − t) dt. Then

du = f^(3)(t) dt  and  v = −(1/2)(b − t)^2.
2
Thus,

∫_a^b f^(1)(t) dt = (b − a) f^(1)(a) − (1/2)(b − t)^2 f^(2)(t) |_a^b + (1/2) ∫_a^b (b − t)^2 f^(3)(t) dt

                  = (b − a) f^(1)(a) − (1/2)(b − b)^2 f^(2)(b) + (1/2)(b − a)^2 f^(2)(a) + (1/2) ∫_a^b (b − t)^2 f^(3)(t) dt

                  = f^(1)(a)(b − a) + (f^(2)(a)/2)(b − a)^2 + (1/2) ∫_a^b (b − t)^2 f^(3)(t) dt.

Next, let u = f^(3)(t) and dv = (b − t)^2 dt. Then

du = f^(4)(t) dt  and  v = −(1/3)(b − t)^3.
Thus,

∫_a^b f^(1)(t) dt = f^(1)(a)(b − a) + (f^(2)(a)/2)(b − a)^2 − (1/(3·2))(b − t)^3 f^(3)(t) |_a^b + (1/(3·2)) ∫_a^b (b − t)^3 f^(4)(t) dt

                  = f^(1)(a)(b − a) + (f^(2)(a)/2)(b − a)^2 − (f^(3)(b)/(3·2))(b − b)^3 + (f^(3)(a)/(3·2))(b − a)^3 + (1/(3·2)) ∫_a^b (b − t)^3 f^(4)(t) dt

                  = f^(1)(a)(b − a) + (f^(2)(a)/2)(b − a)^2 + (f^(3)(a)/3!)(b − a)^3 + (1/3!) ∫_a^b (b − t)^3 f^(4)(t) dt.
And so on.

General Case. Again, we would like to prove that the theorem is true for every n, and hence we must prove that

f(b) − f(a) = f^(1)(a)(b − a) + (f^(2)(a)/2)(b − a)^2 + (f^(3)(a)/3!)(b − a)^3 + ... + (1/(n − 2)!) ∫_a^b (b − t)^(n−2) f^(n−1)(t) dt,

where

(1/(n − 2)!) ∫_a^b (b − t)^(n−2) f^(n−1)(t) dt = (f^(n−1)(a)/(n − 1)!)(b − a)^(n−1) + an integral expression.
By showing that the integral on the rightmost part of the first equation is equal to what we want it to be (the expression just below it), we prove the theorem. Let u = f^(n−1)(t) and dv = (b − t)^(n−2) dt. Then

du = f^(n)(t) dt  and  v = −(1/(n − 1))(b − t)^(n−1).

Thus,

(1/(n − 2)!) ∫_a^b (b − t)^(n−2) f^(n−1)(t) dt
    = −(1/((n − 1)(n − 2)!))(b − t)^(n−1) f^(n−1)(t) |_a^b + (1/((n − 1)(n − 2)!)) ∫_a^b (b − t)^(n−1) f^(n)(t) dt

    = −(f^(n−1)(b)/(n − 1)!)(b − b)^(n−1) + (f^(n−1)(a)/(n − 1)!)(b − a)^(n−1) + (1/(n − 1)!) ∫_a^b (b − t)^(n−1) f^(n)(t) dt

    = (f^(n−1)(a)/(n − 1)!)(b − a)^(n−1) + (1/(n − 1)!) ∫_a^b (b − t)^(n−1) f^(n)(t) dt.

Therefore,

f(b) − f(a) = f^(1)(a)(b − a) + (f^(2)(a)/2)(b − a)^2 + (f^(3)(a)/3!)(b − a)^3 + ... + (f^(n−1)(a)/(n − 1)!)(b − a)^(n−1) + (1/(n − 1)!) ∫_a^b (b − t)^(n−1) f^(n)(t) dt.

By adding f(a) to both sides and denoting the integral expression by Rn, we finally get what the theorem tells us, which is

f(b) = f(a) + f^(1)(a)(b − a) + (f^(2)(a)/2!)(b − a)^2 + (f^(3)(a)/3!)(b − a)^3 + ... + (f^(n−1)(a)/(n − 1)!)(b − a)^(n−1) + Rn.

This concludes the proof. I like to think that we denoted the integral by Rn, the remainder term, because it is the term left unevaluated. In fact, that integral expression is the one we talked about in the section introducing the remainder term. We will learn more about it in the next section of the chapter.

2 Exercises
1. Let f(x) = ln(1 + x).

(a) Find a formula for the derivatives of f(x). Start with f^(1)(x), then f^(2)(x). Work out f^(k)(x) for k = 3, 4, 5, then write down the formula for arbitrary k.

(b) Find f^(k)(0) for k = 1, 2, 3, 4, 5. Then show in general that

f^(k)(0) = (−1)^(k+1) (k − 1)!.

(c) Conclude that the Taylor polynomial Pn(x) for ln(1 + x) is

Pn(x) = x − x^2/2 + x^3/3 − x^4/4 + ... + (−1)^(n+1) x^n/n.
SOLUTION.

(a)

f^(1)(x) = (1 + x)^(−1)
f^(2)(x) = −(1 + x)^(−2)
f^(3)(x) = 2(1 + x)^(−3)
f^(4)(x) = −2·3(1 + x)^(−4)
f^(5)(x) = 2·3·4(1 + x)^(−5)
f^(k)(x) = (−1)^(k+1) (k − 1)! (1 + x)^(−k)

(b)

f^(1)(0) = 1
f^(2)(0) = −1
f^(3)(0) = 2
f^(4)(0) = −2·3
f^(5)(0) = 2·3·4
f^(k)(0) = (−1)^(k+1) (k − 1)! (1)^(−k) = (−1)^(k+1) (k − 1)!

(c) Let cn be a number such that

cn = f^(n)(0)/n!,

which will be the coefficient of the polynomial Pn. Then,

Pn(x) = f^(0)(0) + f^(1)(0) x + (f^(2)(0)/2!) x^2 + ... + (f^(n)(0)/n!) x^n
      = 1·x − (1/2) x^2 + (2/3!) x^3 − (2·3/4!) x^4 + ... + ((−1)^(n+1) (n − 1)!/n!) x^n
      = x − x^2/2 + x^3/3 − x^4/4 + ... + (−1)^(n+1) x^n/n.

