
Cosine Approximation.nb

In light of a recent class unit on Taylor series, I was inspired to create an approximation of my own. Though no general approximation came to mind, I began with the idea of approximating a sinusoidal curve by a polynomial.
Plot[{Cos[x], 1 - .6 x^2, 1 - .5 x^3}, {x, 0, Pi/2}]

[Plot of cos x, 1 - .6 x^2, and 1 - .5 x^3 on (0, Pi/2)]

But where to start? First, I stuck with the curve y = cos x on the interval (0, π/2). Then I concerned myself simply with what kind of polynomial growth (a polynomial with a constant and a single power term) would best approximate it. The general function, then, is f(x) = C + k x^n. The constants C and k are easy to determine, as our function is bound in the range (0, 1) on the interval (0, π/2). Since cos 0 = 1, f(0) = C = 1. Great. Since cos(π/2) = 0, f(π/2) = 1 + k(π/2)^n = 0, so k = -(2/π)^n, and now we have the function in terms of x and n: f(x) = 1 - (2x/π)^n. Below are the two graphs, f(x) and cos x, with the former function varying with different values of n:


In[1]:= Table[{Plot[{Cos[x], 1 - (2 x/Pi)^n}, {x, 0, Pi/2}], "n=" N[n, 1]}, {n, 0, 5, 1}]

Out[1]= [Six plots of cos x together with 1 - (2 x/Pi)^n for n = 0, 1, 2, 3, 4, 5, each on (0, Pi/2)]
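As a sanity check outside Mathematica (my addition, not part of the original notebook), here is a short Python sketch that measures the maximum deviation of each integer-exponent candidate from cos x; it confirms that n = 2 is the closest of the integer choices:

```python
# My check, stdlib only: which integer n in 0..5 makes 1 - (2x/pi)^n
# closest to cos x on [0, pi/2], measured by maximum absolute deviation?
import math

def f(x, n):
    """The trial approximation f(x) = 1 - (2x/pi)^n."""
    return 1.0 - (2.0 * x / math.pi) ** n

def max_deviation(n, samples=10_000):
    """Largest |cos x - f(x, n)| over a dense grid on [0, pi/2]."""
    return max(
        abs(math.cos(x) - f(x, n))
        for x in (i * (math.pi / 2) / samples for i in range(samples + 1))
    )

deviations = {n: max_deviation(n) for n in range(0, 6)}
best = min(deviations, key=deviations.get)
print(best)
```

The grid search is crude, but it only needs to rank the six candidates.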

The one for n = 2 looks pretty close, but how are we going to find exactly the value of n that works best?

Well, my idea was to match the areas under the two curves. If their areas were equal, they would certainly have to be pretty darn close, right? (Closeness will be defined later.) The area under the cosine curve on that interval is a simple one:

    ∫_0^(π/2) cos x dx = [sin x]_0^(π/2) = 1

The area under f(x) must also be equal to 1, so we have



    1 = ∫_0^(π/2) (1 - (2/π)^n x^n) dx = [x - (2/π)^n x^(n+1)/(n+1)]_0^(π/2) = π/2 - (π/2)/(n+1)

Multiplying through by 2/π:

    2/π = 1 - 1/(n+1) = n/(n+1)  ⇒  n = 2/(π - 2),

and we have the value of n for which the polynomial is closest to the cosine curve. Below is a graph of the two functions as well as a graph of the error.
Plot[{Cos[x], 1 - (2 x/Pi)^(2/(Pi - 2))}, {x, -1, 1 + Pi/2}]
Plot[Cos[x] - (1 - (2 x/Pi)^(2/(Pi - 2))), {x, 0, Pi/2}]
[Plot of cos x and the approximation on (-1, 1 + Pi/2); plot of the error Cos[x] - (1 - (2 x/Pi)^(2/(Pi - 2))), which ranges from about -0.015 to 0.010 on (0, Pi/2)]

What this did was make the two functions intersect once somewhere in the middle of the interval, so that the area between them was minimized. (It is interesting to me that the value for n wasn't some transcendental number.) Even so, this isn't an amazing approximation: my objective way of measuring it was to find the total area under the error curve, which we'll use Mathematica to calculate. Note that the signed area is actually zero, if we count area under the x-axis as negative; for this calculation, we'll count both as positive.
FindRoot[Cos[x] - (1 - (2 x/Pi)^(2/(Pi - 2))), {x, .5}]

{x -> 0.870184}

Integrate[Cos[x] - (1 - (2 x/Pi)^(2/(Pi - 2))), {x, 0, .870184}] -
  Integrate[Cos[x] - (1 - (2 x/Pi)^(2/(Pi - 2))), {x, .870184, Pi/2}]

0.0132332
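The same computation can be reproduced outside Mathematica. Below is my own Python sketch of it, finding the crossing point by bisection in place of FindRoot and integrating with Simpson's rule:

```python
# My reproduction of the "accuracy" value: locate where the error
# cos x - f(x) changes sign, then add up the two unsigned areas.
import math

n = 2 / (math.pi - 2)
err = lambda x: math.cos(x) - (1 - (2 * x / math.pi) ** n)

def bisect(g, a, b, iters=100):
    """Root of g on [a, b], assuming one sign change there."""
    for _ in range(iters):
        mid = (a + b) / 2
        if g(a) * g(mid) <= 0:
            b = mid
        else:
            a = mid
    return (a + b) / 2

def simpson(g, a, b, m=2000):
    h = (b - a) / m
    s = g(a) + g(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

c = bisect(err, 0.5, 1.0)               # FindRoot gave 0.870184
total = simpson(err, 0, c) - simpson(err, c, math.pi / 2)
print(c, total)
```

The error is positive before the crossing and negative after it, so subtracting the second integral adds the two areas.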

But how good is an "accuracy" value of .0132332? We can do the same thing with the Taylor polynomials for the cosine curve: the nth one is defined as Σ_{i=0}^{n} (-1)^i x^(2i) / (2i)!. By n = 2 the Taylor polynomial already appears to have better accuracy than my approximation, and we can find a value for the accuracy as I've defined it:

Manipulate[Plot[{Cos[x], 1 - x^2/2 + x^4/24}, {x, a, b},
  PlotLabel -> {"The Fourth Taylor Polynomial to the Cosine Curve"}], {a, -1, 2}, {b, -1, 2}]

N[Integrate[1 - x^2/2 + x^4/24 - Cos[x], {x, 0, Pi/2}]]

[Plot labeled "The Fourth Taylor Polynomial to the Cosine Curve", showing cos x and the polynomial]

0.00452486

Already the value is .00452, less than half my approximation's .0132.
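For what it's worth, that accuracy value can be re-derived the same way. This Python check (my addition) uses the fact that 1 - x^2/2 + x^4/24 lies above cos x on the whole interval, so the plain integral of the difference is already the unsigned error area:

```python
# My check of the Taylor-polynomial accuracy: integrate the (nonnegative)
# difference between the degree-4 Taylor polynomial and cos x on [0, pi/2].
import math

taylor4 = lambda x: 1 - x**2 / 2 + x**4 / 24

def simpson(g, a, b, m=2000):
    """Composite Simpson's rule with m (even) subintervals."""
    h = (b - a) / m
    s = g(a) + g(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

error_area = simpson(lambda x: taylor4(x) - math.cos(x), 0, math.pi / 2)
print(error_area)
```

This agrees with the notebook's 0.00452486; it also matches the closed form π/2 - (π/2)³/6 + (π/2)⁵/120 - 1.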

But I did come up with another way to make my own approximation work, though I'm not sure if it's any better. We can try fitting the arclength of the cosine curve to that of my curve, and see if it gets us better results. Unfortunately, arclength is a much harder value to calculate, so we'll leave it to Mathematica.
In[14]:= N[Integrate[Sqrt[Sin[x]^2 + 1], {x, 0, Pi/2}], 10]

In[15]:= Table[N[Integrate[Sqrt[D[1 - (2 x/Pi)^n, x]^2 + 1], {x, 0, Pi/2}]],
           {n, 1.79, 1.791, .0001}]

Out[14]= 1.910098895

Out[15]= {1.91004, 1.91005, 1.91006, 1.91006, 1.91007,
          1.91008, 1.91009, 1.9101, 1.9101, 1.91011, 1.91012}

In[16]:= N[2/(Pi - 2)]

Out[16]= 1.75194

Mathematica wouldn't calculate the value outright, so by inspection I was able to obtain n = 1.7907 as an estimate good to 3 decimals. (The value we obtained earlier, n = 2/(π - 2), has numerical value 1.75194, for comparison.) Below is the graph of the two functions, cosine and our approximation, and the error function between them.

In[17]:= Plot[{Cos[x], 1 - (2 x/Pi)^1.7907}, {x, -1, 1 + Pi/2}]
In[18]:= Plot[Cos[x] - (1 - (2 x/Pi)^1.7907), {x, 0, Pi/2}]
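To double-check the fit found by inspection, here is a Python version (my addition) of the two arclength integrals, applying Simpson's rule to √(1 + f′(x)²):

```python
# My check of the arclength fit: the arclength of cos x on [0, pi/2]
# should be about 1.910098895, and the curve 1 - (2x/pi)^1.7907
# should have nearly the same arclength.
import math

def simpson(g, a, b, m=4000):
    h = (b - a) / m
    s = g(a) + g(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

def arclength(df, a, b):
    """Arclength of a curve whose derivative is df, on [a, b]."""
    return simpson(lambda x: math.sqrt(1 + df(x) ** 2), a, b)

n = 1.7907
# d/dx cos x = -sin x;  d/dx [1 - (2x/pi)^n] = -(2n/pi)(2x/pi)^(n-1)
arc_cos = arclength(lambda x: -math.sin(x), 0, math.pi / 2)
arc_f = arclength(lambda x: -(2 * n / math.pi) * (2 * x / math.pi) ** (n - 1),
                  0, math.pi / 2)
print(arc_cos, arc_f)
```

The two arclengths agree to roughly the precision of the table above, which is what n = 1.7907 was chosen for.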
[Plot of cos x and 1 - (2 x/Pi)^1.7907 on (-1, 1 + Pi/2); plot of the error, which ranges from about -0.020 to 0.005 on (0, Pi/2)]

The error, then, is the area under that curve, which we can now calculate:
In[19]:= FindRoot[Cos[x] - (1 - (2 x/Pi)^1.7907), {x, .7}]

Out[19]= {x -> 0.700836}

In[20]:= Integrate[Cos[x] - (1 - (2 x/Pi)^1.7907), {x, 0, .700836}] -
           Integrate[Cos[x] - (1 - (2 x/Pi)^1.7907), {x, .700836, Pi/2}]

Out[20]= 0.0143521

This is greater than the previous value, but perhaps the fact that I integrated the error function is biased toward the approximation where I integrated to equate the two functions. There must be some standard method for this that I don't know, but to make it even, maybe I'll take the arclength of both error functions to see which is larger (making that one less "accurate").

In[21]:= N[Integrate[Sqrt[1 + D[Cos[x] - (1 - (2 x/Pi)^(2/(Pi - 2))), x]^2], {x, 0, Pi/2}]]

In[22]:= N[Integrate[Sqrt[1 + D[Cos[x] - (1 - (2 x/Pi)^1.7907), x]^2], {x, 0, Pi/2}]]

Out[21]= 1.57207

Out[22]= 1.57233
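These two numbers can also be reproduced in Python; the sketch below (my addition, assuming the same two exponents) computes the arclength of each error curve directly:

```python
# My reproduction of the final comparison: arclength of the error curve
# cos x - (1 - (2x/pi)^n) on [0, pi/2], for each of the two exponents.
import math

def simpson(g, a, b, m=4000):
    h = (b - a) / m
    s = g(a) + g(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

def error_arclength(n):
    """Arclength of cos x - (1 - (2x/pi)^n) on [0, pi/2]."""
    # derivative of the error: -sin x + (2n/pi)(2x/pi)^(n-1)
    d = lambda x: -math.sin(x) + (2 * n / math.pi) * (2 * x / math.pi) ** (n - 1)
    return simpson(lambda x: math.sqrt(1 + d(x) ** 2), 0, math.pi / 2)

arc_area_fit = error_arclength(2 / (math.pi - 2))   # area-matched exponent
arc_arc_fit = error_arclength(1.7907)               # arclength-matched exponent
print(arc_area_fit, arc_arc_fit)
```

Both values sit just above π/2 ≈ 1.5708, as expected for an error curve that stays so close to zero, and the area-matched exponent wins.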

Interestingly, the second test still goes to our first approximation, so perhaps the test I created
wasn't a flawed one after all.

In any event, this approximation seems to me to be a cool one. I might try using the same idea but adding more terms in: that is, using the same method to get an approximation with only a constant and one term, then start throwing other terms in and seeing if they help the approximation at all.
