
Welcome to Calculus.

I'm Professor Ghrist.


We're about to begin Lecture 57, Calculus
Redux.
We've now come to the end of our story.
It's been a long and difficult journey.
And I'm guessing you probably felt like
giving up more than once along the way.
But you didn't, you persevered and you
made it to the end.
And now, it's time to enjoy the fruits
of that hard labor.
In this lesson, we're going to slow down
and take a
reflective look at what we can and cannot
do with Calculus.
If we take a moment and think about all of
the things that we have
learned in this course, we see that
there's quite a lot that we can do.
We understand limits, derivatives,
integrals, ordinary differential
equations,
and Taylor series.
However, there are a few things
that we can't do.
And in this lesson, we're going to examine
the boundary between what is possible and
what is just beyond our reach.
Let's begin with an example of a difficult
integral.
It is a fact that the integral from minus
infinity to infinity
of e to the minus x squared over 2 dx is the
square root of 2 pi.
Now you may ask, well, who cares about
such a result?
Well, you care about this result if you
want to do anything in probability
or statistics because this is the result
that says that
a standard Gaussian is in fact a
probability density function.
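As a quick numerical sanity check of that value, here is a minimal sketch, assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy.integrate import quad

# Numerically integrate the Gaussian kernel e^(-x^2/2) over the whole real line.
value, _ = quad(lambda x: np.exp(-x**2 / 2), -np.inf, np.inf)

print(value)               # approximately 2.5066282746...
print(np.sqrt(2 * np.pi))  # approximately 2.5066282746..., in agreement
```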
So, let's see if we can solve this
using the tools that we've learned in this
course.
And we know from the standard expansion
for e to the x that e to the minus x squared
over 2 is the sum over n of 1 over n
factorial times the quantity negative x
squared over 2 to the nth.
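Written out, the expansion just described is

\[
e^{-x^2/2} \;=\; \sum_{n=0}^{\infty} \frac{1}{n!}\left(-\frac{x^2}{2}\right)^{n} \;=\; \sum_{n=0}^{\infty} \frac{(-1)^n}{2^n\, n!}\, x^{2n}.
\]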
We know that this converges absolutely.
And we know that because of that absolute
convergence, we
can integrate term by term.
So let's try that.
We can take this sum and integrate the
individual terms.

Reversing the integral and the sum, what do we get?
Well, we get the sum over n of
negative 1 to the n, times 1 over n
factorial times 2 to the n, times x to
the 2n plus 1 over 2n plus 1.
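In symbols, integrating each term gives

\[
\sum_{n=0}^{\infty} \frac{(-1)^n}{2^n\, n!} \int x^{2n}\, dx
\;=\; \sum_{n=0}^{\infty} \frac{(-1)^n}{2^n\, n!}\, \frac{x^{2n+1}}{2n+1},
\]

and note that every individual term grows without bound as x goes to plus or minus infinity.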
That's great.
But then you have to evaluate that
antiderivative at the bounds, from negative
infinity to positive infinity?
That seems nonsensical, and in fact,
that's not going to work.
We have failed.
Doing things term by term is not going to
help for this improper integral.
Here's a different strategy.
Let's show that e to the minus x squared
over 2 equals the limit as n goes to
infinity of cosine raised to the nth power
of x over square root of n.
This is not such a well-known result, but
it's very suggestive.
It's as if you're taking the
first lobe of the cosine function and
stretching the
base out to infinity while squeezing the
tail down.
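Here is a minimal numerical illustration of that limit; this is only a sketch, assuming NumPy, and the sample points and values of n are arbitrary choices:

```python
import numpy as np

# Compare cos(x / sqrt(n))**n with exp(-x**2 / 2) at a few sample points
# for increasing n, to watch the claimed limit emerge numerically.
x = np.array([0.5, 1.0, 2.0])
for n in [10, 100, 10_000]:
    print(n, np.cos(x / np.sqrt(n))**n)
print("limit:", np.exp(-x**2 / 2))
```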
Well, let's see.
How would we prove this? If we denote this
limit by L, then we can take the log of
both sides.
Assuming that we can reverse the limit and
the log, we pull down the exponent of
n and consider n times the log of cosine
of x over root n.
In the limit as n goes to infinity, x over
root n gets small, and we can perform
a Taylor expansion of cosine about 0.
That gives us the dominant terms 1 minus
1 half x squared over n; all other terms
are big O of x to the 4th over n squared,
and hence ignorable in the limit.
So what do we get?
We get that the log of L is equal to
negative 1 half x squared.
And that means that L is indeed e to the
minus 1 half x squared.
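To spell out the step that was compressed above, one can also use the expansion log(1 + u) = u + O(u squared) for small u:

\[
\log L
\;=\; \lim_{n\to\infty} n \log\cos\frac{x}{\sqrt{n}}
\;=\; \lim_{n\to\infty} n \log\!\left(1 - \frac{x^2}{2n} + O\!\left(\frac{x^4}{n^2}\right)\right)
\;=\; \lim_{n\to\infty} \left(-\frac{x^2}{2} + O\!\left(\frac{x^4}{n}\right)\right)
\;=\; -\frac{x^2}{2}.
\]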
Now, why might this be useful?
Well, if we want to integrate this
going from negative infinity to infinity,
well, it's a little bit better.
I claim that it's possible to evaluate the
limit of
this integral, but it gets very delicate
because we have to worry about the bounds.
And between the fact that it's an improper
integral and the limit involved, there's
not enough room on this slide to
give a proper argument.
You have to learn a little bit more about
how limits and integrals interact.
So we've failed, but this integral is easy
when you use the techniques that
you'll learn in multi-variable calculus.
I guarantee
you that you will do this integral, and it
will take just a minute.
Let's turn to a different problem that
we can almost do.
This one, coming from ordinary
differential equations.
Recall the result with which we began this
course, namely Euler's theorem, that e to
the it equals cosine t
plus i times sine of t.
Now, we took this as a
given, but we didn't prove it.
How might you give a proof of something
like this?
Well, one obvious thing to do is write
everything
out in terms of Taylor series and show
that the
Taylor expansion for e to the it is equal
to
that of cosine t plus i times sine of t.
However, you may recall that we used this
result to derive the series expansions for
cosine and sine, so that's a little bit of
circular logic.
What else could we try?
Well, consider the following.
If we let z be equal to e to the it, we're
going to think
of z as a function of t.
Then, there's an approach
using ordinary differential equations,
because the one fact you know for sure is
that e to the constant times t is the
solution to the differential equation z
prime equals that constant times z.
In this case, the constant is i, the square
root of negative 1.
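In symbols, the fact being invoked is

\[
z(t) = e^{ct} \quad\Longrightarrow\quad z'(t) = c\, e^{ct} = c\, z(t),
\]

here with the constant c equal to i.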
Now this shouldn't be too weird.
z of t is just a function.
It now has a real and an imaginary
part.
Now let's write out the real and the
imaginary parts of z as follows.
z of t equals x
of t plus i times y of t, where x and
y are real functions.
Now what happens when we multiply this by i?
Well, i times z equals i times x plus
i squared times y.
The i squared becomes a negative 1, and we
can reverse the order so that we keep the
real part first and then the imaginary part.
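Written out,

\[
i z \;=\; i\,(x + i y) \;=\; i x + i^2 y \;=\; -y + i x.
\]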
Now we know that z prime equals i times z.
So what is z prime?
Well, I can use the linearity of the
derivative and
say that z prime equals x prime plus i
times y prime.
And now my differential equation z prime
equals
iz really turns into a system of two
differential equations.
One for the real part that says x prime
equals negative y
and one for the imaginary part that says y
prime equals x.
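Matching real and imaginary parts of z prime equals i z then gives the coupled system

\[
x' + i\, y' \;=\; -y + i\, x
\qquad\Longrightarrow\qquad
x' = -y, \qquad y' = x.
\]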
Now this is a system of two differential
equations.
There are no imaginary numbers in here.
These are both real functions, but they are
not independent; they are coupled:
x prime depends on y, and y prime
depends on x.
We have not learned how to solve systems
of
coupled ordinary differential equations.
And you might look at this and say, well,
if x were cosine of t and y were sine of t,
then this would work,
since the derivative of cosine is minus
sine and the derivative of sine is cosine.
That's fine.
But this is not a principled or systematic
approach, it's just a guess.
When you do take Multi-variable Calculus,
this will be an easy result.
You will learn methods for solving systems
of coupled linear ordinary
differential equations, from which
Euler's Theorem will follow easily.
Let's turn to one last problem that we
can't do, this one involving series.
It is a result that we've mentioned
several times that the sum over
n of 1 over n squared equals pi squared
over 6.
You know that the series converges, and you
know how to bound the error for a finite
approximation.
But how do you show that the exact result
is pi squared over 6?
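As a quick numerical check of that claimed value, here is a minimal sketch, assuming NumPy; the cutoff N is an arbitrary choice:

```python
import numpy as np

# Partial sum of 1/n^2 up to N, compared with pi^2 / 6.
N = 100_000
n = np.arange(1, N + 1)
partial_sum = np.sum(1.0 / n**2)

print(partial_sum)    # approximately 1.64492...
print(np.pi**2 / 6)   # approximately 1.64493...
# The gap is roughly 1/N, consistent with the tail bound for this series.
```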
Well, let's give it a try.
We're going to show as much of the proof
of this as we can on one slide.
Let's begin with the function u equals arc
sine of x, and in a somewhat unmotivated
step, we're going to integrate
u du as u goes from 0 to pi over 2.
When we do so, we get, of course, u
squared over 2
evaluated at the limits, yielding pi squared
over 8.
Note the presence of a pi squared.
That is a critical piece.
Now when we substitute in arc sine of x
for u, we get the integral of
arc sine of x times dx over square root of
1 minus x squared.
Changing the limits, this becomes the
integral as x goes from 0 to 1.
Now, we don't want to evaluate this
integral, but we already know the answer.
It's pi squared over 8.
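In symbols, the computation so far is

\[
\int_0^{\pi/2} u\, du \;=\; \left.\frac{u^2}{2}\right|_0^{\pi/2} \;=\; \frac{\pi^2}{8},
\qquad
u = \arcsin x,\quad du = \frac{dx}{\sqrt{1-x^2}},
\qquad
\int_0^1 \frac{\arcsin x}{\sqrt{1-x^2}}\, dx \;=\; \frac{\pi^2}{8}.
\]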
What we want to do is substitute in the
Taylor series for arc sine of x.
We've run across this once or twice
before.
It's an unusual-looking series.
The coefficients involve the products of
odd numbers in
the numerator, the products of even
numbers in the denominator, and an x to
the 2n plus 1 over 2n plus 1.
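Written out, the series being substituted is

\[
\arcsin x \;=\; \sum_{n=0}^{\infty} \frac{(2n-1)!!}{(2n)!!}\,\frac{x^{2n+1}}{2n+1}
\;=\; x \;+\; \frac{1}{2}\cdot\frac{x^3}{3} \;+\; \frac{1\cdot 3}{2\cdot 4}\cdot\frac{x^5}{5} \;+\; \cdots,
\]

where the double factorials collect the products of odd and even numbers just mentioned (with the leading coefficient taken to be 1).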
Now, this is looking rather complicated.
We can integrate this series term by term,
but integrating that
x to the 2n plus 1 over square root of 1
minus x squared is highly non-trivial.
That is doable by the methods of this
class.
You can do it with integration by parts
and a reduction formula.
I'm not going to show you all of those
steps and it wouldn't exactly fit on this
slide.
But trust me that when you do so, you will
get, after a lot of simplification, the sum
from n equals 1 to infinity of 1 over the
quantity 2n minus 1, squared.
That's so close to what we were looking
for.
This is the sum with the squares of the odd
numbers in the denominator.
Well, again, with just a little bit more
of an argument involving a geometric series,
one can show that that sum is three-fourths
of the sum over n of 1 over n squared.
And that sum of 1 over n squared is what
we were looking for: knowing that it is
really four-thirds times pi squared over 8
yields pi squared over 6.
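The geometric-series step alluded to can be sketched as follows: every positive integer is an odd number times a power of 2 in exactly one way, so

\[
\sum_{n=1}^{\infty} \frac{1}{n^2}
\;=\; \sum_{k\ \mathrm{odd}} \frac{1}{k^2}\, \sum_{j=0}^{\infty} \frac{1}{4^j}
\;=\; \frac{4}{3} \sum_{k\ \mathrm{odd}} \frac{1}{k^2}
\;=\; \frac{4}{3}\cdot\frac{\pi^2}{8}
\;=\; \frac{\pi^2}{6}.
\]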
Now, you could certainly ask for a clearer
or more complete proof.
The one that I have sketched is due to
Euler, the master himself.
And it is, if my research is correct, his
fourth published proof of that result.
But it's certainly not easy.
Now, you are probably expecting me to tell
you that this is easy
when you learn Multi-variable Calculus.
Well, you may see a proof of this
using Multi-variable Calculus, and it will
be easier to follow than the one that I
have sketched, but I am not aware of any
proof of this result
that I would call easy.
Some mathematical truths are deep.
They are difficult, and they require
an extraordinary amount of effort to
ascertain.
Some truths
are right at the boundary between what we
can do and what we can't, and they are
worth striving for.
And that is the end.
You made it.
Congratulations, you got all the way to
the end.
It was a hard and long class, but you
learned a lot.
Take a moment, relax, prepare for the day
of judgment,
otherwise known as the final exam, and
enjoy the fruits of your hard work.
>> Cut.
>> Oh, is that it?
Are we done?
Yes.
I'm so happy.
Oh, thank
God.
Oh, good job, Jordan.
>> Thanks.
You
too.
>> I'm going to take a nap.
