May, 2006
Department of Mathematics
Contents

Abstract
Acknowledgments
1 History
2 Fundamentals
3 Values of Euler's Zeta Series
4 Riemann's Zeta Function
5 The Riemann Zeta Hypothesis
Bibliography
Abstract
Identified as one of the seven Millennium Prize Problems, the Riemann zeta hypothesis has successfully evaded mathematicians for over 100 years. Simply stated, Riemann conjectured that all of the nontrivial zeroes of his zeta function have real part equal to 1/2. This thesis attempts to explore the theory behind Riemann's zeta function by first starting with Euler's zeta series and building up to Riemann's function. Along the way we will develop the math required to handle this theory, in the hope that by the end the reader will be immersed enough to pursue their own exploration and research into this fascinating subject.
Acknowledgments
I would like to thank Professor Yong, Professor Ward, and especially Professor Raugh for helping this thesis come into being. I could not have accomplished this without their input, their faith, and their nearly unending patience. I would also like to thank Libby Beckman for being my late-night thesis buddy on many occasions.
Chapter 1
History
Every great story has a beginning, and like many adventures in mathematics, this story begins with Leonhard Euler. In 1737 Euler proved that the infinite sum of the inverses of the primes
$$\frac{1}{2}+\frac{1}{3}+\frac{1}{5}+\frac{1}{7}+\frac{1}{11}+\cdots$$
diverges [Edw]. Euler no doubt realized that this observation not only supported the already known fact that the primes are infinite, but also hinted
that the primes are fairly dense among the integers. In addition to this,
Euler also proposed what is now known as the Euler product formula,
$$\sum_{n=1}^{\infty}\frac{1}{n^{s}} = \prod_{p\ \text{prime}}\frac{1}{1-\frac{1}{p^{s}}}.$$
Theorem 1.0.1 (Prime Number Theorem).
$$\pi(x) \sim \frac{x}{\ln x} \quad\text{as}\quad x\to\infty,$$
where $\pi(x)$ is the number of primes less than or equal to $x$ (where 1 is not considered to be prime).
Gauss later refined this to the following statement.
Theorem 1.0.2 (Prime Number Theorem II).
$$\pi(x) \sim \operatorname{Li}(x) = \int_2^x\frac{dt}{\ln t} \quad\text{as}\quad x\to\infty.$$
In his 1859 paper, Riemann proposed an exact formula along the lines of
$$\pi(x) \approx \operatorname{Li}(x) - \sum_{\rho}\operatorname{Li}(x^{\rho}) - \log 2 + \int_x^{\infty}\frac{dt}{t(t^2-1)\ln t},$$
where $\operatorname{Li}(x) = \int_2^x\frac{dt}{\ln t}$ and $\rho$ runs over the nontrivial zeros of his zeta function. Unfortunately
he was unable to show (or at least he did not show in the paper) whether Li(x) was the dominating term or even if the series converged. The real importance of his paper lay in the novel tools he developed and utilized. For the first time a mathematician was using tools like Fourier transforms and complex analysis to say something about the nature of prime number distribution. It truly was revolutionary, and it is not surprising that there was no more substantial work in the field for 30 years after his publication, almost as if it took the mathematical world that much time to digest Riemann's ideas [Edw].
Then in 1896 Jacques Hadamard and C. J. de la Vallée Poussin simultaneously and independently proved the prime number theorem using many of the tools that Riemann had developed and used. Their proofs showed that Riemann's formula for $\pi(x)$ was in fact correct and that the largest term was Li(x). Even though the main ideas that Riemann proposed were now rigorously proved, there was still one more conjecture that remained unverified.
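As a quick numerical illustration (our own sketch, not part of the original text), the Python snippet below counts primes with a sieve and approximates $\operatorname{Li}(x)$ by a midpoint rule; all function names here are ours. It shows Li(x) tracking $\pi(x)$ far more closely than $x/\ln x$ does.

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes; returns the list of all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_p in enumerate(sieve) if is_p]

def pi(x):
    """pi(x): the number of primes <= x."""
    return len(primes_up_to(x))

def li(x, steps=200000):
    """Li(x) = integral from 2 to x of dt/ln(t), via the midpoint rule."""
    h = (x - 2) / steps
    return sum(h / math.log(2 + (i + 0.5) * h) for i in range(steps))

x = 10 ** 5
print(pi(x), round(x / math.log(x)), round(li(x)))
```

For $x = 10^5$ the sieve gives $\pi(x) = 9592$, while $\operatorname{Li}(x)\approx 9629$ and $x/\ln x\approx 8686$, matching the historical observation above.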
In his 1859 paper, Riemann mentioned that he considered it very likely that the nontrivial zeros of $\zeta(s)$, which will be defined during the course of this thesis, all lie on the line $\Re(s) = 1/2$. This statement would later become known as the Riemann zeta hypothesis and has managed to stump mathematicians for over a century. The purpose of this thesis is to explore some of the different facets of Riemann's zeta function and introduce some of the math that will assist us. Chapter 2 will start by introducing Euler's zeta series, which is essentially the predecessor of Riemann's zeta function. The chapter will also introduce concepts such as convergence, Bernoulli numbers, and the gamma function in order to construct a solid base to launch from. Chapter 3 is concerned with the values of Euler's zeta series. It will present two proofs for the values of the function at the even integers, as well as provide a method to approximate the values elsewhere on the positive real line. In order to accomplish the proofs presented, the chapter also introduces Fourier series and the Euler-Maclaurin summation formula. Chapter 4 uses a transformational property of Jacobi's theta function to help derive the functional equation in order to extend the Euler zeta series to the complex plane. This extension of the series then becomes Riemann's zeta function. Finally, Chapter 5 examines Riemann's famous hypothesis and the many consequences of its truth, including Lindelöf's hypothesis.
Chapter 2
Fundamentals
2.1 Introduction
This chapter will be concerned with the Euler zeta series, which is the function
$$\zeta(s) = \sum_{n=1}^{\infty}\frac{1}{n^{s}},$$
where $s$ is a real number greater than 1. All of the ideas and facts that are shown in this chapter will continue to be true after the function is extended to the complex plane, because when Riemann's zeta function is restricted to real $s > 1$ it becomes Euler's zeta series. For this reason, we shall discover as much as we can using Euler's zeta series rather than the more complicated Riemann zeta function.
Before anything else, it behooves us to ask whether the function even
makes sense, but before we can answer that we need to establish what an
infinite series is and what is required for an infinite series to make sense.
Therefore we will have a short section which introduces infinite series and
some characteristics that will prove useful throughout our exploration.
2.2 Infinite Series
Quite simply, a series is an infinite sum. The interesting part of the theory surrounding series is whether they sum to something besides infinity. For example, a sum of infinitely many ones clearly does not:
$$\sum_{k=1}^{\infty}1 = \infty.$$
What about the following infinite sum (known as the harmonic series)?
$$\sum_{k=1}^{\infty}\frac{1}{k} = 1+\frac12+\frac13+\frac14+\frac15+\frac16+\frac17+\cdots$$
Even though there are an infinite number of terms, we can see that the
terms get smaller and smaller, and toward the end of the sequence the
terms are practically zero. It seems like this would imply that the sum is
finite, but consider the following grouping
$$\sum_{k=1}^{\infty}\frac1k = (1)+\left(\frac12\right)+\left(\frac13+\frac14\right)+\left(\frac15+\frac16+\frac17+\frac18\right)+\cdots$$
$$\ge \left(\frac12\right)+\left(\frac12\right)+\left(\frac14+\frac14\right)+\left(\frac18+\frac18+\frac18+\frac18\right)+\cdots$$
$$= \frac12+\frac12+\frac12+\frac12+\cdots = \infty.$$
With this grouping, we can see that the harmonic series sums to something
that is greater than an infinite number of halves whose sum is clearly infinite. Therefore the harmonic series shows that even if the terms of a series
approach zero, this is not sufficient for a series to sum to a finite number. The question then becomes, when will an infinite series sum to a finite
number? To answer that we introduce the notion of convergence.
Given a sequence $\{a_n\}$ of real numbers, we say that the series $\sum a_n$ is convergent and converges to $S$ if for all $\epsilon > 0$ there exists an $N$ such that
$$\left|\sum_{k=1}^{n}a_k - S\right| < \epsilon \quad\text{for all } n > N.$$
If no such $S$ exists, then we say that the series diverges [Kno].
In other words, our series converges to $S$ if the partial sums become arbitrarily close to $S$ as $n$ goes to infinity. Note that the $n$th partial sum of a series is simply the sum of the first $n$ terms, where $n$ is finite. Another way to think about convergence, at least for series of nonnegative terms, is that such a series converges exactly when its partial sums are all bounded.
Theorem 2.2.1 (Comparison Test). Given two series $\sum_{k=1}^{\infty}a_k$ and $\sum_{k=1}^{\infty}b_k$ with $0\le a_k\le b_k$ for all $k$, if $\sum b_k$ converges then so does $\sum a_k$ [Kno].

Theorem 2.2.2 (Integral Test). If $f(x)$ is continuous, positive, and decreasing for $x\ge 1$ and $a_n = f(n)$, then
$$\int_1^{\infty}f(x)\,dx \quad\text{and}\quad \sum_{n=1}^{\infty}a_n$$
either both diverge or both converge. In other words, if the integral is finite then the series converges. Otherwise the series diverges.
Using the integral test we can now finally prove that $\zeta(s)$ converges for $s > 1$ and thus makes sense.
Let $f(x) = 1/x^s$, so that
$$\zeta(s) = \sum_{n=1}^{\infty}f(n).$$
Note also that $f(x)$ is continuous, positive and decreasing (given that $s > 1$) and that
$$\int_1^{\infty}f(x)\,dx = \left[\frac{x^{1-s}}{1-s}\right]_1^{\infty} = \frac{1}{1-s}\left(\lim_{x\to\infty}x^{1-s}-1\right) = \frac{1}{1-s}(0-1) = \frac{1}{s-1}.$$
Thus the integral test tells us that since $\int_1^{\infty}f(x)\,dx$ converges, $\sum_{n=1}^{\infty}f(n)$ converges as well for $s > 1$.
Note that the requirement that $s$ be greater than 1 makes sense, because $\zeta(1)$ is the harmonic series, which we showed diverges. What's remarkable is that as soon as $s$ crosses that threshold, the series instantly becomes finite.
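To see this threshold behavior concretely, here is a small Python sketch (our own illustration): the partial sums for $s = 1$ keep growing, while for $s = 1.1$ they stay bounded, with the tail beyond $N$ controlled by the integral-test bound $N^{1-s}/(s-1)$.

```python
def partial_sum(s, n_terms):
    """n-th partial sum of the series sum_{n>=1} 1/n^s."""
    return sum(1.0 / n ** s for n in range(1, n_terms + 1))

# s = 1 (harmonic) grows like ln(n); s = 1.1 settles toward a finite limit.
for n in (10 ** 2, 10 ** 4, 10 ** 6):
    print(n, partial_sum(1, n), partial_sum(1.1, n))

# Integral-test bound on the tail beyond N for s = 1.1:
s, N = 1.1, 10 ** 6
tail_bound = N ** (1 - s) / (s - 1)
print(tail_bound)
```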
Although they will not be useful until the next chapter, the Bernoulli numbers and Bernoulli polynomials play a very important role in the theory behind both Euler's zeta series and Riemann's zeta function, and thus merit mention in the chapter on Fundamentals. The next two sections will be devoted to their descriptions and properties.
2.3
Bernoulli Numbers
Bernoulli was well aware of the formulas for the partial sums of the powers of the integers, the first few being:
$$\sum_{k=1}^{n}k = \frac{n(n+1)}{2},\qquad \sum_{k=1}^{n}k^2 = \frac{n(n+1)(2n+1)}{6},\qquad \sum_{k=1}^{n}k^3 = \frac{n^2(n+1)^2}{4}.$$
All of these are unified by the single symbolic formula
$$\sum_{k=0}^{n-1}k^{m} = \frac{1}{m+1}\left[(n+B)^{m+1}-B^{m+1}\right], \qquad(2.1)$$
where $B_k$ is the $k$th Bernoulli number, the powers of $B$ being interpreted symbolically ($B^i \mapsto B_i$) after expansion. Using the binomial formula on (2.1) yields
$$\sum_{k=0}^{n-1}k^{m} = \frac{1}{m+1}\sum_{j=0}^{m}\binom{m+1}{j}B_j\,n^{m+1-j}. \qquad(2.2)$$
From here, we can produce the recursive relationship between the Bernoulli numbers by plugging $n = 1$ into equation (2.2), resulting in
$$0 = \sum_{j=0}^{m}\binom{m+1}{j}B_j \qquad(2.3)$$
for $m \ge 1$.
Using the recursive formula and $B_0 = 1$ we have a fairly straightforward, although not very speedy, method for computing the Bernoulli numbers. For reference, the first few Bernoulli numbers are:
$$B_0 = 1,\quad B_1 = -\frac12,\quad B_2 = \frac16,\quad B_3 = 0,\quad B_4 = -\frac{1}{30},\quad B_5 = 0.$$
Notice that both the third and fifth Bernoulli numbers are zero, and in fact
Bk = 0 for all odd k > 1. This fact will be proved rigorously momentarily.
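The recursion (2.3) is easy to mechanize. The sketch below (Python with exact rational arithmetic; the function name is our own) solves (2.3) for $B_m$ term by term:

```python
from fractions import Fraction
from math import comb

def bernoulli(m_max):
    """Bernoulli numbers B_0..B_{m_max} via the recursion (2.3):
    0 = sum_{j=0}^{m} C(m+1, j) B_j, solved for B_m."""
    B = [Fraction(1)]  # B_0 = 1
    for m in range(1, m_max + 1):
        # sum of the first m terms of (2.3); note C(m+1, m) = m+1
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(Fraction(-s, m + 1))
    return B

B = bernoulli(8)
print(B)  # 1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30
```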
The Bernoulli numbers can equivalently be defined through the generating function
$$\frac{x}{e^x-1} = \sum_{k=0}^{\infty}\frac{B_kx^k}{k!}. \qquad(2.4)$$
This relation is very important, since the frequent appearance of this generating function amid formulas involving Riemann's zeta function will be an indication of the involvement of the Bernoulli numbers.
Theorem 2.3.1. For odd $k > 1$, $B_k = 0$.

Proof. After moving the $k = 1$ term of the sum over to the other side, equation (2.4) becomes
$$\frac{x}{e^x-1} - \frac{B_1x}{1!} = \sum_{k=0,\,k\neq1}^{\infty}\frac{B_kx^k}{k!}. \qquad(2.5)$$
Focusing on the left hand side of the equation we have
$$\frac{x}{e^x-1}-\frac{B_1x}{1!} = \frac{x}{e^x-1}+\frac x2 = \frac x2\left(\frac{2}{e^x-1}+1\right) = \frac x2\cdot\frac{e^x+1}{e^x-1} = \frac x2\cdot\frac{e^{x/2}+e^{-x/2}}{e^{x/2}-e^{-x/2}}.$$
Now notice that substituting $-x$ for $x$ in the last line leaves the expression unchanged. Therefore the right hand side of equation (2.5) must also remain unchanged by the same substitution. This means that
$$\sum_{k=0,\,k\neq1}^{\infty}\frac{B_kx^k}{k!} = \sum_{k=0,\,k\neq1}^{\infty}\frac{B_k(-x)^k}{k!} \qquad(2.6)$$
and therefore that $B_k = (-1)^kB_k$ for all $k\neq1$. While this is trivially true for even $k$, this forces $B_k = 0$ for all odd $k\neq1$.
2.4
Bernoulli Polynomials
The Bernoulli polynomials $B_k(x)$ are defined by the conditions $B_0(x) = 1$,
$$B_k'(x) = kB_{k-1}(x) \quad\text{and}\quad \int_0^1B_k(x)\,dx = 0 \quad\text{for } k\ge1.$$
Using these facts, we can easily compute the first few Bernoulli polynomials:
$$B_0(x) = 1,\qquad B_1(x) = x-\frac12,\qquad B_2(x) = x^2-x+\frac16,$$
$$B_3(x) = x^3-\frac32x^2+\frac12x,\qquad B_4(x) = x^4-2x^3+x^2-\frac1{30}.$$
The Bernoulli polynomials possess the analogous generating function
$$\frac{xe^{zx}}{e^x-1} = \sum_{k=0}^{\infty}\frac{B_k(z)x^k}{k!}. \qquad(2.7)$$
You may have noticed that the Bernoulli polynomials that we listed take on their corresponding Bernoulli numbers at $x = 0$. This is in fact true for all Bernoulli polynomials, as evidenced by the result of substituting $z = 0$ into the generating function:
$$\frac{xe^{(0)x}}{e^x-1} = \frac{x}{e^x-1} = \sum_{k=0}^{\infty}\frac{B_kx^k}{k!} = \sum_{k=0}^{\infty}\frac{B_k(0)x^k}{k!}.$$
Replacing $z$ by $1-z$ in the generating function reveals another symmetry:
$$\sum_{k=0}^{\infty}\frac{B_k(1-z)x^k}{k!} = \frac{xe^{(1-z)x}}{e^x-1} = \frac{(e^x)\,xe^{-zx}}{e^x-1} = \frac{(-x)e^{z(-x)}}{e^{-x}-1} = \sum_{k=0}^{\infty}\frac{B_k(z)(-x)^k}{k!},$$
so that $B_k(1-z) = (-1)^kB_k(z)$; in particular $B_k(1) = B_k(0)$ for even $k$.
Similarly, comparing the generating functions of $B_k(x+1)$ and $B_k(x)$ gives
$$\sum_{k=0}^{\infty}\frac{\big(B_k(x+1)-B_k(x)\big)t^k}{k!} = \frac{te^{(x+1)t}}{e^t-1}-\frac{te^{xt}}{e^t-1} = \frac{te^{xt}(e^t-1)}{e^t-1} = te^{xt} = \sum_{k=0}^{\infty}\frac{t(xt)^k}{k!} = \sum_{k=0}^{\infty}\frac{(k+1)x^kt^{k+1}}{(k+1)!} = \sum_{k=1}^{\infty}\frac{(kx^{k-1})t^k}{k!}.$$
The index difference between the beginning and the end is not a problem, since the first term of the left hand side is zero because
$$B_0(x+1)-B_0(x) = 1-1 = 0.$$
Thus $B_k(x+1)-B_k(x) = kx^{k-1}$.
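Both identities can be verified mechanically. The sketch below (Python, our own illustration) builds $B_n(x)$ from the Bernoulli numbers via the standard expansion $B_n(x) = \sum_k\binom nkB_kx^{n-k}$, which follows from (2.4) and (2.7), and checks the two identities at a rational sample point:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(m_max):
    # recursion (2.3): B_m = -(1/(m+1)) * sum_{j<m} C(m+1, j) B_j
    B = [Fraction(1)]
    for m in range(1, m_max + 1):
        B.append(Fraction(-sum(comb(m + 1, j) * B[j] for j in range(m)), m + 1))
    return B

def bernoulli_poly(n, x):
    """B_n(x) = sum_{k=0}^{n} C(n,k) B_k x^(n-k), exact with Fraction x."""
    B = bernoulli_numbers(n)
    return sum(comb(n, k) * B[k] * x ** (n - k) for k in range(n + 1))

x = Fraction(3, 7)
for n in range(1, 7):
    # difference identity and reflection identity from the text
    assert bernoulli_poly(n, x + 1) - bernoulli_poly(n, x) == n * x ** (n - 1)
    assert bernoulli_poly(n, 1 - x) == (-1) ** n * bernoulli_poly(n, x)
print(bernoulli_poly(2, 0), bernoulli_poly(4, 0))
```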
2.5
Gamma Function
Another useful function when dealing with the theory surrounding Euler's zeta series is the gamma function. Essentially, the gamma function is an extension of the factorial function to all real numbers greater than 0 [Sto]:
$$\Gamma(s) = \int_0^{\infty}e^{-x}x^{s-1}\,dx \quad\text{for } s>0. \qquad(2.8)$$
While Euler was the first to introduce this function, he defined it slightly differently, and Legendre was the first to denote it as $\Gamma(s)$.
We should note that this function can actually be extended to all complex numbers so that it only has simple poles at the negative integers (and zero). It is for this reason that we can continue to use $\Gamma(s)$ even when we begin to work with Riemann's zeta function in the realm of complex numbers.

Be careful when researching this subject using different sources, because Gauss [Edw] also defined this function, except that he denotes it by $\Pi(s)$, where
$$\Pi(s) = \Gamma(s+1).$$
This thesis will be using the more popular notation of $\Gamma(s)$ as defined in (2.8).

The rest of this section will derive (or sketch the idea of the derivation of) several identities which will prove useful as we manipulate $\Gamma(s)$ and $\zeta(s)$ over the course of this thesis.
What is known as the functional equation of $\Gamma(s)$,
$$\Gamma(s+1) = s\,\Gamma(s), \qquad(2.9)$$
follows from integrating (2.8) by parts. Euler also expressed the gamma function as a limit,
$$\Gamma(s+1) = \lim_{N\to\infty}\frac{1\cdot2\cdot3\cdots N}{(s+1)(s+2)(s+3)\cdots(s+N)}\,(N+1)^{s}. \qquad(2.10)$$
Telescoping,
$$(N+1)^{s} = \left(\frac{1+1}{1}\right)^{s}\left(\frac{2+1}{2}\right)^{s}\cdots\left(\frac{N+1}{N}\right)^{s} = \prod_{n=1}^{N}\left(1+\frac1n\right)^{s}.$$
Then we can write (2.10) as an infinite product instead of a limit, yielding
$$\Gamma(s+1) = \prod_{n=1}^{\infty}\frac{n^{1-s}(n+1)^{s}}{s+n} = \prod_{n=1}^{\infty}\left(1+\frac1n\right)^{s}\left(1+\frac sn\right)^{-1}.$$
Replacing $s$ with $-s$ gives
$$\Gamma(-s+1) = \prod_{n=1}^{\infty}\left(1+\frac1n\right)^{-s}\left(1-\frac sn\right)^{-1},$$
and multiplying the two products together,
$$\Gamma(s+1)\,\Gamma(-s+1) = \prod_{n=1}^{\infty}\left(1+\frac sn\right)^{-1}\left(1-\frac sn\right)^{-1} = \prod_{n=1}^{\infty}\left(1-\frac{s^2}{n^2}\right)^{-1}. \qquad(2.11)$$
Although it won't be proved until section 3.3, let us take for granted that we have the product formula for sine,
$$\sin(\pi x) = \pi x\prod_{n=1}^{\infty}\left(1-\frac{x^2}{n^2}\right).$$
Plugging (2.11) into the sine formula then yields the identity
$$\sin(\pi s) = \frac{\pi s}{\Gamma(s+1)\,\Gamma(1-s)}. \qquad(2.12)$$
In particular, plugging $s = 1/2$ into (2.12) gives $\sin(\pi/2) = \pi/\Gamma(1/2)^2$, so that
$$\Gamma(1/2) = \pi^{1/2}. \qquad(2.13)$$
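These identities are easy to sanity-check numerically. The Python sketch below (our own illustration, using the standard library's `math.gamma`) tests the functional equation (2.9), the special value (2.13), and the limit formula (2.10):

```python
import math

# Functional equation (2.9): Gamma(s+1) = s * Gamma(s)
s = 2.7
assert abs(math.gamma(s + 1) - s * math.gamma(s)) < 1e-9

# (2.13): Gamma(1/2) = sqrt(pi)
assert abs(math.gamma(0.5) - math.sqrt(math.pi)) < 1e-12

def gamma_limit(s, N):
    """Euler's limit formula (2.10) for Gamma(s+1), truncated at N."""
    num = 1.0
    for k in range(1, N + 1):
        num *= k / (s + k)   # builds N! / ((s+1)(s+2)...(s+N))
    return num * (N + 1) ** s

print(gamma_limit(0.5, 10 ** 5), math.gamma(1.5))
```

The truncated limit converges only like $O(1/N)$, which is why (2.10) is a theoretical tool rather than a practical way to compute $\Gamma$.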
Chapter 3
3.1 Introduction
Now that we have defined $\zeta(s)$ and verified that it converges for real $s > 1$, it would be nice to figure out what the series converges to. This goal divides itself nicely into two parts: the values at the even integers and the values everywhere else.

Two proofs will be presented of the formula for the values of Euler's zeta series at the even integral points, but before we proceed we need to introduce Fourier series and prove that we can integrate an infinite series term by term, both of which are required for the proofs that will be presented. Therefore the next section will introduce infinite series of functions and integration, while the following section will introduce the theory of Fourier series and help establish a useful identity which will serve as the starting point for the first proof.
3.2 Infinite Series of Functions
Several proofs in this chapter require that we swap an infinite sum and an integral. In other words, we want to know whether we can integrate an infinite sum term by term and still arrive at the integral of the entire sum. Before we present that theorem we should revisit convergence, since we are now talking about infinite sums of functions, which are subtly different than infinite sums of numbers. All of the definitions and theorems in this section are discussed in more detail in [Kno].
Given a sequence of functions $f_n(x)$ defined on a domain $D$, we say that the infinite sum
$$f(x) = \sum_{n=1}^{\infty}f_n(x)$$
converges point-wise on $D$ if for each $x\in D$ and every $\epsilon > 0$ there exists an $n_0$ (which may depend on $x$) such that
$$\left|\sum_{k=1}^{n}f_k(x)-f(x)\right| < \epsilon \quad\text{for all } n > n_0.$$
The series converges uniformly on $D$ if $n_0$ can be chosen independently of $x$, that is, if
$$\left|\sum_{k=1}^{n}f_k(x)-f(x)\right| < \epsilon \quad\text{for all } n > n_0 \text{ and all } x\in D.$$
As a side note, uniform convergence is clearly stronger than point-wise convergence, since any series which converges uniformly obviously converges at each point.
Just like in the case of ordinary convergence, strictly using the definition to check for uniform convergence can be a daunting task; therefore mathematicians have developed several tests to help spot and show uniform convergence. Here are two of these tests.
Theorem 3.2.1 (Abel's Test for Uniform Convergence). Given a series
$$f(x) = \sum_{n=1}^{\infty}u_n(x)$$
with $u_n(x) = a_nf_n(x)$, if $\sum a_n$ converges and, for each $x\in D$, the sequence $f_n(x)$ is monotonic in $n$ and uniformly bounded on $D$, then the series converges uniformly on $D$ [Kno].

Theorem 3.2.2 (Term-by-Term Integration). If
$$f(x) = \sum_{n=1}^{\infty}f_n(x)$$
converges uniformly on $[a,b]$ and each $f_n(x)$ is continuous there, then
$$\int_a^bf(x)\,dx = \int_a^b\sum_{n=1}^{\infty}f_n(x)\,dx = \sum_{n=1}^{\infty}\int_a^bf_n(x)\,dx.$$
3.3
Fourier Series
The Fourier series of a function $f(x)$ on $(-\pi,\pi)$ is
$$f(x) = \frac{a_0}{2}+\sum_{n=1}^{\infty}\big(a_n\cos(nx)+b_n\sin(nx)\big), \qquad(3.1)$$
where the coefficients of the sines and cosines are given by Euler's formulas:
$$a_n = \frac1\pi\int_{-\pi}^{\pi}f(x)\cos(nx)\,dx \qquad(3.2)$$
$$b_n = \frac1\pi\int_{-\pi}^{\pi}f(x)\sin(nx)\,dx. \qquad(3.3)$$
These formulas rest on orthogonality relations such as
$$\int_{-\pi}^{\pi}\sin(nx)\cos(mx)\,dx = 0 \quad\text{for integral } m, n.$$
Lemma 3.3.1. If a function $k(x)$ and its first derivative $k'(x)$ are sectionally continuous in the interval $[a,b]$, then the integral
$$K_\lambda = \int_a^bk(x)\sin(\lambda x)\,dx$$
tends to zero as $\lambda\to\infty$.
A proof of this Lemma, as given by [CJ], is easily obtained through integration by parts of $K_\lambda$. To clarify, by sectionally continuous we mean
that f ( x ) can be broken up into a finite number of subintervals such that
f ( x ) is continuous on the open subintervals and approaches definite limits
at the endpoints. This essentially means that f ( x ) can only have a finite
number of discontinuities on the interval [ a, b]. If both f ( x ) and f 0 ( x ) are
sectionally continuous we say that f ( x ) is sectionally smooth. In addition to
this we require that the value of f ( x ) at any discontinuity be the average of
the limiting values from each side.
To summarize, a function f ( x ) can be represented as a Fourier series
provided the following properties hold:

1. $f(x) = f(x+2\pi)$,
2. $f(x)$ is sectionally smooth,
3. at every point $x$ we have
$$f(x) = \frac12\left(\lim_{a\to x^{+}}f(a)+\lim_{a\to x^{-}}f(a)\right).$$
As an exercise we will now derive the product formula for sine, which
will be useful later on in another proof. This derivation comes from [CJ].
To start off our derivation we will find the Fourier series of $\cos(\mu x)$ on the interval $(-\pi,\pi)$, where $\mu$ is a real number that is not an integer. Note that all three properties are satisfied, so our quest for a Fourier series is feasible. Since $\cos(\mu x)$ is an even function we immediately know that our Fourier series will be a cosine series, thus $b_n = 0$. Then using equation (3.2)
we compute
$$a_n = \frac1\pi\int_{-\pi}^{\pi}\cos(\mu x)\cos(nx)\,dx = \frac1\pi\int_0^{\pi}\cos((\mu+n)x)+\cos((\mu-n)x)\,dx = \frac1\pi\left[\frac{\sin((\mu+n)\pi)}{\mu+n}+\frac{\sin((\mu-n)\pi)}{\mu-n}\right]$$
$$= \frac1\pi\cdot\frac{(\mu-n)\sin((\mu+n)\pi)+(\mu+n)\sin((\mu-n)\pi)}{\mu^2-n^2} = \frac1\pi\cdot\frac{2\mu\sin(\mu\pi)\cos(n\pi)}{\mu^2-n^2} = \frac{2(-1)^n\mu\sin(\mu\pi)}{\pi(\mu^2-n^2)},$$
and therefore
$$\cos(\mu x) = \frac{\sin(\mu\pi)}{\mu\pi}+\frac{2\mu\sin(\mu\pi)}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^n\cos(nx)}{\mu^2-n^2}.$$
Since our function satisfies the property that the value at every point (specifically the endpoints $-\pi$ and $\pi$) is the average of the limits from both sides, we can plug $x = \pi$ into our Fourier expression without any trouble. After doing this and dividing by $\sin(\mu\pi)$ (which we can do since $\mu$ is not an integer) we get
$$\pi\cot(\mu\pi) = \frac1\mu+\sum_{n=1}^{\infty}\frac{2\mu}{\mu^2-n^2},$$
which we rewrite as
$$\pi\cot(\mu\pi)-\frac1\mu = -\sum_{n=1}^{\infty}\frac{2\mu}{n^2-\mu^2}. \qquad(3.4)$$
From here we can use Abel's test to verify that the series is uniformly convergent for $\mu\in[0,1/2)$, so that we can integrate the series term by term. We can write the previous series as $\sum a_nf_n(\mu)$ where
$$a_n = \frac{2}{n^2} \quad\text{and}\quad f_n(\mu) = \frac{\mu n^2}{n^2-\mu^2}.$$
The series $\sum a_n$ converges, and for each fixed $\mu$ the sequence $f_n(\mu)$ is decreasing, since
$$f_{n+1}(\mu) = \frac{\mu}{1-\mu^2/(n+1)^2} < \frac{\mu}{1-\mu^2/n^2} = f_n(\mu)$$
for all $n$, and inspection also shows us that for $\mu\in[0,1/2)$ we have
$$f_n(\mu) = \frac{\mu}{1-\mu^2/n^2} < \frac{1/2}{1-1/4} = \frac{1/2}{3/4} = 2/3.$$
Thus Abel's test applies, and we may integrate term by term in
$$\int_0^x\left(\pi\cot(\pi\mu)-\frac1\mu\right)d\mu = -\int_0^x\sum_{n=1}^{\infty}\frac{2\mu}{n^2-\mu^2}\,d\mu. \qquad(3.5)$$
On the left hand side of (3.5) we have
$$\int_0^x\left(\pi\cot(\pi\mu)-\frac1\mu\right)d\mu = \log\frac{\sin(\pi x)}{x}-\lim_{a\to0}\log\frac{\sin(\pi a)}{a} = \log\frac{\sin(\pi x)}{\pi x},$$
since $\sin(\pi a)/a\to\pi$ as $a\to0$.
On the right hand side of (3.5) we can swap the sum and integral because we showed that the series is uniformly convergent. Therefore we have
$$-\int_0^x\sum_{n=1}^{\infty}\frac{2\mu}{n^2-\mu^2}\,d\mu = \sum_{n=1}^{\infty}\int_0^x\frac{-2\mu}{n^2-\mu^2}\,d\mu = \sum_{n=1}^{\infty}\log\left(1-\frac{x^2}{n^2}\right) = \lim_{N\to\infty}\log\prod_{n=1}^{N}\left(1-\frac{x^2}{n^2}\right) = \log\prod_{n=1}^{\infty}\left(1-\frac{x^2}{n^2}\right).$$
This gives us
$$\log\frac{\sin(\pi x)}{\pi x} = \log\prod_{n=1}^{\infty}\left(1-\frac{x^2}{n^2}\right),$$
and exponentiating and multiplying through by $\pi x$ produces the product formula for sine,
$$\sin(\pi x) = \pi x\prod_{n=1}^{\infty}\left(1-\frac{x^2}{n^2}\right). \qquad(3.6)$$
3.4 Term-by-Term Differentiation
In the next proof we will need to differentiate an infinite series, and in order
to do that, we need to verify that we can differentiate term by term and that
the resulting series will converge to the derivative of the initial series.
Theorem 3.4.1. If
$$f(x) = \sum_{n=1}^{\infty}f_n(x)$$
converges on $D$, each $f_n(x)$ is differentiable, and the series of derivatives
$$\sum_{n=1}^{\infty}f_n'(x)$$
converges uniformly on $D$, then
$$\frac{d}{dx}f(x) = \frac{d}{dx}\sum_{n=1}^{\infty}f_n(x) = \sum_{n=1}^{\infty}\frac{d}{dx}f_n(x)$$
for all $x\in D$ [Kno].
As an exercise in using this theorem we will differentiate
$$\sum_{k=1}^{\infty}\log\left(1+\frac{u^2}{4k^2\pi^2}\right) \qquad(3.7)$$
term by term and verify uniform convergence. The derivative of this series will help us in a later proof.
Proof. First we need to confirm that this series is point-wise convergent. Note that
$$\sum_{k=1}^{\infty}\log\left(1+\frac{u^2}{4\pi^2k^2}\right) = \log\prod_{k=1}^{\infty}\left(1+\frac{u^2}{4\pi^2k^2}\right). \qquad(3.8)$$
From Knopp, we know that an infinite product
$$\prod_{k=1}^{\infty}(1+a_k)$$
with $a_k\ge0$ converges exactly when $\sum a_k$ converges. Since
$$\sum_{k=1}^{\infty}\frac{u^2}{4\pi^2k^2}$$
converges, the product in (3.8) converges, and hence so does (3.7).
Differentiating (3.7) term by term produces the series
$$\sum_{k=1}^{\infty}\frac{2u}{4k^2\pi^2+u^2}. \qquad(3.9)$$
To verify uniform convergence, restrict $u$ to an interval $0\le u\le U$ with $U < 4\pi^2$. Then
$$\frac{4\pi^2k^2}{u} > k^2 \;\Longrightarrow\; \frac{4\pi^2k^2+u^2}{u} > k^2 \;\Longrightarrow\; \frac{u}{4\pi^2k^2+u^2} < \frac{1}{k^2},$$
so that
$$\sum_{k=1}^{\infty}\frac{u}{4\pi^2k^2+u^2} < \sum_{k=1}^{\infty}\frac{1}{k^2}.$$
Since $\sum1/k^2$ converges and the bound is independent of $u$, the series
$$\sum_{k=1}^{\infty}\frac{u}{4\pi^2k^2+u^2}$$
converges uniformly and therefore (3.9) converges uniformly as well. Thus
we can differentiate term by term and equation (3.9) does indeed converge
to the derivative of (3.7).
3.5 The Values at the Even Integers
In 1644 Pietro Mengoli proposed a problem to all mathematicians: determine the value of the infinite sum
$$\sum_{n=1}^{\infty}\frac{1}{n^2}.$$
This problem, which is known as the Basel problem, was eventually solved by Euler in 1735 after multiple renowned mathematicians had failed. Euler's solution granted him instant fame in the mathematical community, but that was hardly a reason for Euler to stop there. Almost as if to rub it in their faces, Euler derived and proved the general formula for which the Basel problem is a special case [Dun]. The purpose of this section is to present this formula, which happens to give the values of Euler's zeta series at the even integers, as well as two proofs.
Theorem 3.5.1. For $k\ge1$,
$$\zeta(2k) = (-1)^{k-1}\frac{B_{2k}(2\pi)^{2k}}{2(2k)!}.$$

3.5.1 First Proof

We start with the product formula for $\sin(\pi x)$, which was produced at the end of Section 3.3,
$$\sin(\pi x) = \pi x\prod_{k=1}^{\infty}\left(1-\frac{x^2}{k^2}\right).$$
After taking the natural logarithm of both sides and substituting $x = iu/(2\pi)$ we have
$$\log\sin\left(\frac{iu}{2}\right) = \log\left(\frac{iu}{2}\right)+\sum_{k=1}^{\infty}\log\left(1-\left(\frac{iu}{2\pi k}\right)^{2}\right),$$
and after some simplification we get
$$\log\sin\left(\frac{iu}{2}\right) = \log\left(\frac{iu}{2}\right)+\sum_{k=1}^{\infty}\log\left(1+\frac{u^2}{4\pi^2k^2}\right).$$
We can then differentiate both sides with respect to $u$; note that we can differentiate the infinite sum term by term because we proved that we could in the previous section. This gives
$$\frac i2\cdot\frac{\cos\left(\frac{iu}{2}\right)}{\sin\left(\frac{iu}{2}\right)} = \frac1u+\sum_{k=1}^{\infty}\frac{2u}{4k^2\pi^2+u^2}. \qquad(3.10)$$
Focusing on the left hand side and using the identities
$$\cos(z) = \frac{e^{iz}+e^{-iz}}{2} \quad\text{and}\quad \sin(z) = \frac{e^{iz}-e^{-iz}}{2i}$$
gives us
$$\frac i2\cdot\frac{\cos\left(\frac{iu}{2}\right)}{\sin\left(\frac{iu}{2}\right)} = \frac i2\cdot\frac{\;\frac{e^{-u/2}+e^{u/2}}{2}\;}{\;\frac{e^{-u/2}-e^{u/2}}{2i}\;} = \frac12\cdot\frac{e^{u/2}+e^{-u/2}}{e^{u/2}-e^{-u/2}} = \frac12\cdot\frac{e^u+1}{e^u-1} = \frac12\cdot\frac{(e^u-1)+2}{e^u-1} = \frac12+\frac{1}{e^u-1}.$$
Then, after substituting this back into equation (3.10), multiplying by $u$ and moving a term over, we end up with
$$\frac{u}{e^u-1}+\frac u2-1 = \sum_{k=1}^{\infty}\frac{2u^2}{4k^2\pi^2+u^2}. \qquad(3.11)$$
Now for the homestretch. Focusing on the left hand side we have
$$\frac{u}{e^u-1}+\frac u2-1 = \sum_{k=0}^{\infty}\frac{B_ku^k}{k!}+\frac u2-1 = \sum_{k=2}^{\infty}\frac{B_ku^k}{k!} = \sum_{k=1}^{\infty}\frac{B_{2k}u^{2k}}{(2k)!}, \qquad(3.12)$$
since $B_0 = 1$ and $B_1 = -1/2$ cancel against the extra terms, and the odd Bernoulli numbers beyond $B_1$ vanish by theorem 2.3.1.
Meanwhile, expanding the right hand side of (3.11) with a geometric series gives
$$\sum_{k=1}^{\infty}\frac{2u^2}{4k^2\pi^2+u^2} = \sum_{k=1}^{\infty}\frac{2u^2}{(2k\pi)^2}\cdot\frac{1}{1+\left(\frac{u}{2k\pi}\right)^2} = \sum_{k=1}^{\infty}\frac{2u^2}{(2k\pi)^2}\sum_{j=0}^{\infty}(-1)^j\left(\frac{u}{2k\pi}\right)^{2j} = \sum_{k=1}^{\infty}\sum_{j=0}^{\infty}2(-1)^j\left(\frac{u}{2k\pi}\right)^{2j+2}$$
$$= \sum_{j=1}^{\infty}\sum_{k=1}^{\infty}2(-1)^{j-1}\left(\frac{u}{2k\pi}\right)^{2j} = \sum_{j=1}^{\infty}\frac{2(-1)^{j-1}}{(2\pi)^{2j}}\left(\sum_{k=1}^{\infty}\frac{1}{k^{2j}}\right)u^{2j} = \sum_{j=1}^{\infty}\frac{2(-1)^{j-1}}{(2\pi)^{2j}}\,\zeta(2j)\,u^{2j}. \qquad(3.13)$$
Finally, changing the index from $j$ to $k$ and substituting equations (3.12) and (3.13) back into equation (3.11) yields
$$\sum_{k=1}^{\infty}\frac{B_{2k}}{(2k)!}u^{2k} = \sum_{k=1}^{\infty}\frac{2(-1)^{k-1}}{(2\pi)^{2k}}\,\zeta(2k)\,u^{2k}. \qquad(3.14)$$
Equating the coefficients of $u^{2k}$ on both sides and solving for $\zeta(2k)$ completes the proof of theorem 3.5.1.
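The resulting formula is easy to test numerically. The Python sketch below (our own illustration) computes $B_{2k}$ from the recursion (2.3) and compares theorem 3.5.1 against $\pi^2/6$, $\pi^4/90$, and a direct partial sum:

```python
import math
from fractions import Fraction

def bernoulli_numbers(m_max):
    # recursion (2.3): B_m = -(1/(m+1)) * sum_{j<m} C(m+1, j) B_j
    B = [Fraction(1)]
    for m in range(1, m_max + 1):
        B.append(Fraction(-sum(math.comb(m + 1, j) * B[j] for j in range(m)), m + 1))
    return B

def zeta_even(k, B):
    """Theorem 3.5.1: zeta(2k) = (-1)^(k-1) (2 pi)^(2k) B_{2k} / (2 (2k)!)."""
    return ((-1) ** (k - 1) * (2 * math.pi) ** (2 * k) * float(B[2 * k])
            / (2 * math.factorial(2 * k)))

B = bernoulli_numbers(8)
direct = sum(1.0 / n ** 2 for n in range(1, 10 ** 6))
print(zeta_even(1, B), math.pi ** 2 / 6, direct)
print(zeta_even(2, B), math.pi ** 4 / 90)
```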
3.5.2 Second Proof
This proof begins with the Fourier series of the first Bernoulli polynomial, which for $0 < x < 1$ is
$$B_1(x) = x-\frac12 = -\sum_{k=1}^{\infty}\frac{\sin(2\pi kx)}{\pi k}.$$
Since $B_2'(x) = 2B_1(x)$, integrating both sides term by term gives
$$B_2(u)-B_2(0) = \int_0^u2B_1(x)\,dx = \sum_{k=1}^{\infty}\int_0^u-\frac{2\sin(2\pi kx)}{\pi k}\,dx = \sum_{k=1}^{\infty}\frac{\cos(2\pi ku)-1}{\pi^2k^2}.$$
Integrating once more over $[0,1]$ and using the facts that $\int_0^1B_2(x)\,dx = 0$ and $\int_0^1\cos(2\pi kx)\,dx = 0$ shows that the constant terms must cancel, leaving
$$B_2(x) = \frac{1}{\pi^2}\sum_{k=1}^{\infty}\frac{\cos(2\pi kx)}{k^2}.$$
Plugging in $x = 0$ yields
$$\frac16 = B_2(0) = \frac{1}{\pi^2}\sum_{k=1}^{\infty}\frac{1}{k^2},$$
that is,
$$\zeta(2) = \pi^2/6,$$
which solves the Basel problem. Repeating this process of integrating term by term and matching constants produces, for $k\ge1$,
$$\sum_{n=1}^{\infty}\frac{\cos(2\pi nx)}{n^{2k}} = (-1)^{k-1}\frac{(2\pi)^{2k}B_{2k}(x)}{2(2k)!}$$
and
$$\sum_{n=1}^{\infty}\frac{\sin(2\pi nx)}{n^{2k+1}} = (-1)^{k-1}\frac{(2\pi)^{2k+1}B_{2k+1}(x)}{2(2k+1)!}.$$
Setting $x = 0$ in the first equation recovers theorem 3.5.1, while setting $x = 0$ in the second equation reaffirms that the odd Bernoulli numbers (besides $B_1$) are zero.
3.6 The Euler-Maclaurin Summation Formula

This section will concern itself with the derivation of the Euler-Maclaurin summation formula, which is a means to compute the error when approximating a sum with an integral:
$$\sum_{n=M}^{N}f(n) \approx \int_M^Nf(x)\,dx+\frac12\big[f(M)+f(N)\big]. \qquad(3.15)$$
The error turns out to be an integral against the sawtooth $x-\lfloor x\rfloor-\frac12$:
$$\int_M^N\left(x-\lfloor x\rfloor-\frac12\right)f'(x)\,dx = \sum_{n=M}^{N-1}\int_n^{n+1}\left(x-\lfloor x\rfloor-\frac12\right)f'(x)\,dx = \sum_{n=M}^{N-1}\int_0^1\left(t-\frac12\right)f'(n+t)\,dt. \qquad(3.16)$$
To verify this, integrate by parts with
$$u = t-\frac12,\qquad dv = f'(n+t)\,dt,\qquad v = f(n+t),$$
so that
$$\int_M^N\left(x-\lfloor x\rfloor-\frac12\right)f'(x)\,dx = \sum_{n=M}^{N-1}\left(\Big[(t-1/2)f(n+t)\Big]_0^1-\int_0^1f(n+t)\,dt\right)$$
$$= \sum_{n=M}^{N-1}\frac12\big(f(n+1)+f(n)\big)-\sum_{n=M}^{N-1}\int_n^{n+1}f(x)\,dx = \frac12f(M)+f(M+1)+\cdots+f(N-1)+\frac12f(N)-\int_M^Nf(x)\,dx.$$
By inspection we can see that if we add this final line to the right hand side of equation (3.15) it will cause both sides to be equal.
The Euler-Maclaurin summation formula is the result of repeated integration by parts on the error term that we just verified. Before we start integrating like mad, it will benefit us to rewrite (3.16) in terms of Bernoulli polynomials. Recall that $B_1(x) = x-1/2$. So
$$\int_M^N\left(x-\lfloor x\rfloor-\frac12\right)f'(x)\,dx = \sum_{n=M}^{N-1}\int_0^1B_1(t)f'(n+t)\,dt.$$
This time we integrate by parts with
$$dv = B_1(t)\,dt,\qquad v = \frac12B_2(t).$$
$$\int_M^N\left(x-\lfloor x\rfloor-\frac12\right)f'(x)\,dx = \sum_{n=M}^{N-1}\left(\Big[\tfrac12B_2(t)f'(n+t)\Big]_0^1-\frac12\int_0^1B_2(t)f''(n+t)\,dt\right)$$
$$= \frac{B_2(0)}{2}\big[f'(x)\big]_M^N-\frac12\int_M^NB_2(x-\lfloor x\rfloor)f''(x)\,dx.$$
The first sum telescoped because of corollary 2.4.2, which says that $B_k(1) = B_k(0)$ for $k\ge2$. When we repeat the process on the final integral we obtain
$$\int_M^N\left(x-\lfloor x\rfloor-\frac12\right)f'(x)\,dx = \frac{B_2(0)}{2!}\big[f'(x)\big]_M^N-\frac{B_3(0)}{3!}\big[f''(x)\big]_M^N+\cdots+\frac{(-1)^kB_k(0)}{k!}\big[f^{(k-1)}(x)\big]_M^N+\frac{(-1)^{k+1}}{k!}\int_M^NB_k(x-\lfloor x\rfloor)f^{(k)}(x)\,dx.$$
Since $B_k(0)$ is the $k$th Bernoulli number, and $B_k = 0$ for odd $k > 1$, this can be simplified, and in conjunction with equation (3.15) we end up with the Euler-Maclaurin summation formula:
$$\sum_{n=M}^{N}f(n) = \int_M^Nf(x)\,dx+\frac12\big[f(M)+f(N)\big]+\frac{B_2}{2!}\big[f'(x)\big]_M^N+\frac{B_4}{4!}\big[f'''(x)\big]_M^N+\cdots+\frac{B_{2\nu}}{(2\nu)!}\big[f^{(2\nu-1)}(x)\big]_M^N+R_{2\nu},$$
where
$$R_{2\nu} = \frac{1}{(2\nu+1)!}\int_M^NB_{2\nu+1}(x-\lfloor x\rfloor)\,f^{(2\nu+1)}(x)\,dx.$$
3.7 Approximating the Other Values

To approximate $\zeta(s)$ we apply the Euler-Maclaurin summation formula. Recall that
$$\zeta(s) = \sum_{n=1}^{\infty}\frac{1}{n^s},$$
which means that $f(n) = 1/n^s$ and $[M,N] = [1,\infty]$. Note that the $k$th derivative of $f$ is
$$f^{(k)}(n) = \frac{(-1)^ks(s+1)\cdots(s+k-1)}{n^{s+k}}.$$
What if we instead applied the formula to the tail
$$\sum_{n=X}^{\infty}\frac{1}{n^s}$$
and added back in the first $X-1$ terms afterward? Well, then we would still have $f(n) = 1/n^s$ but our interval would change to $[M,N] = [X,\infty]$. Then
$$\big[f^{(k)}(n)\big]_M^N = \frac{(-1)^{k+1}s(s+1)\cdots(s+k-1)}{X^{s+k}}.$$
Now, if X is fairly large then the terms in the summation formula get
smaller much faster, and we end up with a better accuracy without calculating too many terms. Keep in mind though that by doing it this way,
we still have to calculate the value of
$$\sum_{n=1}^{X-1}\frac{1}{n^s}.$$
Applying the Euler-Maclaurin formula on $[X,\infty]$ gives
$$\zeta(s) = \sum_{n=1}^{X-1}\frac{1}{n^s}+\int_X^{\infty}\frac{du}{u^s}+\frac{1}{2X^s}+\frac{B_2}{2!}\big[f'(x)\big]_X^{\infty}+\frac{B_4}{4!}\big[f'''(x)\big]_X^{\infty}+\cdots+\frac{B_{2\nu}}{(2\nu)!}\big[f^{(2\nu-1)}(x)\big]_X^{\infty}+R_{2\nu}.$$
After simplification this finally becomes
$$\zeta(s) = \sum_{n=1}^{X-1}\frac{1}{n^s}+\frac{X^{1-s}}{s-1}+\frac{1}{2X^s}+\frac{B_2}{2!}\cdot\frac{s}{X^{s+1}}+\frac{B_4}{4!}\cdot\frac{s(s+1)(s+2)}{X^{s+3}}+\cdots+\frac{B_{2\nu}}{(2\nu)!}\cdot\frac{s(s+1)\cdots(s+2\nu-2)}{X^{s+2\nu-1}}+R_{2\nu},$$
where
$$R_{2\nu} = -\frac{s(s+1)\cdots(s+2\nu)}{(2\nu+1)!}\int_X^{\infty}\frac{B_{2\nu+1}(u-\lfloor u\rfloor)}{u^{s+2\nu+1}}\,du.$$
There are many other methods to approximate the values of Euler's zeta series at these points. If you are interested in additional methods, Borwein, Bradley and Crandall have an article which discusses several computational processes [BBC].
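As a concrete illustration of this section (our own sketch, with the remainder $R_{2\nu}$ simply dropped and the needed Bernoulli numbers hard-coded), the following Python function applies the simplified formula with $X = 10$ and $\nu = 3$:

```python
import math

def zeta_em(s, X=10, nu=3):
    """Euler-Maclaurin approximation of zeta(s) for real s > 1,
    following the simplified formula of section 3.7 (remainder dropped)."""
    B = {2: 1 / 6, 4: -1 / 30, 6: 1 / 42, 8: -1 / 30}  # even Bernoulli numbers
    total = sum(1.0 / n ** s for n in range(1, X))       # first X-1 terms
    total += X ** (1 - s) / (s - 1) + 1.0 / (2 * X ** s)
    rising = s  # s(s+1)...(s+2j-2), built up term by term
    for j in range(1, nu + 1):
        total += B[2 * j] / math.factorial(2 * j) * rising / X ** (s + 2 * j - 1)
        rising *= (s + 2 * j - 1) * (s + 2 * j)
    return total

print(zeta_em(2.0), math.pi ** 2 / 6)
print(zeta_em(3.0))  # Apery's constant, approximately 1.2020569
```

Even with these small parameters the approximation agrees with $\zeta(2) = \pi^2/6$ to roughly ten decimal places, illustrating how quickly the correction terms shrink once $X$ is moderately large.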
Chapter 4
Introduction
4.2 Jacobi's Theta Function

Jacobi's theta function is defined as
$$\theta(t) = \sum_{k=-\infty}^{\infty}e^{-\pi k^2t}, \qquad(4.1)$$
and it satisfies the transformation property
$$\theta\!\left(\frac1t\right) = t^{1/2}\,\theta(t). \qquad(4.2)$$
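The transformation property (4.2) can be checked numerically before we prove it. The Python sketch below (our own illustration) truncates the theta series at $|k|\le50$, which is far more than enough for the arguments used:

```python
import math

def theta(t, n_max=50):
    """Jacobi's theta function (4.1), truncated to |k| <= n_max."""
    return sum(math.exp(-math.pi * k * k * t) for k in range(-n_max, n_max + 1))

# Transformation property (4.2): theta(1/t) = sqrt(t) * theta(t)
for t in (0.5, 1.0, 2.3):
    print(theta(1 / t), math.sqrt(t) * theta(t))
```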
Lemma 4.2.2. Let $m$ and $l$ be positive integers and let $\omega = e^{2\pi i/l}$. Then
$$\sum_{-\frac l2\le n<\frac l2}\left(\frac{\omega^{n/2}+\omega^{-n/2}}{2}\right)^{2m} = \frac{l}{2^{2m}}\sum_{k=-\lfloor m/l\rfloor}^{\lfloor m/l\rfloor}\binom{2m}{m+kl}.$$
Proof. Let $m$ and $l$ be positive integers and let $\omega = e^{2\pi i/l}$. Note that
$$\omega^l = e^{2\pi i} = \cos(2\pi)+i\sin(2\pi) = 1.$$
Now let $z = \omega^n$ and sum equation (4.4) over $-l/2\le n<l/2$. Then swap the sums (which we can do because both sums are independent of each other and finite) to yield
$$\sum_{-\frac l2\le n<\frac l2}\left(\frac{\omega^{n/2}+\omega^{-n/2}}{2}\right)^{2m} = \sum_{-\frac l2\le n<\frac l2}\frac{1}{2^{2m}}\sum_{j=-m}^{m}\binom{2m}{m+j}\omega^{nj} = \frac{1}{2^{2m}}\sum_{j=-m}^{m}\binom{2m}{m+j}\sum_{-\frac l2\le n<\frac l2}\omega^{nj}. \qquad(4.5)$$
Claim:
$$\sum_{-\frac l2\le n<\frac l2}\omega^{nj} = \begin{cases}l & \text{if } l\mid j,\\ 0 & \text{otherwise.}\end{cases} \qquad(4.6)$$
Because $\omega^l = 1$, the summand $\omega^{nj}$ is periodic in $n$ with period $l$, so
$$\sum_{-\frac l2\le n<\frac l2}\omega^{nj} = \sum_{0\le n<l}\omega^{nj}.$$
If $l$ divides $j$ then every term equals 1 and the sum is $l$. Otherwise $\omega^j\neq1$, and shifting the index by one gives
$$\sum_{0\le n<l}\omega^{nj} = \sum_{0\le n<l}\omega^{(n+1)j} = \omega^j\sum_{0\le n<l}\omega^{nj},$$
which forces the sum to be zero, proving the claim.
Applying the claim to (4.5), only the indices $j = kl$ contribute, each with a factor of $l$, so
$$\sum_{-\frac l2\le n<\frac l2}\left(\frac{\omega^{n/2}+\omega^{-n/2}}{2}\right)^{2m} = \frac{l}{2^{2m}}\sum_{k=-\lfloor m/l\rfloor}^{\lfloor m/l\rfloor}\binom{2m}{m+kl},$$
which finishes the proof of lemma 4.2.2.
Lemma 4.2.3. If $t > 0$ then
$$\lim_{m\to\infty}\sum_{-\frac l2\le n<\frac l2}\left(\frac{\omega^{n/2}+\omega^{-n/2}}{2}\right)^{2m} = \sum_{n=-\infty}^{\infty}e^{-\pi n^2/t} = \theta\!\left(\frac1t\right).$$
Proof. Let $t > 0$, and keep it fixed. Up until now the only restriction placed on $l$ has been that it is a positive integer; now let $l = \lfloor\sqrt{\pi mt}\rfloor$. Note that this implies both that
$$l^2\le\pi mt \qquad\text{and}\qquad \frac{l^2}{\pi t}\sim m \quad\text{as } m\to\infty. \qquad(4.7)$$
Since $\cosh(x) = \frac{e^x+e^{-x}}{2}$ and $\omega^{1/2} = e^{i\pi/l}$, we have
$$\left(\frac{\omega^{n/2}+\omega^{-n/2}}{2}\right)^{2m} = \left(\frac{e^{i\pi n/l}+e^{-i\pi n/l}}{2}\right)^{2m} = \big[\cosh(i\pi n/l)\big]^{2m},$$
and using the Maclaurin expansion $\cosh(x) = 1+\frac{x^2}{2}+O(x^4)$ this becomes
$$\left(\frac{\omega^{n/2}+\omega^{-n/2}}{2}\right)^{2m} = \left[1+\frac{(i\pi n)^2}{2l^2}+O\!\left(\left(\frac{i\pi n}{l}\right)^{4}\right)\right]^{2m} = \left[1-\frac{\pi^2n^2}{2l^2}+O\!\left(\frac{n^4}{l^4}\right)\right]^{2m}.$$
For each fixed $n$ we can drop the big-O term, since it becomes insignificant as $l$ goes to infinity, and rewrite the remaining expression as
$$\lim_{m\to\infty}\left(\frac{\omega^{n/2}+\omega^{-n/2}}{2}\right)^{2m} = \lim_{m\to\infty}\left[1+\frac{-\pi n^2/t}{2l^2/(\pi t)}\right]^{2m}.$$
Then, since $2l^2/(\pi t)\sim2m\to\infty$, we can use equation (4.3) to conclude that the right hand side goes to $e^{-\pi n^2/t}$ as $l$ and $m$ go to infinity. Thus we have our intended result of
$$\lim_{m\to\infty}\sum_{-\frac l2\le n<\frac l2}\left(\frac{\omega^{n/2}+\omega^{-n/2}}{2}\right)^{2m} = \sum_{n=-\infty}^{\infty}e^{-\pi n^2/t} = \theta\!\left(\frac1t\right). \qquad(4.8)$$
Lemma 4.2.4.
$$\lim_{m\to\infty}\frac{\lfloor\sqrt{\pi mt}\rfloor}{2^{2m}}\sum_{k=-\lfloor m/l\rfloor}^{\lfloor m/l\rfloor}\binom{2m}{m+kl} = \sqrt t\sum_{k=-\infty}^{\infty}e^{-\pi k^2t} = \sqrt t\,\theta(t).$$
Proof. Since $l = \lfloor\sqrt{\pi mt}\rfloor$ we have $\lfloor\sqrt{\pi mt}\rfloor\sim\sqrt{\pi t}\,\sqrt m$. Thus
$$\lim_{m\to\infty}\frac{\lfloor\sqrt{\pi mt}\rfloor}{2^{2m}}\sum_{k=-\lfloor m/l\rfloor}^{\lfloor m/l\rfloor}\binom{2m}{m+kl} = \sqrt{\pi t}\,\lim_{m\to\infty}\sum_{k=-\lfloor m/l\rfloor}^{\lfloor m/l\rfloor}\frac{\sqrt m}{2^{2m}}\binom{2m}{m+kl}.$$
Writing out the binomial coefficient,
$$\frac{1}{2^{2m}}\binom{2m}{m+kl} = \frac{1}{2^{2m}}\cdot\frac{(2m)!}{(m+kl)!\,(m-kl)!}.$$
Using Stirling's Formula,
$$n! \sim \left(\frac ne\right)^n\sqrt{2\pi n},$$
we get
$$\frac{1}{2^{2m}}\binom{2m}{m+kl} \sim \frac{1}{2^{2m}}\cdot\frac{\left(\frac{2m}{e}\right)^{2m}\sqrt{4\pi m}}{\left(\frac{m+kl}{e}\right)^{m+kl}\sqrt{2\pi(m+kl)}\,\left(\frac{m-kl}{e}\right)^{m-kl}\sqrt{2\pi(m-kl)}} = \frac{m^{2m}}{(m+kl)^{m+kl}(m-kl)^{m-kl}}\cdot\frac{1}{\sqrt{\pi m}}\sqrt{\frac{m}{m+kl}\cdot\frac{m}{m-kl}}. \qquad(4.9)$$
The square-root factor causes no trouble, since
$$\sqrt{\frac{m}{m+kl}\cdot\frac{m}{m-kl}} = \frac{1}{\sqrt{1-k^2l^2/m^2}} \to 1 \quad\text{as } m\to\infty.$$
For the main factor we write
$$\frac{m^{2m}}{(m+kl)^{m+kl}(m-kl)^{m-kl}} = \frac{m^{2m}}{(m^2-k^2l^2)^m}\cdot\frac{(m-kl)^{kl}}{(m+kl)^{kl}}. \qquad(4.10)$$
Now, when we dissect the right hand side of (4.10), the left term yields
$$\frac{m^{2m}}{(m^2-k^2l^2)^m} = \frac{1}{(1-k^2l^2/m^2)^m} \sim \frac{1}{(1-\pi k^2t/m)^m} \to e^{\pi k^2t} \quad\text{as } m\to\infty,$$
where we used $l^2\sim\pi mt$.
Now we can focus on the right term of (4.10). Recalling that the Maclaurin series of $(1-x)/(1+x)$ yields the approximation
$$\frac{1-x}{1+x} = 1-2x+O(x^2),$$
we can then apply this to the right term of (4.10) to get
$$\frac{(m-kl)^{kl}}{(m+kl)^{kl}} = \left(\frac{1-lk/m}{1+lk/m}\right)^{lk} = \left[1-\frac{2lk}{m}+O\!\left(\left(\frac{lk}{m}\right)^2\right)\right]^{lk} \sim \left[1-\frac{2\pi tk^2}{lk}\right]^{lk} \to e^{-2\pi k^2t} \quad\text{as } m\to\infty,$$
since $\frac{2lk}{m} = \frac{2\pi tk^2}{lk}\cdot\frac{l^2}{\pi tm}$ and $l^2/(\pi t)\sim m$.
Putting the pieces together,
$$\lim_{m\to\infty}\frac{\lfloor\sqrt{\pi mt}\rfloor}{2^{2m}}\sum_{k=-\lfloor m/l\rfloor}^{\lfloor m/l\rfloor}\binom{2m}{m+kl} = \sqrt{\pi t}\,\lim_{m\to\infty}\sum_{k=-\lfloor m/l\rfloor}^{\lfloor m/l\rfloor}\frac{\sqrt m}{2^{2m}}\binom{2m}{m+kl} = \sqrt{\pi t}\sum_{k=-\infty}^{\infty}\frac{1}{\sqrt\pi}\,e^{\pi k^2t}\,e^{-2\pi k^2t} = \sqrt t\sum_{k=-\infty}^{\infty}e^{-\pi k^2t} = \sqrt t\,\theta(t).$$
Combining lemmas 4.2.2, 4.2.3 and 4.2.4 now gives
$$\theta\!\left(\frac1t\right) = \lim_{m\to\infty}\sum_{-\frac l2\le n<\frac l2}\left(\frac{\omega^{n/2}+\omega^{-n/2}}{2}\right)^{2m} = \lim_{m\to\infty}\frac{\lfloor\sqrt{\pi mt}\rfloor}{2^{2m}}\sum_{k=-\lfloor m/l\rfloor}^{\lfloor m/l\rfloor}\binom{2m}{m+kl} = \sqrt t\,\theta(t),$$
which is exactly the transformation property (4.2).
4.3 The Functional Equation
We begin with the integral
$$\int_0^{\infty}e^{-\pi n^2t}\,t^{s/2}\,\frac{dt}{t}.$$
Substituting $x = \pi n^2t$ yields
$$\int_0^{\infty}e^{-\pi n^2t}t^{s/2}\frac{dt}{t} = \int_0^{\infty}e^{-x}\,\pi^{-s/2}n^{-s}\,x^{s/2}\frac{dx}{x} = \frac{1}{\pi^{s/2}n^s}\int_0^{\infty}e^{-x}x^{s/2-1}\,dx = \frac{1}{\pi^{s/2}n^s}\,\Gamma(s/2).$$
Let $\Re(s) > 1$, and then sum both sides from $n = 1$ to $\infty$ to get
$$\xi(s) := \pi^{-s/2}\,\Gamma(s/2)\,\zeta(s) = \sum_{n=1}^{\infty}\frac{\pi^{-s/2}\Gamma(s/2)}{n^s} = \sum_{n=1}^{\infty}\int_0^{\infty}e^{-\pi n^2t}t^{s/2}\frac{dt}{t}.$$
To keep from getting sidetracked we will simply switch the order of the sum and the integral, while noting that to prove this is legal is not a trivial procedure. Thus we have
$$\xi(s) = \int_0^{\infty}f(t)\,t^{s/2}\,\frac{dt}{t} \qquad(4.11)$$
where
$$f(t) = \sum_{n=1}^{\infty}e^{-\pi n^2t}.$$
Recall Jacobi's theta function (section 4.2) and its corresponding transformation property (4.2). Note that
$$1+2f(t) = 1+2\sum_{n=1}^{\infty}e^{-\pi n^2t} = e^{-\pi(0)^2t}+\sum_{n=1}^{\infty}e^{-\pi n^2t}+\sum_{n=1}^{\infty}e^{-\pi n^2t} = \sum_{n=-\infty}^{\infty}e^{-\pi n^2t} = \theta(t) = t^{-1/2}\,\theta(t^{-1}) = t^{-1/2}\big(1+2f(t^{-1})\big).$$
Then solving for $f(t)$ in the first and last lines yields
$$f(t) = \frac12\big(t^{-1/2}-1\big)+t^{-1/2}f(t^{-1}). \qquad(4.12)$$
Splitting (4.11) at $t = 1$ and applying (4.12) on $(0,1)$ gives
$$\xi(s) = \int_0^{\infty}f(t)t^{s/2}\frac{dt}{t} = \int_0^1f(t)t^{s/2}\frac{dt}{t}+\int_1^{\infty}f(t)t^{s/2}\frac{dt}{t}$$
$$= \int_0^1\left[\frac12\big(t^{-1/2}-1\big)+t^{-1/2}f(t^{-1})\right]t^{s/2}\frac{dt}{t}+\int_1^{\infty}f(t)t^{s/2}\frac{dt}{t}$$
$$= \frac12\int_0^1\big(t^{s/2-3/2}-t^{s/2-1}\big)\,dt+\int_0^1t^{-1/2}f(t^{-1})\,t^{s/2}\frac{dt}{t}+\int_1^{\infty}f(t)t^{s/2}\frac{dt}{t}.$$
For $\Re(s) > 1$ the first integral evaluates to
$$\frac12\int_0^1\big(t^{s/2-3/2}-t^{s/2-1}\big)\,dt = \frac12\left[\frac{t^{(s-1)/2}}{(s-1)/2}-\frac{t^{s/2}}{s/2}\right]_0^1 = \frac{1}{s-1}-\frac1s.$$
In the second, the substitution $x = t^{-1}$ gives
$$\int_0^1t^{(s-1)/2}f(t^{-1})\frac{dt}{t} = \int_1^{\infty}x^{(1-s)/2}f(x)\frac{dx}{x}. \qquad(4.14)$$
Putting everything together,
$$\xi(s) = \frac{1}{s-1}-\frac1s+\int_1^{\infty}t^{(1-s)/2}f(t)\frac{dt}{t}+\int_1^{\infty}f(t)t^{s/2}\frac{dt}{t} = \frac{1}{s(s-1)}+\int_1^{\infty}\big(t^{(1-s)/2}+t^{s/2}\big)f(t)\frac{dt}{t}.$$
The integral on the right converges for every $s\in\mathbb C$, and the whole right hand side is unchanged when $s$ is replaced by $1-s$. This extends $\zeta(s)$ to the complex plane and yields the functional equation $\xi(s) = \xi(1-s)$.
4.4
4.4.1
4.4.2
Recall from theorem 3.5.1 that
$$\zeta(2k) = \sum_{n=1}^{\infty}\frac{1}{n^{2k}} = (-1)^{k-1}\frac{B_{2k}(2\pi)^{2k}}{2(2k)!}.$$
Applying the functional equation $\xi(s) = \xi(1-s)$ with $s = 2k$ and solving for $\zeta(1-2k)$ gives
$$\zeta(1-2k) = \frac{(k-1)!}{\pi^{2k-1/2}\,\Gamma(1/2-k)}\,\zeta(2k) = \frac{(k-1)!}{\pi^{2k-1/2}\,\Gamma(1/2-k)}\cdot(-1)^{k-1}\frac{B_{2k}(2\pi)^{2k}}{2(2k)!}.$$
Which finally gives us the formula for the values of the Riemann zeta function at the negative odd integers:
$$\zeta(1-2k) = -\frac{B_{2k}}{2k}.$$
Chapter 5
First we define
$$Z = \{s\in\mathbb C \mid \zeta(s) = 0 \text{ and } 0\le\Re(s)\le1\}.$$
In words, $Z$ is the set of nontrivial zeros of the Riemann zeta function. Another helpful definition is that whenever we mention $\sigma$ or $t$ we are talking about the real and imaginary components of the variable plugged into $\zeta(s)$, meaning that if $s\in\mathbb C$ then $s = \sigma+it$. The classic Riemann zeta hypothesis then becomes:
$$Z\subseteq\{s\in\mathbb C\mid\Re(s) = 1/2\},$$
which is sometimes also seen as $Z\subseteq\{s\in\mathbb C\mid\sigma = 1/2\}$. As a side note, another useful convention is that $\rho$ usually represents a nontrivial zero of $\zeta(s)$.
There are also other, though more unnatural, ways to express this hypothesis. For example, the following two hypotheses, which were supplied by S. J. Patterson, are equivalent to Riemann's zeta hypothesis.

(i) $\zeta'(s)/\zeta(s)+(s-1)^{-1}$ is holomorphic in $\{s\in\mathbb C\mid\Re(s) > 1/2\}$.

(ii) $\log\{(s-1)\,\zeta(s)\}$ is holomorphic in $\{s\in\mathbb C\mid\Re(s) > 1/2\}$.
5.2 Additional Consequences
Now that we have clarified what the Riemann zeta hypothesis is, we can start looking at some of the numerous results that pop up if the hypothesis is assumed to be true. The following two theorems, which are proved in Patterson [Pat], create a direct link between the Riemann zeta hypothesis and the distribution of the primes.

Theorem 5.2.1. Suppose there exists α < 1 such that
\[
Z \subseteq \{\, s \in \mathbb{C} \mid 1 - \alpha \le \Re(s) \le \alpha \,\}.
\]
Then as X → ∞,
\[
\sum_{n \le X} \Lambda(n) = X + O\bigl(X^{\alpha} (\log X)^2\bigr).
\]

Theorem 5.2.2. Conversely, suppose there exists β such that as X → ∞,
\[
\sum_{n \le X} \Lambda(n) = X + O\bigl(X^{\beta}\bigr);
\]
then
\[
Z \subseteq \{\, s \in \mathbb{C} \mid \Re(s) \le \beta \,\}.
\]
Just to clarify, Λ(n) is von Mangoldt's function and is defined as
\[
\Lambda(n) =
\begin{cases}
\log p & \text{if } n = p^k \text{ for some prime } p \text{ and some integer } k > 0, \\
0 & \text{otherwise.}
\end{cases}
\]
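A small sketch of Λ(n) and of the Chebyshev sum Σ_{n≤X} Λ(n), which Theorems 5.2.1 and 5.2.2 say is close to X; the function names `mangoldt` and `psi` and the trial-division approach are my own choices, adequate for small X.

```python
import math

def mangoldt(n):
    # Lambda(n) = log p if n is a power of a single prime p, else 0
    if n < 2:
        return 0.0
    for p in range(2, n + 1):
        if n % p == 0:            # p is the smallest prime factor of n
            m = n
            while m % p == 0:
                m //= p
            return math.log(p) if m == 1 else 0.0
    return 0.0

def psi(X):
    # Chebyshev's function: sum of Lambda(n) for n <= X
    return sum(mangoldt(n) for n in range(1, X + 1))

# psi(X) tracks X closely, e.g. psi(1000) is within a few units of 1000
```

For instance, Λ(8) = log 2 and Λ(12) = 0, while ψ(1000)/1000 is already close to 1.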
Currently, these theorems have not been terribly useful because no one has found an α or a β that satisfies them. However, when combined with the validity of the Riemann hypothesis they say a lot regarding the theory surrounding the prime number theorem. Although it is obvious that a proof of the prime number theorem does not require the Riemann zeta hypothesis to be true (because if it did we wouldn't have a proof of the prime number theorem), it does rely on a small part of the hypothesis, namely that if ρ is a nontrivial zero of ζ(s), then ℜ(ρ) ≠ 1. However, this does not at all say that a proof of Riemann's zeta hypothesis would not improve the prime number theorem. The prime number theorem is usually stated in two equivalent forms; the first is more intuitive, while the second is more useful for the purposes of this section. The following two theorems come from [Pat].
Theorem 5.2.3 (Intuitive Prime Number Theorem). There exists a constant c > 0 such that
\[
\pi(X) = \int_2^X \frac{dt}{\log t} + O\bigl(X e^{-c(\log X)^{1/2}}\bigr).
\]

Theorem 5.2.4 (Useful Prime Number Theorem). There exists a constant c > 0 so that as X → ∞,
\[
\sum_{n \le X} \Lambda(n) = X + O\bigl(X e^{-c(\log X)^{1/2}}\bigr).
\]
In the second form we can see the relationship between the prime number theorem and Theorems 5.2.1 and 5.2.2, which are concerned with the zeta function. In fact, what the validity of the Riemann hypothesis allows us to do is obtain a tighter bound on the error term in the formula for π(x). On the Riemann hypothesis, Theorem 5.2.1 would tell us that
\[
\sum_{n \le X} \Lambda(n) = X + O\bigl(X^{1/2} (\log X)^2\bigr),
\]
which translates into
\[
\pi(X) = \int_2^X \frac{dt}{\log t} + O\bigl(X^{1/2} \log X\bigr),
\]
a much more precise estimate for the number of primes less than a given magnitude.
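The quality of the logarithmic-integral approximation, and how comfortably the actual error sits inside the RH-sized bound √X log X, can be observed directly. A sketch under my own choices of names (`prime_pi`, `li`), sieve size, and quadrature parameters:

```python
import math

def prime_pi(X):
    # Count the primes <= X with a sieve of Eratosthenes
    is_prime = bytearray([1]) * (X + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(X ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(range(p * p, X + 1, p)))
    return sum(is_prime)

def li(X, n=20000):
    # integral from 2 to X of dt/log t, via t = e^u and composite Simpson's rule
    a, b = math.log(2), math.log(X)
    h = (b - a) / n
    total = math.exp(a) / a + math.exp(b) / b
    for i in range(1, n):
        u = a + i * h
        total += (4 if i % 2 else 2) * math.exp(u) / u
    return total * h / 3

X = 100000
error = abs(prime_pi(X) - li(X))
# The observed deviation (a few dozen) is far inside the RH-sized bound
assert error < math.sqrt(X) * math.log(X)
```

At X = 10⁵ there are 9592 primes, while the integral is close to 9629, so the error is about 37 against an RH bound of a few thousand.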
5.3 Additional Consequences
There are many more consequences of the Riemann hypothesis; I will briefly
touch on a few. I will not go into as much detail with these since most have
little to do with prime distribution.
Another function that pops up when talking about ζ(s) is S(T). If T > 0 is not the ordinate (where the ordinate of a point z is the imaginary component of z) of a zero, let
\[
S(T) = \pi^{-1} \arg \zeta(1/2 + iT).
\]
The convergence of the series
\[
\sum_{n=1}^{\infty} \frac{\mu(n)}{n^s} \qquad (\sigma > 1/2)
\]
is not only a necessary condition for the Riemann hypothesis, but also a sufficient one. Recall that if n = p₁^{α₁} p₂^{α₂} ⋯ p_k^{α_k} is the prime factorization of n, where the pᵢ's are distinct primes and all αᵢ > 0, then the Möbius function is defined as
\[
\mu(n) =
\begin{cases}
1 & \text{if } n = 1, \\
(-1)^k & \text{if } \alpha_1 = \alpha_2 = \cdots = \alpha_k = 1, \\
0 & \text{otherwise.}
\end{cases}
\]
Be careful not to confuse this function with μ(σ), which I will discuss in the next section.
There are also some interesting results surrounding the function
\[
M(x) = \sum_{n \le x} \mu(n).
\]
For instance,
\[
M(x) = O\bigl(x^{1/2+\epsilon}\bigr)
\]
for every ε > 0 is a necessary and sufficient condition for the Riemann hypothesis. In addition to this we have the Mertens hypothesis, which is that
\[
|M(n)| < n^{1/2} \qquad (n > 1).
\]
It turns out that although this is not implied by the Riemann hypothesis, it
does itself imply the Riemann hypothesis.
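Both μ(n) and M(x) are easy to compute, and Mertens' inequality can be observed over a modest range (which of course proves nothing, since the inequality is a statement about all n). A sketch, with the sieve construction and the bound 10⁴ my own choices:

```python
def mobius_upto(N):
    # mu[n] for n <= N, via a smallest-prime-factor sieve
    spf = list(range(N + 1))
    for p in range(2, int(N ** 0.5) + 1):
        if spf[p] == p:                       # p is prime
            for m in range(p * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    mu = [0] * (N + 1)
    mu[1] = 1
    for n in range(2, N + 1):
        p = spf[n]
        m = n // p
        mu[n] = 0 if m % p == 0 else -mu[m]   # a repeated prime factor kills mu
    return mu

N = 10000
mu = mobius_upto(N)
M = 0
for n in range(1, N + 1):
    M += mu[n]
    if n > 1:
        assert abs(M) < n ** 0.5              # Mertens' inequality holds in this range
```

The partial sums stay remarkably small here; M(10⁴) is only about −23, well inside ±100.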
Titchmarsh notes that the weaker hypothesis, M(x) = O(x^{1/2}), is eerily similar to the function
\[
\psi(x) - x = \sum_{n \le x} \Lambda(n) - x,
\]
and also considers in this connection the function
\[
F(x) = \sum_{k=1}^{\infty} \frac{(-1)^{k+1}\, x^k}{(k-1)!\; \zeta(2k)}.
\]
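The factorial in the denominator makes the series for F(x) converge rapidly, so it can be evaluated directly. A sketch; the function names, the tail estimate inside `zeta`, and the truncation at 40 terms are all my own choices.

```python
import math

def zeta(s, N=1000):
    # Direct sum with an integral estimate for the tail; adequate for real s >= 2
    return sum(n ** -s for n in range(1, N + 1)) + N ** (1 - s) / (s - 1)

def F(x, terms=40):
    # F(x) = sum_{k >= 1} (-1)^(k+1) x^k / ((k-1)! * zeta(2k))
    total = 0.0
    for k in range(1, terms + 1):
        total += (-1) ** (k + 1) * x ** k / (math.factorial(k - 1) * zeta(2 * k))
    return total

# The factorial in the denominator makes the series converge for every x;
# for small x the first term x / zeta(2) dominates.
```

For example, F(0.01) agrees with 0.01/ζ(2) to within about 10⁻⁴, the size of the second term.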
5.4 Lindelöf's Hypothesis

Lindelöf's hypothesis is a consequence of the Riemann zeta hypothesis; however, the effects of its veracity are much more accessible and far-reaching, and thus it merits its own section.
Before we get to the hypothesis we must define a new function. Let
\[
\mu(\sigma) = \inf\{\, a \in \mathbb{R} \mid \zeta(\sigma + it) = O(|t|^a) \ \text{as } t \to \infty \,\}.
\]
In other words, |t|^{μ(σ)} is the closest upper bound we can get on the zeta function restricted to the line {c ∈ ℂ | ℜ(c) = σ}. To get a feeling for this we can figure out μ(σ) for select areas right off the bat. For example, we know that ζ(σ + it) is bounded for all σ > 1. Since it is bounded we know that ζ(σ + it) = O(|t|^ε) for all ε > 0, and therefore μ(σ) = 0 for σ > 1. We can then use this result in conjunction with the functional equation to arrive at μ(σ) = 1/2 − σ for σ < 0. This only leaves the critical strip, in which μ(σ) is unknown, and this is where the Lindelöf hypothesis plays its part. Since μ(σ) is a downwards convex function we can outline the possible region for the function on the domain 0 ≤ σ ≤ 1.
The Lindelöf hypothesis states that μ(1/2) = 0. Hardy and Littlewood showed this to be equivalent to the statement that for every positive integer k and every ε > 0,
\[
\frac{1}{T}\int_0^T \left|\zeta\!\left(\tfrac{1}{2} + it\right)\right|^{2k} dt = O(T^{\epsilon}),
\]
and, more generally, that for every σ ≥ 1/2,
\[
\frac{1}{T}\int_0^T \left|\zeta(\sigma + it)\right|^{2k} dt = O(T^{\epsilon}) \quad \text{as } T \to \infty.
\]
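The easy boundary case μ(σ) = 0 for σ > 1 rests on the boundedness of ζ on vertical lines to the right of 1, which follows from the triangle inequality applied to the Dirichlet series; this is simple to illustrate numerically. A sketch, where the function name `zeta`, the truncation at 50 000 terms, and the sample values of t are my own choices:

```python
def zeta(s, terms=50000):
    # Direct Dirichlet series; absolutely convergent for Re(s) > 1
    return sum(n ** -s for n in range(1, terms + 1))

sigma = 2.0
bound = abs(zeta(sigma))              # roughly zeta(2) = pi^2 / 6
for t in [0.0, 5.0, 25.0, 100.0]:
    value = abs(zeta(complex(sigma, t)))
    # By the triangle inequality |zeta(sigma + it)| <= zeta(sigma),
    # so zeta is bounded on the line and mu(sigma) = 0 for sigma > 1
    assert value <= bound + 1e-12
```

The same direct summation fails inside the critical strip, which is one way to see why bounding ζ(1/2 + it) is a genuinely harder problem.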
There are three more statements by Hardy and Littlewood that are equivalent; however, since they deal with Piltz's generalization of Dirichlet's divisor problem (which extends far beyond the scope of this thesis), they will not be discussed [Tit].
Bibliography
[Apo] Tom M. Apostol. Introduction to Analytic Number Theory. Springer-Verlag, 1976.
[BBC] Jonathan M. Borwein, David M. Bradley, and Richard E. Crandall. Computational strategies for the Riemann zeta function. J. Comput. Appl. Math., 121(1-2):247–296, 2000. Numerical analysis in the 20th century, Vol. I, Approximation theory.
[CJ]
[Ivi] Aleksandar Ivic. The Riemann Zeta-Function: The Theory of the Riemann Zeta-Function with Applications. John Wiley & Sons, 1985.
[Kno] Konrad Knopp. Theory and Application of Infinite Series. Dover Publications Inc., second edition, 1990.
[LR]
[Pat]
S. J. Patterson. An Introduction to the Theory of the Riemann ZetaFunction. Cambridge University Press, 1995.
[Sto]
[Tit] E. C. Titchmarsh. The Theory of the Riemann Zeta-Function. Oxford University Press.