
6 Time-dependent Problems

In this chapter we will address the solution of time-dependent problems. As a prototype of a time-dependent problem, we will focus on the one-dimensional heat equation.
6.1 Time-dependent Problems - Problem Formulation
As a concrete example, we will study the time-dependent, one-dimensional heat conduction equation

\frac{\partial T(x,t)}{\partial t} - \alpha \frac{\partial^2 T(x,t)}{\partial x^2} = 0, \qquad x \in [0, L]   (6.1)

with boundary conditions

-k \frac{\partial T(0,t)}{\partial x} + h\,(T(0,t) - T_\infty) = 0, \qquad \text{and} \qquad T(L,t) = T_L.   (6.2)

Additionally, we need to specify an initial condition

T(x,0) = F(x).   (6.3)
The problem describes, for example, heat transfer in a slab subjected to a convective boundary condition at x = 0 and a fixed temperature at x = L. T(x,t) is the temperature expressed in kelvin (K). k (W/(m·K)) is the heat conductivity, which in general may be a function of both x and T. \alpha = k/(\rho C_p) (m²/s) is the thermal diffusivity. \rho is the density (kg/m³). C_p is the heat capacity (J/(kg·K)). h is the heat transfer coefficient (W/(m²·K)). For now, we will assume that all the thermophysical properties are temperature-independent and uniform in space and time. Letting the various properties be temperature and space dependent would cause significant problems in the analytical solution but very few complications in the numerical solution. Similar equations appear in a large number of engineering and science problems, such as mass diffusion.
Before starting to solve any problem, we will convert the equations into dimen-
sionless form. There are many reasons for doing so. For one, the dimensionless
form gives us some idea about the relative importance of the various terms.
Perhaps some of the terms in the equation play a secondary role and may be
neglected. Second, one would like to have an estimate for the time constant(s)
of the problem. For instance, how long should the equations be integrated un-
til a steady state has been established? Third, it is always more convenient to
operate with variables that are order one in magnitude rather than very large
or very small numbers. Finally, the dimensionless results have much broader
applicability.
To cast the equations in dimensionless form, we use L as the length scale. Thus, the dimensionless length will be \xi = x/L, and the domain will span 0 \le \xi \le 1. (T_L - T_\infty) is the temperature scale, and the dimensionless temperature is

u(x,t) = \frac{T(x,t) - T_\infty}{T_L - T_\infty}.   (6.4)
Substituting these scalings into equation (6.1), one has

\frac{\partial u}{\partial t} - \frac{\alpha}{L^2} \frac{\partial^2 u}{\partial \xi^2} = 0, \qquad \xi \in [0, 1].

This immediately suggests that L^2/\alpha is an appropriate scale for the time. Thus the dimensionless time is \tau = \alpha t / L^2, and the dimensionless form of equation (6.1) becomes

\frac{\partial u(\xi,\tau)}{\partial \tau} - \frac{\partial^2 u(\xi,\tau)}{\partial \xi^2} = 0, \qquad \xi \in [0, 1].   (6.5)
Likewise, the boundary conditions (6.2) will assume the form

-\frac{k}{L} \frac{\partial u(0,t)}{\partial \xi} + h\, u(0,t) = 0, \qquad \text{and} \qquad u(1,t) = 1.

We can bundle the various coefficients into one group:

-\frac{\partial u(0,\tau)}{\partial \xi} + \mathrm{Bi}\, u(0,\tau) = 0, \qquad \text{and} \qquad u(1,\tau) = 1,   (6.6)

where Bi = hL/k is the Biot number. We already encountered the Biot number when we analyzed heat transfer in the fin. We see an immediate benefit of the non-dimensionalization. When the Biot number is very large, we would expect u(0,\tau) = 0. When the Biot number is very small, we would expect \partial u(0,\tau)/\partial \xi = 0.
In summary, our dimensionless equations are:

\frac{\partial u}{\partial \tau} - \frac{\partial^2 u}{\partial \xi^2} = 0, \qquad \xi \in [0, 1]   (6.7)

-\frac{\partial u(0,\tau)}{\partial \xi} + \mathrm{Bi}\, u(0,\tau) = 0, \qquad u(1,\tau) = 1, \qquad \text{and} \qquad u(\xi, 0) = f(\xi).
Before we proceed with the finite element solution, I will solve the equations analytically. If you suspect that I am taking advantage of this opportunity to teach you some math, you are absolutely right. My main objective is, however, to show you how to develop simple solutions that you can use later to verify your numerical results. Verification is extremely important, and it is going to be a recurring theme in our course and, hopefully, throughout your professional life. As we shall see over and over, the fact that a solution was generated by a computer does not mean that it is correct.
To solve equation (6.7), we will split the function u(\xi,\tau) into two parts: u(\xi,\tau) = u_h(\xi,\tau) + u_p(\xi), where u_p(\xi) is a solution that satisfies the differential equation and the nonhomogeneous boundary conditions. u_p(\xi) will turn out to be the long-time (steady state) solution. The problem for u_p(\xi) is:

\frac{d^2 u_p}{d\xi^2} = 0, \qquad \xi \in [0, 1]

-\frac{d u_p(0)}{d\xi} + \mathrm{Bi}\, u_p(0) = 0, \qquad u_p(1) = 1.
We can readily integrate this second order differential equation to obtain u_p(\xi) = a\xi + b. The integration constants a and b are obtained by substituting u_p(\xi) into the boundary conditions. After some simple algebra, we have the steady state:

u_p(\xi) = \frac{\mathrm{Bi}\,\xi + 1}{1 + \mathrm{Bi}}.   (6.8)

Let's examine briefly a couple of limiting cases. When Bi >> 1, which means that the left boundary is in good thermal communication with the ambient and u(0,\tau) \to 0, we have u_p(\xi) \to \xi. When Bi << 1, the left boundary is nearly insulated and, not surprisingly, u_p(\xi) \to 1. This illustrates the significance of the Biot number (Bi).
Next, let's turn our attention to the transient solution:

\frac{\partial u_h}{\partial \tau} - \frac{\partial^2 u_h}{\partial \xi^2} = 0, \qquad \xi \in [0, 1]   (6.9)

-\frac{\partial u_h(0,\tau)}{\partial \xi} + \mathrm{Bi}\, u_h(0,\tau) = 0, \qquad u_h(1,\tau) = 0, \qquad \text{and} \qquad u_h(\xi,0) = f(\xi) - u_p(\xi).
There are a number of different techniques that can be used to solve equation (6.9), such as the Laplace transform, the Fourier transform, and separation of variables. The latter is probably the most intuitive, and it is somewhat reminiscent of the finite element method. Briefly, we propose a solution of the form:

u_h(\xi,\tau) = \sum_{n=0}^{\infty} a_n\, \Theta_n(\tau)\, X_n(\xi).   (6.10)
We are taking advantage again of the problem's linearity and the fact that the sum of solutions of a linear differential equation is also a solution. This is the well-known principle of superposition. Substituting any of the terms of (6.10) into the differential equation in (6.9) and rearranging, we have

\frac{1}{\Theta_n(\tau)} \frac{d\Theta_n(\tau)}{d\tau} = \frac{1}{X_n(\xi)} \frac{d^2 X_n(\xi)}{d\xi^2} = -\lambda_n^2.   (6.11)

In the above, we argued that since the left hand side depends only on time and the right hand side depends only on the spatial coordinate \xi, the two can be equal only if they are both equal to a constant (-\lambda_n^2). Of course, the boundary conditions will have to be compatible with this argument (and luckily they are in this case). Equation (6.11) leads, therefore, to two differential equations. We will deal with them one at a time. The equation for the time-dependent part,

\frac{1}{\Theta_n(\tau)} \frac{d\Theta_n(\tau)}{d\tau} = -\lambda_n^2,

can be readily integrated to give exponentially decaying modes:

\Theta_n(\tau) = \Theta_n(0)\, \exp(-\lambda_n^2 \tau).   (6.12)
Since \Theta_n(\tau) does not depend on the spatial coordinate, the task of satisfying the boundary conditions is delegated to X_n(\xi). The equation for X_n(\xi) is

\frac{d^2 X_n(\xi)}{d\xi^2} + \lambda_n^2 X_n(\xi) = 0   (6.13)

with the boundary conditions

-\frac{dX_n(0)}{d\xi} + \mathrm{Bi}\, X_n(0) = 0, \qquad \text{and} \qquad X_n(1) = 0.   (6.14)
Witness that equations (6.13) and (6.14) are homogeneous, and they admit the trivial solution X_n(\xi) = 0. In fact, this is the only solution unless some special conditions are met. More specifically, \lambda_n must assume certain particular values known as eigenvalues. Equations (6.13) and (6.14) form an eigenvalue problem. This type of problem is known as a Sturm-Liouville problem.

Equation (6.13) admits a solution of the form

X_n(\xi) = C_{1,n} \sin(\lambda_n \xi) + C_{2,n} \cos(\lambda_n \xi).   (6.15)

The boundary condition at \xi = 1 suggests that we can reduce (6.15) to

X_n(\xi) = C_n \sin(\lambda_n (1 - \xi)).   (6.16)

From the boundary condition at \xi = 0, we have

\tan(\lambda_n) = -\frac{\lambda_n}{\mathrm{Bi}}.   (6.17)
Table 6.1
The first 12 solutions of the transcendental equation \tan(\lambda_n) = -\lambda_n/\mathrm{Bi} when Bi = 1.

 n    (2n+1)\pi/2    \lambda_n      (n+1)\pi
 0    1.570796327    2.028757825    3.141592654
 1    4.712388980    4.913180439    6.283185307
 2    7.853981634    7.978665712    9.424777961
 3    10.99557429    11.08553841    12.56637061
 4    14.13716694    14.20743673    15.70796327
 5    17.27875959    17.33637792    18.84955592
 6    20.42035225    20.46916740    21.99114858
 7    23.56194490    23.60428477    25.13274123
 8    26.70353756    26.74091601    28.27433388
 9    29.84513021    29.87858651    31.41592654
10    32.98672286    33.01700103    34.55751919
11    36.12831552    36.15596642    37.69911184
By solving equation (6.17), we obtain the values of \lambda_n. As usual, it is useful to take a look at some limiting cases. When Bi >> 1, the right hand side of (6.17) is nearly zero and we would expect \lambda_n \to (n+1)\pi. When Bi << 1 and/or \lambda_n is large, the right hand side of (6.17) will be large (in absolute value) and we would expect \lambda_n \to (2n+1)\pi/2. To obtain \lambda_n values for moderate values of Bi, we need to resort to numerical techniques. It is easy to see that (2n+1)\pi/2 < \lambda_n < (n+1)\pi, where n = 0, 1, 2, .... Table 6.1 lists the first 12 values of \lambda_n when Bi = 1. The various columns are, respectively, n, (2n+1)\pi/2, \lambda_n, and (n+1)\pi. Witness that when n = 11, the difference between \lambda_n and (2n+1)\pi/2 is less than 0.1%.
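The bracketing (2n+1)\pi/2 < \lambda_n < (n+1)\pi makes the numerical root-finding almost trivial. The sketch below (plain Python; the function name and iteration count are my own, not from the text) finds each \lambda_n by bisecting g(\lambda) = Bi sin \lambda + \lambda cos \lambda, which has the same zeros as tan \lambda + \lambda/Bi but remains finite across each bracket:

```python
import math

def eigenvalues(Bi, count):
    """Roots of tan(lam) = -lam/Bi, one per bracket ((2n+1)pi/2, (n+1)pi)."""
    def g(lam):
        # same zeros as tan(lam) + lam/Bi, but no tangent singularities
        return Bi * math.sin(lam) + lam * math.cos(lam)

    roots = []
    for n in range(count):
        lo, hi = (2 * n + 1) * math.pi / 2, (n + 1) * math.pi
        for _ in range(80):                 # plain bisection; interval halves each pass
            mid = 0.5 * (lo + hi)
            if g(lo) * g(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))
    return roots
```

For Bi = 1 the first roots reproduce the Table 6.1 values 2.028757825, 4.913180439, 7.978665712, ....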
The eigenfunctions X_n(\xi) are orthogonal. In other words, \int_0^1 X_n(\xi) X_m(\xi)\, d\xi = 0 when n \ne m. The proof is straightforward, and I will not reproduce it here. It is convenient to choose the coefficients C_n in equation (6.16) so that the functions X_n(\xi) are orthonormal. An appropriate choice of C_n is C_n = \sqrt{\frac{4\lambda_n}{2\lambda_n - \sin(2\lambda_n)}}, or

X_n(\xi) = \sqrt{\frac{4\lambda_n}{2\lambda_n - \sin(2\lambda_n)}}\, \sin(\lambda_n (1 - \xi)).   (6.18)
Now that we have the eigenfunctions X_n(\xi) and the eigenvalues \lambda_n, we are ready to construct the general homogeneous solution

u_h(\xi,\tau) = \sum_{n=0}^{\infty} a_n \exp(-\lambda_n^2 \tau)\, X_n(\xi).   (6.19)
The constants are obtained from the initial conditions (\tau = 0),

f(\xi) - u_p(\xi) = \sum_{n=0}^{\infty} a_n X_n(\xi),   (6.20)

where a_n = \int_0^1 (f(\xi) - u_p(\xi))\, X_n(\xi)\, d\xi, using the orthonormality of X_n(\xi).
Equation (6.19) suggests that the time constant of the system is proportional to \lambda_0^{-2}. At large times, u_h(\xi,\tau) will decay like \exp(-\lambda_0^2 \tau). We have already argued that \pi/2 < \lambda_0 < \pi. The table below displays \lambda_0 as a function of Bi.

Bi           0      0.01   0.1    1      10     100    \infty
\lambda_0    1.57   1.58   1.63   2.03   2.86   3.11   3.14
To check the results of our numerical simulations, it would be convenient to construct a mock problem that requires us to retain only a few terms in the infinite series. To this end, we will select

f(\xi) = X_0(\xi) + u_p(\xi) = \sqrt{\frac{4\lambda_0}{2\lambda_0 - \sin(2\lambda_0)}}\, \sin(\lambda_0 (1 - \xi)) + (\xi + 1)/2,   (6.21)

where \lambda_0 \approx 2.029 and Bi = 1. The exact solution in this case is

u(\xi,\tau) = \sqrt{\frac{4\lambda_0}{2\lambda_0 - \sin(2\lambda_0)}}\, \sin(\lambda_0 (1 - \xi)) \exp(-\lambda_0^2 \tau) + (\xi + 1)/2.   (6.22)

Later in the chapter, we will use the above expression to verify our finite element calculations. We could have selected any other coefficient for the sine term.
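Equation (6.22) is simple enough to code directly. A minimal sketch (Python; the constant and function names are mine, not from the text) that can later be compared against nodal values from a finite element run:

```python
import math

LAM0 = 2.028757825      # first eigenvalue for Bi = 1, from Table 6.1

def u_exact(xi, tau, lam0=LAM0):
    """Exact solution (6.22) of the mock problem with Bi = 1."""
    c = math.sqrt(4.0 * lam0 / (2.0 * lam0 - math.sin(2.0 * lam0)))
    return (c * math.sin(lam0 * (1.0 - xi)) * math.exp(-lam0 ** 2 * tau)
            + (xi + 1.0) / 2.0)
```

At the right boundary u(1, \tau) = 1 for all \tau, and for large \tau the transient dies out, leaving the steady state (\xi + 1)/2.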
6.2 Projection on the Finite Element Space
In order to solve equation (6.7) with finite elements, we first need to obtain the weak form of the equation and then project the weighted residual form onto the finite element space:

\int_0^1 w(\xi) \left( \frac{\partial u}{\partial \tau} - \frac{\partial^2 u}{\partial \xi^2} \right) d\xi = 0.   (6.23)
Witness that the weighting function is a function of \xi only; it is not a function of time. The weak formulation is

\int_0^1 w(\xi)\, \frac{\partial u}{\partial \tau}\, d\xi = \left[ w(\xi)\, \frac{\partial u}{\partial \xi} \right]_0^1 - \int_0^1 \frac{dw}{d\xi}\, \frac{\partial u}{\partial \xi}\, d\xi.   (6.24)
Next, following the derivation of section 3.6, we use the projection

u(\xi,\tau) = \sum_{i=0}^{n} N_i(\xi)\, d_i(\tau).   (6.25)

In our formulation, the shape (base) functions are time-independent. There are other variants of finite elements in which one constructs shape functions along the time axis. In fact, time can be treated just as another coordinate along with the spatial coordinate x. Here, we treat time as a special coordinate, and our objective is to reduce the partial differential equation into a set of ordinary differential equations (ODEs).
In our case,

\frac{\partial u(\xi,\tau)}{\partial \tau} = \sum_{i=0}^{n} N_i(\xi)\, \frac{d\, d_i(\tau)}{d\tau}.   (6.26)

The only addition to the derivation of section 3.6 is the integral \int_0^1 w(\xi)\, \frac{\partial u}{\partial \tau}\, d\xi. The other terms are treated the same as in section 3.6, and we do not repeat the derivation here. As before, we select w(\xi) to be any of the base functions N_j(\xi):
\int_0^1 N_j(\xi)\, \frac{\partial u}{\partial \tau}\, d\xi = \sum_{i=0}^{n} \frac{d\, d_i(\tau)}{d\tau} \int_0^1 N_i(\xi) N_j(\xi)\, d\xi = \sum_{i=0}^{n} \frac{d\, d_i(\tau)}{d\tau}\, m_{ij},   (6.27)

where m_{ij} = \int_0^1 N_i(\xi) N_j(\xi)\, d\xi are the entries of the mass matrix (M). The term capacitance matrix is also frequently used. The matrix M is sparse: row i consists only of contributions from the elements that share node i. The mass matrix is clearly symmetric: m_{ij} = m_{ji}.
In practice, we calculate the mass matrix at the element level and then assemble the mass matrix for the entire problem. Consider, for example, a linear element spanning \xi_k \le \xi \le \xi_{k+1}. Using natural coordinates, the element is projected onto -1 \le \eta \le 1. Recall that the two base functions are N_1 = (1 - \eta)/2 and N_2 = (1 + \eta)/2. Thus, at the element level, we have the integrals:

m^{(e)}_{1,1} = \int_{\xi_k}^{\xi_{k+1}} N_k(\xi)\, N_k(\xi)\, d\xi = \frac{h^{(e)}}{2} \int_{-1}^{1} N_1(\eta)\, N_1(\eta)\, d\eta = \frac{h^{(e)}}{3},

m^{(e)}_{1,2} = \int_{\xi_k}^{\xi_{k+1}} N_k(\xi)\, N_{k+1}(\xi)\, d\xi = \frac{h^{(e)}}{2} \int_{-1}^{1} N_1(\eta)\, N_2(\eta)\, d\eta = \frac{h^{(e)}}{6}.

The mass matrix, at the element level, will have the form

m^{(e)} = \frac{h^{(e)}}{6} \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix},

where h^{(e)} = \xi_{k+1} - \xi_k.
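The element-by-element assembly can be sketched in a few lines (Python with NumPy; the function name is mine, not from the text). Each element adds its 2x2 block (h_e/6)[[2,1],[1,2]] to the rows and columns of its two nodes:

```python
import numpy as np

def mass_matrix(nodes):
    """Assemble the global mass matrix for 1-D linear elements."""
    n = len(nodes)
    M = np.zeros((n, n))
    for k in range(n - 1):
        h = nodes[k + 1] - nodes[k]          # element length h^(e)
        M[k:k + 2, k:k + 2] += (h / 6.0) * np.array([[2.0, 1.0],
                                                     [1.0, 2.0]])
    return M
```

On a uniform mesh the interior diagonal entries are 2h/3 (two elements share each interior node) and the off-diagonals are h/6; the sum of all entries equals the domain length, since \sum_i N_i = 1.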
The projection of the time-dependent problem onto the finite element space results in a set of ordinary differential equations that we write below in matrix form:

M\, \frac{d\mathbf{d}(\tau)}{d\tau} = -K\, \mathbf{d},   (6.28)

with the initial conditions \mathbf{d}(0), where K is the stiffness matrix derived in section 3.6.
6.3 Numerical Solution of Time-Dependent Problems
To solve the initial value problem (6.28) numerically, we need to discretize the time axis into finite (small, not necessarily uniform) segments \Delta\tau_k = \tau_{k+1} - \tau_k. Standard numerical codes can vary \Delta\tau_k as the calculation progresses, according to internal estimates of the error. There are numerous algorithms available for this purpose. Press et al. is an excellent reference.

In the discussion below, we will assume uniform time steps, \Delta\tau_k = \Delta\tau, and we will use the notation d^{(n)} = d(n\Delta\tau) to mean the value of the approximate (numerically solved) vector d at time \tau = n\Delta\tau. In actual simulations, it is often advantageous to employ non-uniform time steps, adapting the size of the step to reduce errors and economize the computational time.
Furthermore, we will assume that d^{(n)} is known and focus on the task of evaluating d^{(n+1)}. Finally, to account for situations when the problem is nonlinear, we will generalize the expression (6.28) and rewrite it as

M\, \frac{d\mathbf{d}(\tau)}{d\tau} = F(\mathbf{d}(\tau)),   (6.29)

where F(d) is some known (possibly nonlinear) vector function of d. The system can be integrated between \tau_n = n\Delta\tau and \tau_{n+1} = (n+1)\Delta\tau, yielding:

\mathbf{d}^{(n+1)} = \mathbf{d}^{(n)} + \int_{n\Delta\tau}^{(n+1)\Delta\tau} M^{-1} F(\mathbf{d})\, d\tau.   (6.30)
The problem is how to approximate the integral on the right hand side of equation (6.30). There are many different approaches to making this approximation. Below, we enumerate a few.

(1) Implicit methods: express the integral in terms of both the known variables d^{(n)}, d^{(n-1)}, ..., and the unknowns d^{(n+1)}, d^{(n+2)}, ....
(2) Explicit, single step: express the integral in terms of the known variables d^{(n)}, d^{(n-1)}, ....
(3) Explicit, multiple step: express the integral in terms of the known variables d^{(n)}, d^{(n-1)}, ..., and the intermediates d^{(n+1/m)}, d^{(n+2/m)}, .... The well-known Runge-Kutta method is one example of such a scheme.
(4) Multiple interval methods, such as the leap-frog method:

\mathbf{d}^{(n+1)} = \mathbf{d}^{(n-1)} + \int_{(n-1)\Delta\tau}^{(n+1)\Delta\tau} M^{-1} F(\mathbf{d})\, d\tau.
When solving the time-dependent problem, we must watch for:

(i) ACCURACY: We want |d_{exact}(n\Delta\tau) - d^{(n)}| = O(\Delta\tau^p), where p is the order of the approximation. When p = 1, we say that the method is first order. The error decreases and the computational time increases as \Delta\tau decreases and/or p increases.
(ii) SIMPLICITY: We would like to minimize the number of times that we need to compute the RHS of equation (6.29).
(iii) STABILITY: Unfortunately, small errors are introduced at every time step, and they become an unavoidable part of d^{(n)}. For the numerical solution to be of any use, these errors must be bounded.

To better appreciate the various issues pertaining to the numerical integration in time, we will analyze a few simple examples.
6.4 A Model Problem
In addition to presenting an overview of various algorithms for the solution of time-dependent problems, I would also like to use this opportunity to show you that the numerical algorithm can introduce spurious effects such as artificial damping or numerically-induced fluctuations. These spurious effects mimic actual physical phenomena, and it is not always easy to discern whether certain predicted phenomena are real or an artifact.

In the discussion below, we will consider the simple model problem of damped oscillations:

\frac{du}{dt} + (if + \gamma)\, u = v, \qquad \text{with} \qquad u(0) = u_0,   (6.31)

where f is the frequency, \gamma is the damping, v is the forcing, and i is the imaginary unit. All the parameters are real. Equation (6.31) is a simplified version of the linear advection equation after projection onto (spatial) Fourier modes. The equation admits the exact solution

u(t) = \left( u_0 - \frac{v}{\gamma + if} \right) e^{-(if + \gamma)t} + \frac{v}{\gamma + if},   (6.32)

which exhibits damped oscillations.
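Equation (6.32) is evaluated directly with complex arithmetic; the sketch below (Python; the defaults and function name are mine, not from the text) is handy for producing the exact curves against which the schemes below are compared:

```python
import cmath

def u_exact(t, u0=1.0, f=1.0, gamma=0.1, v=0.0):
    """Exact solution (6.32) of u' + (i f + gamma) u = v (assumes gamma + i f != 0)."""
    a = gamma + 1j * f
    steady = v / a                      # long-time (forced) response
    return (u0 - steady) * cmath.exp(-a * t) + steady
```

With v = 0 the magnitude decays like e^{-\gamma t}, while the phase rotates at frequency f.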
6.5 The Explicit Euler Method
This is a first order method (p = 1), in which we evaluate the RHS of (6.29) at the current time step:

M\, \frac{d\mathbf{d}(t)}{dt} \approx M\, \frac{\mathbf{d}^{(n+1)} - \mathbf{d}^{(n)}}{\Delta t} = F(\mathbf{d}^{(n)}).   (6.33)

In other words, at each time step, the RHS of the equation can be explicitly evaluated since d^{(n)} is known.

Consider the model problem (6.31). The forward (explicit) Euler approximation of (6.31) is:

u^{(n+1)} - u^{(n)} = -\Delta t\, (\gamma + if)\, u^{(n)} + v\, \Delta t.

With no forcing (v = 0), the new value of u is given by

u^{(n+1)} = (1 - \Delta t(\gamma + if))\, u^{(n)} = G\, u^{(n)},   (6.34)

where

G = 1 - \Delta t(\gamma + if)

is the growth factor. Thus the solution of the difference equation (6.34) is

u^{(n)} = G^n u^{(0)} = u_0 (1 - \Delta t(\gamma + if))^n,   (6.35)

which is to be compared with the exact solution, u(n\Delta t) = u_0\, e^{-(\gamma + if)\, n\Delta t}.
First, we will verify that the forward Euler method is convergent. To this end, we will fix the time t and check whether the finite difference approximation (6.35) gives the same result as the exact solution in the limit of \Delta t \to 0 (or, equivalently, n \to \infty); in other words, whether \lim_{n\to\infty} (1 - at/n)^n = e^{-at}, where a = \gamma + if. If you remember your calculus, you know that the answer is in the affirmative.

Thus, we conclude that the Euler method is, indeed, convergent in the limit of \Delta t \to 0. In practice, since life is short, we cannot decrease \Delta t to zero. The question is how big \Delta t can be with the scheme still producing reasonable results. Thus, we need to address the issue of stability.

When \gamma > 0, the analytic solution decays to zero. For the finite difference approximation to exhibit similar decay, we must have |G| < 1, or

(1 - \gamma \Delta t)^2 + (f \Delta t)^2 < 1.   (6.36)
Fig. 6.1. Simulation based on the forward Euler method applied to undamped oscillations (f = 1, \gamma = 0), for \Delta t = 0.2, 0.1, and 0.01, compared with the exact solution. The oscillations amplify with time. The rate of amplification depends on the time step. For large time steps, the frequency of the oscillations is reduced.
Thus, the explicit method imposes a restriction on the time step.

When f = 0 (no oscillations), we have |G| = |1 - \gamma \Delta t|, and the scheme is conditionally stable, provided that \Delta t < 2/\gamma. As a quick example, consider f = 0, \gamma = 10, u_0 = 1, and t = 0.1. The exact solution is e^{-\gamma t} = e^{-1} = 0.36788. Let's see what the finite difference approximation produces for various values of \Delta t. The table below lists the results.

\Delta t                   0.001    0.01     0.05     0.1
u^{(n)} = G^n u_0          0.366    0.349    0.25     0
|u^{(n)} - u_{exact}|      0.002    0.019    0.118    0.368
Next, consider the case of zero damping (\gamma = 0). The exact solution (6.32) predicts constant amplitude oscillations. In contrast, equation (6.35) produces amplified oscillations, as shown in figure 6.1. Clearly, this is a spurious behavior induced by the numerical approximation. The rate of growth depends on \Delta t: |G|^2 = 1 + (f \Delta t)^2 > 1. The numerical solution grows unbounded as n \to \infty; thus, this scheme is not stable in the case of zero damping.

In the more general case, when we have both dissipation and oscillations, the explicit scheme is not entirely useless. The stability criterion is obtained by requiring (6.36). Thus, in the presence of dissipation, the forward Euler scheme can provide reasonable results. Sometimes, dissipation is spuriously added to stabilize the scheme. This is, however, not advisable, since the resulting algorithm does not appropriately represent the behavior of the differential equation.
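The forward Euler example with f = 0 and \gamma = 10 is easy to reproduce. A minimal sketch (Python; the function name is mine, not from the text) marching u' = -\gamma u with the growth factor G = 1 - \gamma \Delta t:

```python
def forward_euler_decay(gamma, dt, t_end, u0=1.0):
    """March u' = -gamma*u to t_end with the forward Euler rule (f = 0, v = 0)."""
    u, n = u0, round(t_end / dt)
    for _ in range(n):
        u *= 1.0 - gamma * dt       # growth factor G applied n times
    return u
```

With \gamma = 10 and t = 0.1 this reproduces the table entries: 0.366, 0.349, 0.25, and 0 for \Delta t = 0.001, 0.01, 0.05, and 0.1, against the exact value 0.36788.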
6.6 Implicit Time Stepping (Backward Euler)
This is also a first order (p = 1) method. The idea here is to evaluate the RHS of equation (6.29) at the new time:

M\, \frac{d\mathbf{d}(t)}{dt} \approx M\, \frac{\mathbf{d}^{(n+1)} - \mathbf{d}^{(n)}}{\Delta t} = F(\mathbf{d}^{(n+1)}).   (6.37)

In other words, at each time step, the RHS of the equation is not known and needs to be solved for.

Let's consider the backward Euler approximation to equation (6.31),

u^{(n+1)} - u^{(n)} = -\Delta t\, (\gamma + if)\, u^{(n+1)} + v\, \Delta t.

With no forcing (v = 0), the solution is

u^{(n+1)} = \frac{1}{1 + \Delta t(\gamma + if)}\, u^{(n)}.
Like the explicit scheme, it is easy to see that the implicit scheme converges to the correct value when \Delta t \to 0, since \lim_{n\to\infty} (1 + at/n)^{-n} = e^{-at}, where a = \gamma + if. The growth factor for this scheme is

G = \frac{1}{1 + \gamma \Delta t + i f \Delta t}.

It is easy to see that |G| is always less than 1, suggesting that the scheme is unconditionally stable for all choices of \Delta t.
As a quick numerical example, let's consider the same case as previously: f = 0, \gamma = 10, u_0 = 1, and t = 0.1. The exact solution is e^{-1} = 0.36788. Let's see what the finite difference approximation produces for various values of \Delta t. The table below shows the results. As \Delta t increases, the error increases. Note that the explicit method converges to the solution from below, while the implicit method converges from above.

\Delta t                   0.001     0.01      0.05      0.1
u^{(n)} = G^n u_0          0.3697    0.3855    0.4444    0.5000
|u^{(n)} - u_{exact}|      0.0018    0.0177    0.0766    0.1321
When the original system lacks damping (\gamma = 0), this scheme still induces artificial (numerical) damping, or dissipation: |G| = 1/\sqrt{1 + (f \Delta t)^2}, as shown in figure 6.2. The numerical scheme also exhibits lower frequency oscillations than the actual problem.
Fig. 6.2. Simulations of the oscillatory behavior (f = 1) without damping (\gamma = 0) using the backward Euler method, for \Delta t = 0.5, 0.1, and 0.01, compared with the exact solution. The numerical solutions decay with time. In other words, the scheme introduces artificial damping. For large time steps, the frequency of the oscillations is reduced.
For this stability, we pay a heavy price. The price is not evident in the one-dimensional problem, but it is quite painful in large problems: at each time step, we need to solve the (possibly nonlinear) system of equations

M\, \mathbf{d}^{(n+1)} - \Delta t\, F(\mathbf{d}^{(n+1)}) = M\, \mathbf{d}^{(n)}.

In many cases, however, the added cost of having to solve an implicit problem is more than compensated for by the larger time steps that this method affords.
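For the scalar model problem the "solve" is just a division, so the backward Euler march is a one-line sketch (Python; the function name is mine, not from the text):

```python
def backward_euler_decay(gamma, dt, t_end, u0=1.0):
    """March u' = -gamma*u to t_end with the backward Euler rule (f = 0, v = 0)."""
    u, n = u0, round(t_end / dt)
    for _ in range(n):
        u /= 1.0 + gamma * dt       # growth factor G = 1/(1 + gamma*dt)
    return u
```

For \gamma = 10 and t = 0.1 this reproduces the table values (0.5 at \Delta t = 0.1, 0.4444 at \Delta t = 0.05), and it stays bounded for arbitrarily large steps, illustrating the unconditional stability.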
6.7 The Trapezoidal (\theta) Method
In this scheme, we approximate equation (6.29) by

M\, \frac{d\mathbf{d}(t)}{dt} \approx M\, \frac{\mathbf{d}^{(n+1)} - \mathbf{d}^{(n)}}{\Delta t} = \theta\, F(\mathbf{d}^{(n+1)}) + (1 - \theta)\, F(\mathbf{d}^{(n)}).

When \theta = 0, we get the explicit scheme. When \theta = 1, we get the implicit scheme. Typically, one selects \theta = 0.5. In this case, we have a second order accurate (p = 2) implicit scheme. The trapezoidal scheme with \theta = 0.5 is referred to as the Crank-Nicolson method.

Let's apply the trapezoidal method (with \theta = 1/2) to the damped, linear oscillator (6.31) without forcing. The finite difference equation becomes:

u^{(n+1)} - u^{(n)} = -\Delta t\, (\gamma + if)\, \frac{u^{(n+1)} + u^{(n)}}{2},
or

u^{(n+1)} = \frac{1 - \frac{\Delta t}{2}(\gamma + if)}{1 + \frac{\Delta t}{2}(\gamma + if)}\, u^{(n)}.

The growth factor is

G = \frac{1 - \frac{\Delta t}{2}(\gamma + if)}{1 + \frac{\Delta t}{2}(\gamma + if)}.

It can be shown that |G| < 1 for all values of \Delta t when \gamma > 0, suggesting that the scheme is unconditionally stable. Of course, in the limit of \Delta t \to 0, the numerical approximation converges to the exact solution.
The trapezoidal method has another important advantage: it is second order accurate (p = 2). To see that this is the case, we expand u^{(n+1)} into a Taylor series about u^{(n)}, i.e., about t = t_n,

u^{(n+1)} = u(t_n + \Delta t) = u(t_n) + u'(t_n)\Delta t + \frac{1}{2} u''(t_n)\Delta t^2 + O(\Delta t^3),

and evaluate the residual of the finite difference approximation:

\frac{u^{(n+1)} - u^{(n)}}{\Delta t} - \frac{a}{2}\left( u^{(n+1)} + u^{(n)} \right)

= \frac{u'(t)\Delta t + \frac{1}{2} u''(t)\Delta t^2 + O(\Delta t^3)}{\Delta t} - \frac{a}{2}\left( 2u(t) + u'(t)\Delta t + O(\Delta t^2) \right)

= u'(t) + \frac{1}{2} u''(t)\Delta t + O(\Delta t^2) - a\, u(t) - \frac{a}{2}\left( u'(t)\Delta t + O(\Delta t^2) \right)

= \left( u'(t) - a\, u(t) \right) + \frac{1}{2}\left( u'(t) - a\, u(t) \right)' \Delta t + O(\Delta t^2) = O(\Delta t^2).

In the above, a = -(\gamma + if), and we took advantage of the equation u' = a u. The O(\Delta t^2) precision remains valid also in the nonlinear case. This, of course, has the advantage that we can get much more accurate results with coarser time steps.
As a quick numerical example, let's consider the same case as previously: f = 0, \gamma = 10, u_0 = 1, and t = 0.1. The exact solution is e^{-1} = 0.36788. Let's see what the finite difference approximation produces for various values of \Delta t. The table below lists the results. As you can see, the trapezoidal method is quite accurate. One may wonder whether it is possible to obtain similar accuracy with fully explicit (fast) methods. Happily, the answer is yes.

\Delta t               0.001     0.01      0.05      0.1
u^{(n)} = G^n u_0      0.3679    0.3679    0.3600    0.3333
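The trapezoidal march for the same test case can be sketched as follows (Python; the function name is mine, not from the text). The last assertion checks the second order behavior: halving \Delta t cuts the error roughly fourfold:

```python
def trapezoidal_decay(gamma, dt, t_end, u0=1.0):
    """March u' = -gamma*u to t_end with the Crank-Nicolson rule (f = 0, v = 0)."""
    u, n = u0, round(t_end / dt)
    G = (1.0 - 0.5 * gamma * dt) / (1.0 + 0.5 * gamma * dt)
    for _ in range(n):
        u *= G
    return u
```

With \gamma = 10 and t = 0.1 this reproduces the table: 0.3333 at \Delta t = 0.1 and 0.3600 at \Delta t = 0.05.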
Fig. 6.3. Simulations of the oscillatory behavior (f = 1) without damping (\gamma = 0) using the Crank-Nicolson method, for \Delta t = 0.5 and 0.1, compared with the exact solution.
When \gamma = 0 (undamped oscillations), the growth factor per time step is |G| = \frac{|1 - i f \Delta t/2|}{|1 + i f \Delta t/2|} = 1, and the scheme does not introduce artificial damping. In other words, the oscillations will have constant amplitude, as they should. Figure 6.3 shows the numerically calculated results for two values of \Delta t. Notice that the frequency of the numerically calculated oscillations is lower than the real frequency (i.e., the period of the oscillations is longer).
6.8 Multi-Step (Adams-Bashforth) Methods
Consider the two-step method to approximate (6.29),

M\, \frac{\mathbf{d}^{(n+1)} - \mathbf{d}^{(n)}}{\Delta t} = \alpha\, F(\mathbf{d}^{(n)}) + \beta\, F(\mathbf{d}^{(n-1)}).

Witness that the RHS of the equation consists of terms evaluated at times t and t - \Delta t. This is a fully explicit scheme, and it should be almost as fast as the forward Euler scheme. Clearly, this scheme will require a special rule to evaluate d^{(1)} at time t = \Delta t. This means that we can start the algorithm only at t = 2\Delta t. To get the method started, we need to use a different algorithm, such as a Runge-Kutta scheme.
To simplify matters, we will focus on the damped linear oscillator (6.31) without forcing. The discretized scheme is, therefore,

\frac{u^{(n+1)} - u^{(n)}}{\Delta t} = a\left( \alpha\, u^{(n)} + \beta\, u^{(n-1)} \right),   (6.38)

where a = -(\gamma + if). We wish to find values of \alpha and \beta that make this scheme second order (p = 2) accurate. To this end, we substitute u' = a u in
equation (6.38) and use the Taylor series expansion for values of the function at times other than t = t_n. The residual of (6.38) can be written as

u' + \frac{1}{2} u'' \Delta t + O(\Delta t^2) - \alpha\, u' - \beta\left( u' - u'' \Delta t + O(\Delta t^2) \right)

= (1 - \alpha - \beta)\, u' + \left( \frac{1}{2} + \beta \right) u'' \Delta t + O(\Delta t^2).
To maintain second order precision, we need \alpha + \beta = 1 and 1/2 + \beta = 0. Hence, we select \beta = -1/2 and \alpha = 3/2. The scheme (6.38) becomes

\frac{u^{(n+1)} - u^{(n)}}{\Delta t} = a\left( \frac{3}{2} u^{(n)} - \frac{1}{2} u^{(n-1)} \right),

which leads to the iterative scheme

u^{(n+1)} = \left( 1 + \frac{3}{2} a \Delta t \right) u^{(n)} - \frac{a}{2} \Delta t\, u^{(n-1)}.
To assess the stability of this scheme, let's evaluate the growth factor G, defined by u^{(n+1)} = G\, u^{(n)} (so that u^{(n-1)} = u^{(n)}/G). Upon substitution in the above expression, we obtain a second order algebraic equation for G:

u^{(n)} G = \left( 1 + \frac{3}{2} a \Delta t \right) u^{(n)} - \frac{a}{2} \Delta t\, \frac{u^{(n)}}{G},

or

G^2 - \left( 1 + \frac{3}{2} a \Delta t \right) G + \frac{a \Delta t}{2} = 0.
Since the method is two-step, there are two growth factors. Recall that in the forward Euler method with no oscillations (f = 0), when a = -\gamma < 0, the threshold of stability was G = -1. If we substitute G = -1 here, we get a\Delta t = -1. That is, for stability, we need \Delta t < 1/\gamma. This is a tighter limit on \Delta t than in the Euler technique (\Delta t < 2/\gamma). We still gain, however, since we are second order accurate. In the absence of damping (\gamma = 0), the scheme amplifies oscillations. The amplification is, however, weak and proportional to (f \Delta t)^4.
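A sketch of the second order Adams-Bashforth march (Python; the function name is mine, not from the text). The first step is taken with the exact exponential simply to start the two-step recursion; the assertions check the O(\Delta t^2) error and the \Delta t < 1/\gamma stability limit:

```python
import math

def ab2_decay(a, dt, t_end, u0=1.0):
    """u' = a*u via u[n+1] = u[n] + dt*a*(1.5*u[n] - 0.5*u[n-1])."""
    n = round(t_end / dt)
    prev, curr = u0, u0 * math.exp(a * dt)   # exact first step to start the recursion
    for _ in range(n - 1):
        prev, curr = curr, curr + dt * a * (1.5 * curr - 0.5 * prev)
    return curr
```

For a = -1 the scheme decays for \Delta t < 1 and blows up beyond that limit, as predicted by the growth-factor analysis above.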
The scheme that we described here can be extended to include more time levels and greater precision, i.e.,

\frac{u^{(n+1)} - u^{(n)}}{\Delta t} = \beta_1 f(u^{(n)}) + \beta_2 f(u^{(n-1)}) + \dots + \beta_p f(u^{(n-p+1)}).

These techniques allow one to achieve O(\Delta t^p) precision, but not absolute stability. In fact, as the precision (p) increases, the restriction on \Delta t tightens. Multi-step methods can also be applied together with implicit schemes to give high precision with lesser restrictions on the time steps. In general, implicit methods have better stability, but higher cost. One can combine the speed of an explicit technique with the benefits of an implicit one by combining them both into a predictor-corrector algorithm.
6.9 Predictor-Corrector Methods, or Heun's Method
Consider the system

\frac{du}{dt} = f(t, u), \qquad u(0) = u_0.

A predictor-corrector technique can be described as follows:

(i) Predictor: use an explicit formula to predict an intermediate value u^{*(n+1)}.
(ii) Estimator: evaluate a new estimate of the derivative, f(t_{n+1}, u^{*(n+1)}).
(iii) Corrector: use f(t_{n+1}, u^{*(n+1)}) to obtain the new u^{(n+1)}.

In contrast to the explicit Euler method, in which the derivative f(t, u) is calculated at the beginning of the interval, the predictor-corrector method uses a better approximation for the derivative. The algorithm has two stages at each time step:

u^{*(n+1)} = u^{(n)} + \Delta t\, f(t_n, u^{(n)}),

u^{(n+1)} = u^{(n)} + \frac{\Delta t}{2}\left( f(t_n, u^{(n)}) + f(t_{n+1}, u^{*(n+1)}) \right).

Witness that only one initial condition is needed to get the method started. This algorithm is the simplest example of the predictor-corrector method.
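One Heun step can be sketched as follows (Python; the function name is mine, not from the text):

```python
def heun_step(f, t, u, dt):
    """One Heun (predictor-corrector) step for u' = f(t, u)."""
    u_star = u + dt * f(t, u)                            # predictor: forward Euler
    return u + 0.5 * dt * (f(t, u) + f(t + dt, u_star))  # corrector: trapezoidal average
```

Applied to the linear model u' = a u, one step yields the growth factor 1 + a\Delta t + (a\Delta t)^2/2, i.e., the Taylor expansion of e^{a\Delta t} through second order.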
The predictor-corrector method is more often used in an implicit form. In the latter case, the corrector equation is replaced with

u^{(n+1)}_{(j+1)} = u^{(n)} + \frac{\Delta t}{2}\left( f(t_n, u^{(n)}) + f(t_{n+1}, u^{(n+1)}_{(j)}) \right),

where u^{(n+1)}_{(1)} = u^{*(n+1)}. The corrector equation is iterated until u^{(n+1)}_{(j)} converges. Typically, 2-3 iterations are needed. The PEC method is second order accurate (p = 2).
6.10 Runge-Kutta (RK) Methods
Runge-Kutta methods consist of weighted averages of the derivatives f(t, u) calculated at various instants in time within the interval \Delta t. The RK methods are explicit, are based on Taylor series expansions, and may have various orders of precision, i.e., 2nd, 4th, etc.
Below, we illustrate the one-step variant of RK:

\frac{u^{(n+1)} - u^{(n)}}{\Delta t} = \frac{1}{2}\left( f(t_n, u^{(n)}) + f(t_{n+1},\, u^{(n)} + \Delta t\, f(t_n, u^{(n)})) \right).

Witness that u^{(n)} + \Delta t\, f(t_n, u^{(n)}), the Euler estimate of u^{(n+1)}, appears inside the second evaluation. The scheme is somewhat reminiscent of the trapezoidal rule, which uses f(t_{n+1}, u^{(n+1)}). This RK scheme is, however, fully explicit.
We will apply the scheme to our model equation du/dt = a u. This gives

\frac{u^{(n+1)} - u^{(n)}}{\Delta t} = \frac{1}{2}\left( a\, u^{(n)} + a\, (u^{(n)} + \Delta t\, a\, u^{(n)}) \right),

or

u^{(n+1)} = u^{(n)}\left( 1 + a \Delta t + \frac{1}{2} a^2 \Delta t^2 \right) = G\, u^{(n)}.

You can immediately convince yourself that the growth factor is the Taylor series expansion, accurate up to O(\Delta t^2), of the exact amplification factor e^{a \Delta t}. Thus, the scheme is second order (p = 2) accurate. For stability (real a < 0), we need |G| < 1. The requirement G > -1 is always satisfied. The requirement G < 1 imposes a limit on the step size: \Delta t < 2/|a|. If this requirement is not satisfied, the numerical solution will grow while the actual solution decays (a < 0). In other words, this RK scheme is conditionally stable.
The algorithm below outlines the 4th order (O(\Delta t^4) accurate) RK method, which is probably the most popular version of the RK schemes for the solution of ODEs. The time step \Delta t does not need to be uniform, and it is often adapted to specific problem requirements.

K_1 = f(t_n,\, u^{(n)})
K_2 = f(t_n + \Delta t/2,\, u^{(n)} + (\Delta t/2)\, K_1)
K_3 = f(t_n + \Delta t/2,\, u^{(n)} + (\Delta t/2)\, K_2)
K_4 = f(t_n + \Delta t,\, u^{(n)} + \Delta t\, K_3)
u^{(n+1)} = u^{(n)} + \frac{\Delta t}{6}\left( K_1 + 2K_2 + 2K_3 + K_4 \right)
t_{n+1} = t_n + \Delta t
We will not derive the RK formula here; we will just make a few observations. The u-iteration formula is a weighted average of the four values K_1, K_2, K_3, and K_4. K_1 and K_4 are given a weight of 1/6 in the weighted average, whereas K_2 and K_3 are weighted 1/3, i.e., twice as heavily as K_1 and K_4. As usual with a weighted average, the sum of the weights 1/6, 1/3, 1/3, and 1/6 is 1.
If you apply this scheme to our model equation du/dt = a u, you will find that the fourth order scheme approximates the exact amplification factor e^{a \Delta t} through the term (a \Delta t)^4 / 24.
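One classical RK4 step can be sketched as follows (Python; the function name is mine, not from the text). Applied to u' = a u, a single step reproduces the Taylor expansion of e^{a\Delta t} through the (a\Delta t)^4/24 term, which the assertion checks:

```python
def rk4_step(f, t, u, dt):
    """One classical fourth-order Runge-Kutta step for u' = f(t, u)."""
    K1 = f(t, u)
    K2 = f(t + dt / 2, u + dt / 2 * K1)
    K3 = f(t + dt / 2, u + dt / 2 * K2)
    K4 = f(t + dt, u + dt * K3)
    return u + dt / 6 * (K1 + 2 * K2 + 2 * K3 + K4)
```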
6.11 Stiff Problems
Problems that include multiple, disparate time (or length) scales are dubbed stiff. For example, in chemical kinetics, we often have fast and slow reactions. In stiff problems, the time step must be small enough to resolve the fast phenomenon, and we need a sufficient number of time steps to obtain an approximation of the slow phenomenon. The need for a very large number of very small time steps can make the problem intractable.
Lets consider, for example, the equation
du
dt
= Au, where A =
_
_
3 1
0 100
_
_
and u =
_
_
u
1
u
2
_
_
.
The eigenvalues of A are, respectively, −3 and −100. If we were to use Euler's
forward method, the numerical stability criterion would require Δt < 2/100,
even though e^(−3t) is the solution of interest. u_2 decays like e^(−100t) and is of no
interest. Unfortunately, in numerical simulations, we cannot ignore the very
large negative eigenvalue, as it will cause any explicit numerical scheme to
diverge. In other words, we will be spending a lot of resources attempting to
resolve a phenomenon that is of little interest and, as a result, being unable to
go far enough to study what we really want.
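A short experiment illustrates the point (our own sketch; the step sizes and step counts are arbitrary choices straddling the stability limit 2/100):

```python
# Forward Euler on du/dt = A*u with A = [[-3, 1], [0, -100]], u(0) = (1, 1).
def euler_run(dt, steps):
    u1, u2 = 1.0, 1.0
    for _ in range(steps):
        du1 = -3.0 * u1 + 1.0 * u2
        du2 = -100.0 * u2
        u1, u2 = u1 + dt * du1, u2 + dt * du2
    return u1, u2

_, u2_bad = euler_run(0.025, 200)   # |1 - 100*dt| = 1.5 > 1: u2 blows up
_, u2_ok  = euler_run(0.015, 200)   # |1 - 100*dt| = 0.5 < 1: u2 decays
assert abs(u2_bad) > 1e10
assert abs(u2_ok) < 1e-10
```

Even though the true u_2 is negligibly small almost immediately, the step size is dictated by the −100 eigenvalue, not by the slow e^(−3t) mode we care about.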
The problem can be partially addressed through the use of implicit algorithms,
which maintain numerical stability even for relatively large time steps. The
development of efficient methods for stiff problems is still an active research
area.
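For instance, backward (implicit) Euler applied to the same system remains stable for step sizes far beyond the explicit limit of 2/100. A sketch (our own; since this particular A is upper triangular, the implicit update can be solved by back-substitution rather than with a general linear solver):

```python
# Backward Euler on du/dt = A*u with A = [[-3, 1], [0, -100]], u(0) = (1, 1):
# each step solves (I - dt*A) u_new = u_old.
def backward_euler_run(dt, steps):
    u1, u2 = 1.0, 1.0
    for _ in range(steps):
        u2_new = u2 / (1.0 + 100.0 * dt)             # second row first
        u1_new = (u1 + dt * u2_new) / (1.0 + 3.0 * dt)
        u1, u2 = u1_new, u2_new
    return u1, u2

u1, u2 = backward_euler_run(0.1, 50)   # dt = 0.1 >> 2/100, t = 5
assert 0.0 < u1 < 1.0 and 0.0 < u2 < 1.0   # both components decay, no blow-up
```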
6.12 The COMSOL Corner
In this section, we will solve the problem introduced in section 6.1 with COMSOL
and compare the finite element solution with the analytical solution.
(1) Launch COMSOL Multiphysics 4.0a. Specify 1D in Space Dimension;
on the Add Physics page, expand the Mathematics branch, further
expand the PDE Interfaces branch, and select General Form PDE (g);
on the Select Study Type page, select Time Dependent.
(2) Right-click Global Definitions in the Model Builder window and select
Parameters. We want to define a global parameter, v0 = 2.029.
(3) Right-click Geometry in the Model Builder window; a drop-down menu
will appear. Select Interval from this menu. In the middle Settings window,
on the Interval page, you will find the values of the left and right endpoints
of this interval (0 and 1 by default).
(4) Expand PDE (g) in the Model Builder window.
Select General Form PDE and specify Γ = -ux, f = 0, da = 1, ea = 0.
Specify the initial condition: 4*v0/(2*v0-sin(2*v0))*sin(v0*(1-x))+(x+1)/2.
Right-click the PDE (g) branch in the Model Builder window and
select Dirichlet Boundary Condition from the drop-down menu.
Select boundary point 2 in the graphics window, add this point to the
box under Boundaries on the Dirichlet Boundary Condition page, and
specify r1 = 1 in the box under Prescribed value of u.
Right-click the PDE (g) branch in the Model Builder window and
select Flux/Source from the drop-down menu. Select boundary
point 1 in the graphics window, add this point to the box on the Flux/Source
page, and specify g = 0 and q = 1.
(5) Right-click Mesh in the Model Builder window and select Build All.
COMSOL will automatically mesh the domain.
(6) Right-click the Study branch in the Model Builder window and select
Compute. COMSOL will solve the equation and display the solution in the
Graphics window. You can also specify Step 1 / Time Dependent / Time
range (0:0.1:2) to generate the solution for times between 0 and 2. This
time setting means that COMSOL stores the results of the numerical
calculations after every 0.1 time interval. This is not the actual time
increment used in the numerical integration; the latter is typically much
smaller.
(7) At the completion of the calculation, COMSOL displays u as a function
of x at the different time instants specified. Since in our example the
time constant of the system is 1, we expect that at t = 2 the
temperature distribution will be fairly close to the steady state. Compare
the COMSOL solution with the steady-state temperature distribution.
(8) From the Results menu, one can save the computed solution into a data file
by selecting Results/Report and specifying the name of the file under the
Output option in the Settings window. This file can be exported into MATLAB
for further processing of the data. For example, figure 6.4 depicts u as a function
of t at x = 0 and x = 0.5. The symbols and solid lines correspond,
respectively, to the finite element and analytical solutions. As you can
see, the agreement is excellent.
Fig. 6.4. The temperature at x = 0 and x = 0.5 as a function of time. The symbols
and solid lines correspond, respectively, to the finite element and the exact solutions.
We can also implement this problem in COMSOL using the weak formulation.
Our starting point is equation (6.24) with the boundary conditions

−∂u(0, t)/∂x + Bi u(0, t) = 0,   and   u(1, t) = 1.

We require the weighting function w to satisfy w(1) = 0. Thus, substitution of the
boundary conditions into the weak form yields:

∫₀¹ w (∂u/∂t) dx + ∫₀¹ (∂w/∂x)(∂u/∂x) dx + Bi w(0) u(0) = 0.
Now, we are ready to go to COMSOL.
(1) Launch COMSOL Multiphysics 4.0a. Specify 1D in Space Dimension;
on the Add Physics page, expand the Mathematics branch, further
expand the PDE Interfaces branch, and select Weak Form PDE (w);
on the Select Study Type page, select Time Dependent.
(2) Define the geometry as before.
(3) Expand PDE (w) in the Model Builder window.
Select Weak Form PDE and spell out the weak form at the element level:
test(u)*ut+test(ux)*ux
where ut = du/dt, ux = du/dx, test(u) is the weighting function w, and
test(ux) is dw/dx. This takes care of the expression for the weak form
at the element level.
Specify the initial condition: 4*v0/(2*v0-sin(2*v0))*sin(v0*(1-x))+(x+1)/2.
Right-click the PDE (w) branch in the Model Builder window
and select Dirichlet Boundary Condition from the drop-down
menu. Select boundary point 2 in the graphics window, add this point to
the box under Boundaries on the Dirichlet Boundary Condition page,
and specify r1 = 1 in the box under Prescribed value of u.
Right-click the PDE (w) branch in the Model Builder window; under
the boundary condition section, select More/Weak Contribution. In the
Weak Contribution window, add boundary point 1 to the box, and
input the Weak expression: u*test(u), which stands for u(0)w(0).
(4) Now you are ready to solve the problem in its weak formulation. Go
ahead, generate the mesh, and solve the problem.