
Machine Learning

Solving Partial Differential Equations


Finite Difference Method for solving Linear Parabolic Partial Differential Equations

Praveen Singh

Intern, Risk Latte

A partial differential equation (PDE) is an equation involving an unknown function u(x, y, …) of two or more variables and some or all of its partial derivatives. It is basically a relationship between an unknown function u(x, y, …) and its derivatives with respect to the independent variables x, y, …. We often denote derivatives by subscripts; thus, $\partial u / \partial x = u_x$.

The most general first-order PDE in two independent variables is

$$F(x, y, u(x, y), u_x(x, y), u_y(x, y)) = F(x, y, u, u_x, u_y) = 0.$$

A solution of a PDE is a function u(x, y, …) that satisfies the equation identically, at least in some region of the x, y, … variables.
Consider the generic form of a second-order linear partial differential equation in two variables with constant coefficients:

$$A u_{xx} + B u_{xy} + C u_{yy} + D u_x + E u_y + F u = G(x, y).$$

If $B^2 - 4AC > 0$, the equation is called hyperbolic; for example, the wave equation.
If $B^2 - 4AC = 0$, the equation is called parabolic; for example, the heat conduction equation.
If $B^2 - 4AC < 0$, the equation is called elliptic; for example, the Laplace equation.
The parabolic form of a partial differential equation describes a wide family of problems in science, including heat diffusing through a solid, ocean acoustic propagation, and physical or mathematical systems with a time variable.

Risk Latte Americas Inc.


One-Dimensional Heat Equation

Consider a thin bar of length L, of uniform cross-section and constructed of homogeneous material. Suppose the side of the bar is perfectly insulated, so that no heat transfer can occur through it (heat may still move into or out of the bar through its two ends). Thus, the movement of heat inside the bar can occur only in the x-direction. The amount of heat at any place inside the bar, 0 < x < L, and at any time t > 0, is then given by the temperature distribution function u(x, t). It satisfies the homogeneous one-dimensional heat conduction equation:

$$u_t = k u_{xx},$$

where $u(x, t)$ is the temperature at time t and position x, and k is a constant. The symbol $u_t$ denotes the partial derivative with respect to the time variable t, and similarly $u_{xx}$ is the second partial derivative with respect to x. This equation says, roughly, that the temperature at a given time and point rises or falls at a rate proportional to the difference between the temperature at that point and the average temperature near that point. The quantity $u_{xx}$ measures how far the temperature is from satisfying the mean value property of harmonic functions.

A generalization of the heat equation is

$$u_t = -Lu,$$

where L is a second-order elliptic operator (implying that L must be positive; the case where L is non-positive also arises). Such a system can be hidden in an equation of the form

$$\nabla \cdot (a(x) \nabla u(x)) + b(x)^T \nabla u(x) + c\, u(x) = f(x)$$

if the matrix-valued function a(x) has a kernel of dimension 1.
Some finite difference methods used to solve parabolic partial differential equations
1. Euler method
Consider the following problem involving the heat equation with a source term:

$$u_t = \beta u_{xx} + f(x, t), \quad a < x < b, \quad t > 0,$$
$$u(a, t) = g_1(t), \quad u(b, t) = g_2(t), \quad u(x, 0) = u_0(x).$$

Let us seek a numerical solution for u(x, t) at a particular time T > 0, or at certain times in the interval 0 < t < T. The first step is to generate a grid:

$$x_i = a + ih, \quad i = 0, 1, \dots, m, \quad h = \frac{b - a}{m},$$
$$t^k = k \Delta t, \quad k = 0, 1, \dots, n, \quad \Delta t = \frac{T}{n}.$$

It turns out that an arbitrary Δt cannot be used for explicit methods, because of numerical instability concerns. The second step is to approximate the derivatives with finite difference approximations.
Forward Euler method
At a grid point $(x_i, t^k)$, $k > 0$, using the forward FD approximation for $u_t$ and the central FD approximation for $u_{xx}$, we have

$$\frac{u(x_i, t^k + \Delta t) - u(x_i, t^k)}{\Delta t} = \beta\, \frac{u(x_{i-1}, t^k) - 2u(x_i, t^k) + u(x_{i+1}, t^k)}{h^2} + f(x_i, t^k) + T(x_i, t^k).$$

The local truncation error is

$$T(x_i, t^k) = \frac{\beta h^2}{12} u_{xxxx}(x_i, t^k) + \frac{\Delta t}{2} u_{tt}(x_i, t^k) + \cdots,$$

where the dots denote the higher-order terms, so the discretization is $O(h^2 + \Delta t)$: first order in time and second order in space. The finite difference (FD) equation is

$$\frac{U_i^{k+1} - U_i^k}{\Delta t} = \beta\, \frac{U_{i-1}^k - 2U_i^k + U_{i+1}^k}{h^2} + f_i^k,$$

where $f_i^k = f(x_i, t^k)$, with $U_i^k$ again denoting the approximate value of the true solution $u(x_i, t^k)$. When $k = 0$, $U_i^0$ is the initial condition at the grid point $(x_i, 0)$; from the values $U_i^k$ at time level k, the solution of the FD equation at the next time level k+1 is

$$U_i^{k+1} = U_i^k + \Delta t \left( \beta\, \frac{U_{i-1}^k - 2U_i^k + U_{i+1}^k}{h^2} + f_i^k \right), \quad i = 1, 2, \dots, m - 1.$$

The solution of the FD equations is thereby obtained directly from the approximate solution at previous time steps: we successively compute the solution at $t^1$ from the initial condition at $t^0$, and then at $t^2$ using the approximate solution at $t^1$.
Backward Euler method
If the backward FD formula is used for $u_t$ and the central FD approximation for $u_{xx}$ at $(x_i, t^k)$, we get

$$\frac{U_i^k - U_i^{k-1}}{\Delta t} = \beta\, \frac{U_{i-1}^k - 2U_i^k + U_{i+1}^k}{h^2} + f_i^k, \quad k = 1, 2, \dots,$$

which is conventionally re-expressed as

$$\frac{U_i^{k+1} - U_i^k}{\Delta t} = \beta\, \frac{U_{i-1}^{k+1} - 2U_i^{k+1} + U_{i+1}^{k+1}}{h^2} + f_i^{k+1}, \quad k = 0, 1, \dots.$$

The backward Euler method is also consistent, and the discretization error is again $O(h^2 + \Delta t)$. With the backward Euler method we cannot obtain $U_i^{k+1}$ with a few simple algebraic operations, because all of the $U_i^{k+1}$'s are coupled together. Thus, we need to solve the following tridiagonal system of equations in order to get the approximate solution at time level k+1.

$$\begin{bmatrix}
1 + 2\mu & -\mu & & & \\
-\mu & 1 + 2\mu & -\mu & & \\
& \ddots & \ddots & \ddots & \\
& & -\mu & 1 + 2\mu & -\mu \\
& & & -\mu & 1 + 2\mu
\end{bmatrix}
\begin{bmatrix}
U_1^{k+1} \\ U_2^{k+1} \\ \vdots \\ U_{m-2}^{k+1} \\ U_{m-1}^{k+1}
\end{bmatrix}
=
\begin{bmatrix}
U_1^k + \Delta t f_1^{k+1} + \mu g_1^{k+1} \\
U_2^k + \Delta t f_2^{k+1} \\
\vdots \\
U_{m-2}^k + \Delta t f_{m-2}^{k+1} \\
U_{m-1}^k + \Delta t f_{m-1}^{k+1} + \mu g_2^{k+1}
\end{bmatrix}$$

where $\mu = \frac{\beta \Delta t}{h^2}$ and $f_i^{k+1} = f(x_i, t^{k+1})$. Note that we can use $f(x_i, t^k)$ instead of $f(x_i, t^{k+1})$, since the method is first-order accurate in time. Such a numerical method is called implicit, because the solution values at time level k+1 are coupled together. The advantage of the backward Euler method is that it is stable for any choice of Δt.
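The tridiagonal solve above can be sketched as follows. For brevity this illustration assembles a dense matrix and uses a general solver (a real implementation would use a specialized tridiagonal solver); the function name and argument conventions are assumptions:

```python
import numpy as np

def backward_euler_heat(beta, a, b, T, m, n, u0, g1, g2, f):
    """Implicit (backward Euler) scheme: solve a tridiagonal system each step."""
    h = (b - a) / m
    dt = T / n
    mu = beta * dt / h**2
    x = a + h * np.arange(m + 1)
    U = u0(x)
    # (m-1) x (m-1) tridiagonal matrix: 1 + 2*mu on the diagonal, -mu off it
    A = (np.diag((1 + 2 * mu) * np.ones(m - 1))
         + np.diag(-mu * np.ones(m - 2), 1)
         + np.diag(-mu * np.ones(m - 2), -1))
    for k in range(n):
        t1 = (k + 1) * dt
        rhs = U[1:-1] + dt * f(x[1:-1], t1)
        rhs[0] += mu * g1(t1)            # boundary contributions
        rhs[-1] += mu * g2(t1)
        U[1:-1] = np.linalg.solve(A, rhs)
        U[0], U[-1] = g1(t1), g2(t1)
    return x, U
```

Because the scheme is unconditionally stable, Δt here is limited only by accuracy, not by stability.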
2. Method of lines
The method of lines (MOL) is a technique for solving partial differential equations (PDEs) in which all but one dimension is discretized. It is basically the construction or analysis of numerical methods for PDEs that proceeds by first discretizing the spatial derivatives only, leaving the time variable continuous. This leads to a system of ordinary differential equations (ODEs) to which a numerical method for initial-value ODEs can be applied.
Consider a general parabolic equation of the form

$$u_t(x, t) = Lu(x, t) + f(x, t),$$

where L is an elliptic operator. Let $L_h$ be a corresponding FD operator acting on a grid $x_i = a + ih$. We can form a semi-discrete system of ordinary differential equations of the form

$$\frac{dU_i(t)}{dt} = L_h U_i(t) + f_i(t),$$

where $U_i(t) \approx u(x_i, t)$ is the spatial discretization of $u(x, t)$ along the line $x = x_i$, i.e., we only discretize the spatial variable. For example, the heat equation with a source, $u_t = \beta u_{xx} + f$, where $L = \beta\, \partial^2 / \partial x^2$ is represented by $L_h = \beta \delta_{xx}^2$, produces the discretized system of ODEs

$$\frac{dU_1(t)}{dt} = \beta\, \frac{-2U_1(t) + U_2(t)}{h^2} + \beta\, \frac{g_1(t)}{h^2} + f(x_1, t),$$
$$\frac{dU_i(t)}{dt} = \beta\, \frac{U_{i-1}(t) - 2U_i(t) + U_{i+1}(t)}{h^2} + f(x_i, t), \quad i = 2, 3, \dots, m - 2,$$
$$\frac{dU_{m-1}(t)}{dt} = \beta\, \frac{U_{m-2}(t) - 2U_{m-1}(t)}{h^2} + \beta\, \frac{g_2(t)}{h^2} + f(x_{m-1}, t),$$

and the initial condition is

$$U_i(0) = u_0(x_i), \quad i = 1, 2, \dots, m - 1.$$

The MOL is especially useful for nonlinear PDEs of the form $u_t = f\!\left(\frac{\partial}{\partial x}, u, t\right)$.

The resulting ODE system can be solved using standard initial-value solvers (such as those in MATLAB), which may use a variable time-step, variable-order approach with local error control in time. The most important advantage of the MOL approach is that it combines the simplicity of explicit methods with the stability advantage of implicit ones, unless a poor numerical method for the solution of the ODEs is employed. It is also possible to achieve higher-order approximations in the discretization of the spatial derivatives without significant increases in computational complexity.
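The MOL idea can be sketched for the same model problem as above: discretize in space, then hand the semi-discrete system to any ODE integrator. A hand-rolled classical RK4 loop stands in here for library software; the function name and argument conventions are illustrative assumptions:

```python
import numpy as np

def mol_heat(beta, a, b, T, m, n, u0, g1, g2, f):
    """Method of lines: discretize space, then integrate the ODE system with RK4."""
    h = (b - a) / m
    dt = T / n
    x = a + h * np.arange(1, m)          # interior grid points

    def rhs(t, U):
        # semi-discrete system dU/dt = L_h U + f, with boundary values folded in
        Upad = np.concatenate(([g1(t)], U, [g2(t)]))
        return beta * (Upad[:-2] - 2 * Upad[1:-1] + Upad[2:]) / h**2 + f(x, t)

    U = u0(x)
    for k in range(n):                    # classical RK4 time stepping
        t = k * dt
        k1 = rhs(t, U)
        k2 = rhs(t + dt / 2, U + dt / 2 * k1)
        k3 = rhs(t + dt / 2, U + dt / 2 * k2)
        k4 = rhs(t + dt, U + dt * k3)
        U = U + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x, U
```

Swapping the RK4 loop for a stiff (implicit) integrator is what gives MOL the stability advantage mentioned above.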
3. Schmidt method
Consider a rectangular mesh in the x-t plane with spacing h along the x direction and k along the time t direction. Denoting the mesh point $(x, t) = (ih, jk)$ simply by $(i, j)$, we have

$$\frac{\partial u}{\partial t} = \frac{u_{i,j+1} - u_{i,j}}{k}$$

and

$$\frac{\partial^2 u}{\partial x^2} = \frac{u_{i-1,j} - 2u_{i,j} + u_{i+1,j}}{h^2}.$$

Substituting these into the 1-D heat equation $\frac{\partial u}{\partial t} = c^2 \frac{\partial^2 u}{\partial x^2}$, we obtain

$$u_{i,j+1} - u_{i,j} = \frac{k c^2}{h^2} \left[ u_{i-1,j} - 2u_{i,j} + u_{i+1,j} \right]$$

or

$$u_{i,j+1} = \alpha u_{i-1,j} + (1 - 2\alpha) u_{i,j} + \alpha u_{i+1,j}, \tag{1.1}$$

where $\alpha = k c^2 / h^2$ is the mesh ratio parameter. This formula enables us to determine the value of u at the (i, j+1) mesh point in terms of the known function values at the points $x_{i-1}$, $x_i$ and $x_{i+1}$ at the instant $t_j$. It is a relation between function values at the time levels j+1 and j, and is therefore called a two-level formula.

Hence (1.1) is called the Schmidt explicit formula, which is valid only for $0 < \alpha < \frac{1}{2}$.

In particular, when $\alpha = \frac{1}{2}$, equation (1.1) reduces to

$$u_{i,j+1} = \frac{1}{2} \left( u_{i-1,j} + u_{i+1,j} \right),$$

which shows that the value of u at $x_i$ at time $t_{j+1}$ is the mean of the u-values at $x_{i-1}$ and $x_{i+1}$ at time $t_j$. This relation, known as the Bender–Schmidt recurrence relation, gives the values of u at the internal mesh points with the help of the boundary conditions.
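For α = 1/2, the Bender–Schmidt relation is a one-line update. A minimal sketch, assuming fixed boundary values held in the two end entries of the array (the function name is an illustrative choice):

```python
import numpy as np

def bender_schmidt(u, steps):
    """Bender-Schmidt recurrence (alpha = 1/2): each interior value becomes the
    mean of its two neighbours at the previous time level. The first and last
    entries of `u` are treated as fixed boundary values."""
    u = np.asarray(u, dtype=float).copy()
    for _ in range(steps):
        # the right-hand side is evaluated fully before assignment,
        # so all neighbours come from the previous time level
        u[1:-1] = 0.5 * (u[:-2] + u[2:])
    return u
```

For example, starting from [0, 1, 2, 1, 0], one step gives [0, 1, 1, 1, 0].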

4. Crank–Nicolson method


The Crank–Nicolson method is based on the trapezoidal rule, giving second-order convergence in time. It is implicit in time, can be written as an implicit Runge–Kutta method, and is numerically stable. The Crank–Nicolson equation is a combination of the forward Euler method at step n and the backward Euler method at step n + 1.

Consider the 1-D heat equation $\frac{\partial u}{\partial t} = c^2 \frac{\partial^2 u}{\partial x^2}$. Applying a finite difference spatial discretization to the right-hand side, the Crank–Nicolson discretization is:

$$\frac{u_i^{n+1} - u_i^n}{\Delta t} = \frac{\alpha}{2 (\Delta x)^2} \left( (u_{i+1}^{n+1} - 2u_i^{n+1} + u_{i-1}^{n+1}) + (u_{i+1}^n - 2u_i^n + u_{i-1}^n) \right)$$

or, substituting $r = \frac{\alpha \Delta t}{2 (\Delta x)^2}$:

$$-r u_{i+1}^{n+1} + (1 + 2r) u_i^{n+1} - r u_{i-1}^{n+1} = r u_{i+1}^n + (1 - 2r) u_i^n + r u_{i-1}^n,$$

which is a tridiagonal problem, so that $u_i^{n+1}$ may be solved for efficiently, as described for the backward Euler method. In particular, the differential equation of the Black–Scholes option pricing model can be transformed into the heat equation, and thus numerical solutions for option pricing can be obtained with the Crank–Nicolson method.
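A minimal sketch of a Crank–Nicolson step for $u_t = c^2 u_{xx}$ with zero Dirichlet boundaries follows; the dense solve and the function name are illustrative simplifications (a real implementation would use a tridiagonal solver and handle general boundary data):

```python
import numpy as np

def crank_nicolson_heat(c2, a, b, T, m, n, u0):
    """Crank-Nicolson scheme for u_t = c2 * u_xx with zero Dirichlet boundaries."""
    h = (b - a) / m
    dt = T / n
    r = c2 * dt / (2 * h**2)
    x = a + h * np.arange(m + 1)
    U = u0(x)
    I = np.eye(m - 1)
    # second-difference matrix on the interior points
    D = (np.diag(-2 * np.ones(m - 1)) + np.diag(np.ones(m - 2), 1)
         + np.diag(np.ones(m - 2), -1))
    A = I - r * D        # implicit (n+1) side: (1+2r) diagonal, -r off-diagonal
    B = I + r * D        # explicit (n) side:   (1-2r) diagonal, +r off-diagonal
    for _ in range(n):
        U[1:-1] = np.linalg.solve(A, B @ U[1:-1])
        U[0] = U[-1] = 0.0
    return x, U
```

The matrices A and B encode exactly the left- and right-hand sides of the tridiagonal relation above.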
5. Iterative method (Jacobi’s iteration method)

Iterative methods can be applied to solve the finite-difference equations obtained in the preceding sections. In the Crank–Nicolson method for the typical heat equation we have

$$-\alpha u_{i-1}^{j+1} + (2 + 2\alpha) u_i^{j+1} - \alpha u_{i+1}^{j+1} = \alpha u_{i-1}^j + (2 - 2\alpha) u_i^j + \alpha u_{i+1}^j, \quad \text{where } \alpha = \frac{c^2 k}{h^2}.$$

Rearranging,

$$2(1 + \alpha) u_i^{j+1} = \alpha \left[ u_{i+1}^{j+1} + u_{i-1}^{j+1} \right] + (2 - 2\alpha) u_i^j + \alpha u_{i+1}^j + \alpha u_{i-1}^j$$
$$= \alpha \left[ u_{i+1}^{j+1} + u_{i-1}^{j+1} \right] + \alpha \left[ u_{i+1}^j + u_{i-1}^j - 2 u_i^j \right] + 2 u_i^j,$$

so that, for the implicit scheme, we have

$$(1 + \alpha) u_i^{j+1} = \frac{\alpha}{2} \left[ u_{i-1}^{j+1} + u_{i+1}^{j+1} \right] + u_i^j + \frac{\alpha}{2} \left[ u_{i-1}^j - 2 u_i^j + u_{i+1}^j \right].$$

Here only $u_i^{j+1}$, $u_{i-1}^{j+1}$ and $u_{i+1}^{j+1}$ are unknown, while all the others are known, since they were already computed at the jth step.

Denote $b_i = u_i^j + \frac{\alpha}{2} \left( u_{i-1}^j - 2 u_i^j + u_{i+1}^j \right)$; then

$$u_i^{j+1} = \frac{\alpha}{2(1 + \alpha)} \left( u_{i-1}^{j+1} + u_{i+1}^{j+1} \right) + \frac{b_i}{1 + \alpha}.$$

This gives the iteration formula

$$u_i^{(s+1)} = \frac{\alpha}{2(1 + \alpha)} \left( u_{i-1}^{(s)} + u_{i+1}^{(s)} \right) + \frac{b_i}{1 + \alpha},$$

which expresses the (s+1)th iterates in terms of the sth iterates only. This is known as the Jacobi iteration formula. We iterate until convergence is obtained.
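One Crank–Nicolson time step solved by this Jacobi iteration can be sketched as follows; the function name, the convergence tolerance, and the fixed-boundary convention are assumptions made for the illustration:

```python
import numpy as np

def jacobi_cn_step(u, alpha, tol=1e-10, max_iter=500):
    """One Crank-Nicolson time step for u_t = c^2 u_xx, solved by Jacobi
    iteration. `u` holds time level j (end entries are fixed boundary values);
    returns the approximation at time level j+1."""
    u = np.asarray(u, dtype=float)
    # b_i = u_i^j + (alpha/2)(u_{i-1}^j - 2 u_i^j + u_{i+1}^j)
    b = u[1:-1] + 0.5 * alpha * (u[:-2] - 2 * u[1:-1] + u[2:])
    v = u.copy()                      # initial guess: the previous time level
    for _ in range(max_iter):
        v_new = v.copy()
        # Jacobi sweep: new iterate from old-iterate neighbours only
        v_new[1:-1] = (0.5 * alpha * (v[:-2] + v[2:]) + b) / (1 + alpha)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v
```

On return, v satisfies $(1+\alpha)v_i - \frac{\alpha}{2}(v_{i-1} + v_{i+1}) = b_i$ to within the tolerance.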
6. Alternating Direction Implicit (ADI) method
The ADI method is a time-splitting or fractional-step method. The idea is to use an implicit discretization in one direction and an explicit discretization in the other. For the heat equation

$$u_t = u_{xx} + u_{yy} + f(x, y, t),$$

the ADI method is
$$\frac{U_{i,j}^{k+1/2} - U_{i,j}^k}{\Delta t / 2} = \frac{U_{i-1,j}^{k+1/2} - 2U_{i,j}^{k+1/2} + U_{i+1,j}^{k+1/2}}{h_x^2} + \frac{U_{i,j-1}^k - 2U_{i,j}^k + U_{i,j+1}^k}{h_y^2} + f_{i,j}^{k+1/2},$$

$$\frac{U_{i,j}^{k+1} - U_{i,j}^{k+1/2}}{\Delta t / 2} = \frac{U_{i-1,j}^{k+1/2} - 2U_{i,j}^{k+1/2} + U_{i+1,j}^{k+1/2}}{h_x^2} + \frac{U_{i,j-1}^{k+1} - 2U_{i,j}^{k+1} + U_{i,j+1}^{k+1}}{h_y^2} + f_{i,j}^{k+1/2},$$

which is second order in time and in space if $u(x, y, t) \in C^4$. It is unconditionally stable for linear problems. We can use symbolic expressions to discuss the method, rewritten as

$$U_{i,j}^{k+1/2} = U_{i,j}^k + \frac{\Delta t}{2} \delta_{xx}^2 U_{i,j}^{k+1/2} + \frac{\Delta t}{2} \delta_{yy}^2 U_{i,j}^k + \frac{\Delta t}{2} f_{i,j}^{k+1/2},$$
$$U_{i,j}^{k+1} = U_{i,j}^{k+1/2} + \frac{\Delta t}{2} \delta_{xx}^2 U_{i,j}^{k+1/2} + \frac{\Delta t}{2} \delta_{yy}^2 U_{i,j}^{k+1} + \frac{\Delta t}{2} f_{i,j}^{k+1/2}.$$

The key idea of the ADI method is to apply the implicit discretization dimension by dimension, taking advantage of fast tridiagonal solvers for the first (x-implicit) equation above.
For fixed j, we get a tridiagonal system of equations for $U_{1,j}^{k+1/2}, U_{2,j}^{k+1/2}, \dots, U_{m-1,j}^{k+1/2}$, assuming a Dirichlet boundary condition at x = a and x = b. The system of equations in matrix–vector form is


$$\begin{bmatrix}
1 + 2\mu & -\mu & & & \\
-\mu & 1 + 2\mu & -\mu & & \\
& \ddots & \ddots & \ddots & \\
& & -\mu & 1 + 2\mu & -\mu \\
& & & -\mu & 1 + 2\mu
\end{bmatrix}
\begin{bmatrix}
U_{1,j}^{k+1/2} \\ U_{2,j}^{k+1/2} \\ \vdots \\ U_{m-2,j}^{k+1/2} \\ U_{m-1,j}^{k+1/2}
\end{bmatrix}
= \hat{F}$$

where

$$\hat{F} = \begin{bmatrix}
U_{1,j}^k + \frac{\Delta t}{2} f_{1,j}^{k+1/2} + \mu\, u_{bc}(a, y_j)^{k+1/2} + \mu \left( U_{1,j-1}^k - 2U_{1,j}^k + U_{1,j+1}^k \right) \\
U_{2,j}^k + \frac{\Delta t}{2} f_{2,j}^{k+1/2} + \mu \left( U_{2,j-1}^k - 2U_{2,j}^k + U_{2,j+1}^k \right) \\
U_{3,j}^k + \frac{\Delta t}{2} f_{3,j}^{k+1/2} + \mu \left( U_{3,j-1}^k - 2U_{3,j}^k + U_{3,j+1}^k \right) \\
\vdots \\
U_{m-2,j}^k + \frac{\Delta t}{2} f_{m-2,j}^{k+1/2} + \mu \left( U_{m-2,j-1}^k - 2U_{m-2,j}^k + U_{m-2,j+1}^k \right) \\
U_{m-1,j}^k + \frac{\Delta t}{2} f_{m-1,j}^{k+1/2} + \mu \left( U_{m-1,j-1}^k - 2U_{m-1,j}^k + U_{m-1,j+1}^k \right) + \mu\, u_{bc}(b, y_j)^{k+1/2}
\end{bmatrix}$$

with $\mu = \frac{\beta \Delta t}{2h^2}$ and $f_{i,j}^{k+1/2} = f(x_i, y_j, t^{k+1/2})$. For each j, we solve a symmetric tridiagonal system of equations.
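The two sweeps above can be sketched as one possible (Peaceman–Rachford) implementation for $u_t = u_{xx} + u_{yy} + f$ on a square with Dirichlet data; the function names and the square-domain assumption are illustrative, and the tridiagonal systems are solved with a dense solver for brevity:

```python
import numpy as np

def adi_heat_2d(a, b, T, m, n, u0, ubc, f):
    """ADI scheme for u_t = u_xx + u_yy + f on [a,b] x [a,b] (beta = 1),
    with Dirichlet boundary data ubc(x, y, t) and initial profile u0(x, y)."""
    h = (b - a) / m
    dt = T / n
    mu = dt / (2 * h**2)
    x = a + h * np.arange(m + 1)
    y = x.copy()                        # square domain, same grid in x and y
    X, Y = np.meshgrid(x, y, indexing="ij")
    U = u0(X, Y)
    # tridiagonal (I - mu * delta^2) matrix shared by both implicit sweeps
    A = (np.diag((1 + 2 * mu) * np.ones(m - 1))
         + np.diag(-mu * np.ones(m - 2), 1) + np.diag(-mu * np.ones(m - 2), -1))
    for k in range(n):
        th, t1 = (k + 0.5) * dt, (k + 1) * dt
        F = f(X, Y, th)                 # source evaluated at the half step
        # --- x sweep: implicit in x, explicit in y ---
        Uh = U.copy()
        Uh[0, :], Uh[-1, :] = ubc(x[0], y, th), ubc(x[-1], y, th)
        Uh[:, 0], Uh[:, -1] = ubc(x, y[0], th), ubc(x, y[-1], th)
        for j in range(1, m):
            rhs = (U[1:-1, j] + mu * (U[1:-1, j-1] - 2*U[1:-1, j] + U[1:-1, j+1])
                   + 0.5 * dt * F[1:-1, j])
            rhs[0] += mu * Uh[0, j]     # boundary contributions at x = a, b
            rhs[-1] += mu * Uh[-1, j]
            Uh[1:-1, j] = np.linalg.solve(A, rhs)
        # --- y sweep: implicit in y, explicit in x ---
        Un = U.copy()
        Un[0, :], Un[-1, :] = ubc(x[0], y, t1), ubc(x[-1], y, t1)
        Un[:, 0], Un[:, -1] = ubc(x, y[0], t1), ubc(x, y[-1], t1)
        for i in range(1, m):
            rhs = (Uh[i, 1:-1] + mu * (Uh[i-1, 1:-1] - 2*Uh[i, 1:-1] + Uh[i+1, 1:-1])
                   + 0.5 * dt * F[i, 1:-1])
            rhs[0] += mu * Un[i, 0]     # boundary contributions at y = a, b
            rhs[-1] += mu * Un[i, -1]
            Un[i, 1:-1] = np.linalg.solve(A, rhs)
        U = Un
    return x, y, U
```

Each time step thus costs only a sequence of one-dimensional tridiagonal solves, which is the practical payoff of the ADI splitting.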
