p_{tr}(x_3, t_3 \mid x_1, t_1) = \int p_{tr}(x_3, t_3 \mid x_2, t_2)\, p_{tr}(x_2, t_2 \mid x_1, t_1)\, dx_2 \qquad (2)
The above equation is very general in scope, and not of particular practical significance.
For a special class of homogeneous Markov processes known as Diffusion processes, Eq.2
can be reduced to a partial differential equation in the probability density function of the
random variable. This is the well known Fokker-Planck equation (also known as the forward
Kolmogorov equation), given by:
\frac{\partial J(x,t)}{\partial t} = \left[ -\sum_{i=1}^{N} \frac{\partial}{\partial x_i} D^{(1)}_i(x,t) + \sum_{i=1}^{N}\sum_{j=1}^{N} \frac{\partial^2}{\partial x_i \partial x_j} D^{(2)}_{ij}(x,t) \right] J(x,t) \qquad (3)

or, equivalently,

\frac{\partial J(x,t)}{\partial t} = L_{FP}(J(x,t))
where L
FP
() is the so called Fokker-Planck operator. Notice that the operator is linear. In
Eq.3, J(x, t) is the joint probability density function of the N-dimensional system state, x,
at time t. D
(1)
is known as the Drift Coecient and D
(2)
is called the Diusion Coecient.
The governing dynamics of the corresponding stochastic (nonlinear) system is given by the
following equation and initial conditions:
\dot{x} = f(x,t) + g(x,t)\Gamma(t), \qquad x(t_0) = x_0 \qquad (4)

J(x, t = t_0) = J(x, t_0) \qquad (5)
The above represents a general dynamic system of type S2, with \delta-correlated random excitation \Gamma(t) (called the Langevin force), with the correlation function E[\Gamma_i(t_1)\Gamma_j(t_2)] = Q_{ij}\,\delta(t_1 - t_2). The initial probability distribution is described by the density function
J(x, t_0). The drift and diffusion coefficients in Eq.3 are related to the system dynamics
in the following manner:
D^{(1)}(x,t) = f(x,t) + \frac{1}{2} \frac{\partial g(x,t)}{\partial x}\, Q\, g(x,t) \qquad (6)

D^{(2)}(x,t) = \frac{1}{2}\, g(x,t)\, Q\, g^{T}(x,t) \qquad (7)
Or, in indicial notation,
D^{(1)}_i(x,t) = f_i(x,t) + \frac{1}{2} \frac{\partial g_{ij}(x,t)}{\partial x_k}\, Q_{jl}\, g_{kl}(x,t) \qquad (8)

D^{(2)}_{ij}(x,t) = \frac{1}{2}\, g_{ik}(x,t)\, Q_{kl}\, g_{jl}(x,t) \qquad (9)

with summation over repeated indices implied.
It is possible to write a more general form of Eq.3; for instance, by including higher order
derivatives. This leads us to the Kramers-Moyal expansion of the Fokker-Planck equation:
\frac{\partial J(x,t)}{\partial t} = \sum_{i=1}^{\infty} \left( -\frac{\partial}{\partial x} \right)^{i} \left[ D^{(i)}(x,t)\, J(x,t) \right] \qquad (10)
However, when the system dynamics follows Eq.4, with a Gaussian \delta-correlated Langevin
forcing term, all the coefficients D^{(i)}, i \geq 3, vanish and Eq.3 is obtained. This simplification
does not take place for discrete random variables and non-Markovian processes. For completeness, we note that the Kramers-Moyal expansion coefficients, D^{(k)}(x,t), are given by the
following relation:
D^{(k)}(x,t) = \frac{1}{k!} \lim_{\tau \to 0} \frac{1}{\tau} \left\langle [\xi(t+\tau) - x]^k \right\rangle \Big|_{\xi(t)=x} \qquad (11)
It is worth noting what has been illustrated through the above developments. We have
replaced an awkward stochastic differential equation (Eq.4) with a completely deterministic
partial differential equation (Eq.3) with deterministic initial conditions (Eq.5). In other
words, we have circumvented the problem of solving the equation of motion of a random
variable x, by instead solving the equation of motion of the probability density, J(x, t), of
the random variable. Furthermore, while Eq.4 is in general nonlinear, the revised problem to
solve, i.e. Eq.3, is always a linear equation. A brief derivation of this remarkable equation is
in order. We shall follow the developments outlined by Ref:Fuller(1969), for general nonlinear
N-dimensional systems. Considering only white noise excitation, let us write the i-th equation
of the governing dynamics in Eq.4 as follows:
\frac{dx_i}{dt} = f_i(x_1, x_2, \ldots, x_N, t) + g_i\, \Gamma_i(t)\,; \qquad i = 1, 2, \ldots, N \qquad (12)
By writing the integral form of the above state equations, it can be shown that x(t) is a
Markov process (i.e. any future state of x depends only on its current state and increments of
the white noise forcing term). From the discussion above on Markov processes, the Chapman-Kolmogorov equation can be applied on x:
p_{tr}(x, t+\Delta t \mid x_0, t_0) = \int p_{tr}(x, t+\Delta t \mid x', t)\, p_{tr}(x', t \mid x_0, t_0)\, dx' \qquad (13)
Fuller drops the initial time t_0 out of the above equation for simplicity, leading to:
p(x, t+\Delta t) = \int p_{tr}(x, t+\Delta t \mid x', t)\, p(x', t)\, dx' \qquad (14)
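Eq.14 can be checked numerically whenever the transition density is known in closed form. The sketch below (an illustration added here, with an assumed Ornstein-Uhlenbeck test system and arbitrary parameters) composes the Gaussian transition kernel over two half-steps by quadrature and compares it with the direct one-step kernel:

```python
import numpy as np

# Chapman-Kolmogorov / Eq. 14 check for the Ornstein-Uhlenbeck process
#   dx/dt = -a*x + g*Gamma(t),  E[Gamma(t1)Gamma(t2)] = Q*delta(t1-t2),
# whose transition kernel is Gaussian with known mean and variance.
a, g, Q = 1.0, 1.0, 2.0

def kernel(x, xp, dt):
    m = xp * np.exp(-a * dt)                          # conditional mean
    v = g**2 * Q / (2 * a) * (1 - np.exp(-2 * a * dt))  # conditional variance
    return np.exp(-(x - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

x0, dt = 0.7, 0.3
xg = np.linspace(-8, 8, 2001)          # quadrature grid for the x' integral
dxg = xg[1] - xg[0]
direct = kernel(xg, x0, 2 * dt)        # one transition of length 2*dt
inner = kernel(xg[:, None], xg[None, :], dt)   # rows: x, columns: x'
composed = (inner * kernel(xg, x0, dt)).sum(axis=1) * dxg
err = np.max(np.abs(composed - direct))
print(err)
```

The composed and direct kernels agree to quadrature accuracy, which is the content of the Chapman-Kolmogorov identity.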
Now, assuming the process to be homogeneous, the transition probability p_{tr}(x, t+\Delta t \mid x', t)
can be replaced with q(z, \Delta t \mid x - z, t), where z is the transition vector x - x'. Eq.14 reduces
to:
p(x, t+\Delta t) = \int \cdots \int q(z, \Delta t \mid x - z, t)\, p(x - z, t)\, dz_1 \ldots dz_N \qquad (15)

Denoting the integrand by r, with all arguments except the state suppressed,

r(x; t, z, \Delta t) = q(z, \Delta t \mid x, t)\, p(x, t) \qquad (16)

a Taylor series expansion of r(x - z) about x gives:

r(x - z) = r(x) - \left( z_1 \frac{\partial}{\partial x_1} + \ldots + z_N \frac{\partial}{\partial x_N} \right) r(x) + \frac{1}{2!} \left( z_1 \frac{\partial}{\partial x_1} + \ldots + z_N \frac{\partial}{\partial x_N} \right)^2 r(x) - \ldots \qquad (17)
Hence, Eq.15 becomes:
p(x, t+\Delta t) = \int \cdots \int r(x)\, dz_1 \ldots dz_N - \sum_{i=1}^{N} \int \cdots \int z_i\, \frac{\partial r(x)}{\partial x_i}\, dz_1 \ldots dz_N + \frac{1}{2!} \sum_{i=1}^{N}\sum_{j=1}^{N} \int \cdots \int z_i z_j\, \frac{\partial^2 r(x)}{\partial x_i \partial x_j}\, dz_1 \ldots dz_N - \ldots \qquad (18)
Now interchanging the order of differentiations and integrations, we obtain:
p(x, t+\Delta t) = \int \cdots \int r(x)\, dz_1 \ldots dz_N - \sum_{i=1}^{N} \frac{\partial}{\partial x_i} \int \cdots \int z_i\, r(x)\, dz_1 \ldots dz_N + \frac{1}{2!} \sum_{i=1}^{N}\sum_{j=1}^{N} \frac{\partial^2}{\partial x_i \partial x_j} \int \cdots \int z_i z_j\, r(x)\, dz_1 \ldots dz_N - \ldots \qquad (19)
We now replace r(x) in Eq.19 using Eq.16; i.e. substitute r(x; t, z, \Delta t) = q(z, \Delta t \mid x, t)\, p(x, t):
p(x, t+\Delta t) = p(x,t) \int \cdots \int q(z, \Delta t \mid x, t)\, dz_1 \ldots dz_N - \sum_{i=1}^{N} \frac{\partial}{\partial x_i} \left[ p(x,t) \int \cdots \int z_i\, q(z, \Delta t \mid x, t)\, dz_1 \ldots dz_N \right] + \frac{1}{2!} \sum_{i=1}^{N}\sum_{j=1}^{N} \frac{\partial^2}{\partial x_i \partial x_j} \left[ p(x,t) \int \cdots \int z_i z_j\, q(z, \Delta t \mid x, t)\, dz_1 \ldots dz_N \right] - \ldots \qquad (20)
Notice that p(x, t) comes outside the integration, which is over the differential increment dz,
because the state x(t) is independent of the increment at t. We now identify the integrals
in Eq.20 as various moments of the transition probability density function q(z, \Delta t \mid x, t). The
first integral is simply unity, because it is the probability mass under q. The second term
is the summation of the first moments \bar{z}_i = \bar{z}_i(\Delta t, x, t), the third term is the summation of
the second moments \overline{z_i z_j}(\Delta t, x, t), and so on. We now assume that the first two moments of z
are much greater than all its higher moments, because the probability of z assuming a large
value is very small (it is an increment over \Delta t). Rearranging terms, and expanding the left
hand side using a first order Taylor series expansion about t:
\frac{\partial p(x,t)}{\partial t} \Delta t = -\sum_{i=1}^{N} \frac{\partial}{\partial x_i} \left[ p(x,t)\, \bar{z}_i(\Delta t, x, t) \right] + \frac{1}{2!} \sum_{i=1}^{N}\sum_{j=1}^{N} \frac{\partial^2}{\partial x_i \partial x_j} \left[ p(x,t)\, \overline{z_i z_j}(\Delta t, x, t) \right] \qquad (21)
As the final step, we take the limit \Delta t \to 0, and postulate that the following limiting values
are obtained:

\lim_{\Delta t \to 0} \frac{\bar{z}_i(\Delta t, x, t)}{\Delta t} = D_i(x,t) \qquad (22)

\lim_{\Delta t \to 0} \frac{\overline{z_i z_j}(\Delta t, x, t)}{\Delta t} = D_{ij}(x,t) \qquad (23)
We have the Kolmogorov version of the Fokker-Planck equation (equivalent to Eq.3):

\frac{\partial p(x,t)}{\partial t} = -\sum_{i=1}^{N} \frac{\partial}{\partial x_i} \left( D_i\, p(x,t) \right) + \frac{1}{2!} \sum_{i=1}^{N}\sum_{j=1}^{N} \frac{\partial^2}{\partial x_i \partial x_j} \left( D_{ij}\, p(x,t) \right) \qquad (24)
Reduction of drift and diffusion coefficients to system dynamics coming soon...
For systems of type S1, the Langevin force term is zero, i.e. g(x, t) = 0, and the uncertainty
in initial conditions brings about the stochastic nature of the problem. In this special case,
the Fokker-Planck equation reduces to the so called Liouville equation, given by:
\frac{\partial J(x,t)}{\partial t} = -\left[ \sum_{i=1}^{N} \frac{\partial}{\partial x_i} D^{(1)}_i(x,t) \right] J(x,t)\,; \qquad J(x, t = t_0) = J(x, t_0) \qquad (25)
An example of an S1 system is the error propagation problem in celestial mechanics. In this
system, the governing dynamics consists exclusively of rather clean gravitational forces,
which are very well understood. The source of randomness is the uncertainty in the initial
state of the object, due to the limited accuracy of measurement devices.
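The situation can be sketched numerically (an added illustration, with a linear stand-in for the "clean" dynamics and arbitrary numbers): for \dot{x} = Ax, the Liouville equation transports an initially Gaussian density exactly, so the covariance obeys \Sigma(t) = \Phi(t)\Sigma_0\Phi(t)^T, which can be compared against a propagated ensemble of deterministic trajectories:

```python
import numpy as np

# S1-type propagation: deterministic linear dynamics, random initial state.
rng = np.random.default_rng(1)
A = np.array([[0.0, 1.0], [-4.0, -0.4]])   # lightly damped oscillator
mu0 = np.array([1.0, 0.0])
S0 = np.diag([0.04, 0.09])                  # initial-condition covariance

dt, steps, n = 1e-2, 300, 200_000
I = np.eye(2)
# 4th-order Taylor transition matrix over one step (accurate for this dt)
Phi = I + A*dt + A@A*dt**2/2 + A@A@A*dt**3/6 + A@A@A@A*dt**4/24

X = rng.multivariate_normal(mu0, S0, size=n)   # ensemble of initial states
S = S0.copy()
for _ in range(steps):
    X = X @ Phi.T                  # every ensemble member, deterministically
    S = Phi @ S @ Phi.T            # covariance transport implied by Liouville

S_emp = np.cov(X.T)
print(np.max(np.abs(S_emp - S)))
```

The ensemble covariance matches the transported covariance to sampling accuracy; all the spread comes from the initial condition, none from the dynamics.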
Connections with Linear Stochastic Dynamics
Stochastic dynamics of linear systems can be rigorously extracted from the Fokker-Planck
equation as a special case. Linearized propagation of system covariance finds widespread
application in filtering theory and error propagation. Even for systems with nonlinear governing dynamics, it is a routine procedure to apply Gaussian closure and approximate the
probability density function with a relatively small number of parameters, which are usually
the elements of the mean vector and covariance matrix. Such approximation is a major drawback, especially for systems with a high degree of nonlinearity. Even for moderately nonlinear
systems, these approximations break down after sufficiently long durations of propagation,
because of error accumulation from the dropped second and higher order terms. This is
especially relevant in error propagation into the future, because repeated measurements are
not available to update the current mean and covariance predictions. An example is the
problem of propagation of the state PDF of an asteroid several years into the future for the
purpose of prediction of its probability of collision with Earth.
In filtering theory, the extended Kalman filter has however performed well despite these obvious shortcomings, because of the availability of measurements to correct the error resulting
from linearized propagation. The reliability of such an approach is not guaranteed, and divergence issues have led to the development of alternate nonlinear techniques like the Unscented
Kalman filter.
In this section, we show that the equations of propagation of mean and covariance of linearized systems can be obtained by the application of the Fokker-Planck equation to the basic
definition of these quantities. For simplicity, let us first consider a one-dimensional
linear system. These results can be extended to multidimensional systems:
\dot{x} = ax + g(t)\Gamma(t)\,; \qquad E[\Gamma(t_1)\Gamma(t_2)] = Q\,\delta(t_2 - t_1) \qquad (26)

J(x, t_0) = N(\mu_0, \sigma_0)
We have the following definitions:

\mu(t) = E[x] = \int x\, J(x,t)\, dx \qquad (27)

\sigma(t) = E[(x - \mu(t))^2] = \int (x - \mu(t))^2\, J(x,t)\, dx \qquad (28)
In the equations to follow, it is to be understood that J = J(x, t), and for linear dynamics,
J = N(x; \mu(t), \sigma(t)). Taking the time derivative of the above relations, and using Eq.3:

\dot{\mu}(t) = \int x\, \frac{\partial J}{\partial t}\, dx = \int x\, L_{FP}(J)\, dx \qquad (29)
\dot{\sigma}(t) = -\int 2(x-\mu)\dot{\mu}\, J\, dx + \int (x-\mu)^2\, \frac{\partial J}{\partial t}\, dx = \int \left[ -2(x-\mu)\dot{\mu} + (x-\mu)^2\, L_{FP}(\cdot) \right] J\, dx \qquad (30)
Let us expand Eq.29. For the scalar linear system, L_{FP}(J) = -\partial(axJ)/\partial x + (g^2 Q/2)\, \partial^2 J/\partial x^2, so that

\dot{\mu}(t) = \int x \left[ -\frac{\partial (axJ)}{\partial x} + \frac{g^2 Q}{2} \frac{\partial^2 J}{\partial x^2} \right] dx

Integrating by parts, with J and its derivatives vanishing as |x| \to \infty, the drift term yields a \int x J\, dx = a\mu(t), while the diffusion term integrates to zero. Hence

\dot{\mu}(t) = a\mu(t)
We follow a similar development for the variance, \sigma(t), starting from Eq.30. The first integral vanishes, since \int (x-\mu)\, J\, dx = 0. In the second, integration by parts gives

\int (x-\mu)^2 \left( -\frac{\partial (axJ)}{\partial x} \right) dx = 2a \int (x-\mu)\, x\, J\, dx = 2a\sigma(t)

\frac{g^2 Q}{2} \int (x-\mu)^2\, \frac{\partial^2 J}{\partial x^2}\, dx = g^2 Q \int J\, dx = g^2 Q

so that

\dot{\sigma}(t) = 2a\sigma(t) + g^2 Q

which is the Riccati equation for the linear scalar system.
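The pair of moment equations just derived can be sanity-checked by direct integration against their closed-form solutions; the stable choice a < 0 and the step size below are arbitrary:

```python
from math import exp

# Integrate mu_dot = a*mu and sigma_dot = 2*a*sigma + g^2*Q by forward Euler
# and compare with the closed-form solutions.
a, g, Q = -1.5, 1.0, 2.0
mu, sig = 2.0, 0.3
dt, T = 1e-4, 4.0
for _ in range(int(T / dt)):
    mu += dt * a * mu
    sig += dt * (2 * a * sig + g**2 * Q)

c = g**2 * Q / (2 * a)                       # = -2/3 for these parameters
mu_exact = 2.0 * exp(a * T)                  # mu0 * e^{a t}
sig_exact = (0.3 + c) * exp(2 * a * T) - c   # solution of the Riccati equation
print(mu, mu_exact, sig, sig_exact)
```

For a < 0 the variance settles at the stationary value -g^2 Q/(2a), consistent with the stationary density of the scalar linear system.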
Global Weak Form Formulation of the Fokker-Planck Equation
Consider the 2-dimensional nonlinear dynamic system:

\ddot{x} + f(x, \dot{x}) = g(t)\Gamma(t) \qquad (31)

where \Gamma(t) is a zero-mean white noise process with strength Q. The corresponding Fokker-Planck equation for the probability density function is:

\frac{\partial J}{\partial t} + \dot{x}\, \frac{\partial J}{\partial x} - f\, \frac{\partial J}{\partial \dot{x}} - \frac{\partial f}{\partial \dot{x}}\, J - \frac{g^2 Q}{2} \frac{\partial^2 J}{\partial \dot{x}^2} = 0 \qquad (32)
We seek a solution for J(x, \dot{x}, t). Besides satisfying Eq.32, the obtained solution should also
fulfill the following constraints, so that it may be a valid probability density function:

1. Positivity: J(x, t) > 0, \forall x, t.
2. Normality: \int J(x, t)\, dx = 1.
The second constraint can be satisfied by appropriate normalization of the obtained solution
as a post-processing operation. In order to enforce the first constraint, we transform Eq.32
such that J is replaced with \log(J). Letting J = e^{\phi}:

\frac{\partial J}{\partial x} = J\, \frac{\partial \phi}{\partial x} \qquad (33)

\frac{\partial^2 J}{\partial x^2} = J \left[ \left( \frac{\partial \phi}{\partial x} \right)^2 + \frac{\partial^2 \phi}{\partial x^2} \right] \qquad (34)
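The identities in Eqs.33-34 are elementary, but a finite-difference check is a cheap guard against sign errors when implementing the log-transformed equation; the sample density below is an arbitrary choice:

```python
import numpy as np

# Verify d J/dx = J*phi_x and d2J/dx2 = J*(phi_x^2 + phi_xx) for J = e^phi.
x = np.linspace(-2, 2, 4001)
h = x[1] - x[0]
phi = -x**4 / 4            # an arbitrary smooth log-density
J = np.exp(phi)

Jx = np.gradient(J, h)     # central finite differences
Jxx = np.gradient(Jx, h)
phix = np.gradient(phi, h)
phixx = np.gradient(phix, h)

err1 = np.max(np.abs(Jx - J * phix)[10:-10])               # Eq. 33
err2 = np.max(np.abs(Jxx - J * (phix**2 + phixx))[10:-10])  # Eq. 34
print(err1, err2)
```

Both residuals vanish to finite-difference accuracy, confirming the transformation used to obtain the log-pdf equation below.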
An important observation here is that in thus enforcing the positivity constraint, we end
up trading a linear equation for a nonlinear one (Eq.34). Since the approach to solution in
either case is numerical, we shall keep our fingers crossed. The following modified form of
the Fokker-Planck equation, for the log-pdf (\phi), results:
\frac{\partial \phi}{\partial t} + \dot{x}\, \frac{\partial \phi}{\partial x} - f\, \frac{\partial \phi}{\partial \dot{x}} - \frac{g^2 Q}{2} \frac{\partial^2 \phi}{\partial \dot{x}^2} - \frac{g^2 Q}{2} \left( \frac{\partial \phi}{\partial \dot{x}} \right)^2 = \frac{\partial f}{\partial \dot{x}} \qquad (35)
Or,

\left[ \frac{\partial}{\partial t} - L^{\log}_{FP} \right] \phi = \frac{\partial f}{\partial \dot{x}} \qquad (36)
The general idea in the global weak form formulation is to approximate the solution for the
(log-)pdf with a series sum of N basis functions, leading to an N-th order non-Gaussian closure:
\phi = \sum_{i=1}^{N} \alpha_i(t)\, \psi_i(x, b) = \alpha^T(t)\, \psi(x, b) \qquad (37)
where b is a vector of time dependent parameters, usually used for desired scaling of the
basis functions. The basis functions may be a set of polynomials, radial basis functions or
some other suitable type. Substitution of Eq.37 into Eq.35 leads to the following expression
for the residual error:
R(x, t) = \sum_{i=1}^{N} \dot{\alpha}_i(t)\, \psi_i(x, b) + \sum_{i=1}^{N} \alpha_i(t) \sum_{l=1}^{L} \frac{\partial \psi_i(x, b)}{\partial b_l} \frac{db_l}{dt} + \dot{x} \sum_{i=1}^{N} \alpha_i(t)\, \frac{\partial \psi_i(x, b)}{\partial x} - f \sum_{i=1}^{N} \alpha_i(t)\, \frac{\partial \psi_i(x, b)}{\partial \dot{x}} - \frac{g^2 Q}{2} \sum_{i=1}^{N} \alpha_i(t)\, \frac{\partial^2 \psi_i(x, b)}{\partial \dot{x}^2} - \frac{g^2 Q}{2} \sum_{i=1}^{N}\sum_{j=1}^{N} \alpha_i(t)\alpha_j(t)\, \frac{\partial \psi_i(x, b)}{\partial \dot{x}} \frac{\partial \psi_j(x, b)}{\partial \dot{x}} - \frac{\partial f}{\partial \dot{x}} \qquad (38)
The residual error, R(x, t), is projected onto a space spanned by N discretely chosen independent weight functions (test functions), w_p(x, b):

\int_{\Omega} R(x, b, t)\, w_p(x, b)\, d\Omega = 0\,; \qquad p = 1, 2, \ldots, N \qquad (39)
The weight functions represent the space over which the projection of the residual error is
minimized. It is therefore important to select these functions such that they completely span
the regions of significance in the solution space, i.e., they give the requisite weightage to
all the regions over which the solution is active. This may require some heuristics for the
determination of such important regions from the point of view of obtaining the solution.
The approach wherein the weight functions are the same as the basis functions is called the
Galerkin global weak form method.
Systems with Polynomial Nonlinearity
For a large class of nonlinear systems, the function f(x, x) in Eq.31 can be modelled with a
polynomial function. The various nonlinear oscillators, like the Dung oscillator, the Van-der
Pol oscillator, the Rayleigh oscillator etc. are of this type. A set of independent polynomials
is a good choice for the basis functions for these systems. Furthermore, if the weight functions
are taken to be the same as the basis functions, multiplied with a carefully chosen gaussian
pdf, it can be shown that all the integrals involved in Eq.39 can be computed rather easily.
In particular, if we choose hermite polynomials to approximate the log-pdf (Eq.37), all the
integrals in Eq.39 reduce to moments of various order of the gaussian pdf used in the weight
functions. All these moments can be evaluated analytically. We have:
\phi(x, \dot{x}, t) = \sum_{i=0}^{N} \sum_{j=0}^{M} \alpha_{ij}(t)\, H_i(\bar{x})\, H_j(\bar{\dot{x}}) \qquad (40)

w_{pq}(x, \dot{x}) = H_p(\bar{x})\, H_q(\bar{\dot{x}})\, e^{-\bar{x}^2} e^{-\bar{\dot{x}}^2} \qquad (41)
where \bar{x} is shorthand for (x - \mu_x)/\sigma_x, and \bar{\dot{x}} for (\dot{x} - \mu_{\dot{x}})/\sigma_{\dot{x}}. The coefficients of the basis functions,
\alpha_{ij}(t), are the unknowns to be solved for. The time varying parameters \mu_x(t), \mu_{\dot{x}}(t), \sigma_x(t)
and \sigma_{\dot{x}}(t) are pre-defined, and they are used for the scaling of basis polynomials.
There are several reasons for choosing the particular form of the basis and weight functions
shown in Eqs.40-41. Hermite polynomials form an orthogonal basis with respect to the exponential weighting function over the domain (-\infty, \infty). This fact gives us an immediate advantage,
because (-\infty, \infty) is the exact theoretical domain of the solution space along each dimension
(Eq.39). Therefore, we can exploit the orthogonality properties of the Hermite polynomials
while evaluating the integrals in Eq.39 analytically, over the exact solution domain. The
presence of the Gaussian pdf in the weight function gives us a twofold advantage. Firstly, by
wisely selecting (heuristically or otherwise) the profiles of the mean and standard deviation
parameters, we can give greater weight to regions where the actual solution has its dominant
presence, while theoretically integrating over the infinite domain in all the dimensions. This
leads us to the second benefit: polynomial functions have a tendency to blow up as they
move away from their center. By giving weightage only to the regions around the center
of these polynomials, we reduce the chance of divergence of the method due to ballooning
integrals.
Selection of Scaling Parameters: The parameters b = \{\mu_x(t), \mu_{\dot{x}}(t), \sigma_x(t), \sigma_{\dot{x}}(t)\} are obtained by solving the linearized system, as described below. As a test case, consider the nonlinear oscillator

\ddot{x} + \eta \dot{x} + x + \epsilon (x^2 + \dot{x}^2)\dot{x} = g\,\Gamma(t)\,; \qquad E[\Gamma(t_1)\Gamma(t_2)] = 2S_0\,\delta(t_1 - t_2) \qquad (42)

whose exact stationary probability density function is known:

J_s(x, \dot{x}) = k \exp\left\{ -\frac{1}{2 g^2 S_0} \left[ \eta (x^2 + \dot{x}^2) + \frac{\epsilon}{2} (x^2 + \dot{x}^2)^2 \right] \right\} \qquad (43)
where k is a normalizing constant. Comparing with Eq.31, we see that f(x, \dot{x}) = \eta\dot{x} + x +
\epsilon(x^2 + \dot{x}^2)\dot{x}. For reasons described above, scaled and orthonormalized Hermite polynomials
(see appendix) are chosen as the basis functions. The system dynamics (f(x, \dot{x}) and its
derivatives) can also be represented by Hermite polynomial series sums, which will be of
great use, as will be shown below. We obtain the following weak form equation:
\int\!\!\int \left[ \frac{\partial \phi}{\partial t} + \dot{x}\, \frac{\partial \phi}{\partial x} - f\, \frac{\partial \phi}{\partial \dot{x}} - S_0 g^2\, \frac{\partial^2 \phi}{\partial \dot{x}^2} - S_0 g^2 \left( \frac{\partial \phi}{\partial \dot{x}} \right)^2 - \frac{\partial f}{\partial \dot{x}} \right] w_{pq}\, dx\, d\dot{x} = 0 \qquad (44)
where,

\phi = \sum_{i=0}^{N} \sum_{j=0}^{M} \alpha_{ij}(t)\, H_i(\bar{x})\, H_j(\bar{\dot{x}}) \qquad (45)

w_{pq} = H_p(\bar{x})\, H_q(\bar{\dot{x}})\, e^{-\bar{x}^2} e^{-\bar{\dot{x}}^2} \qquad (46)

f(x, \dot{x}) = \sum_{k=0}^{K} \sum_{l=0}^{L} \beta_{kl}\, H_k(\bar{x})\, H_l(\bar{\dot{x}}) \qquad (47)

\frac{\partial f}{\partial \dot{x}} = \sum_{k=0}^{K} \sum_{l=0}^{L} \gamma_{kl}\, H_k(\bar{x})\, H_l(\bar{\dot{x}}) \qquad (48)
The mean and standard deviation parameters to be used for scaling are obtained by solving
the linearized system, as described above. Muscolino et al. have used a similar approach,
but with stochastic linearization to obtain the relevant parameters. They have solved for the
zero-mean process, i.e., \mu_x and \mu_{\dot{x}} do not appear in the scaling, and the pdf is stationed at the
origin at all times, which is also the equilibrium point of the nonlinear system. This has been
done for simplicity of calculation. Obviously, the issues discussed above regarding attaching
appropriate weights to significant regions in the solution space are rather easy to address in
this example; because, firstly, the center of the solution pdf does not change at any time (it is
a zero-mean process). Therefore, we always know where to look for the solution. Secondly,
the system has a known stationary pdf, meaning thereby that the domain of significance is
easy to determine even for large times, because the final answer is known beforehand. The
authors have mentioned that the precision with which the standard deviation parameters are
determined plays a crucial role in the convergence of the solution for a prescribed level of
accuracy. This is true in general, the reasons for which have been described in the previous
section.
For this particular example, we show in this paper that a very good solution can be obtained
even with simple linearization (as opposed to stochastic linearization). The reason for such
reduced sensitivity to parameter tuning is attributed to improved normalization of the basis
functions, suited to the particular problem. Furthermore, the formulation shown in this paper is applicable to the cases in which the process is not zero-mean.
We now take a closer look at each of the terms in the weak form equation (Eq.44) using
Hermite polynomials:
Term 1:

\int\!\!\int \frac{\partial \phi}{\partial t}\, w_{pq}\, dx\, d\dot{x}
= \int\!\!\int \sum_{i=0}^{N}\sum_{j=0}^{M} \frac{d\alpha_{ij}}{dt}\, H_i(\bar{x}) H_j(\bar{\dot{x}}) H_p(\bar{x}) H_q(\bar{\dot{x}})\, e^{-\bar{x}^2} e^{-\bar{\dot{x}}^2}\, d\bar{x}\, d\bar{\dot{x}}
+ \int\!\!\int \sum_{i=0}^{N}\sum_{j=0}^{M} \alpha_{ij} \left[ \frac{\partial H_i(\bar{x})}{\partial \mu_x}\dot{\mu}_x + \frac{\partial H_i(\bar{x})}{\partial \sigma_x}\dot{\sigma}_x \right] H_j(\bar{\dot{x}}) H_p(\bar{x}) H_q(\bar{\dot{x}})\, e^{-\bar{x}^2} e^{-\bar{\dot{x}}^2}\, d\bar{x}\, d\bar{\dot{x}}
+ \int\!\!\int \sum_{i=0}^{N}\sum_{j=0}^{M} \alpha_{ij}\, H_i(\bar{x}) \left[ \frac{\partial H_j(\bar{\dot{x}})}{\partial \mu_{\dot{x}}}\dot{\mu}_{\dot{x}} + \frac{\partial H_j(\bar{\dot{x}})}{\partial \sigma_{\dot{x}}}\dot{\sigma}_{\dot{x}} \right] H_p(\bar{x}) H_q(\bar{\dot{x}})\, e^{-\bar{x}^2} e^{-\bar{\dot{x}}^2}\, d\bar{x}\, d\bar{\dot{x}} \qquad (49)
Looking at the first of these three terms, we see that the summations and integrals can be
rearranged in the following manner:

\sum_{i=0}^{N} \left[ \int H_i(\bar{x}) H_p(\bar{x})\, e^{-\bar{x}^2}\, d\bar{x} \sum_{j=0}^{M} \dot{\alpha}_{ij} \int H_j(\bar{\dot{x}}) H_q(\bar{\dot{x}})\, e^{-\bar{\dot{x}}^2}\, d\bar{\dot{x}} \right] \qquad (50)
Using the orthonormal properties of the Hermite polynomials, Eq.50 collapses to:

\sum_{i=0}^{N} \left[ \delta_{ip} \sum_{j=0}^{M} \dot{\alpha}_{ij}\, \delta_{jq} \right] = \dot{\alpha}_{pq} \qquad (51)
In the second and third integral expressions in Eq.49, we shall use the differentiation-recursion
relations among the Hermite polynomials. For instance, note the following developments for
the bracketed terms in the second integral in Eq.49:
\frac{\partial H_i(\bar{x})}{\partial \mu_x}\dot{\mu}_x + \frac{\partial H_i(\bar{x})}{\partial \sigma_x}\dot{\sigma}_x = -\frac{\partial H_i(\bar{x})}{\partial \bar{x}} \left[ \frac{\dot{\mu}_x}{\sigma_x} + \frac{\dot{\sigma}_x}{\sigma_x}\,\bar{x} \right] \qquad (52)

= -\frac{2i\, k_{i-1}}{k_i} \left\{ \frac{\dot{\mu}_x}{\sigma_x} + \frac{\dot{\sigma}_x}{\sigma_x}\,\bar{x} \right\} H_{i-1}(\bar{x}) \quad \text{(using recursion)} \qquad (53)

= -\frac{2i\, k_{i-1}}{k_i} \left[ \frac{k_0\, \dot{\mu}_x}{\sigma_x}\, H_0(\bar{x}) + \frac{k_1\, \dot{\sigma}_x}{2\sigma_x}\, H_1(\bar{x}) \right] H_{i-1}(\bar{x}) \qquad (54)
The expression in the curly brackets of Eq.53 has been represented in terms of Hermite
polynomials in Eq.54 so as to simplify the evaluation of integrals. The normalization factors
k_i have been defined in the appendix. Carrying out a similar rearrangement of terms as for
the first integral, the second integral in Eq.49 reduces to:
-\sum_{i=1}^{N} \frac{2i\, k_{i-1}}{k_i}\, \alpha_{iq} \left[ \frac{k_0\, \dot{\mu}_x}{\sigma_x}\, \delta_{i-1,p} + \frac{k_1\, \dot{\sigma}_x}{2\sigma_x}\, \Delta_{1,i-1,p} \right] \qquad (55)
The expression \Delta_{\alpha\beta\gamma} has been explained in the appendix. The third integral in Eq.49 takes
exactly the same form as Eq.55, with i and p interchanged with j and q respectively. Therefore, we have the following expression for term 1:
\int\!\!\int \frac{\partial \phi}{\partial t}\, w_{pq}\, dx\, d\dot{x} = \dot{\alpha}_{pq} - \sum_{i=1}^{N} \frac{2i\, k_{i-1}}{k_i}\, \alpha_{iq} \left[ \frac{k_0\, \dot{\mu}_x}{\sigma_x}\, \delta_{i-1,p} + \frac{k_1\, \dot{\sigma}_x}{2\sigma_x}\, \Delta_{1,i-1,p} \right] - \sum_{j=1}^{M} \frac{2j\, k_{j-1}}{k_j}\, \alpha_{pj} \left[ \frac{k_0\, \dot{\mu}_{\dot{x}}}{\sigma_{\dot{x}}}\, \delta_{j-1,q} + \frac{k_1\, \dot{\sigma}_{\dot{x}}}{2\sigma_{\dot{x}}}\, \Delta_{1,j-1,q} \right] \qquad (56)
The advantage of orthonormalization is now clear. We see that Eq.56 contains the time
derivative of only \alpha_{pq}. Therefore, from each of the N \times M weak form equations, we obtain a
differential equation for every one of the undetermined coefficients, without having to invert
a matrix for this purpose. In general, with non-orthogonal polynomials, a set of differential
equations of the type A\dot{\alpha} + B\alpha + C\alpha^2 + f = 0 would have been obtained, which would require
a matrix inversion to separate out the time derivatives of the individual coefficients. No such
inversion is required here.
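The point can be illustrated numerically (an addition, outside the original derivation): with orthonormalized Hermite basis functions the Gram ("mass") matrix multiplying the coefficient time derivatives is the identity, whereas a monomial basis produces a dense matrix that would need inversion. Gauss-Hermite quadrature is exact for these polynomial integrands:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss, hermval
from math import factorial, pi, sqrt

nodes, weights = hermgauss(40)     # exact for integrands f(x)*exp(-x^2)

def Hn(n, x):                      # orthonormalized Hermite polynomial (Eq. 65)
    c = np.zeros(n + 1); c[n] = 1.0
    return hermval(x, c) / sqrt(2.0**n * factorial(n) * sqrt(pi))

N = 6
gram_hermite = np.array([[np.sum(weights * Hn(i, nodes) * Hn(p, nodes))
                          for p in range(N)] for i in range(N)])
gram_mono = np.array([[np.sum(weights * nodes**i * nodes**p)
                       for p in range(N)] for i in range(N)])
print(np.round(gram_hermite, 12))  # ~ identity matrix; gram_mono is dense
```

Because the Hermite Gram matrix is the identity, each weak-form equation isolates one coefficient derivative directly, exactly as claimed above.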
Continuing with the weak form in Eq.44, we look at the remaining terms (after following
similar steps of simplification as for term 1):
Term 2:

\int\!\!\int \dot{x}\, \frac{\partial \phi}{\partial x}\, w_{pq}\, dx\, d\dot{x}
= \sum_{i=1}^{N} \frac{2i\, k_{i-1}}{k_i \sigma_x} \int H_{i-1}(\bar{x}) H_p(\bar{x})\, e^{-\bar{x}^2}\, d\bar{x} \sum_{j=0}^{M} \alpha_{ij} \int \left[ k_0\, \mu_{\dot{x}}\, H_0(\bar{\dot{x}}) + \frac{k_1\, \sigma_{\dot{x}}}{2}\, H_1(\bar{\dot{x}}) \right] H_j(\bar{\dot{x}}) H_q(\bar{\dot{x}})\, e^{-\bar{\dot{x}}^2}\, d\bar{\dot{x}}
= \sum_{j=0}^{M} \frac{2(p+1)\, k_p}{k_{p+1} \sigma_x}\, \alpha_{p+1,j} \left[ k_0\, \mu_{\dot{x}}\, \delta_{jq} + \frac{k_1\, \sigma_{\dot{x}}}{2}\, \Delta_{1,j,q} \right]; \quad \text{for } p < N; \quad \text{Term 2} = 0 \text{ for } p = N \qquad (57)
Term 3:

\int\!\!\int f\, \frac{\partial \phi}{\partial \dot{x}}\, w_{pq}\, dx\, d\dot{x}
= \sum_{i=0}^{N} \sum_{j=1}^{M} \frac{2j\, k_{j-1}}{k_j \sigma_{\dot{x}}}\, \alpha_{ij} \sum_{k=0}^{K} \int H_i(\bar{x}) H_p(\bar{x}) H_k(\bar{x})\, e^{-\bar{x}^2}\, d\bar{x} \sum_{l=0}^{L} \beta_{kl} \int H_{j-1}(\bar{\dot{x}}) H_q(\bar{\dot{x}}) H_l(\bar{\dot{x}})\, e^{-\bar{\dot{x}}^2}\, d\bar{\dot{x}}
= \sum_{i=0}^{N} \sum_{j=1}^{M} \frac{2j\, k_{j-1}}{k_j \sigma_{\dot{x}}}\, \alpha_{ij} \sum_{k=0}^{K} \Delta_{ipk} \sum_{l=0}^{L} \beta_{kl}\, \Delta_{j-1,q,l} \qquad (58)
Term 4:

S_0 g^2 \int\!\!\int \frac{\partial^2 \phi}{\partial \dot{x}^2}\, w_{pq}\, dx\, d\dot{x}
= S_0 g^2 \sum_{i=0}^{N} \int H_i(\bar{x}) H_p(\bar{x})\, e^{-\bar{x}^2}\, d\bar{x} \sum_{j=2}^{M} \frac{4j(j-1)\, k_{j-2}}{k_j \sigma_{\dot{x}}^2}\, \alpha_{ij} \int H_{j-2}(\bar{\dot{x}}) H_q(\bar{\dot{x}})\, e^{-\bar{\dot{x}}^2}\, d\bar{\dot{x}}
= S_0 g^2\, \frac{4(q+1)(q+2)\, k_q}{k_{q+2} \sigma_{\dot{x}}^2}\, \alpha_{p,q+2}; \quad \text{for } q < M-1, \quad \text{Term 4} = 0 \text{ otherwise} \qquad (59)
Term 5:

S_0 g^2 \int\!\!\int \left( \frac{\partial \phi}{\partial \dot{x}} \right)^2 w_{pq}\, dx\, d\dot{x}
= S_0 g^2 \sum_{i=0}^{N} \sum_{k=0}^{N} \int H_i(\bar{x}) H_k(\bar{x}) H_p(\bar{x})\, e^{-\bar{x}^2}\, d\bar{x} \sum_{j=1}^{M} \alpha_{ij} \sum_{l=1}^{M} \alpha_{kl}\, \frac{4jl\, k_{j-1} k_{l-1}}{k_j k_l \sigma_{\dot{x}}^2} \int H_{j-1}(\bar{\dot{x}}) H_{l-1}(\bar{\dot{x}}) H_q(\bar{\dot{x}})\, e^{-\bar{\dot{x}}^2}\, d\bar{\dot{x}}
= S_0 g^2 \sum_{i=0}^{N} \sum_{k=0}^{N} \Delta_{ikp} \sum_{j=1}^{M} \alpha_{ij} \sum_{l=1}^{M} \alpha_{kl}\, \frac{4jl\, k_{j-1} k_{l-1}}{k_j k_l \sigma_{\dot{x}}^2}\, \Delta_{j-1,l-1,q} \qquad (60)
Term 6:

\int\!\!\int \frac{\partial f}{\partial \dot{x}}\, w_{pq}\, dx\, d\dot{x}
= \sum_{k=0}^{K} \int H_k(\bar{x}) H_p(\bar{x})\, e^{-\bar{x}^2}\, d\bar{x} \sum_{l=0}^{L} \gamma_{kl} \int H_l(\bar{\dot{x}}) H_q(\bar{\dot{x}})\, e^{-\bar{\dot{x}}^2}\, d\bar{\dot{x}}
= \sum_{k=0}^{K} \delta_{kp} \sum_{l=0}^{L} \gamma_{kl}\, \delta_{lq} \qquad (61)
The final resulting equation for \alpha_{pq} is:

\dot{\alpha}_{pq} = \sum_{i=1}^{N} \frac{2i\, k_{i-1}}{k_i}\, \alpha_{iq} \left[ \frac{k_0\, \dot{\mu}_x}{\sigma_x}\, \delta_{i-1,p} + \frac{k_1\, \dot{\sigma}_x}{2\sigma_x}\, \Delta_{1,i-1,p} \right]
+ \sum_{j=1}^{M} \frac{2j\, k_{j-1}}{k_j}\, \alpha_{pj} \left[ \frac{k_0\, \dot{\mu}_{\dot{x}}}{\sigma_{\dot{x}}}\, \delta_{j-1,q} + \frac{k_1\, \dot{\sigma}_{\dot{x}}}{2\sigma_{\dot{x}}}\, \Delta_{1,j-1,q} \right]
- \sum_{j=0}^{M} \frac{2(p+1)\, k_p}{k_{p+1} \sigma_x}\, \alpha_{p+1,j} \left[ k_0\, \mu_{\dot{x}}\, \delta_{jq} + \frac{k_1\, \sigma_{\dot{x}}}{2}\, \Delta_{1,j,q} \right]
+ \sum_{i=0}^{N} \sum_{j=1}^{M} \frac{2j\, k_{j-1}}{k_j \sigma_{\dot{x}}}\, \alpha_{ij} \sum_{k=0}^{K} \Delta_{ipk} \sum_{l=0}^{L} \beta_{kl}\, \Delta_{j-1,q,l}
+ S_0 g^2\, \frac{4(q+1)(q+2)\, k_q}{k_{q+2} \sigma_{\dot{x}}^2}\, \alpha_{p,q+2}
+ S_0 g^2 \sum_{i=0}^{N} \sum_{k=0}^{N} \Delta_{ikp} \sum_{j=1}^{M} \alpha_{ij} \sum_{l=1}^{M} \alpha_{kl}\, \frac{4jl\, k_{j-1} k_{l-1}}{k_j k_l \sigma_{\dot{x}}^2}\, \Delta_{j-1,l-1,q}
+ \sum_{k=0}^{K} \delta_{kp} \sum_{l=0}^{L} \gamma_{kl}\, \delta_{lq} \qquad (62)
The Meshless Local Weak Form Formulation
Material goes in here. The Meshless Local Petrov-Galerkin (MLPG) method emerged
as a promising numerical technique that offers a simplified methodology compared to the
conventional FEM and other meshless techniques that require a background mesh for integration. The MLPG scheme is a truly meshless method, in which grid generation using the
node distribution is not required at any stage.
Appendix: Normalized Hermite Polynomials
Standard Hermite polynomials are defined in the following manner:

H_n(\xi) = (-1)^n\, e^{\xi^2}\, \frac{d^n}{d\xi^n}\, e^{-\xi^2} \qquad (63)
These polynomials satisfy the orthogonality property with respect to the weighting function
e^{-\xi^2} over the domain (-\infty, \infty):

\int_{-\infty}^{\infty} H_m(\xi)\, H_n(\xi)\, e^{-\xi^2}\, d\xi = 2^n\, n!\, \sqrt{\pi}\; \delta_{mn} \qquad (64)
The polynomials used in the approximation in Eq.40 have been obtained by normalizing the
polynomials in Eq.63 such that the integral shown in Eq.64 gives unity when m = n. In other
words, the following orthonormal polynomials have been used in Eq.40 (and are denoted by
the same symbol H_n throughout):

H_n(\xi) := \frac{H_n(\xi)}{\sqrt{2^n\, n!\, \sqrt{\pi}}} = \frac{H_n(\xi)}{k_n} \qquad (65)
These orthonormalized Hermite polynomials have the following useful interrelationships:

\int_{-\infty}^{\infty} H_m(\xi)\, H_n(\xi)\, e^{-\xi^2}\, d\xi = \delta_{mn} \qquad (66)

\int_{-\infty}^{\infty} H_\alpha(\xi)\, H_\beta(\xi)\, H_\gamma(\xi)\, e^{-\xi^2}\, d\xi = \Delta_{\alpha\beta\gamma} \qquad (67)

\frac{dH_n(\xi)}{d\xi} = \frac{2n\, k_{n-1}}{k_n}\, H_{n-1}(\xi) \qquad (68)
where,

\Delta_{\alpha\beta\gamma} = \begin{cases} \dfrac{2^s \sqrt{\pi}\; \alpha!\, \beta!\, \gamma!}{k_\alpha k_\beta k_\gamma\, (s-\alpha)!\, (s-\beta)!\, (s-\gamma)!} & \text{if } \alpha+\beta+\gamma = 2s \text{ is even and } s \geq \max(\alpha, \beta, \gamma), \\ 0 & \text{otherwise} \end{cases} \qquad (69)

with s = \frac{\alpha+\beta+\gamma}{2}, and k_n as defined in Eq.65.
The first few orthonormalized Hermite polynomials are:
H_0(\xi) = \frac{1}{\sqrt{\sqrt{\pi}}} \qquad (70)

H_1(\xi) = \frac{2\xi}{\sqrt{2\sqrt{\pi}}} \qquad (71)

H_2(\xi) = \frac{4\xi^2 - 2}{\sqrt{8\sqrt{\pi}}} \qquad (72)

H_3(\xi) = \frac{8\xi^3 - 12\xi}{\sqrt{48\sqrt{\pi}}} \qquad (73)

H_4(\xi) = \frac{16\xi^4 - 48\xi^2 + 12}{\sqrt{384\sqrt{\pi}}} \qquad (74)
Theoretical Aspects of the FPE
Brownian Motion
In the deterministic world, the simplest expression for the dynamics of a particle immersed
in a fluid is given by:

m\dot{v} + \alpha v = 0 \qquad (75)

\dot{v} + \gamma v = 0 \qquad (76)

where \alpha v is the frictional force, and \gamma = \alpha/m = 1/\tau. So, an initial velocity v(0) decays to
zero exponentially with relaxation time \tau = 1/\gamma. The physics behind the frictional force is
the collision of the particle with fluid particles, and the deterministic model Eq.75 is a good
approximation when the mass of the particle is large, so that the velocity due to thermal
fluctuations is negligible. Now, from the kinetic theory of gases and the equipartition law,
we have

\frac{1}{2}\, m \langle v^2 \rangle = \frac{1}{2}\, kT \qquad (77)
where k is the Boltzmann constant. So, the thermal velocity v_{th} = \sqrt{\langle v^2 \rangle} = \sqrt{kT/m} is
negligible for large m. But, for small particles, Eq.75 needs to be modified to lead to the
correct thermal energy. To this effect, the net force on the particle is decomposed into a
continuous damping force F_c(t) and a fluctuating force F_f(t):

F(t) = F_c(t) + F_f(t) = -\alpha v(t) + F_f(t) \qquad (78)
The properties of F_f(t) are given only in average due to its stochastic nature. The reason
behind its stochastic nature is the following. If we were to solve the motion of the particle
exactly, we would need to consider its coupling with all the fluid particles, which are of the
order of 10^{23}. Since it is not practical to solve this enormous coupled system, we encapsulate all these coupling terms in one stochastic force, given above, and specify its average
characteristics. We get:

\dot{v} + \gamma v = \Gamma(t) \qquad (79)
Some of the properties of the Langevin force \Gamma(t) are:

\langle \Gamma(t) \rangle = 0 \qquad (80)

\langle \Gamma(t)\Gamma(t') \rangle = 0 \quad \text{for } |t - t'| \geq \tau_0 \qquad (81)

where \tau_0 is the mean duration of a collision. Eq.81 is reasonable because it assumes that the
collisions of different molecules are independent. Furthermore, \tau_0 \ll \tau\ (= 1/\gamma), hence we take
the limit \tau_0 \to 0, to get:

\langle \Gamma(t)\Gamma(t') \rangle = q\, \delta(t - t') \qquad (82)

where q is the noise strength, q = 2\gamma kT/m.
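A minimal Euler-Maruyama check of the equipartition result (the parameter values are arbitrary, and kT/m is treated as a single number):

```python
import numpy as np

# Simulate v_dot + gamma*v = Gamma(t) with noise strength q = 2*gamma*kT/m
# and check that the mean-square velocity relaxes to kT/m (equipartition).
rng = np.random.default_rng(7)
gamma, kT_over_m = 2.0, 0.5
q = 2 * gamma * kT_over_m
dt, steps, n = 1e-3, 10_000, 20_000

v = np.zeros(n)                      # ensemble of particles starting at rest
for _ in range(steps):
    v += -gamma * v * dt + np.sqrt(q * dt) * rng.normal(size=n)

msv = float(np.mean(v**2))
print(msv)                           # should approach kT/m = 0.5
```

The simulated mean-square velocity matches kT/m regardless of \gamma, which is exactly the role of the fluctuation strength q = 2\gamma kT/m.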