Introduction
\vec{v}_i(t + \Delta t) = \vec{v}_i(t) + \vec{a}_i(t)\,\Delta t \qquad (1)

\vec{r}_i(t + \Delta t) = \vec{r}_i(t) + \vec{v}_i(t + \Delta t)\,\Delta t \qquad (2)
This algorithm works fairly well from the standpoint of energy conservation, provided that we work in a coordinate system where the kinetic energy
does not depend on the coordinates, just their time derivatives (typically
Cartesian coordinates).¹ The main assumption in the algorithm is that the
velocity and acceleration do not change very much during a time Δt. If
this assumption starts to break down, there are two things that you can do:
use a shorter time step Δt (which means that you will need to do more
computational steps to simulate the motion of the system over the same
amount of time), or use a different algorithm. We will develop a better
algorithm, one that draws on ideas you learned in freshman physics.

¹This issue is discussed further in a later section.
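As a concrete illustration, the EC update can be sketched for a single 1D harmonic oscillator. The force law a = −(k/m)x and all parameter values below are illustrative choices, not from the text:

```python
# Euler-Cromer (EC) update for a 1D harmonic oscillator.
# The force law a = -(k/m)*x and the values of k, m, dt, nsteps
# are illustrative choices, not taken from the text.
k, m = 1.0, 1.0
dt = 0.01
nsteps = 1000

x, v = 1.0, 0.0               # initial position and velocity
for _ in range(nsteps):
    a = -(k / m) * x          # acceleration from the current position
    v = v + a * dt            # update the velocity first...
    x = x + v * dt            # ...then the position, using the NEW velocity

energy = 0.5 * m * v**2 + 0.5 * k * x**2   # stays close to the initial 0.5
```

The key EC feature is that the position update uses the freshly updated velocity, which is what keeps the energy error bounded instead of growing.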
Constant acceleration
The EC algorithm is ultimately based on writing down a Taylor series for the
position and only keeping the first term (since velocity is the first derivative
of position). If we write down more terms, we get:

\vec{r}_i(t + \Delta t) = \vec{r}_i(t) + \vec{v}_i(t)\,\Delta t + \frac{1}{2!}\frac{d^2\vec{r}_i}{dt^2}\,\Delta t^2 + \frac{1}{3!}\frac{d^3\vec{r}_i}{dt^3}\,\Delta t^3 + \ldots \qquad (3)
For small Δt we can truncate the series after the term quadratic in
Δt. Things may look more familiar if we rewrite Eq. (3) as:

\vec{r}_i(t + \Delta t) = \vec{r}_i(t) + \vec{v}_i(t)\,\Delta t + \frac{1}{2}\vec{a}_i\,\Delta t^2 \qquad (4)

This leads to the following procedure:

1. Compute the acceleration \vec{a}_i(t) of each particle from the forces
acting on it.

2. Update each position using the truncated Taylor series:

\vec{r}_i(t + \Delta t) = \vec{r}_i(t) + \vec{v}_i(t)\,\Delta t + \frac{1}{2}\vec{a}_i(t)\,\Delta t^2 \qquad (5)

3. Compute the new accelerations \vec{a}_i(t + \Delta t) at the updated
positions.
4. When particle i moves from \vec{r}_i(t) to \vec{r}_i(t + \Delta t), it experiences an acceleration that changes slightly. We will capture this effect by using
the average acceleration between the times t and t + \Delta t, and updating
the velocity accordingly:

\vec{v}_i(t + \Delta t) = \vec{v}_i(t) + \frac{\vec{a}_i(t) + \vec{a}_i(t + \Delta t)}{2}\,\Delta t \qquad (6)
5. Go back to step 1, use the new acceleration and velocity to compute
the new position, and so it goes...
The infallible Wikipedia tells us that this algorithm has been re-invented
numerous times by several different scientists and mathematicians. It is
often called the Velocity Verlet (VV) algorithm, after Loup Verlet, who
pioneered a very similar algorithm for modeling interactions between
molecules in large-scale computer simulations [2].
An implementation note
Usually, when people are studying the time-evolution of a system they care
a lot about positions and velocities. We can use those to calculate energy,
momentum, and angular momentum. People tend not to care as much about
accelerations. Thus, we need arrays for velocity components and arrays for
coordinates, but not arrays for acceleration.
"But wait!" you say. "Surely we need arrays for acceleration components!
If we compute d coordinates for N particles and do T time steps, we need
N·d additional arrays, each holding T acceleration values, for a
total of N·d·T numbers! Right?"
Wrong. For each particle i, and each component (i.e. x, y, and z) j,
we just need two acceleration variables: aijold and aijnew. It is always
better to store fewer arrays rather than more. Here is a sketch of how our
code would look in Python:
while t < tmax:
    # Insert some loops over particles and components
    rij[t+1] = rij[t] + vij[t]*dt + 0.5*aijold*dt**2
    aijnew = ...  # Here's where you have to have a formula to calculate
                  # force from position and velocity
    vij[t+1] = vij[t] + 0.5*(aijnew + aijold)*dt
    aijold = aijnew  # Now that we're done using the old acceleration,
                     # the latest one becomes the "old" variable.
    t = t + 1
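Filling in the placeholder with an actual force law makes the sketch runnable. Here is a self-contained 1D version using a harmonic-oscillator acceleration a = −(k/m)x as an illustrative assumption (parameter values are also illustrative):

```python
# Velocity Verlet for a single 1D particle, storing position and velocity
# histories but only two acceleration variables (aold and anew).
# The force law a = -(k/m)*x and all parameter values are illustrative.
k, m = 1.0, 1.0
dt, nsteps = 0.01, 1000

r = [0.0] * (nsteps + 1)      # position history
v = [0.0] * (nsteps + 1)      # velocity history
r[0], v[0] = 1.0, 0.0

aold = -(k / m) * r[0]
for t in range(nsteps):
    r[t+1] = r[t] + v[t]*dt + 0.5*aold*dt**2
    anew = -(k / m) * r[t+1]                # acceleration at the NEW position
    v[t+1] = v[t] + 0.5*(anew + aold)*dt    # average-acceleration update, Eq. (6)
    aold = anew                             # the latest acceleration becomes "old"

energy = 0.5 * m * v[-1]**2 + 0.5 * k * r[-1]**2   # close to the initial 0.5
```

Note that only `aold` and `anew` are kept for the acceleration, exactly as argued above; the full position and velocity histories are stored because those are the quantities we actually analyze.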
In chapter 3 of the textbook (page 51), it is shown that the Euler algorithm
gives results that blow up when used to simulate a simple pendulum.
EC, however, gives stable results, for reasons discussed in the article by
Timberlake[3] (posted on Blackboard). Here we will use the same approach
to show that VV also gives stable results. We will use a simple 1D example
for illustrative purposes, but the same results carry over to more complex
systems.
The Jacobian matrix relating variables at successive time steps is defined
by:

d[x(t+\Delta t)]\, d[v(t+\Delta t)] = \begin{vmatrix} \dfrac{\partial x(t+\Delta t)}{\partial x(t)} & \dfrac{\partial x(t+\Delta t)}{\partial v(t)} \\ \dfrac{\partial v(t+\Delta t)}{\partial x(t)} & \dfrac{\partial v(t+\Delta t)}{\partial v(t)} \end{vmatrix}\, d[x(t)]\, d[v(t)] \qquad (7)

The left and right sides represent phase space volume after and before the
time step. Volume is conserved only if the determinant is 1. Let's compute
the derivatives using the formulas listed in the procedure in Section 2. (We
will leave out vector symbols and components, since we are working in 1D,
and also leave out the subscript i for the particle number, since we are
assuming just 1 particle.)
The derivative of x(t+Δt) with respect to x(t) is computed from Eq. (5).
There are two terms that depend on x(t): the first term (whose derivative
is just 1) and the second term, whose derivative is \frac{1}{2}\frac{\partial a}{\partial x}\Delta t^2. There is one
term that depends on v(t), and the derivative with respect to v(t) is Δt.

The derivatives of v(t+Δt) with respect to x(t) and v(t) are computed
from Eq. (6). There is one term that depends on v(t), and the derivative of
that term is 1. There is one term that depends on x(t), and that term is
a(t)Δt/2. The derivative with respect to x(t) is \frac{1}{2}\frac{\partial a}{\partial x}\Delta t. The determinant
that we have to compute is:
J = \begin{vmatrix} 1 + \frac{1}{2}\frac{\partial a}{\partial x}\Delta t^2 & \Delta t \\ \frac{1}{2}\frac{\partial a}{\partial x}\Delta t & 1 \end{vmatrix} = 1 + \frac{1}{2}\frac{\partial a}{\partial x}\Delta t^2 - \frac{1}{2}\frac{\partial a}{\partial x}\Delta t \cdot \Delta t = 1 \qquad (8)
Note that this only worked out because of an exact cancelation of terms.
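The cancelation can also be verified numerically: treat one VV step as a map (x, v) → (x', v'), estimate the Jacobian by central finite differences, and check that its determinant is 1. The anharmonic acceleration below is an illustrative choice:

```python
# Numerically check that one Velocity Verlet step has Jacobian determinant 1.
# The anharmonic acceleration a(x) = -x - x**3 is an illustrative choice.
def accel(x):
    return -x - x**3

def vv_step(x, v, dt=0.1):
    a = accel(x)
    xnew = x + v*dt + 0.5*a*dt**2          # position update, Eq. (5)
    anew = accel(xnew)
    vnew = v + 0.5*(a + anew)*dt           # velocity update, Eq. (6)
    return xnew, vnew

def det_jacobian(x, v, dt=0.1, h=1e-6):
    # Central finite differences for the four partial derivatives.
    xpx, vpx = vv_step(x + h, v, dt)
    xmx, vmx = vv_step(x - h, v, dt)
    xpv, vpv = vv_step(x, v + h, dt)
    xmv, vmv = vv_step(x, v - h, dt)
    dxdx = (xpx - xmx) / (2*h)
    dvdx = (vpx - vmx) / (2*h)
    dxdv = (xpv - xmv) / (2*h)
    dvdv = (vpv - vmv) / (2*h)
    return dxdx*dvdv - dxdv*dvdx

print(det_jacobian(0.7, 0.3))   # 1.0 up to finite-difference error
```

This works even though the force is nonlinear, because the cancelation in Eq. (8) does not depend on the specific form of a(x).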
The factor of 1/2 is very important! If we just used a(t) in that term,
instead of (a(t) + a(t+Δt))/2, then our determinant would be very different.
Suppose that we had used a simpler update rule for velocity:

v(t + \Delta t) = v(t) + a(t)\,\Delta t \qquad (9)
This looks perfectly reasonable, being something that you saw in freshman
physics when you were doing calculations for constant acceleration rather
than variable acceleration. However, the determinant would be different:

J = \begin{vmatrix} 1 + \frac{1}{2}\frac{\partial a}{\partial x}\Delta t^2 & \Delta t \\ \frac{\partial a}{\partial x}\Delta t & 1 \end{vmatrix} = 1 + \frac{1}{2}\frac{\partial a}{\partial x}\Delta t^2 - \frac{\partial a}{\partial x}\Delta t \cdot \Delta t = 1 - \frac{1}{2}\frac{\partial a}{\partial x}\Delta t^2 \qquad (10)
This is not equal to 1! The key lesson is simple:
THINGS THAT ARE PERFECTLY REASONABLE IN
EXACT PENCIL-AND-PAPER CALCULATIONS CAN BE
DANGEROUSLY WRONG IN APPROXIMATE NUMERICAL
COMPUTATIONS!
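A quick numerical experiment makes the lesson concrete: for a 1D harmonic oscillator (all parameters are illustrative choices), the averaged update of Eq. (6) keeps the energy essentially constant, while the Eq. (9) update lets it grow:

```python
# Energy behavior of the two velocity-update rules for a 1D harmonic
# oscillator with a = -x.  Parameter values are illustrative choices.
def run(average_accel, dt=0.01, nsteps=10000):
    x, v = 1.0, 0.0
    a = -x
    for _ in range(nsteps):
        x = x + v*dt + 0.5*a*dt**2         # position update, Eq. (5)
        anew = -x
        if average_accel:
            v = v + 0.5*(a + anew)*dt      # Eq. (6): Velocity Verlet
        else:
            v = v + a*dt                   # Eq. (9): the "simpler" rule
        a = anew
    return 0.5*v**2 + 0.5*x**2             # energy; the exact value is 0.5

print(run(average_accel=True))    # stays very close to 0.5
print(run(average_accel=False))   # grows well above 0.5
```

For a restoring force, ∂a/∂x < 0, so the determinant in Eq. (10) is greater than 1: phase-space volume expands on every step, which is exactly the energy growth seen here.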
Home-brewing is a fine thing to do with beer² but it is very dangerous when
algorithms are involved. That doesn't mean that you should never home-brew
your algorithms (there would be no progress if nobody home-brewed new
algorithms) but it does mean that home-brewed algorithms should not be
used without extensive testing. Home-brewed algorithms are best seen as
long-term research projects, not reliable shortcuts for finishing homework
on time.
Also, if we worked in a non-Cartesian coordinate system, e.g. polar
coordinates, we could not guarantee that the cancelation would still occur.
In general, VV is only guaranteed to work well in Cartesian coordinates, or
in some 1D problems. Once the kinetic energy depends on coordinates,
the equations of motion become much more complicated, and terms in the
Jacobian do not usually cancel. In those cases, Runge-Kutta is often a
perfectly suitable algorithm. If better performance is required, consider
predictor-corrector methods [4].
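For reference, here is a sketch of one classic fourth-order Runge-Kutta step for the first-order system dx/dt = v, dv/dt = a(x, v); the pendulum acceleration used here is an illustrative example, not a formula from the text:

```python
import math

# One step of classic fourth-order Runge-Kutta for dx/dt = v, dv/dt = a(x, v).
# The pendulum acceleration a = -sin(x) is an illustrative example.
def accel(x, v):
    return -math.sin(x)

def rk4_step(x, v, dt):
    k1x, k1v = v,              accel(x, v)
    k2x, k2v = v + 0.5*dt*k1v, accel(x + 0.5*dt*k1x, v + 0.5*dt*k1v)
    k3x, k3v = v + 0.5*dt*k2v, accel(x + 0.5*dt*k2x, v + 0.5*dt*k2v)
    k4x, k4v = v + dt*k3v,     accel(x + dt*k3x, v + dt*k3v)
    x = x + (dt/6)*(k1x + 2*k2x + 2*k3x + k4x)
    v = v + (dt/6)*(k1v + 2*k2v + 2*k3v + k4v)
    return x, v

x, v = 1.0, 0.0
for _ in range(1000):
    x, v = rk4_step(x, v, 0.01)

energy = 0.5*v**2 + (1.0 - math.cos(x))   # close to its initial value
```

Keep in mind that RK4 is very accurate but, unlike VV, not symplectic: over very long runs its small energy error accumulates rather than staying bounded.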
References

[1] N. Giordano and H. Nakanishi, Computational Physics (Pearson/Prentice Hall, 2006).