
Molecular Simulation and Modelling
A Science Philosophical Paper

by
Lück, Nils Lorenz Kjær (s092971)
Stålhandske, Simon (s092995)

Supervisors:
Schiøtz, Jakob
Hansen, Jørn Bindslev

Elements of Applied Philosophy of Science, 01543


Technical University of Denmark
16 January 2011
Contents

1 Introduction
2 Molecular Modelling
3 Models and Approximations
  3.1 Quantum or Classical Mechanics
  3.2 Approximations in Molecular Modelling
  3.3 Theoretical and Empirical Models
  3.4 Validity of the Models
4 Computer Simulation
  4.1 Approaches to Molecular Simulation
  4.2 Numerical Errors
  4.3 Chaos and Shadowing
5 Validity and Realism
  5.1 Falsification and Scientific Validity
  5.2 Comparison to Experimental Data
6 Scientific Approach
  6.1 Induction and Deduction
  6.2 Theory or Experiment?
7 Broader Perspective
8 Conclusion
1 Introduction
This paper concerns the science philosophical aspects of molecular modelling in general, with a special focus on simulations of gold nanoparticles on a surface, carried out in order to study the structural properties of such particles, as described in [1]. While [1] is mostly concerned with the results obtained from these simulations, this paper describes the scientific methods used in the process. The epistemological aspects of molecular dynamics are of special interest, but a short introduction to the practical implications of molecular modelling will also be given.
The important epistemological aspects of molecular modelling include studying various molecular
models and numerical (integration) methods. A discussion will also be given on the science
philosophical validity, including falsifiability, of molecular modelling and simulation.
As described in [1], the simulations used an effective medium potential to describe the interactions among the atoms, while a simple pair potential was used to model the surface. By creating a simulation which melted and afterwards slowly cooled a gold nanoparticle in the vicinity of the surface, a stable particle attached to the surface was obtained. An analysis of this final particle was the main interest of [1], and it will therefore not be discussed thoroughly here.
A historical introduction will not be given here; instead, refer to [2, 3].

2 Molecular Modelling
In solid state physics, one must be able to describe the properties of solids. Some properties can be derived analytically, perhaps using approximations. Some theories, such as statistical mechanics, are very useful for calculating certain properties of materials, but in many cases an analytical approach is not possible. Often these situations can be studied experimentally, but this process can be both very time consuming and costly. If one needs to analyse large amounts of data, an experimental approach might not be reasonable. In this case, one might instead resort to numerical methods in order to extract the desired data. The relationship between experiments and molecular simulations will be discussed further in Section 6.2.
Molecular modelling is the science of using computers and numerical methods to calculate the
individual motions of particles at the atomic scale. With the computing power available today,
it is possible to numerically simulate reasonably large systems (millions of atoms) and study the
behaviour of these. The results of such simulations may hold valuable information about the
nature of the atomic system.
In general, molecular modelling is divided into three subcategories, namely model development,
molecular simulations and data analysis. Models for molecular systems need to describe the laws
of motion for atoms in that system. Molecular simulations utilise these models to numerically
find the trajectories of particles at the atomic scale.
Each of these subcategories introduces errors due to approximations and numerical calculations. In this paper, the implications of these errors are discussed, along with a science philosophical discussion of the scientific validity of molecular modelling, where the primary focus will be on molecular simulations. Along the way, the performed simulations will be discussed as an example [1].

3 Models and Approximations


In order to simulate molecular systems, one needs an accurate description of the motions of atomic particles. In principle, molecular motion is described accurately by quantum mechanics, and the equations of motion are found using the Schrödinger equation. However, solving the Schrödinger equation for all subatomic particles (protons, electrons and neutrons) in a system consisting of more than a few atoms is a formidable task and is not possible in practice. In fact, explicit results do not exist for any but the simplest atomic systems.
Instead, approximative models are developed, which hopefully still provide a good description of
reality. The approximated models are then solved numerically (this will be discussed in Section
4), providing data that must be carefully analysed.
Different levels of approximation exist, which vary in both accuracy and computational efficiency. In some cases a few approximations (for instance the Born-Oppenheimer approximation) might suffice, but often more radical approximation schemes must be applied. In all cases, one must consider the scientific validity of applying these simplifications.

3.1 Quantum or Classical Mechanics


One of the most basic approximations made when deriving the laws of motion for molecular systems is that atoms (more specifically, the nuclei) obey the laws of classical mechanics rather than quantum mechanics. Using this approximation, one simply needs to calculate the forces acting on each atom in order to find the equations of motion.
One must, however, question whether this is a valid approximation, or whether quantum mechanics has too large an influence for it to be reasonable. A simple test can give an answer to this question. Newtonian mechanics is a valid approximation if $\Lambda/a \ll 1$. In the performed simulations, $a \approx 2.9$ Å is the bond length of gold and $\Lambda$ is the de Broglie thermal wavelength, given by [3]

$$\Lambda = \frac{h}{\sqrt{2\pi M k_B T}} \approx 7.2\ \text{pm}$$

Here $M \approx 197$ amu is the mass of a gold atom and $T = 300$ K is the lowest temperature obtained during the simulations (thus giving the greatest value of $\Lambda$). Finally, the fraction is found to be $\Lambda/a \approx 0.025$, which is acceptable. In comparison, for light atoms such as lithium or argon, $\Lambda/a$ is of the order of 0.1 at their triple point (around 100 K for argon).
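This estimate is straightforward to reproduce. The short sketch below (not code from [1], just a check of the numbers quoted above) evaluates Λ and Λ/a for gold at 300 K:

```python
import math

# Physical constants (SI units)
h = 6.62607e-34      # Planck constant, J*s
k_B = 1.38065e-23    # Boltzmann constant, J/K
amu = 1.66054e-27    # atomic mass unit, kg

M = 197 * amu        # mass of a gold atom, kg
T = 300.0            # lowest simulation temperature, K
a = 2.9e-10          # Au bond length, m

# Thermal de Broglie wavelength
Lam = h / math.sqrt(2 * math.pi * M * k_B * T)

print(f"Lambda   = {Lam * 1e12:.1f} pm")  # ~7.2 pm
print(f"Lambda/a = {Lam / a:.3f}")        # ~0.025, so the classical treatment is acceptable
```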
Indeed, the use of classical mechanics is an approximation, and one must consider if this is
reasonable in a given case. According to the above, it is assumed that the classical approximation
affects the simulations less than other approximations, which will be described in more detail
below.

3.2 Approximations in Molecular Modelling


In classical molecular dynamics, one needs to know the forces acting on each atom. On the atomic
scale, all forces are conservative, hence the interaction forces can be calculated as the negative
gradient of the potential energy of each atom.
The main purpose of a molecular model is therefore to estimate the potential energies in a system. The simplest model of atomic interactions is a pair potential, where the total potential energy of an atom is simply a superposition of the contributions from all other atoms. While easy to compute, this model is only reasonable for computing long-range forces [4]. More sophisticated models, like density functional theory (DFT) [5], can be used when the simpler models are too inaccurate, but at the expense of computation time. A spectrum of widely used molecular models is sketched in Figure 1, where the most accurate models are listed first.
Whenever a simulation is carried out, it is important to consider what model to use and how this
choice affects the data which is extracted from the simulation. Whenever precise measurements
of microscopic quantitative properties are needed, one must indeed use an accurate model, but
this is not necessarily the case when analysing tendencies and qualitative behaviour.
Also, it must be noted that a given model might provide a good description of some physical properties but an incorrect description of others. For instance, pair potentials provide a reasonably good description of long-range forces, but do not take multibody effects into account [4].

Figure 1: Models used in molecular dynamics, sorted by accuracy and simulation speed. The most exact models are given at the top, whereas faster simulation times make the bottom half attractive. As the diagram indicates, the models in the dark area are mostly based upon quantum mechanics, whereas the models in the light area are empirically based. Of course there are many exceptions to this rule, as indicated by the gradient transition. Note that EMT and EAM are very alike, except that EMT is more or less based on quantum mechanics while EAM is based mostly on empirical data.
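To make the pair-potential idea concrete, the sketch below computes the total energy of a small system as a superposition of Lennard-Jones pair contributions and obtains the forces as the negative gradient of that energy. The Lennard-Jones form and its reduced units are a standard textbook choice for illustration; it is not the potential used in [1]:

```python
import numpy as np

def lj_energy_forces(pos, epsilon=1.0, sigma=1.0):
    """Total energy as a superposition of Lennard-Jones pair contributions,
    V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6), and per-atom forces as the
    negative gradient of that energy."""
    n = len(pos)
    energy = 0.0
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[i] - pos[j]                 # vector from atom j to atom i
            r2 = d @ d
            sr6 = (sigma * sigma / r2) ** 3     # (sigma/r)**6
            energy += 4 * epsilon * (sr6 * sr6 - sr6)
            # f_i = -dV/dr * d/r, expressed via r2 to avoid a square root
            f = 24 * epsilon * (2 * sr6 * sr6 - sr6) / r2 * d
            forces[i] += f
            forces[j] -= f                      # Newton's third law
    return energy, forces

# Three atoms near the pair-equilibrium distance 2**(1/6) ~ 1.12 (reduced units)
pos = np.array([[0.0, 0.0, 0.0], [1.12, 0.0, 0.0], [0.56, 0.97, 0.0]])
E, F = lj_energy_forces(pos)
print(E, F)
```

The double loop makes the O(N²) cost of summing over all pairs explicit; production codes reduce this with cut-offs and neighbour lists.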

3.3 Theoretical and Empirical Models


There are two approaches to the development of molecular models: a deductive approach and an inductive approach.
The deduced models are based on the well-founded laws of physics. Using various mathematical approximations, quantum mechanics is simplified to the desired extent. The challenge is then to find approximations such that the computational efficiency is increased as much as possible, while keeping the introduced errors at a minimum. Models that are based purely on theoretically deduced results are called ab initio (first principles) models. Both Hartree-Fock (HF) and DFT are ab initio models [5].
In the inductive approach, models are based on empirical data. Typically, the behaviour of molecular systems is observed, and a mathematical model is proposed based on these observations. To obtain the desired (realistic) behaviour, the mathematical model is fitted to quantitative empirical data. For instance, most multibody and pair potentials are based on empirical observations [3].
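As a concrete illustration of this inductive fitting step, the sketch below adjusts the parameters of a Morse pair potential to a handful of bond-length/energy data points using least squares. The data values here are invented placeholders, not real measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def morse(r, D, alpha, r0):
    """Morse pair potential: well depth D, width alpha, equilibrium distance r0."""
    return D * (1 - np.exp(-alpha * (r - r0)))**2 - D

# Hypothetical 'observed' bond lengths (Angstrom) and energies (eV)
r_data = np.array([2.2, 2.5, 2.9, 3.3, 3.8, 4.5])
E_data = np.array([1.10, -0.45, -0.85, -0.60, -0.30, -0.10])

# Least-squares fit of the model parameters to the observations
(D, alpha, r0), _ = curve_fit(morse, r_data, E_data, p0=(1.0, 1.5, 2.9))
print(f"D = {D:.2f} eV, alpha = {alpha:.2f} 1/A, r0 = {r0:.2f} A")
```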
For most practical purposes, however, these two approaches are combined. Models that are approximated from quantum mechanics must be compared to experimental data. If major discrepancies are found, the model is altered to overcome them; one must reconsider the applied approximations. Some models that are derived from quantum mechanics even contain mathematical expressions that are fitted to empirical data. On the other hand, empirical models are often fitted to data that has been generated using ab initio simulations [3].
The EMT model, which was used in the performed simulations, is based on the principles of quantum mechanics, but such heavy approximations have been applied that the model must be fitted to experimental (or ab initio) data in order to obtain useful parameter values. Also, comparison to empirical data has led to several improvements of the model [6]. Hence, the model is neither purely deduced from theory nor induced from experimental data alone, since both approaches have been utilised in developing it.

3.4 Validity of the Models


Since all approximative models are incorrect in nature, verification or falsification of these is mean-
ingless. In spite of this, approximative models are widely used due to their practical usefulness,
but it is important that one understands the limitations of each model and considers in which
cases a given model is applicable and how the approximations influence the results of a given
simulation.
Most often, a given model is validated (not verified) by comparing it to both experimental data
and to more precise simulations. Some macroscopic properties (such as melting temperature,
electric conductance and optical scattering phenomena) can be easily measured experimentally,
making a comparison more or less straightforward. Other properties (such as surface energies)
are more difficult to extract experimentally, and these are typically calculated using ab initio
simulations.
One should now note that the use of approximated models introduces two major problems, which are described below. While these problems might be of lesser practical importance, they are crucial from a science philosophical point of view.
First, the fact that the models are approximative means that one can never know for certain whether the data extracted from a simulation is correct, unless it is directly compared to similar experimental data. In principle, this means that results from computer simulations can be stated as no more than hypotheses, needing experimental evidence for validation.
Secondly, by comparing a model to a more sophisticated one, it is implicitly assumed that the
sophisticated model is correct. In combination with the first problem, this assumption is a dan-
gerous one. In a worst case scenario, one could imagine a propagation of errors due to multiple
erroneous comparisons.
For most practical purposes, however, these problems seem to be of only minor importance. The fact that one can never know for sure which data is reliable is a purely theoretical one. In reality, experience gained from several simulations with a given model is a good indicator of its strengths and weaknesses. The approximations used in ab initio models can be analysed mathematically in order to highlight the bottlenecks of the model. Due to this, it is possible to get an idea of the valid scope of a given model, although some uncertainty does remain.
Also, less precise models are often useful for determining the qualitative behaviour of a system. For instance, one might use a computationally efficient (and less accurate) model to locate materials that are likely to exhibit some specific property. Although not all the located materials might exhibit this property in reality, one can later use more accurate, and thus more time-consuming, means (ab initio simulations or experimental methods) to determine the ones that do. In this way, the number of materials that must be analysed using the more time-consuming methods can be decreased drastically, saving time and resources.
Experience shows that the EMT model used here is quite good at describing the overall behaviour of various molecular systems [6]. The model has been specifically fitted to mimic the mechanical properties of gold, which it handles fairly well. One of the greatest weaknesses of the EMT model, however, is how it handles cluster surfaces. Surface energies deviate by up to 65% from their actual values, so the formation of surfaces seen in simulations is somewhat inaccurate. This casts considerable doubt on the validity of all structural results obtained from the simulation of gold nanoparticles.
Another questionable aspect of the simulations was the use of a simple pair potential model for the surface. Essentially, the surface acted vertically on each gold atom, as described by a Morse potential [7]. The justification for using such a simple model was its computational efficiency and flexibility; flexibility meaning that the bond energy between the surface and the gold atoms could easily be adjusted so as to emulate a variety of different surfaces.
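A minimal sketch of such a surface model: each gold atom interacts with the surface through a Morse potential that depends only on its height z above the surface plane, with the bond energy D as the adjustable parameter. The default D echoes the 0.125 eV per atom mentioned in Section 5.2; the width and equilibrium height used here are illustrative assumptions:

```python
import numpy as np

def surface_energy_force(z, D=0.125, alpha=1.0, z0=2.5):
    """Vertical Morse interaction between one gold atom and the surface.

    z         : height of the atom above the surface plane (Angstrom)
    D         : bond energy per atom (eV); tuning D emulates different surfaces
    alpha, z0 : width (1/Angstrom) and equilibrium height (Angstrom)

    Returns the potential energy and the vertical force F_z = -dV/dz.
    """
    e = np.exp(-alpha * (z - z0))
    V = D * (1 - e)**2 - D
    F = -2 * D * alpha * (1 - e) * e
    return V, F

# The surface acts independently, and only vertically, on each atom:
for z in [2.4, 2.6, 3.1, 5.0]:
    print(z, *surface_energy_force(z))
```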

The problem with this surface, however, is that it is built entirely upon the assumption that it is in fact the bond energy between the surface and the particle which is important for the final structural properties of the particle. It is thereby assumed that the surface has no influence on the forces among the gold atoms, but only contributes a vertical force individual to each atom. Also, the heterogeneity of a real surface is completely ignored.
This surface model also strongly influences the results. To what extent depends on how accurately the model represents an actual physical surface, which can only be determined by comparing the results to precise theoretical or experimental data (as described later in Section 5.2).

4 Computer Simulation
While the use of models in molecular simulations is of great importance, and influences the validity of the results extracted from a simulation, another important aspect is the numerical methods used.
In classical molecular dynamics, one needs an efficient algorithm for numerically integrating the equations of motion, which are typically given by Newton's second law. These integration schemes range from stochastic methods to more or less deterministic ones.
As mentioned in Section 3, efficient numerical integration algorithms are important, since in most molecular systems no explicit solutions to the equations of motion can be found. Hence, numerical methods make it possible to analyse more complex systems than could be handled using symbolic mathematics. This does, however, lead to a number of potential pitfalls. It is therefore important to consider the implications of this numerical integration.

4.1 Approaches to Molecular Simulation


Figure 2: Simulation schemes listed by stochastic nature, from most stochastic to most deterministic.

Different approaches to molecular simulations exist. Some methods are of stochastic nature, that
is, based on statistics and probability. For instance, Monte Carlo methods are based on random
events, where probabilities are calculated from the potential energies of different states. Monte
Carlo methods are very useful for calculating static properties of materials [5]. Some algorithms
are only partially stochastic, such as the Langevin dynamics method, where random fluctuations
(and friction) are added to the system. In other cases, more or less deterministic methods are desired. Deterministic methods are often simply referred to as molecular dynamics [8], but in the following, Langevin dynamics will also be referred to as molecular dynamics. A spectrum of widely used simulation schemes is sketched in Figure 2, where the most deterministic methods are listed topmost [8].
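As a sketch of the stochastic end of this spectrum, the Metropolis Monte Carlo rule below accepts or rejects random trial moves with probabilities computed from the potential energies of the old and new states. The `potential_energy` function is a stand-in for any of the models of Section 3:

```python
import numpy as np

def metropolis_step(pos, potential_energy, kT, max_disp=0.1, rng=None):
    """One Metropolis Monte Carlo trial move on a randomly chosen atom."""
    if rng is None:
        rng = np.random.default_rng()
    trial = pos.copy()
    i = rng.integers(len(pos))
    trial[i] += rng.uniform(-max_disp, max_disp, size=3)  # random trial displacement

    dE = potential_energy(trial) - potential_energy(pos)
    # Downhill moves are always accepted; uphill moves with probability exp(-dE/kT)
    if dE <= 0 or rng.random() < np.exp(-dE / kT):
        return trial
    return pos
```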
From a science philosophical point of view, the different approaches each impose their own problems. The primary concern with the deterministic methods is the propagation of numerical errors due to the discretisation of time. The stochastic methods additionally impose all the general problems of stochastics, that is, the impossibility of falsification. Although experience might show a stochastic model or method to be accurate or not, an actual Popperian falsification is not possible.
In the performed simulations, Langevin dynamics was used, which allows accurate control of the
temperature of the system. Langevin dynamics is not simply an integration scheme, but also a
model for external and internal influences on a molecular system. These interactions are modelled
stochastically, thus introducing both approximation errors from the model itself, and uncertainty
due to the stochastic nature of the model. These stochastic uncertainties should, hopefully, be
reasonably small when averaged over several identical simulations. Although this cannot be proven correct from a science philosophical point of view, experience shows it to be a quite useful model when used correctly [8].
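In practice, such a set-up can be expressed in a few lines with the Atomic Simulation Environment (ASE), which provides both an EMT calculator and a Langevin integrator. The sketch below is a generic illustration (a small periodic gold system, arbitrary time step and friction), not the actual script behind [1], and the keyword names assume a recent ASE version:

```python
from ase import units
from ase.build import bulk
from ase.calculators.emt import EMT
from ase.md.langevin import Langevin
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution

# A small periodic gold system described by the EMT potential
atoms = bulk("Au", "fcc", a=4.08, cubic=True).repeat((3, 3, 3))
atoms.calc = EMT()

# Draw initial velocities, then run Langevin dynamics at a fixed temperature;
# the random forces and the friction term keep the system near 300 K
MaxwellBoltzmannDistribution(atoms, temperature_K=300)
dyn = Langevin(atoms, timestep=5 * units.fs, temperature_K=300, friction=0.002)
dyn.run(1000)
```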

4.2 Numerical Errors


Figure 3: While the total energy of simulations (of isolated systems) using the Velocity Verlet scheme fluctuates on a small time-scale, it is relatively steady over time compared to most Runge-Kutta methods, which instead ensure a steady total energy on a small time-scale but diverge over large time-spans. Proportions are exaggerated for clarity.

In order to simulate molecular systems, one needs an estimate of the precision of a given simulation.
In most cases, the approximative models are of primary importance, but nevertheless, numerical
errors should be considered in order to validate the results of a given simulation.
The numerical errors consist of discretisation errors and truncation errors [9]. Discretisation errors are due to the discretisation of time, which is necessary in order to numerically integrate the equations of motion. Multiple integration algorithms with different properties have been developed. Some general algorithms, such as Runge-Kutta methods, prove very accurate on a local time-scale. In molecular simulations of isolated systems, the total energy of the system should be conserved, but over large time-spans this is not achieved using Runge-Kutta methods, as seen in Figure 3. Instead, a simpler algorithm, such as Velocity Verlet, proves practical. The Velocity Verlet algorithm is both computationally faster and handles long-term energy conservation better, although the total energy fluctuates somewhat more than with Runge-Kutta methods on a local time-scale [10].
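The Velocity Verlet scheme itself is only a few lines. The sketch below advances positions and velocities with the standard half-step velocity updates for an arbitrary force function and atom masses:

```python
import numpy as np

def velocity_verlet(r, v, m, force, dt, n_steps):
    """Integrate Newton's second law with the Velocity Verlet scheme.

    r, v  : (N, 3) arrays of positions and velocities
    m     : (N,) array of masses
    force : function returning the (N, 3) array of forces at positions r
    """
    r, v = r.copy(), v.copy()
    f = force(r)
    for _ in range(n_steps):
        v += 0.5 * dt * f / m[:, None]   # half-step velocity update
        r += dt * v                      # full-step position update
        f = force(r)                     # forces at the new positions
        v += 0.5 * dt * f / m[:, None]   # second half-step velocity update
    return r, v
```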
Truncation errors are due to the use of floating point numbers, which are limited to a finite
precision. In most cases, the truncation errors are negligible compared to the magnitude of the
approximation errors and the discretisation errors. They do, however, have a different impact, as
described in Section 4.3.
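A two-line illustration of this finite precision, assuming standard IEEE 754 double-precision arithmetic (ordinary Python floats):

```python
# 0.1, 0.2 and 0.3 have no exact binary representation, so rounding is unavoidable
print(0.1 + 0.2 == 0.3)        # False
print(abs((0.1 + 0.2) - 0.3))  # ~5.6e-17, of the order of machine epsilon (~2.2e-16)
```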

The Langevin dynamics model used was solved using an integration scheme inspired by Velocity Verlet. The specific details will be left out, but from a science philosophical point of view, it must be concluded that the random fluctuations of the model itself cause a greater error than the numerical integration does.

4.3 Chaos and Shadowing

Figure 4: Exact solutions of chaotic systems that differ even slightly in initial conditions diverge exponentially from each other over time. Consequently, because of truncation errors, numerically integrated solutions do the same. The shadowing lemma states the existence of shadow orbits: exact solutions which uniformly follow a numerically integrated trajectory. Shadow orbits have slightly different initial conditions than the numerically integrated orbits.

From Newton's law of gravitation it is known that it is impossible to find explicit solutions to the equations of motion of a three-planet system. This problem has come to be known as the three-body problem, and it applies to planets as well as atoms. As soon as the (atomic) system in question holds more than two particles, it is a chaotic system. This means that virtually all systems simulated in molecular dynamics are chaotic. The chaotic nature of these systems fundamentally means that results cannot be replicated. That is, the trajectories of atoms in one simulation will end up completely different from those in other, similar simulations, even though completely identical set-ups were used. This, of course, refers to a system that ought to be fully deterministic.
The impossibility of replicating results is caused by a combination of two phenomena, which are both present in computer simulations. The first phenomenon is the existence of random numerical rounding errors in the computer. The second is the fact that solutions of chaotic systems separate exponentially fast from each other, given even a small displacement in initial conditions; the system is Lyapunov unstable. The truncation errors separate solutions slightly from each other, which then leads to exponentially fast separation.
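This sensitivity is easy to demonstrate numerically. The self-contained sketch below integrates two copies of a small Lennard-Jones cluster (reduced units, Velocity Verlet, as in the earlier sketches) whose initial conditions differ by only 1e-8, and prints their growing separation; the cluster geometry and step count are arbitrary choices:

```python
import numpy as np

def lj_forces(pos):
    """Lennard-Jones forces in reduced units (epsilon = sigma = 1)."""
    f = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            d = pos[i] - pos[j]
            r2 = d @ d
            s6 = 1.0 / r2**3
            fij = 24 * (2 * s6 * s6 - s6) / r2 * d
            f[i] += fij
            f[j] -= fij
    return f

def step(r, v, f, dt=0.005):
    """One Velocity Verlet step with unit masses."""
    v = v + 0.5 * dt * f
    r = r + dt * v
    f = lj_forces(r)
    v = v + 0.5 * dt * f
    return r, v, f

# Two copies of a slightly compressed 3-atom cluster; one atom displaced by 1e-8
r1 = np.array([[0.0, 0.0, 0.0], [1.05, 0.0, 0.0], [0.525, 0.909, 0.0]])
r2 = r1.copy()
r2[0, 0] += 1e-8
v1, v2 = np.zeros_like(r1), np.zeros_like(r2)
f1, f2 = lj_forces(r1), lj_forces(r2)
for n in range(1, 4001):
    r1, v1, f1 = step(r1, v1, f1)
    r2, v2, f2 = step(r2, v2, f2)
    if n % 800 == 0:
        print(n, np.linalg.norm(r1 - r2))  # the separation grows rapidly with time
```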
This raises a problem which challenges the nature of molecular dynamics fundamentally, for how can one learn anything from simulating a system if the basic motions of the atoms cannot be replicated? Indeed, it seems that the simulations are wrong. Fortunately, however, there are many indicators that this is not the case. The shadowing lemma states the existence of so-called shadow orbits: exact solutions, with infinitesimally displaced initial positions, that uniformly follow the numerically integrated trajectory found using simulations [11]. Because shadow orbits have slightly different initial conditions, combined with the chaotic nature of the system, it is possible for them to follow the numerically integrated solution. The fact that these shadow orbits exist makes the numerically integrated solution seem plausible.
A way to ensure that the results from molecular dynamics are reliable is not to draw conclusions from microscopic dynamic data, such as atom trajectories, but only to look at the bigger picture of the simulation. For instance, this corresponds to measuring the pressure, temperature and volume of a gas, instead of analysing the motions of single particles. Even though the molecules which make up the gas are moving around chaotically, the gas as a whole satisfies the macroscopic parameters given by the ideal gas law. Looking individually at the trajectories of all atoms, however, is not meaningful, and could not be replicated in another experiment.
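The instantaneous temperature is an example of such a robust macroscopic quantity: it is an average over all atoms, obtained from the equipartition theorem, and it is reproducible across runs even though the individual trajectories are not. A minimal sketch, assuming SI units:

```python
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant, J/K

def instantaneous_temperature(v, m):
    """Temperature from the equipartition theorem, <E_kin> = (3/2) N k_B T.

    v : (N, 3) array of velocities (m/s), m : (N,) array of masses (kg)
    """
    e_kin = 0.5 * np.sum(m[:, None] * v**2)
    return 2 * e_kin / (3 * len(v) * k_B)
```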

Likewise, this argument can be used when simulating gold nanoparticles and measuring their shapes. The atoms might be governed by chaotic motion, but the macroscopic properties are conserved. However, in the performed simulations, the stochastic nature of Langevin dynamics causes the system to behave even less deterministically.

5 Validity and Realism


5.1 Falsification and Scientific Validity
As discussed in Sections 3 and 4, a great number of errors are inevitable in most molecular simulations. While these errors do indeed affect the simulations and their reliability, do they also make molecular simulations less scientific?
In the Popperian philosophy of science, a discipline can only be considered scientific as long as it is open to falsification. Molecular modelling is indeed open to falsification, but the problem is, as mentioned earlier, that the approximative models have already been falsified and are used in spite of this fact. Before drawing any premature conclusions, however, it is important to realise that the approximative models are not expected to give a precise description of the actual world. The conclusion must then be that it is possible to simulate molecular systems in both a scientific and an unscientific way.
A naive approach to molecular simulations is indeed unscientific, and an uncritical interpretation of data could lead to serious errors. But as long as the approximations are taken into account, and the possible errors are estimated and considered, molecular simulation must be classified as an exact scientific discipline, and the use of approximations and numerical methods is indeed scientifically valid.
In the performed simulations, a critical approach was aimed for. Some of the extracted data was compared to other results, but in the cases where this was not possible, estimates of the uncertainties were given. These will not be accounted for in detail here, but can be seen in [1]. Nonetheless, it must be mentioned that nothing was concluded with certainty, due to the use of an approximated potential (EMT), a simple surface model and the stochastic Langevin dynamics.

5.2 Comparison to Experimental Data

Figure 5: TEM pictures showing gold nanoparticles on an MgO crystal. (a) clearly shows a particle in profile. While the facets of the particle are unclear in (a), they are rather obvious on the larger particles in (b). Images provided courtesy of [12]; edited for clarity.

While a discussion of the theory and potential pitfalls of molecular modelling has been the main focus until now, the single most important aspect of validating a scientific model is to compare it to experimental data.

Figure 6: A simulated particle that closely resembles that in Figure 5 (a). Both shape and size are similar.

While inductive sciences are directly based on experiments and observations of the real world (as will be discussed in Section 6.1), the models and methods used in molecular modelling are only indirectly related to experimental data (as discussed in Section 3.3). The problem stems from the impossibility of direct observation at the atomic scale, where the laws of physics are described by quantum mechanics. So, unfortunately, it is necessary to make macroscopic experimental observations in order to confirm results produced by a molecular simulation. A lot of macroscopic data (thermodynamic, optical and electromagnetic properties) can be measured easily, but for a precise description of the shape of a nanoparticle (which was the goal of the performed analysis, as described in [1]), one must be able to observe the nanoparticles.
However, in many cases this is possible; there is a way of literally seeing things at the nanoscale. By use of a Transmission Electron Microscope (TEM), one may take a look into the nanoscopic world, in some cases observing structures that can be compared to similar, simulated structures. Figure 5 shows two TEM images of gold nanoparticles on MgO crystals. (a) shows a nice profile view of a particle, perfect for comparison to the results of the performed simulations. For direct comparison, a simulated particle which closely resembles the size of that in (a) is shown in Figure 6. It was chosen because its dimensions match those from the TEM picture reasonably well (within the errors of the simulation). The matching simulated particle is a 1372-atom particle on a surface with a binding energy per atom of 0.125 eV. This particle, 2.11 nm in height and with a waist diameter of 2.74 nm, is very similar in shape to that in Figure 5 (a), although the facets of the simulated particle are more clearly visible. Figure 5 (b) shows a group of larger nanoparticles, around 5 nm in diameter, again on an MgO support. Here the facets are much clearer than in (a).
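The quoted height and waist diameter are themselves simple functions of the simulated atomic positions. The sketch below shows one plausible way to extract such numbers; it is not the exact analysis performed in [1]:

```python
import numpy as np

def particle_dimensions(pos):
    """Rough height and waist diameter of a supported particle.

    pos : (N, 3) array of atomic positions (nm), surface normal along z.
    The height is the vertical extent; the waist diameter is estimated
    as the largest centre-to-centre distance in the xy-plane.
    """
    height = pos[:, 2].max() - pos[:, 2].min()
    xy = pos[:, :2]
    diff = xy[:, None, :] - xy[None, :, :]      # all pairwise lateral offsets
    waist = np.sqrt((diff**2).sum(-1)).max()    # fine for a ~1000-atom particle
    return height, waist
```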
As seen, it is possible to compare the shape of a simulated particle to the shape of an actual gold particle found using a TEM. However, it should also be noted that more specific experimental methods are required to validate the results of the simulation, whereas the above comparison is somewhat qualitative and vague. Also, if one desired a more useful comparison to experimental results, it would be wise to take TEM profile pictures of gold nanoparticles on a variety of surfaces. Preferably, one should know the binding energy between gold and the surface material in question. This would give direct experimental data to compare to the simulated results, and thus provide a much better estimate of the accuracy of the used models. On the other hand, if it were possible to validate the modelled surface, one might be able to estimate the surface energies of different substrates.

Figure 7: The diagram shows the layers of our understanding of the universe. The layers are represented by white boxes, placed in the diagram according to their properties, with labels (such as "induction" and "comparison") describing the meaning of the layers and the arrows between them. In general, an arrow means that a layer is a subset of, or is deduced from, the former. The horizontal axis indicates the complexity of the layer, in the sense that layers to the right contain fewer parameters than those to the left of them. The vertical axis indicates exactness: closer to the top means closer to reality, and closer to the bottom means more assumptions, approximations and numerical errors. The diagram should be read starting from the top left corner, following the arrows to the bottom right corner. It shows how numerical methods, in spite of their errors, can solve much more complex systems than analytical methods. The placements and proportions of layers in this diagram are in no way final and are open to discussion.

6 Scientific Approach
6.1 Induction and Deduction
Physics relies heavily on both inductive and deductive methods. The inductive approach is mostly used in experimental physics, where empirical data must be used to generalise observations into physical laws and theories. Deductive methods are favoured in theoretical physics, where physical insight is gained using mathematics, under certain assumptions.
In molecular modelling, both inductive and deductive methods are used. Deductive methods are
used for theoretically developing models. In doing so, one must find a place to start, which in
most cases is quantum mechanics and the Schrödinger equation. Using logic, physical assumptions
and mathematics, one can then apply certain approximations, as discussed in Section 3. From
a science philosophical point of view, one might consider the actual scientific nature of applying
these approximations. From a naive Popperian point of view, a model which can be proved
to be wrong should be dismissed, but for most practical purposes this is unreasonable. Also,
approximative models can be used in a scientific way, as long as one is aware of the limitations of
the model and considers how the total errors of the used simulation set-up propagate throughout the simulation, as discussed in Section 4.3.
Inductive methods are used in order to validate and improve models. The empirical models mentioned in Section 3.3 are even based on empirical results, and are thus developed using an inductive approach. The inductive method is used to compare models to experimental data, using this to improve the models and simulations. Experimental observations are crucial in order to realistically simulate molecular systems and correctly interpret the extracted data. In practice, however, this relationship between computer simulations and experimental physics is more of a synergy; data extracted using molecular simulations often give rise to new interesting ideas, which can be analysed in more detail experimentally. Also, computer simulations might hint that a certain experimental set-up is superior to another, which could actually improve the accuracy of the experimental measurements.
Both inductive and deductive principles are important to molecular modelling. In order to correctly understand the physical accuracy of a model or simulation, one cannot neglect either. The uses of induction and deduction in molecular modelling are also illustrated in Figure 7, where computer simulations are represented as numerical solutions (although simulations are only a subset thereof).

6.2 Theory or Experiment?


When the first computer simulations were performed, physicists had a hard time deciding what to make of them. Some physicists argued that computer simulations were actually similar to experiments, and referred to them as computer experiments, whereas others thought of simulations as pure theory.
The reason for this discussion is the way one analyses data extracted from simulations, which is similar to experimental data analysis. Data from molecular simulations must be analysed using the same tools as those used for experimental data. Unlike traditional theoretical physics, the results cannot be stated explicitly using symbolic mathematics. Instead, one gathers numerical data and visual impressions, which must be interpreted afterwards.
Indeed, computer simulations seem somewhat comparable to actual experiments, and one cannot deny that the approaches to data analysis are similar. However, it cannot be said that computer simulations are computer experiments, since simulations are simply numerical solutions to theoretical models and equations. It is misleading to consider computer simulations similar to experiments, since all the data that can be extracted from a simulation is already contained in the used models; a simulation is simply a representation of the used models, not of the real world itself.
Molecular simulations must therefore be considered theory, although not an explicit theory like Newton's laws. A computer simulation will never reveal anything about the real world that is not already contained in the used equations, even though one might not be able to solve these analytically.

7 Broader Perspective
Today, computer simulations are used in virtually all physical and mathematical fields of study, ranging from economics to biology. Similarly, molecular modelling is crucial in the study of atomic systems. Molecular simulations are of special interest in chemistry and physics, although other uses occur as well. Today, molecular modelling is used for a variety of important purposes, including the search for effective ways to store hydrogen, and thus energy.
In the performed simulations, gold nanoparticles on a surface were analysed. Since the origin of the nanosciences, the practical uses of nanoparticles have been vividly discussed. Their disadvantages are stressed by some, but new applications are constantly uncovered. One of the great advantages of gold nanoparticles is their catalytic activity. The significance of studying gold nanoparticles on a surface is that gold nanoparticles require a support when used as catalysts.
The applications of gold in catalysis include the oxidation of bioethanol to acetic acid, and the article [13] describes a way of using carbon-supported 2 nm to 5 nm gold nanoparticles as catalysts in the production of D-gluconic acid from D-glucose. The authors conclude that gold nanoparticles are worthy alternatives to more expensive metals such as palladium and platinum. Moreover, they conclude that gold is more resistant to oxygen and chemical poisoning than the alternatives.
According to [14], experimental and simulation data suggest that corner atoms of gold nanoparticles are especially active, while edge atoms are a little less active and surface atoms are practically inactive. In other words, the lower the coordination number of an atom, the more catalytically active it is. This is where molecular simulations are useful: using simulations, it is possible to study the density of under-coordinated atoms for various particle sizes and various supports. The results could provide the information needed to predict the most efficient combination of support and particle size.
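Counting under-coordinated atoms is straightforward once the simulated positions are available. The sketch below classifies atoms by the number of neighbours within a cutoff; the cutoff of 3.4 Å (between the first and second neighbour shells of fcc gold) and the `gold_positions` array are assumptions for illustration:

```python
import numpy as np

def coordination_numbers(pos, cutoff=3.4):
    """Number of neighbours within `cutoff` (Angstrom) for each atom."""
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.sqrt((diff**2).sum(-1))
    return (dist < cutoff).sum(axis=1) - 1  # subtract the atom itself

# Hypothetical usage, given an (N, 3) array of simulated positions:
# cn = coordination_numbers(gold_positions)
# print((cn <= 6).sum(), "under-coordinated (corner/edge) atoms")
```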
Hence, one cannot ignore the practical importance of computer simulations, which provide the means to effectively analyse and characterise different chemical compounds, including metals and semiconductors [5].

8 Conclusion
The science philosophical aspects of molecular dynamics have been discussed. It is concluded that the use of approximative models and numerical methods in molecular simulations gives rise to a variety of errors. The models are, however, open to falsification, and in direct comparison to actual experimental data, most approximative models have already been falsified. Nevertheless, molecular simulation is a valid scientific field of study as long as the approximative errors are considered; whenever a simulation is interpreted, one must also analyse the errors involved and take them into account when drawing conclusions. As long as one does not ascribe exaggerated accuracy to the simulations, one does not postulate that they describe actual reality.
The only way to directly study the validity of a given model is to compare it to experimental results (or previously validated theory). In practice, comparing every set of data extracted from a simulation to empirical data is a formidable task, and for most practical purposes unreasonable. Instead, experience, together with analysis of the used approximations, can show the strengths and weaknesses of a given model, helping one to understand its limits. Nevertheless, important data should be verified in order to avoid errors and misinterpretations.
It must be concluded that molecular simulations utilise a theoretical approach to science. Although similar to the experimental approach in some ways (e.g. data analysis), a simulation can never provide results that are not already contained in the used models, even though these might be impossible to solve analytically. Hence, molecular simulations are theory, not a sort of experiment.
In [1] it is described how a range of molecular simulations of gold nanoparticles on a surface were carried out using various approximative models. Due to this use of approximative models, the results cannot be concluded to be accurate from a science philosophical point of view; however, the simulations may still be of practical interest.
In general, it is concluded that the use of molecular simulations is of great interest, and might
indeed prove to be even more important in developing new methods in, for instance, energy
storage and catalysis. As an example, gold nanoparticles have been shown to exhibit useful
catalytic properties.

References
[1] N. L. K. Lück, S. Stålhandske and A. D. Østerkryger, Molecular Dynamics Simulations of Gold Nanoparticles on a Surface - A Study of the Geometric and Structural Properties of Supported Gold Nanoparticles, 12 January 2011.
[2] http://boscoh.com/protein/an-annotated-history-of-molecular-dynamics, accessed 15 January 2011.
[3] F. Ercolessi, A Molecular Dynamics Primer, June 1997.
[4] http://polymer.bu.edu/Wasser/robert/work/node8.html, accessed 14 January 2011.
[5] J. M. Thijssen, Computational Physics, Cambridge University Press, second edition, 2007.
[6] K. W. Jacobsen, P. Stoltze and J. K. Nørskov, "A semi-empirical effective medium theory for metals and alloys", Surface Science, 1996.
[7] http://community.middlebury.edu/~chem/chemistry/class/physical/quantum/help/morse/morse.html
[8] J. M. Haile, Molecular Dynamics Simulation - Elementary Methods, John Wiley & Sons, 1997.
[9] T. Pang, An Introduction to Computational Physics, Cambridge University Press, second edition, 2006.
[10] http://codeflow.org/entries/2010/aug/28/integration-by-example-euler-vs-verlet-vs-runge-kutta/, accessed 14 January 2011.
[11] E. Barreto, http://www.scholarpedia.org/article/Shadowing, accessed 15 January 2011.
[12] L. D. L. Duchstein, TEM pictures of gold nanoparticles at DTU Cen, Center for Electron Nanoscopy.
[13] S. Biella, L. Prati and M. Rossi, "Selective Oxidation of D-Glucose on Gold Catalyst", 2001.
[14] S. K. Klitgaard, K. Egeblad, H. Falsig, J. K. Nørskov and C. H. Christensen, "Jo mindre jo bedre - ultrasmå guldklumper sætter fart på kemien" ("The smaller the better - ultra-small gold lumps speed up chemistry"), Aktuel Naturvidenskab, 2, 2008.
