
Assessing uncertainty in bottom-up modelling of crop- and grasslands

M. Wattenbach (a), Pia Gottschalk (a), Fred Hattermann (b), Claus Rachimow (b), Michael Flechsig (b), Astley Hastings (a), Pete Smith (a)

(a) University of Aberdeen, School of Medicine and Life Science, Department of Plant and Soil Science, Cruickshank Building, St. Machar Drive, Aberdeen, AB24 3UU, UK, m.wattenbach@abdn.ac.uk
(b) Potsdam Institute for Climate Impact Research, Telegrafenberg, 14473 Potsdam, Germany

CarboEurope meeting, Crete, 2006


outline
• Definitions and background
  – Uncertainty
  – Sources of uncertainty
• Some case studies
• A concept for a framework approach – a step towards comparability of model results
• Conclusions
• Things I learned recently, don't really understand, but think might be important
Uncertainty
• Uncertainty: the state of being unsure of something

• In measurement science (ISO 1995 – the GUM): "Uncertainty: parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand"
  – the parameter may be, for example, a standard deviation (or a given multiple of it), or the half-width of an interval having a stated level of confidence
  – two ways of evaluating uncertainty (see the sketch below):
    • Type A: based on the statistical analysis of a series of measurements and their associated variance
    • Type B: based on expert knowledge using all available sources of information; it can likewise be expressed as a standard deviation
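A minimal sketch of a Type A evaluation (my illustration, not from the slides; the sample values are made up): the standard uncertainty is the experimental standard deviation of the mean of repeated observations.

import numpy as np

def type_a_uncertainty(measurements):
    """Type A evaluation (GUM): standard uncertainty of the mean
    from the statistical analysis of repeated observations."""
    x = np.asarray(measurements, dtype=float)
    s = x.std(ddof=1)            # experimental standard deviation
    u = s / np.sqrt(x.size)      # standard uncertainty of the mean
    return x.mean(), u

# Hypothetical repeated NEE observations (kgC ha-1 yr-1)
mean, u = type_a_uncertainty([5851, 5903, 5788, 5920, 5865])
print(f"mean = {mean:.0f}, standard uncertainty = {u:.0f}")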
Uncertainty is not Error
Error refers to the imperfection of a measurement caused by systematic or random effects in the measurement process. The random component arises from variance and can be reduced by increasing the number of measurements, just as the systematic component can be reduced if it stems from a recognizable process.

The uncertainty in the result of a measurement, on the other hand, arises from the remaining variance in the random component and from the uncertainties connected to the correction for systematic effects (ISO 1995).
Definition of model uncertainty

[Diagram: sources of uncertainty feeding into the ecosystem model, grouped into measurement, model and scenario contributions]
• measured/statistical uncertainty – type A
• scientific judgement uncertainty – type B
• baseline uncertainty – type C
• scenario uncertainty – type D
• conceptual uncertainty – type E
example studies
• One model at one site:
  – DNDC at the Oensingen cropland site (Hastings et al., submitted, and Wattenbach et al., poster session)
• Cross-model comparison at one site:
  – five models applied to the same site to compare model uncertainty: DNDC, FASSET, PASIM, EPIC, (CENTURY) (M. Wattenbach et al., in preparation)
• Cross-site comparison, one model at different sites:
  – the PASIM model applied to different grassland sites in Europe (P. Gottschalk et al., accepted, AGEE)
DNDC at Oensingen cropland site

Input parameter/variable               Uncertainty at site scale
Fertilization (nitrogen) application   +/- 10%
Temperature                            +/- 1°C
Precipitation                          +/- 5%
Global radiation                       +/- 5%
Clay content                           +/- 10%
Initial soil carbon                    +/- 10%

[Figure: histogram of the DNDC Monte Carlo runs; count of runs vs. cumulative Oensingen NEE, approx. 6000-7200 kgC ha-1 yr-1]

Comparison of cumulative Eddy Covariance NEE measurements with DNDC NEE predictions

site            NEE measured    best estimate run   mean of the Monte Carlo
                (kgC ha-1)      (kgC ha-1)          simulation (kgC ha-1)
Oensingen 2004  5851            6735                6675

[Figure: cumulative NEE (kg C/ha) over day of year (1-351) for measured EC NEE and simulated DNDC NEE, with the 95% confidence interval for the DNDC simulation of cumulative NEE; a sketch of the underlying Monte Carlo procedure follows below]
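A minimal sketch of the Monte Carlo procedure behind the histogram and confidence interval above; run_dndc is a hypothetical stand-in for the real DNDC model, and only the perturbation ranges come from the input table on the previous slide.

import numpy as np

rng = np.random.default_rng(42)

# Perturbation ranges from the input table (temperature is absolute, the rest relative)
rel_ranges = {"n_fert": 0.10, "precipitation": 0.05, "radiation": 0.05,
              "clay": 0.10, "init_soc": 0.10}

def run_dndc(inputs):
    """Hypothetical stand-in for DNDC: returns cumulative NEE (kgC ha-1).
    A toy linear response keeps the sketch runnable."""
    return 6735.0 * (1 + 0.5 * (inputs["init_soc"] - 1) - 0.2 * (inputs["clay"] - 1))

nee = []
for _ in range(10000):
    inputs = {k: rng.uniform(1 - r, 1 + r) for k, r in rel_ranges.items()}
    inputs["temperature"] = rng.uniform(-1.0, 1.0)  # +/- 1 degC offset
    nee.append(run_dndc(inputs))

nee = np.asarray(nee)
lo, hi = np.percentile(nee, [2.5, 97.5])
print(f"mean NEE = {nee.mean():.0f} kgC/ha, 95% CI = [{lo:.0f}, {hi:.0f}]")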
Cross-site: PaSim model at European grassland sites
Cross-site: PaSim factor importance

[Figure: radar charts of factor importance (scale -20 to 100) for Oensingen 2002, Oensingen 2003, Carlow 2002 and Carlow 2003; axes: N input, temperature, global radiation, precipitation, atmospheric CO2 concentration, bulk density, clay content, pH, c struct, c metabolic, c active, c slow, c passive]
Results – cross-model comparison

[Figure: radar charts of the contribution index (normalized change of standard deviation, in %; scale -10 to 90) for PASIM, DNDC84G, EPIC and FASSET in 2002 and 2003; axes: temperature, precipitation, clay, iniSOC, Nfert. A sketch of one way to compute such an index follows below.]
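The slides describe the contribution index only as the normalized change of standard deviation in %; one plausible reading (my assumption, not necessarily the authors' algorithm) is to fix one factor at its nominal value, re-run the Monte Carlo ensemble, and report the relative drop in output standard deviation:

import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Toy stand-in for an ecosystem model; x maps factor name -> value."""
    return 2.0 * x["temperature"] + 0.5 * x["precipitation"] + x["n_fert"] ** 2

# Factor name -> (nominal mean, standard deviation); illustrative values only
factors = {"temperature": (8.0, 1.0), "precipitation": (600.0, 30.0), "n_fert": (10.0, 1.0)}

def mc_std(fixed=None, n=5000):
    """Std of the model output with all factors sampled, except `fixed`,
    which is held at its nominal (mean) value."""
    out = []
    for _ in range(n):
        x = {k: (mu if k == fixed else rng.normal(mu, sd)) for k, (mu, sd) in factors.items()}
        out.append(model(x))
    return np.std(out)

base = mc_std()
for name in factors:
    ci = 100.0 * (base - mc_std(fixed=name)) / base  # contribution index in %
    print(f"{name}: {ci:.1f} %")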
What we need (ISO ?)
• Standardized methods for uncertainty and sensitivity analysis for ecosystem models
• Standardized datasets to allow inter-model comparison of uncertainty and sensitivity measures
• Standardized software interfaces for ecosystem models to allow access to databases for model experiments and results
• Databases of model evaluation results to give scientists, stakeholders and policy makers easy access to information on model quality and uncertainty
• To implement the approach we propose a web-based client-server architecture
Framework multi-run control

[Diagram: central modelling framework server connecting the components below; a sketch of the control loop follows after this list]
• design client: defines input factors & sampling schemes for the model experiment, plus the dataset for comparison
• ecosystem model client: executes the runs and writes to the result database
• post-processing & visualization
• evaluation client: evaluation & visualization, backed by the model evaluation result database
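A minimal sketch of the multi-run control loop such a framework server might coordinate; every name here (design, ecosystem_model, result_database) is a hypothetical stand-in for the components on the slide:

import itertools

# Hypothetical experiment design: factor levels to sweep (a full-factorial scheme)
design = {
    "temperature_offset": [-1.0, 0.0, 1.0],
    "precipitation_scale": [0.95, 1.00, 1.05],
}

def ecosystem_model(temperature_offset, precipitation_scale):
    """Stand-in for a model client call; returns one output value (kgC ha-1)."""
    return 6000 + 120 * temperature_offset + 800 * (precipitation_scale - 1)

result_database = []  # stand-in for the result database component
for values in itertools.product(*design.values()):
    run = dict(zip(design, values))
    result_database.append({**run, "nee": ecosystem_model(**run)})

# Post-processing / evaluation client: summarize the stored runs
nees = [r["nee"] for r in result_database]
print(f"{len(nees)} runs, NEE range {min(nees):.0f}..{max(nees):.0f} kgC/ha")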
conclusions
• Ecosystem models produce very
heterogeneous uncertainties
• Results are only meaningful if they are
accompanied by uncertainty ranges
• Standardisation is necessary to reach
inter-comparability
• The presented framework approach might
be a way to achieve this target
Things I learned recently, don't really understand, but think might be important
• Presentation by John Norton and Ken Reckhow on "Modelling and Monitoring Environmental Outcomes in Adaptive Management (AM)", IEMSS 2006

Principles:
• Design management as continuing trial-and-error learning, in
which some variation in system state is valuable because it
yields information about the system’s behaviour: “learning by
doing”
• Compare results of alternative policies, through selected
indicators, rather than attempting to optimise some cost
function
• Include resilience to disturbance as an objective
Things I learned recently, don't really understand, but think might be important
• Lyapunov stability (from Wikipedia)
• In mathematics, the notion of Lyapunov stability
occurs in the study of dynamical systems.
– In simple terms, if all points that start out near a point
x stay near x forever, then x is Lyapunov stable. More
strongly, if all points that start out near x converge to
x, then x is asymptotically stable.
– The idea of Lyapunov stability can be extended to
infinite-dimensional manifolds, where it is known as
structural stability, which concerns the behaviour of
different but "nearby" solutions to differential
equations.
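For reference, the standard formal definition (well-known textbook material, not from the slides): an equilibrium $x_e$ of $\dot{x} = f(x)$ is Lyapunov stable if trajectories that start close enough stay close:

$$\forall \varepsilon > 0 \;\exists \delta > 0: \quad \|x(0) - x_e\| < \delta \;\Rightarrow\; \|x(t) - x_e\| < \varepsilon \quad \forall t \ge 0,$$

and asymptotically stable if, in addition, $\|x(t) - x_e\| \to 0$ as $t \to \infty$ for all such starting points.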
Thank you!
Necessity of consistency
• Changing expectations of models
  – In the beginning, models were made to explore the behaviour of systems for scientific reasons only
  – Today, models are more and more used as predictive tools for risk analysis and as policy support systems
• Consequences
  – We need to understand the predictive capacities and limitations of our models
  – We need standardized quality checks to give meaningful uncertainty ranges
tools
• Monte Carlo method
  – Monte Carlo methods: algorithms that solve computational problems by using random numbers
    • Advantage: easy to use
    • Disadvantage: requires a large number of model runs
  – Latin Hypercube sampling: a stratified sampling method that can characterise the population as well as simple random sampling, but with a smaller sample size (see the sketch after this list)
• The tool we are using is SimLab (http://sensitivity-analysis.jrc.cec.eu.int/), a software package designed for Monte Carlo based uncertainty and sensitivity analysis
  – Advantage: easy to use thanks to its graphical user interface, which provides a large number of distributions, sampling methods and parameter interactions
  – Disadvantage: difficult to integrate external models
• Alternative tools and methods to Monte Carlo
  – University of Sheffield: Gaussian Emulation Machine for Sensitivity Analysis (GEM-SA), see http://www.shef.ac.uk/st1mck/code.html
    • Advantage: easy to use, fast, efficient and highly precise
    • Disadvantage: problems with thresholds, and no time dependence integrated yet (planned)
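SimLab itself is GUI-driven, but the Latin Hypercube idea is easy to show; a minimal sketch using SciPy's quasi-Monte Carlo module (an assumed substitute for SimLab, with illustrative bounds):

from scipy.stats import qmc

# Three input factors with lower/upper bounds (illustrative values only)
l_bounds = [0.0, -1.0, 0.95]   # N-fert scale, temperature offset (degC), precipitation scale
u_bounds = [0.2,  1.0, 1.05]

sampler = qmc.LatinHypercube(d=3, seed=1)
unit_sample = sampler.random(n=50)              # 50 points in [0, 1)^3, one per stratum
sample = qmc.scale(unit_sample, l_bounds, u_bounds)

# Each column is stratified: every 1/50-wide slice of a factor's range
# contains exactly one point, unlike plain random sampling.
print(sample[:5])

The point of the stratification: with the same 50 runs, each factor's full range is covered evenly, which is why LHS can match simple random sampling at a smaller sample size.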
Sources of model uncertainty
• Types A and B contributing to type C – baseline uncertainty:
  – uncertainty resulting from the accuracy and precision of the measurements used to determine input factors
  – input factors: parameters and variables
  – the accuracy and precision of the model in representing the processes it is supposed to simulate
  – internal parameters: their accuracy and precision are harder to evaluate, as they are often derived parameters (e.g. statistical) based on different measurement methods; in addition, they are mostly hard-coded
Sources of model uncertainty
• Subset of type B – scenario uncertainty:
  – resulting from the vagueness of scenarios of the future
  – the input factors are uncertain because they depend on unpredictable conditions
  – the base assumptions of our models may be uncertain because they are based on the current system conditions, which may change in the future