
QBOi experimental protocol
Version 1.21 Drafted by John Scinocca, Tim Stockdale, Francois Lott, Scott Osprey, Neal Butchart,
Andrew Bushell, and James Anstey.
Version 1.22 Update to include ozone dataset recommended for high-top models (09-10-2015)
Version 1.23 Clarification of update for high-top models, also including recommendation for ozone
climatology. Update of short-name for convective precipitation flux (prc)
Version 1.24 Adding suggested experiment extensions in 4 (10-10-2016)
Version 1.25 Inclusion of more detailed data protocol information in 5.3, unit correction (psistar)
and updated ozone URL (21-11-2016).

1. Overview
This is the protocol for a set of five QBO experiments. It is based on the outcome of discussions at
and following the QBO Modelling and Reanalyses Workshop (Victoria, March 2015), and is briefly
summarised in Anstey et al. (2015) and Hamilton et al. (2015). The motivations and goals of the
experiments are described below, followed by the technical specification of the experiments and
information on data and diagnostics. The experiments themselves are designed to be simple and
accessible to a wide range of groups.
It is expected that each group will submit a set of results from all the experiments, made with a
single "best-shot" model version. Use of the same model version for the different experiments is
crucial for learning the most from this study.

2. Experiment list and goals


a) Present-Day Climate: Identify and distinguish the properties of and mechanisms underlying the
different model simulations of the QBO in present-day conditions:
EXPERIMENT 1: AMIP specified interannually varying SSTs, sea ice, and external forcings
EXPERIMENT 2: 1xCO2 - identical simulation to the AMIP above except employing repeated annual
cycle SSTs, sea ice, and external forcings
These experiments will allow an evaluation of the realism of modelled QBOs under present-day
climate conditions, employing diagnostics and metrics discussed in Section 5. The impact of
interannual forcing on the model QBO can also be assessed, and Experiment 2 is a control for the
climate projection experiments.
b) Climate Projections: Subject each model's QBO to an external forcing that is similar to that
typically applied for climate projections:
EXPERIMENT 3: 2xCO2 - identical to Experiment 2, but with a change in CO2 concentration and
specified SSTs appropriate for a 2xCO2 world
EXPERIMENT 4: 4xCO2 - identical to Experiment 2 but with a change in CO2 concentration and
specified SSTs appropriate for a 4xCO2 world

The response of the QBO, its forcing mechanisms, and its impact/influence will be evaluated by the
same set of diagnostics used for diagnosing Experiments 1 and 2, so as to evaluate the response
(2xCO2 - 1xCO2 and 4xCO2 - 1xCO2). Obvious questions that will arise:
- What is the spread/uncertainty of the forced model response?
- Do different models cluster in any particular way?
- Can a connection/correlation be made between QBOs with similar metrics/diagnostics in
  present-day climate and their response to CO2 forcing?

The hope is that these experiments may indicate what aspects of modelled QBOs determine the
spread, or uncertainty, of the QBO response to CO2 forcing. These aspects should receive the most
attention by QBOi in order to reduce uncertainty in future projections. Such experiments will also
inform the community what the general uncertainty in future predictions might be for state-of-the-art QBOs in CMIP6 projection experiments.
c) QBO Hindcast and Process Study: Evaluate and compare the predictive skill of modelled QBOs in
a seasonal prediction hindcast context, and study the model processes driving the evolution of the
QBO.
EXPERIMENT 5: A set of initialized QBO hindcasts, each ideally with a 9-12 month range. Observed
SSTs and forcings specified as in Experiment 1, with reanalysis providing atmospheric initial
conditions for a set of given start dates.
These are not strictly prediction experiments in the seasonal forecast sense (they use prescribed
observed SST), but still represent a challenge as to how well the models can predict the evolution of
the QBO from specified initial conditions. Obvious questions that will arise:
- How much does model prediction skill vary between models, and to what extent are models
  able to predict the QBO evolution correctly at different vertical levels and different phases of
  the QBO?
- How does the forecast skill relate to the behaviour of the QBO in Experiment 1? Does a
  realistic QBO in a long model run guarantee good predictions, or vice versa, or neither?
- Do the models that cluster and/or do well in the prediction experiments cluster in the CO2
  forcing experiments?

The hope is that these experiments might indicate what aspects of modelled QBOs determine the
quality of QBO prediction, so that these aspects can receive attention in order to improve prediction.
Alternatively, the hindcast framework may be helpful for directly assessing model changes, to help
drive improvements in free-running models. Can these experiments help narrow the range of
plausible models for climate change experiments?
Process Studies: Experiment 5 has a dual purpose: it not only provides information on the predictive
capabilities of the models, it offers a unique opportunity to investigate and evaluate differences in
wave dissipation and momentum deposition, so as to understand the processes driving the QBO in
each model. The initialization of the seasonal forecasts will necessarily present each model's QBO
with the same initial basic state. The evolution of that state immediately after the start
of the forecast offers an opportunity to compare and contrast the properties of wave dissipation and
momentum deposition between different models given an identical basic state. Specifying the same
observed SST in all models (rather than allowing each model to predict its own SST evolution) helps

focus attention on the model mechanisms that drive the QBO, and the extent to which they are
correctly represented.
It is likely that any focus on processes driving the QBO will benefit from including a special set of
high-frequency diagnostic output. See Sec. 5, below, for specifications of this output.

3. Experiment details
Five sets of simulations/experiments have been defined above:
- EXPERIMENT 1 - AMIP, interannually varying SSTs, sea ice, and external forcings
- EXPERIMENT 2 - 1x CO2, repeated annual cycle SSTs, sea ice, and external forcings
- EXPERIMENT 3 - 2x CO2, as EXPT 2 with +2K SST perturbation and 2x CO2
- EXPERIMENT 4 - 4x CO2, as EXPT 2 with +4K SST perturbation and 4x CO2
- EXPERIMENT 5 - QBO hindcasts, with reanalysis initial conditions on specified start dates

For each experiment it is requested that all modelling groups use the same set of SST and sea ice
boundary conditions, as specified below. External forcings should be followed to the extent possible,
although it is recognized that models may vary in how they specify aerosols, volcanic forcing etc. For
the purposes of these experiments (sensitivity studies of the QBO), what matters is that the external
forcing remains constant when it is supposed to be constant, and varies as realistically as the model
allows when it is supposed to vary. In all cases, the intention is for the experiments to be made using
only reasonable efforts. Experimental details should be documented by all groups, and any changes
to prescribed forcings should be highlighted.
Ensemble sizes are given as a range, from minimum to preferred size. Each group should assess what
is reasonable, given costs, resources and expected results (e.g. some models may have a highly
regular or seasonally phase-locked QBO).

EXPERIMENT 1 - AMIP (Cost: 30-90y)
This experiment is based on CMIP5 Expt 3.3.
Period: 30y, using SSTs and sea ice from 1 Jan 1979 to 28 Feb 2009.
Ensemble size: 1-3
Boundary Conditions: CMIP5 interannually varying sea ice and SSTs obtained from:
http://www-pcmdi.llnl.gov/projects/amip/AMIP2EXPDSN/BCS/amipbc_dwnld.php
External Forcings: CMIP5 external forcings for radiative trace gas concentrations, aerosols, solar,
explosive volcanoes etc. obtained from:
http://cmip-pcmdi.llnl.gov/cmip5/forcing.html#amip
Ozone forcing datasets appropriate for use in high-top models can be obtained from:
https://groups.physics.ox.ac.uk/climate/osprey/QBOi_O3/

Atmospheric initial conditions: Not prescribed. Modellers may initialize as they see fit, such as from a
spun-up QBO run, from a default set of initial conditions used by the model, or from reanalysis data
consistent with the timing of the SSTs and sea ice (e.g. begin the model run with 1 Jan 1979 SSTs, sea
ice, and atmospheric conditions).
Notes:
1. 1 Jan 1979 to 28 Feb 2009 is the date range requested for model output to be uploaded to
the common QBOi archive (see Sec. 5 below for further details on diagnostics). Modellers
may wish to begin their runs earlier than 1 Jan 1979 if spin-up time is required.

EXPERIMENT 2 - 1xCO2 (Cost: 30-90y)
Repeated annual cycle simulation.
Period: 30y, after a suitable spinup (~5y).
Ensemble size: 1-3
Boundary Conditions: CMIP5 SST Climatology 1988-2007 and sea ice Climatology 1988-2007
obtained from:
http://www-pcmdi.llnl.gov/projects/amip/AMIP2EXPDSN/BCS/amipbc_dwnld.php
External Forcings: repeated annual cycle forcings. Ideally this would be a climatological forcing
averaged over the 30-year period used in EXPT 1, but no such dataset exists. The suggestion is to use
year 2002 of the CMIP5 external forcings:
http://cmip-pcmdi.llnl.gov/cmip5/forcing.html#amip
and an ozone forcing dataset appropriate for use in high-top models obtained from:
https://groups.physics.ox.ac.uk/climate/osprey/QBOi_O3/
Note that the year 2002 has neutral ENSO, neutral PDO, and is well away from any historical
explosive volcanoes; we therefore anticipate that any possible effects of SST interannual variability
on the external forcings (e.g. an ENSO influence on the ozone distribution) would be minimized for
that year. This is desirable since the SST and sea ice boundary conditions use climatology. Since this
experiment will be the base for the 2xCO2/4xCO2 experiments, a constant value of CO2 corresponding
to the average over the year 2002 should be used.
For ozone, modelling groups may find it most appropriate to use a climatology of zonal-mean ozone
instead of the 3D ozone distribution from the year 2002, since this is the year of the SH SSW, and in
general a single year of ozone data will reflect the meteorology of that year. The ozone dataset at the
above link contains 1850-2099 monthly-mean 3D ozone concentration on pressure levels, from
which a zonal-mean climatology can be constructed.
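Purely as an illustration of the last point (not a required procedure), a zonal-mean ozone climatology could be built along the following lines; the file name, variable name ("o3"), dimension names, and averaging period used here are assumptions to be adapted to the actual dataset:

# Illustrative sketch: constructing a zonal-mean ozone climatology from the
# monthly-mean 3D ozone dataset. File/variable/dimension names are assumptions.
import xarray as xr

ds = xr.open_dataset("qboi_ozone_1850-2099.nc")          # monthly 3D ozone on pressure levels
o3 = ds["o3"].sel(time=slice("1988", "2007"))            # e.g. match the SST climatology period

# Zonal mean, then average all Januaries, Februaries, ... into a 12-month climatology
o3_zm_clim = o3.mean(dim="lon").groupby("time.month").mean(dim="time")
o3_zm_clim.to_netcdf("o3_zonal_mean_climatology.nc")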
Note that although these choices are not ideal (the 30-year comparison period, the 20-year SST
climatology and the 2002 fixed forcing are all inconsistent with each other), the observed
dependence of the QBO on changing climate through this period appears to be negligible. Thus for
QBO purposes (and in particular for comparing model responses) the protocol is believed adequate,
if all models use the same approach.

Atmospheric initial conditions: As with EXPT1, not prescribed.



EXPERIMENTS 3 and 4 - 2xCO2/4xCO2 (Cost: 60-180y)
Period: 30y, after a suitable spinup.
Ensemble size: 1-3
Boundary Conditions: SST as in EXPT 2 but with a spatially uniform +2/+4K perturbation added for
EXPT 3/4. Sea ice identical to EXPT 2.
External Forcings: the forcings in these two experiments should be exactly the same as used in EXPT
2 except for the CO2 concentration, which should be doubled and quadrupled respectively. Only the CO2
forcing should be changed, not other radiatively active trace species. Please note that this includes
the ozone distribution: it should be identical to that used in EXPT 2. These are sensitivity experiments,
not attempts to predict specific periods in the future.
Atmospheric initial conditions: As with EXPT1, not prescribed.
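For illustration only, a uniform SST perturbation of the EXPT 2 boundary-condition file might be produced along the following lines; the file and variable names ("amip_sst_clim_1988-2007.nc", "tosbcs") are assumptions, and groups will generally use their own boundary-condition preprocessing:

# Illustrative sketch: adding a spatially uniform warming to the EXPT 2 SST climatology
# to produce the EXPT 3 (+2K) and EXPT 4 (+4K) boundary conditions. Sea ice is left
# unchanged, as specified above. File/variable names are assumptions.
import xarray as xr

def perturbed_sst(infile, outfile, delta_k):
    ds = xr.open_dataset(infile)
    ds["tosbcs"] = ds["tosbcs"] + delta_k    # uniform perturbation in kelvin
    ds.to_netcdf(outfile)

perturbed_sst("amip_sst_clim_1988-2007.nc", "sst_expt3_plus2K.nc", 2.0)
perturbed_sst("amip_sst_clim_1988-2007.nc", "sst_expt4_plus4K.nc", 4.0)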

EXPERIMENT 5 - QBO hindcasts (Cost: 68-150y)
These are atmosphere-only experiments, initialized from reanalysis data, providing multiple short
integrations from a relatively large set of start dates sampling different phases of the QBO.
Start dates (i.e. atmospheric initial conditions): 1 May and 1 November in each of the years
1993-2007 (15 years, 30 start dates).
Hindcast length: 9-12 months. However, if this length is impractical due to computational or other
constraints, shorter forecasts such as the conventionally used 6-month hindcast length are
acceptable. Note that the process-study aspect of this experiment can be addressed using 6-month
forecasts.
Ensemble size: 3-5 members
The boundary conditions and forcings for this experiment follow the prescription of the AMIP
experiment (EXPT 1).
Boundary Conditions: CMIP5 interannually varying sea ice and SSTs obtained from:
http://www-pcmdi.llnl.gov/projects/amip/AMIP2EXPDSN/BCS/amipbc_dwnld.php
External Forcings: CMIP5 external forcings for radiative trace gas concentrations, aerosols, solar,
explosive volcanoes etc. obtained from:
http://cmip-pcmdi.llnl.gov/cmip5/forcing.html#amip
Ozone forcing datasets appropriate for use in high-top models can be obtained from:
https://groups.physics.ox.ac.uk/climate/osprey/QBOi_O3/
Initial data for these dates should be taken from the ERA-Interim reanalysis. ERA-Interim data are
available for download from http://apps.ecmwf.int/datasets (registration is required). If downloading
many start dates from this site, it may be easier to use the batch access method described on the
site, although interactive download of each date is also possible. Data are available on either
standard pressure levels or original model levels, and in either GRIB or netCDF format. Try to
download only the data you need, e.g. at 0Z on the 1st of the month.
The ensemble is expected to be generated by perturbing each ensemble member by a small
anomaly, which need do no more than change the bit pattern of the simulation.
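A minimal sketch of such a bit-level perturbation is given below; the initial-condition file and variable names are hypothetical, and each group will apply this within its own initialization machinery:

# Illustrative sketch: generating an ensemble member by adding a tiny perturbation to
# the initial temperature field, sufficient only to change the bit pattern of the run.
# File/variable names ("ic_19931101.nc", "ta") are assumptions.
import numpy as np
import xarray as xr

ds = xr.open_dataset("ic_19931101.nc")
rng = np.random.default_rng(seed=2)        # use a different seed for each ensemble member

# Perturbation of order 1e-10 K: physically negligible, but enough to separate trajectories
ds["ta"] = ds["ta"] + 1e-10 * rng.standard_normal(ds["ta"].shape)
ds.to_netcdf("ic_19931101_r2.nc")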

4. Additional experiments
Some groups may want to conduct additional experiments, to provide further information on the
sensitivity of the results to various factors.
EXPERIMENT 5A: As EXPT5, but using a coupled ocean-atmosphere model and predicting the SST,
instead of specifying observed values. External forcings could also be fixed so as not to use future
information. This is then a true forecast experiment for the QBO, and can be compared with the
results of EXPT5.
Further, groups may want to run some or all of the experiments with multiple model versions, to
explore the sensitivity of some of the results e.g. to vertical resolution or physics package. Although
ideally all experiments would be rerun for any given model version, this may not be practical. Model
versions for which complete experiment sets are available are likely to be considered the primary
results when analysis takes place. If a group does run experiments using more than one model
version, the different model versions should be given distinct names, such as when labelling output
data files, to prevent ambiguities arising in the analysis of results.
At the September 2016 Oxford QBO workshop (see SPARC Newsletter 48, January 2017, for a
workshop summary), a number of possible modifications and/or extensions to Experiments 1-5 were
discussed that address topics of interest to QBOi participants. For groups interested in pursuing
them, the following are suggested:

- Examine teleconnection robustness, particularly of the NAO response, by extending EXPT2
  by one or more centuries. Since EXPT2 is a timeslice experiment, a single long run is
  preferable to adding more ensemble members.
- Examine simultaneous QBO and ENSO teleconnections by adding SST anomalies
  representative of El Niño / La Niña conditions to EXPT2. Single long multi-century runs are
  preferable to multiple ensemble members. Ideally the first modelling group to carry out this
  experiment should make available the specifications of their El Niño / La Niña perturbations,
  so that other modelling groups can run comparable experiments.
- Examine the 2016 QBO disruption by running EXPT5 with initialization in Nov 2015.
- Idealized experiments, as modifications of EXPT3 and EXPT4, to separate tropospheric and
  stratospheric climate change effects. A simple method is to run modified versions of EXPT3/4
  in which SST or CO2 is increased, but not both.
- QBO vs. no-QBO modifications of EXPT2-4: for models that can remove their QBOs in a
  straightforward way (e.g. by turning off tropical non-orographic GWD), what is the overall
  effect of the QBO on present-day climate and on projections? For present-day climate EXPT2
  should be modified, rather than EXPT1, so that comparison to modified EXPT3 and EXPT4
  runs is straightforward.
- Ozone recovery: specify an ozone perturbation representing ozone recovery in EXPT3 and EXPT4.
- Interactive ozone: for models that can run both with and without ozone chemistry, how does
  the dynamical QBO respond to ozone changes? EXPT2 is the best candidate for this
  experiment since it approximates present-day conditions without including interannual
  variability due to SSTs (as EXPT1 does). Ideally the chemistry version of the model should not
  differ in any other way (i.e. except by the inclusion of chemistry) from the non-chemistry
  version of the model.

These experiments do not comprise a second set of coordinated experiments, but are adopted as
coordinated recommendations for interested groups, so that intercomparison of results can be
more easily carried out among groups that choose to pursue these experiments.

5. Output, diagnostics & data protocol


A discussion around the choice of diagnostics was initiated by Francois Lott and can be found on the
QBOi blogging site: http://qboiexperiments.blogspot.co.uk/. This and subsequent discussions led to
the consensus on output diagnostics described here. The spatial and temporal resolution of the
requested diagnostic variables is first described (Sec. 5.1), followed by the requested output
periods for the different experiments (Sec. 5.2). This is followed by brief comments on data format
and location of the archive (Sec. 5.3), and then a summary table listing the requested output
variables, including their standardized names (Sec. 5.4).

5.1 Spatial and temporal resolution of the requested diagnostics
For the 5 core experiments good vertical resolution is required for the output diagnostics.
Accordingly, the following extended set of 30 pressure levels, Pext, is requested:
Pext (hPa) = {1000, 925, 850, 700, 600, 500, 400, 300, 250, 200, 175, 150, 120, 100, 85, 70, 60, 50, 40,
30, 20, 15, 10, 7, 5, 3, 2, 1.5, 1.0, 0.4}
These pressures are adapted from the extended levels set requested by CMIP6. They have been
slightly modified to obtain vertical resolution of 1.0 to 1.5 km over the altitude range 200 hPa to 40
hPa, i.e. through the upper tropical troposphere and lower stratosphere.
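As a purely illustrative check (not part of the data request), the nominal layer thickness implied by these levels can be estimated from the log-pressure height, z ≈ H ln(p0/p), with an assumed scale height of H ≈ 7 km:

# Illustrative check: approximate layer thickness between adjacent Pext levels,
# using log-pressure height with an assumed scale height H = 7 km.
import numpy as np

p_ext = np.array([1000, 925, 850, 700, 600, 500, 400, 300, 250, 200, 175, 150, 120, 100,
                  85, 70, 60, 50, 40, 30, 20, 15, 10, 7, 5, 3, 2, 1.5, 1.0, 0.4])  # hPa
H = 7.0  # km, assumed scale height

dz = H * np.log(p_ext[:-1] / p_ext[1:])    # thickness of each layer in km
for p_bot, p_top, thickness in zip(p_ext[:-1], p_ext[1:], dz):
    print(f"{p_bot:7.1f} -> {p_top:6.1f} hPa : {thickness:4.2f} km")
# Between 200 hPa and 40 hPa the implied spacing is roughly 0.9-1.6 km, i.e. close to
# the 1.0-1.5 km target stated above.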
All output variables are requested to use this standard set of pressure levels, for ease of comparison
between models. There are two exceptions, however:
1. Data to be used for calculating equatorial wave spectra (6-hourly instantaneous fields)
should be provided at vertical resolution equivalent to the model resolution, as discussed in
more detail below. The reason for high vertical resolution is to ensure accurate calculation of
QBO wave forcing (e.g. see Kim and Chun 2015).
2. Daily-mean 3D variables should be provided on the set of 8 pressure levels used by CMIP5:
1000, 850, 700, 500, 250, 100, 50, 10 hPa. This will reduce data volume, and since it is
anticipated that these data will be used to examine QBO influence on other regions of the
atmosphere (e.g. on the NAO), we see no need for high vertical resolution.
There is no prescribed horizontal grid on which data are to be provided. Data should be provided on
latitude-longitude grids at a resolution that ideally is representative of the actual model resolution.

For models with such high horizontal resolution that the size of the dataset becomes prohibitive, use
of a coarser-resolution grid is acceptable. In this case the reduction method should be documented
so that there is no confusion between the model's resolution and the diagnostics' resolution.
Models may not simulate the QBO for the right (or similar) reasons, and in particular the fraction of
resolved and parameterised waves will most likely be different in each model. To examine the
zonal-mean QBO momentum budget, EP fluxes (EPF), the EP-flux divergence (DIVF), and other terms in
the TEM zonal momentum equation are requested. Although requested as both daily-mean and
monthly-mean fields, EPF-derived diagnostics should be calculated using 6-hourly model wind and
temperature fields, at the minimum (a minimal computational sketch is given at the end of this
subsection). However, DIVF alone may not be a sufficient diagnostic, as the EP fluxes can include
large opposing contributions from different wave types. To examine the dependence of QBO wave
driving on different types of equatorial waves, wavenumber-frequency spectra of EPF can be
calculated. This requires storage of instantaneous values of u, w, v, and T every 6 hours on model
levels, or on pressure levels at roughly equivalent vertical resolution to the model levels. To reduce
the size of the dataset, these data will be saved on a reduced range of vertical levels, Pred, from
100 hPa to 0.4 hPa. Note that the vertical resolution within this range will be as high as possible
since the data will be on model levels or on pressure levels with similar vertical resolution. To
further reduce the size of the dataset, the 6-hourly output should be provided only on a reduced
set of latitudes near the equator, φred (degrees), specifically:
φred: 15°S to 15°N
It should be noted that high vertical resolution of the 6-hourly dataset is desirable for the following
reasons: 1) to improve the representation and evolution of spectra as described in Horinouchi et al.
(2003) and Lott et al. (2014), for model simulations having a QBO; 2) to better understand how
quickly the equatorial waves dissipate as they propagate upward; and 3) to understand the
behaviour of equatorial waves near the Tropical Tropopause Layer (TTL) and in the SAO region.
Differences between vertical levels may also help reduce the contribution of tidal signals in the
time-longitude spectra, something that can be problematic at sub-diurnal periods.
If the 6-hourly data are provided on model levels, the accompanying data allowing conversion of the
data from model levels to pressure levels must also be provided. For example, models with a
height-based vertical coordinate should also upload pressure/density data. However, if a group
decides to provide the 6-hourly data on a set of pressure levels at vertical resolution equivalent to
the model resolution, then of course the accompanying data for conversion from model to pressure
levels does not need to be provided.
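As referenced above, the following is a minimal, illustrative sketch of an EP-flux calculation from the 6-hourly fields. It uses a simplified quasi-geostrophic log-pressure form rather than the full TEM expressions of Andrews et al. (1987) that the protocol actually requests, and all file, variable, and dimension names, as well as the numerical constants, are assumptions to be adapted to each model's output.

# Illustrative sketch (quasi-geostrophic simplification, not the definitive QBOi
# diagnostic): EP flux and its divergence from 6-hourly u, v, T on pressure levels.
# Names ("ua", "va", "ta", "lat", "lon", "plev") and constants are assumptions.
import numpy as np
import xarray as xr

a, H, Rd, cp, Omega = 6.371e6, 7.0e3, 287.0, 1004.0, 7.292e-5   # assumed constants (SI)
kappa = Rd / cp

ds = xr.open_mfdataset("6hr_uvta_*.nc")          # hypothetical 6-hourly input files
u, v, T = ds["ua"], ds["va"], ds["ta"]

phi = np.deg2rad(ds["lat"])                       # latitude assumed in degrees
coslat = np.cos(phi)
f = 2.0 * Omega * np.sin(phi)
p = ds["plev"]                                    # pressure assumed in Pa
rho0 = p / (Rd * 240.0)                           # simple reference density (isothermal, 240 K)

theta = T * (1.0e5 / p) ** kappa                  # potential temperature

# Eddy (deviation from zonal mean) covariances
up, vp, thp = u - u.mean("lon"), v - v.mean("lon"), theta - theta.mean("lon")
uv = (up * vp).mean("lon")
vth = (vp * thp).mean("lon")

# Vertical derivative of zonal-mean theta via the log-pressure chain rule dz = -H dp/p
dth_dz = theta.mean("lon").differentiate("plev") * (-p / H)

# Quasi-geostrophic EP-flux components
Fy = -rho0 * a * coslat * uv
Fz = rho0 * a * coslat * f * vth / dth_dz

# EP-flux divergence and the corresponding zonal-wind tendency (m s-2)
dFy = (Fy * coslat).differentiate("lat") * (180.0 / np.pi) / (a * coslat)
dFz = Fz.differentiate("plev") * (-p / H)
utenddivf = (dFy + dFz) / (rho0 * a * coslat)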

5.2 Output periods
Monthly-mean output should be provided for the full requested durations of all ensemble members
of all experiments. The full requested durations for each experiment are:
EXPT 1:    1 Jan 1979 to 28 Feb 2009
EXPT 2-4:  30 years
EXPT 5:    9-12 months

Notes on these requested durations:

1. For EXPT 1, the requested period facilitates ease of comparison with ERA-Interim and other
recent reanalyses such as MERRA, NCEP-CFSR, and JRA-55. All of these reanalyses begin in
Jan 1979, with the exception of JRA-55, which begins in 1958, and extend to the present day.
(For an overview of current reanalyses, see https://reanalyses.org/atmosphere/overview-current-reanalyses.)
2. As noted above, in Sec. 3, 6-month integrations are acceptable for EXPT 5 if the requested
9-12 month integrations are prohibitively expensive or otherwise unfeasible.

Daily-mean output should be provided for the full requested durations of each experiment for the
following ensemble members:
EXPT 1-4:  first ensemble member
EXPT 5:    all ensemble members

The full requested durations are identical to those requested for monthly-mean data.

High-frequency (6-hourly) diagnostics for calculating equatorial wave spectra should be provided for
the following periods and ensemble members of each experiment:
EXPT 1:    1997-2002, first ensemble member
EXPT 2-4:  years 1-4, first ensemble member
EXPT 5:    first 3 months, all ensemble members

Notes re. the 6-hourly diagnostics:


1. If it is feasible for a group to save 6-hourly diagnostics for all 9-12 months of each ensemble
member for each start date, this is encouraged. However, saving the first three months of
each ensemble member is the minimum requirement.
2. The 1997-2002 period is suggested for EXPT 1 because this period encompasses positive,
negative and neutral ENSO phases.

5.3 Data format and storage
It is very strongly recommended that groups upload their QBOi model output data in the
CF-compliant netCDF format, which is the standard CMIP-required format. Storage for the QBOi
multi-model dataset is provided by the British Atmospheric Data Centre (BADC). The BADC's data storage
and processing service is called JASMIN, and the storage area it provides for the data is referred to as
a "group workspace". JASMIN also offers the option to process data locally, for which a range of
standard software packages is provided on the JASMIN machine (Python, R, IDL, etc.).
More detail on data format and JASMIN is given in the subsections below. Please bring any errors,
ambiguities, or suggested improvements in this information to the attention of the QBOi
coordinators as soon as possible.

5.3.1 Data format: CF-compliant netCDF


Guidelines for preparing the data in CF-compliant netCDF format are available at:
https://badc.nerc.ac.uk/help/formats/netcdf/index_cf.html
This BADC page gives an overview of the netCDF file structure and metadata requirements. The CMIP
conventions should be followed, so that if a modelling centre already has a process in place to
produce CF-compliant netCDF files (e.g. as a result of having contributed to CMIP5) then it is
sufficient to use this process for QBOi model output files. We recommend that the CMIP5 version of
the conventions be used. It is also acceptable if filenames and directory structure use the CMIP6
standard, but at the time of this writing (Nov 2016) the CMIP6 standard has not yet been finalized.
For this reason the CMIP5 standard is preferred. It is also acceptable to use the CCMi-1 standard if
this is easier to provide than the CMIP5 standard, but CMIP5 is preferred.
The CMIP5 conventions are described at:
http://cmip-pcmdi.llnl.gov/cmip5/data_description.html
and by following the links on that page. The filename and directory structure should conform to the
CMIP5 standard as described by:
http://cmip-pcmdi.llnl.gov/cmip5/docs/cmip5_data_reference_syntax.pdf
which for the directory structure is (following Sec. 3.1 of the above document):
<institute>/<model>/<experiment>/<frequency>/<modelling realm>/<variable
name>/<ensemble member>

(It is not necessary to include the "<activity>/<product>" directories at the start of the path. Also,
note that the "<model>" directory is useful because a group may contribute QBOi runs using more
than one version of their model.) The corresponding standard filename structure is (following Sec.
3.3 of the above document):
<variable name>_<MIP table>_<model>_<experiment>_<ensemble
member>[_<temporal subset>][_<geographical info>].nc

where fields in square brackets are optional, although it is strongly recommended that the temporal
subset always be included. For example, one netCDF file of CCCma's model output (monthly-mean
zonal wind for QBOi Experiment 1) would reside on JASMIN as:
/group_workspaces/jasmin2/qboi/CCCma/CMAM/QBOiExp1/mon/atmos/ua/
r1i1p1/ua_Amon_CMAM_QBOiExp1_r1i1p1_197901-200912.nc

Note that the optional geographical info label does not appear in the filename. Under CMIP5
convention this label may be used to specify the dimensions of the different types of output fields, as
given in Sec. 5.4 below. E.g., a file of zonal-mean monthly-mean zonal wind data might be called:
ua_Amon_CMAM_QBOiExp1_r1i1p1_197901-200912_YPT.nc

However, the following name for this file is preferred:


ua_ZAmon_CMAM_QBOiExp1_r1i1p1_197901-200912.nc

The reasons for this preference are: (1) it collects information about dimensions and averaging (both
spatial and temporal) into one place, the MIP table name; (2) at the time of writing (Nov 2016) we
strongly suspect this will become the CMIP6 way of doing things; (3) the geographical labels (YPT,
etc.) are not standardized for CMIP5 anyway. For the QBOi data request, all data have dimensions of
either XYT, XYPT, or YPT. For XYT and XYPT files the standard CMIP5 table name should be used (e.g.
Amon), while for zonal-mean data a Z should be added to the front of the standard table name (e.g.
ZAmon, Zday), as in the example above. To summarize, here are examples of all filenames and
their directory paths for zonal wind output:
[...]/mon/atmos/ua/r1i1p1/ua_Amon_CMAM_QBOiExp1_r1i1p1_197901-200912.nc
[...]/mon/atmos/ua/r1i1p1/ua_ZAmon_CMAM_QBOiExp1_r1i1p1_197901-200912.nc
[...]/day/atmos/ua/r1i1p1/ua_day_CMAM_QBOiExp1_r1i1p1_19790101-19831231.nc
[...]/day/atmos/ua/r1i1p1/ua_Zday_CMAM_QBOiExp1_r1i1p1_19790101-20091231.nc
[...]/6hr/atmos/ua/r1i1p1/ua_6hrPLev_CMAM_QBOiExp1_r1i1p1_1979010100-1979123118.nc

where
[...] = /group_workspaces/jasmin2/qboi/CCCma/CMAM/QBOiExp1

The temporal subset label will of course vary according to file size (the number of times to include
per file for each type of output can be chosen freely, as per CMIP5 convention; it is not prescribed).
For more information on standard values of labels that should be used (e.g. "QBOiExp1"), see Sec.
5.3.2 below. See Sec. 5.3.3, below, for an example of how the above file would be uploaded to BADC.
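As an illustration only, the directory and filename construction described above can be expressed as a short helper; the function and argument names below are hypothetical, and only the resulting structure follows the convention:

# Illustrative sketch: assembling a QBOi directory path and filename following the
# CMIP5-style convention described above. Function/argument names are hypothetical.
import os

GWS = "/group_workspaces/jasmin2/qboi"    # QBOi group workspace root on JASMIN

def qboi_path(institute, model, experiment, frequency, realm,
              variable, member, mip_table, temporal_subset):
    directory = os.path.join(GWS, institute, model, experiment,
                             frequency, realm, variable, member)
    filename = f"{variable}_{mip_table}_{model}_{experiment}_{member}_{temporal_subset}.nc"
    return os.path.join(directory, filename)

# Reproduces the monthly-mean zonal wind example path given above
print(qboi_path("CCCma", "CMAM", "QBOiExp1", "mon", "atmos",
                "ua", "r1i1p1", "Amon", "197901-200912"))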
Online tools are available to check that a netCDF file is CF-compliant:
http://puma.nerc.ac.uk/cgi-bin/cf-checker.pl
http://titania.badc.rl.ac.uk/cgi-bin/cf-checker.pl
For reference, the CF conventions are described at:
http://cfconventions.org/
A useful section of this website gives a concise listing of the requirements and recommendations for
any given CF version:
http://cfconventions.org/requirements-and-recommendations.html
For the above example of a CCCma netCDF file, CF version 1.4 was used since this was standard at
the time of CMIP5. However, note that CF versions are designed to be backward compatible with
previous versions. The CMOR software, which can be used to produce CF-compliant netCDF files, is
described at:
http://cmor.llnl.gov/
For the CCMi-1 protocol, the CMOR tables are available at:
https://github.com/ccmi1-test/ccmi1-cmor-tables
These tables represent the definitive version of the CCMi-1 protocol, which is similar in many ways to
the CMIP5 protocol (and can be considered an extension of it in most respects). As noted earlier, the
CMIP5 standard is preferred, but CCMi-1 is acceptable if using it is much more feasible for a group
(i.e. if they are already set up to provide this format but not the CMIP5 format).
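For groups not using CMOR, the sketch below illustrates, in outline only, the kind of CF metadata expected in a QBOi output file; the dimension sizes, attribute values, and institution/source strings are placeholders rather than prescribed values:

# Illustrative sketch (not a replacement for CMOR): writing a minimal CF-style netCDF
# file for zonal-mean monthly-mean zonal wind. Data and attribute values are placeholders.
import numpy as np
from netCDF4 import Dataset

with Dataset("ua_ZAmon_MODEL_QBOiExp1_r1i1p1_197901-200912.nc", "w") as nc:
    nc.Conventions = "CF-1.4"                      # CF version used for CMIP5
    nc.institution = "Example modelling centre"    # placeholder
    nc.source = "MODEL (QBOi configuration)"       # placeholder

    nc.createDimension("time", None)
    nc.createDimension("plev", 30)
    nc.createDimension("lat", 64)

    time = nc.createVariable("time", "f8", ("time",))
    time.units = "days since 1979-01-01"
    time.calendar = "gregorian"

    plev = nc.createVariable("plev", "f8", ("plev",))
    plev.units = "Pa"
    plev.standard_name = "air_pressure"
    plev.positive = "down"

    lat = nc.createVariable("lat", "f8", ("lat",))
    lat.units = "degrees_north"
    lat.standard_name = "latitude"

    ua = nc.createVariable("ua", "f4", ("time", "plev", "lat"))
    ua.units = "m s-1"
    ua.standard_name = "eastward_wind"
    ua.cell_methods = "time: mean longitude: mean"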

5.3.2 QBOi-specific labels


Standard labels help to avoid unnecessary variations in terminology. For CMIP this is referred to as a
Control Vocabulary (CV); for example,
http://cmor.llnl.gov/mydoc_cmor3_CV/
describes the CMIP6 standards, including standard experiment labels ("historical", "piControl", etc).
Here we recommend some standard usage for labels used in QBOi filenames and directories.
The "<model>" label is to be decided by each modelling centre, but it should uniquely identify the
model that was used. For example, if CCCma uploads runs from two versions of the model, both
model versions cannot be called "CMAM". It is understood that this label is taken to be unique only
within the QBOi project (e.g. a different "CMAM" would have contributed to the CCMi project). The
label does not need to be descriptive; it is simply required that the model configuration
corresponding to the label can be unambiguously identified.
For the "<experiment"> label, the following values are recommended:

QBOi experiment
Experiment 1 (AMIP)

<experiment> label

QBOiExp1

Experiment 2 (1xCO2)

QBOiExp2

Experiment 3 (2xCO2, +2K SST)

QBOiExp3

Experiment 4 (4xCO2, +4K SST)

QBOiExp4

Experiment 5 (hindcast)

QBOiExp5 (see note below)


For Experiment 5 the initialization date should be indicated by a "<sub-experiment>" label that is
added to the "<ensemble member>" label:
May1993-r1i1p1
Nov1993-r1i1p1
and so forth. This is to avoid ambiguity in the filename between the initialization date and the time
range of the data (which is indicated by the "<temporal subset>" label). For example, a filename
ending in
..._QBOiExp5_r1i1p1_199311-199410.nc
might indicate one year of output for a hindcast initialized in Nov 1993, but if the hindcast
experiments are run for longer than a year (as could be done to examine decadal predictability) then
this filename might also indicate the Nov 1993 to Oct 1994 period of a hindcast that was initialized
earlier. The ambiguity is avoided by incorporating the "<sub-experiment>" label into the "<ensemble
member>" part of the filename:
ua_Amon_CMAM_QBOiExp5_Nov1993-r1i1p1_199311-199410.nc

The full path of the above file will then appear as:
/group_workspaces/jasmin2/qboi/CCCma/CMAM/QBOiExp5/mon/atmos/ua/Nov1993-r1i1p1/ua_Amon_CMAM_QBOiExp5_Nov1993-r1i1p1_199311-199410.nc
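For illustration, the full set of Experiment 5 ensemble-member labels could be generated as follows; the three-member ensemble size shown is simply the minimum suggested in Sec. 3:

# Illustrative sketch: generating the <sub-experiment>-<ensemble member> labels for
# the 30 Experiment 5 start dates (1 May and 1 Nov of 1993-2007), 3 members each.
members = []
for year in range(1993, 2008):
    for month in ("May", "Nov"):
        for r in (1, 2, 3):
            members.append(f"{month}{year}-r{r}i1p1")

print(members[:4])   # ['May1993-r1i1p1', 'May1993-r2i1p1', 'May1993-r3i1p1', 'Nov1993-r1i1p1']
print(len(members))  # 90 labels (30 start dates x 3 members)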


5.3.3. Data storage: BADC (JASMIN) access
Registration is required to access JASMIN at the BADC. An overview of the required steps is given at:
http://jasmin.ac.uk/how-to-use-jasmin/workflow/
Users may also find the following summary of the required steps to be useful:
http://s-rip.ees.hokudai.ac.jp/resources/data.html
under the section "How to Access JASMIN". This summary was prepared by the SPARC S-RIP Activity
in response to user difficulties with the BADC registration process. While the BADC's own
information should take precedence, the S-RIP page may help to clarify the process. Also on the S-RIP
page, the "What You Can Do With JASMIN" section may help you to get started once you have
access.
Note that part of the above process involves registering with the BADC as a QBOi participant. The
QBOi group workspace is isolated from those of other projects hosted by the BADC, so that only
registered QBOi participants can access the data in the group workspace. In particular, note that data
can be uploaded using the Linux rsync command:
rsync -tpu [local filenames or dirs] [username]@jasmin-xfer1.ceda.ac.uk:/group_workspaces/jasmin2/qboi/[your modelling centre]/[your model]/[path to data]

For example, to upload one netCDF file of CCCma's model output (monthly-mean zonal wind for
QBOi Experiment 1) to the QBOi group workspace:
rsync -tpur ua_Amon_CMAM_QBOiExp1_r1i1p1_197901-200912.nc janstey@jasmin-xfer1.ceda.ac.uk:/group_workspaces/jasmin2/qboi/CCCma/CMAM/QBOiExp1/mon/atmos/ua/r1i1p1/
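As a purely illustrative convenience (not part of the protocol), the upload destination can also be derived from a CMIP5-style filename and passed to rsync programmatically; the helper below is hypothetical, covers only the MIP tables used in the examples above, and assumes the remote directories already exist:

# Illustrative sketch: derive the remote group-workspace directory from a CMIP5-style
# filename and upload it with rsync. Note that rsync will not create missing parent
# directories on the remote side.
import subprocess

HOST = "username@jasmin-xfer1.ceda.ac.uk"          # replace with your JASMIN account
GWS = "/group_workspaces/jasmin2/qboi"
FREQ = {"Amon": "mon", "ZAmon": "mon", "day": "day", "Zday": "day", "6hrPLev": "6hr"}

def upload(filename, institute="CCCma"):
    variable, table, model, experiment, member, _ = filename[:-3].split("_")
    remote_dir = "/".join([GWS, institute, model, experiment,
                           FREQ[table], "atmos", variable, member])
    subprocess.run(["rsync", "-tpu", filename, f"{HOST}:{remote_dir}/"], check=True)

upload("ua_Amon_CMAM_QBOiExp1_r1i1p1_197901-200912.nc")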

The software packages installed on JASMIN are listed at


https://github.com/cedadev/jasmin_scivm/wiki/Packages
Note that IDL is not listed, but it can be accessed by typing "module add idl", after which the "idl"
command will invoke IDL. The BADC asks that users who wish to run more than one instance of IDL
simultaneously please use run-time licences, in order to be economical with the more limited pool of
development licences. For information on this, see
http://proj.badc.rl.ac.uk/cedaservices/wiki/JASMIN/AnalysisServers
under "Making efficient use of IDL development licences" and "Example usage of IDL Runtime
Licences".

5.4 Table of requested output variables

Below is a table listing those variables to be saved for both standard and high-frequency (i.e.
6-hourly) diagnostics. It is anticipated that modelling groups may locally store a more comprehensive
diagnostic set than that requested below. The chosen diagnostics list also aligns closely with
requests from CCMi and the DYNVAR Diagnostic MIP.
Notes re. the following diagnostics table:
1. "Monthly mean & daily mean" indicates that both monthly-mean and daily-mean variables
are to be saved.
2. The dimensions are denoted XYT for lon-lat-time, XYPT for lon-lat-pressure-time, etc.
3. The horizontal grid for all diagnostics is not prescribed (as discussed in Sec. 5.1, above). It
should be appropriate for the model, and ideally close to the model's native horizontal
resolution, although a reduced grid is acceptable if this produces impractically large data
files. If the horizontal grid is reduced, it should be noted how this was done.
4. The vertical grid for all diagnostics is the prescribed set of standard pressure levels, with
exceptions for 6-hourly data and daily-mean 3D data; see Sec. 5.1 for further details.
5. Please note that the output periods for the different variables are given in Sec. 5.2.

Variability
Monthly mean & daily mean XYT, XYPT (est¹: 49.8 GB monthly, 474 GB daily)

output variable name   long name [units]                             dimension
psl                    Sea Level Pressure [Pa]                       2D
prc                    Convective Precipitation Flux [kg s-1 m-2]    2D
pr                     Total Precipitation Flux [kg s-1 m-2]         2D
tas                    Near-Surface Air Temperature [K]              2D
uas                    Eastward Near-Surface Wind [m s-1]            2D
vas                    Northward Near-Surface Wind [m s-1]           2D
ta                     Air Temperature [K]                           3D*
ua                     Eastward Wind [m s-1]                         3D*
zg                     Geopotential Height [m]                       3D*

*Daily data should be on the 8-level pressure level set: 1000, 850, 700, 500, 250, 100, 50, 10 hPa.
Monthly data should be on the set of levels given in Sec. 5.1.

Equatorial wave spectra
6-hourly instantaneous XYPT (est: 266 GB)

output variable name   long name [units]          dimension
ta                     Air Temperature [K]        3D*
ua                     Eastward Wind [m s-1]      3D*
va                     Northward Wind [m s-1]     3D*
wa                     Vertical Wind [m s-1]      3D*

*On the reduced set of latitudes φred and range of model (or equivalent pressure) levels Pred. Data
required for conversion between model and pressure levels should be supplied if data are given on
model levels.

¹Estimates are based on the minimum suggested ensemble size.

Dynamics
Monthly mean & daily mean, zonal mean YPT (est: 23.7 GB daily, 780 MB monthly)

output variable name   long name [units]                                      dimension
ua                     Eastward Wind [m s-1]                                  2D
ta                     Air Temperature [K]                                    2D
zg                     Geopotential Height [m]                                2D
vstar                  Residual Northward Wind [m s-1]                        2D
wstar                  Residual Upward Wind [m s-1]                           2D
fy                     Northward EP-flux [N m-1]                              2D
fz                     Upward EP-flux [N m-1]                                 2D
utenddivf              u-Tendency by EP-flux divergence [m s-2]               2D
utend                  u-Tendency [m s-2]                                     2D
utendogw               u-Tendency by orographic gravity waves [m s-2]         2D
utendnogw              u-Tendency by non-orographic gravity waves [m s-2]     2D
psistar                Residual Stream Function [kg m-1 s-1]                  2D

Monthly mean XYPT (est: 125 GB)

output variable name   long name [units]                                      dimension
utendogw               u-Tendency by orographic gravity waves [m s-2]         3D
utendnogw              u-Tendency by non-orographic gravity waves [m s-2]     3D
vtendogw               v-Tendency from orographic gravity waves [m s-2]       3D
vtendnogw              v-Tendency from non-orographic gravity waves [m s-2]   3D
taunoge                Eastward Wind Stress of non-orographic waves [Pa]      3D
taunogs                Southward Wind Stress of non-orographic waves [Pa]     3D
taunogw                Westward Wind Stress of non-orographic waves [Pa]      3D
taunogn                Northward Wind Stress of non-orographic waves [Pa]     3D

Daily mean XYT (est: 94.9 GB)

output variable name   long name [units]                                            dimension
tauogu                 Surface Eastward Wind Stress by orographic waves [Pa]        2D
tauogv                 Surface Northward Wind Stress by orographic waves [Pa]       2D
taunoge                Launch Eastward Wind Stress by non-orographic waves [Pa]     2D
taunogs                Launch Southward Wind Stress of non-orographic waves [Pa]    2D
taunogw                Launch Westward Wind Stress of non-orographic waves [Pa]     2D
taunogn                Launch Northward Wind Stress by non-orographic waves [Pa]    2D

Only if non-isotropic and/or non-stationary at launch level (e.g. coupled to convection or fronts).

Thermodynamics
Monthly mean & daily mean, zonal mean YPT (est: 32.5 MB monthly, 9.88 GB daily)

output variable name   long name [units]                            dimension
hus                    Specific Humidity [kg / kg]                  2D
zmtnt                  Diabatic Heating Rate [K s-1]                2D
tntlw                  Longwave Heating Rate [K s-1]                2D
tntsw                  Shortwave Heating Rate [K s-1]               2D
o3                     Mole Fraction of Ozone in Air [mole mole-1]  2D

o3 is requested only if the model has prognostic ozone.

All TEM diagnostics are as defined in Andrews et al. (1987). For those diagnostics requiring assumed
parameters such as the pressure scale height or reference density, please provide the values that
were used in the calculations. It would be most useful if these parameters were provided in the
metadata of the netCDF files.
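As one possible way of doing this (the attribute names and example filename below are suggestions, not a prescribed convention), the assumed parameters could be written as global attributes of the relevant files:

# Illustrative sketch: recording assumed TEM parameters as global attributes in an
# existing output file. Attribute names and values shown here are only examples.
from netCDF4 import Dataset

with Dataset("fy_ZAmon_CMAM_QBOiExp1_r1i1p1_197901-200912.nc", "a") as nc:
    nc.tem_scale_height = "7000 m"                        # pressure scale height assumed
    nc.tem_reference_density = "rho0(z) = 1.2 kg m-3 * exp(-z/H)"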

6. Project Participation & Acknowledgements


Groups interested in actively participating in the project, including running the core experiments,
uploading data and leading or participating in analyses, should contact the Project Coordinators. You
should identify the name of your model and modelling group, the experiments undertaken and those
individuals to be listed as PIs. A list of participating groups will be updated on the project webpages.
It is both anticipated and encouraged that publications will arise from analysis of the project data.
Please inform the Project Coordinators and PIs of the status of any planned work (including start and
submission dates and further relevant details). It is envisaged that all participating modelling group PIs
should be offered co-authorship on publications for an identified period of time. Following this
period, the offer of co-authorship will be relaxed, but there should still be due acknowledgement.
Details of the embargo period and suggested wording for publication acknowledgements will be
issued in due course.

References:
D. G. Andrews, J. R. Holton and C. B. Leovy, 1987: Middle Atmosphere Dynamics. Academic Press, San
Diego
J. Anstey, K. Hamilton, S. Osprey, N. Butchart, L. Gray, 2015: Report on the 1st QBO Modelling and
Reanalyses Workshop, 16-18 March 2015, Victoria, BC, Canada, SPARC Newsletter, 45, July 2015.
J. Anstey, S. Osprey, N. Butchart, K. Hamilton, L. Gray, M. Baldwin 2017: Report on the SPARC QBO
Workshop: The QBO and its Global Influence - Past, Present and Future, 26-30 Sept 2016, Oxford, UK,
SPARC Newsletter, 48, January 2017.
K. Hamilton, S. Osprey, N. Butchart, 2015: Modeling the stratosphere's heartbeat, Eos, 96, 2 July,
doi:10.1029/2015EO032301.
Horinouchi, T., S. Pawson, K. Shibata, E. Manzini, M. A. Giorgetta, F. Sassi, R. J. Wilson, K. Hamilton, J.
de Grandpré and A. A. Scaife, 2003: Tropical cumulus convection and upward propagating waves in
middle-atmospheric GCMs, J. Atmos. Sci., 60, 2765-2782.
Kim, Y.-H. and Chun, H.-Y.: Momentum forcing of the quasi-biennial oscillation by equatorial waves in
recent reanalyses, Atmos. Chem. Phys., 15, 6577-6587, doi:10.5194/acp-15-6577-2015, 2015.
Lott, F., S. Denvil, N. Butchart, C. Cagnazzo, M. Giorgetta, S. Hardiman, E. Manzini, T. Krismer, J.-P.
Duvel, P. Maury, J. Scinocca, S. Watanabe, S. Yukimoto, 2014: Kelvin and Rossby-gravity wave packets
in the lower stratosphere of some high-top CMIP5 models, J. Geophys. Res., 119(5), 2156-2173,
doi:10.1002/2013JD020797.
